Category Archives: Programming

Procedural Mesh Generation from Live Audio in Unreal Engine 4

Source: https://github.com/Tsarpf/UE4-Procedural-Audio

Here are two example videos of it running live: at the start I simply press play in Foobar on Windows, and the visualization begins. Spotify, YouTube, or any other audio source on Windows works straight away as well.

A word of warning though: it’s only a proof of concept, so it is not stable! In-editor it generally works well but sometimes crashes. The program also does not release all the memory that it should. For a reason I haven’t yet tracked down, the standalone version basically doesn’t work at all.

Thanks to other open source projects

For the mesh generation part I got a lot of help from SiggiG’s procedural UE4 project/tutorial, which lives here: https://github.com/SiggiG/ProceduralMeshes

Some of the code for figuring out frequencies from audio chunks is from eXi’s sound visualization plugin, especially the original CalculateFrequencySpectrum function, which can be found here, and his use of the KissFFT library.

Because I’m proud that I was able to figure out the frequency calculation part myself as well, I want to add that I have my own implementation for it (with the help of the library “ffft”), but for this project I replaced it with eXi’s solution to rule out bugs in that area.

A very brief and dense overview of how it works

On Windows we’re in luck, because the audio capture part is already done (see the post on capturing audio below); now we just direct the audio towards UE4 instead of a file. An audio sink receives chunks of audio frames from the capturer, and the audio listener itself runs in its own thread within the visualizer process. The chunks pop into a queue, from which the UE4 main thread dequeues them and calculates a frequency spectrum for each chunk it receives. Finally, on each game tick we fetch a list of new frequencies and, if any are found, add them to the mesh and move the camera forward to keep up.
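In case the hand-off sounds abstract, here’s a minimal sketch of the queue between the listener thread and the game thread. This is not the project’s actual code (the repo uses its own listener and UE4’s types); the names and the plain std::queue are illustrative:

#include <mutex>
#include <queue>
#include <vector>

// Chunks of interleaved float samples: produced by the audio listener
// thread, consumed by the game thread on each tick.
class AudioChunkQueue {
public:
    void Enqueue(std::vector<float> chunk) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(std::move(chunk));
    }

    // Returns false when no chunk is available; never blocks the game thread.
    bool TryDequeue(std::vector<float>& outChunk) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty()) return false;
        outChunk = std::move(m_queue.front());
        m_queue.pop();
        return true;
    }

private:
    std::mutex m_mutex;
    std::queue<std::vector<float>> m_queue;
};

UE4 also ships a lock-free TQueue that would do the same job; the important part is that the capture thread only pushes and the game thread only pops.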

Feel free to ask me on Twitter or in the comments if something is unclear!

Making it work on platforms other than Windows

On Linux, capturing the audio should be very easy, for example by directing arecord’s standard output to the UE4 program’s standard input and going forward from there, but I haven’t gotten around to trying that yet. On OS X I would start searching for a solution with the help of the project “Soundflower”, but I’m not sure how easy that will be.

Thanks!

Maybe the proof of concept gives someone an idea for something awesome. Please make a new Audiosurf that takes in live audio and doesn’t need to process the whole song from a file first. Or make the sound waves collidable and get some sort of game mechanic out of that?

tsatter.com

tsatter.com is what resulted from me wanting to make a website and take it far enough to call it finished. I learned a lot along the way. Here’s a short description of the whole system. I’ll write more specific posts and link them here when I find the time and interest. Requests to bump a specific part up the queue are of course welcome.

  • Dockerized environment for easy development and deployment
    • Provided you have Docker installed, it takes a single command to have the whole five-part infrastructure set up for you.
  • NGINX
    • efficiently shares static files
  • NodeJS
  • IRC server
    • handles actually delivering messages between users in channels, and keeps track of channels’ ops etc.
    • provides a tried and tested protocol for bots to interface with.
  • AngularJS front-end
    • infinite scrolling front page
    • infinite scrolling messages
      • clicking the timestamp gives a shareable URL. Opening the URL takes you to that message and highlights it.
      • no need to load the whole history at once
      • handles massive amounts of lines without slowing the computer down much
    • infinite scrolling images
      • images have a link to the message in which they were linked
    • inline image search & linking (by pressing @ in a chat)
    • kicking/oping/unoping users in a channel

It used to have registration and login as well at one point, but I wanted to make getting started with it as easy and low-effort as possible. Now, though, I’m thinking that sending those post-registration spam emails might help with user retention and actually getting users back after they’ve tried it once.

Programmatically capture audio playing on Windows

I wanted to do real-time audio visualization and didn’t want to fight with music streaming service libraries more than once (I’m looking at you, LibSpotify), so I thought I’d go with the most general solution — get the audio straight from the OS.

This post is written as a kind of information dump I would have wanted to read when I started figuring this all out.

I had wondered why there isn’t any software like Virtual Audio Cable that would also provide programmatic access to what’s running through the virtual device. So I took a look at how to write my own, and apparently it’s really time-consuming and difficult. Not going to start there, then.

Anyway, it turns out that in Windows there’s something called WASAPI which provides a solution: “loopback recording”.

In loopback mode, a client of WASAPI can capture the audio stream that is being played by a rendering endpoint device.

And there’s an almost-ready-to-use example for it! Although it is a bit of a weird, goto-heavy, let’s-put-almost-everything-in-the-same-function kind of thing.

In the code example in Capturing a Stream, the RecordAudioStream function can be easily modified to configure a loopback-mode capture stream. The required modifications are:

  • In the call to IMMDeviceEnumerator::GetDefaultAudioEndpoint, change the first parameter (dataFlow) from eCapture to eRender.
  • In the call to IAudioClient::Initialize, set the AUDCLNT_STREAMFLAGS_LOOPBACK flag in the second parameter (StreamFlags).
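In practice that’s just two changed lines. A sketch using the variable names from the MSDN example (not a complete program, only the diff):

// Was: pEnumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &pDevice);
hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

// Was: pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, ...);
hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
                              AUDCLNT_STREAMFLAGS_LOOPBACK,
                              hnsRequestedDuration, 0, pwfx, NULL);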

I wasted a lot of time trying to understand the format of the data I was being delivered by default, and how to change the format to PCM, but it turns out the beans are spilled right here.

Basically, you fill a WAVEFORMATEX struct to describe the format, or modify the struct returned by a call to IAudioClient::GetMixFormat, which “retrieves the stream format that the audio engine uses for its internal processing of shared-mode streams.”

By the way, a format that keeps the same sample rate (e.g. 44.1 kHz) and channel count (2 for stereo) can often be provided straight away by WASAPI, so you don’t have to do any actual conversion yourself.

Here’s how my system’s current configuration’s format (in hindsight it would have been a better idea to just fill the struct…) could be changed to 16-bit PCM:

pwfx->wBitsPerSample = 16;
pwfx->nBlockAlign = 4; // nChannels * wBitsPerSample / 8 = 2 * 16 / 8
pwfx->wFormatTag = WAVE_FORMAT_PCM;
pwfx->nAvgBytesPerSec = pwfx->nSamplesPerSec * pwfx->nBlockAlign;
pwfx->cbSize = 0; // plain PCM carries no extra format information

IAudioClient::IsFormatSupported can be used to check whether the format you’d want to use will work, without having to call Initialize and see if it fails.

One more thing: if you’re not familiar with COM code, before calling IAudioClient::Initialize you have to initialize COM, which for me meant just calling CoInitialize(nullptr) once somewhere before initializing the audio client.

In the code I wrote to try all this out, I just wrote the captured data to a file which I then imported to Audacity to check for correctness.

Note that the number from IAudioCaptureClient::GetBuffer describing the amount of data we got out of it is in frames. This means that to get the byte (or char) count that, for example, ostream::write needs, we have to do something like this:

int bytesPerSample = m_bitsPerSample / 8;
unsigned int byteCount = numFramesAvailable * bytesPerSample * m_nChannels;
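For example, writing the captured bytes to the output file could then look like this (pData being the buffer pointer GetBuffer returned, and outFile a std::ofstream opened in binary mode; both names are just for illustration):

outFile.write(reinterpret_cast<const char*>(pData), byteCount);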

Anyway, here’s my example implementation that you can check out if you get stuck with something: https://github.com/Tsarpf/windows-default-playback-device-to-pcm-file

Hope it’s of use to someone.

3D infinite terrain generation in JavaScript using Marching Cubes and PlayCanvas

Landscape kind of thing

Arrow keys move, mouse looks. Try it out here

Alien floating things and holes in the ‘ground’

Arrow keys move, mouse looks. Try this one out here.

I’ve made a presentation, or “slides”, about the same subject, and you can check it out in video form below. It’s rather slow-paced because it’s supposed to have a talk accompanying it, but the visualizations might be helpful.

Let’s get technical — what goes into building something like this?

The main components of the system are:

  1. A coherent noise function from which we can sample a 3D field (array) of density values
  2. Marching Cubes algorithm for creating surfaces at the boundaries of a chosen density value
  3. Shading the resulting geometry
  4. Loading (and unloading) the infinite map in a bunch of small chunks
  5. Balancing the loading (and unloading) over multiple frames to have a smoother fps

The project’s full source can be found here: https://github.com/Tsarpf/proced

Let’s go through the components in a little more detail.

Noise functions

Noise functions in this context are basically functions that take in coordinates in some number of dimensions and give out a single floating-point value. So a 3-dimensional noise function could look something like this:

function(x,y,z) {
   ...
   return a;
}

Now, if the return values of our noise function were completely random (say, the results came from calls to Math.random()), and we tried to use those values as a base for our terrain generation, we would end up with something that resembles a postmodern art piece, not rolling hills or mountains.

What we need is a coherent noise function. With coherent noise functions, there’s a promise that when the input values vary by a little bit, the output values vary only by a little bit as well. This means there are gradual transitions between spaces where there are solid things and where there are not.
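To make that concrete, here’s a minimal sketch of a coherent 3D value-noise function. It’s not the noise implementation the project uses (that comes from a proper noise library), and the hash constants are purely illustrative:

function hash(x, y, z) {
    // Cheap integer hash mapped to [0, 1). Anything deterministic works here.
    var h = (x * 374761393 + y * 668265263 + z * 1440662683) | 0;
    h = Math.imul(h ^ (h >>> 13), 1274126177);
    return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

function smooth(t) { return t * t * (3 - 2 * t); } // smoothstep easing
function lerp(a, b, t) { return a + (b - a) * t; }

function valueNoise(x, y, z) {
    var xi = Math.floor(x), yi = Math.floor(y), zi = Math.floor(z);
    var tx = smooth(x - xi), ty = smooth(y - yi), tz = smooth(z - zi);
    // Values at the eight surrounding lattice points...
    function c(dx, dy, dz) { return hash(xi + dx, yi + dy, zi + dz); }
    // ...trilinearly interpolated, so nearby inputs give nearby outputs.
    var x00 = lerp(c(0, 0, 0), c(1, 0, 0), tx);
    var x10 = lerp(c(0, 1, 0), c(1, 1, 0), tx);
    var x01 = lerp(c(0, 0, 1), c(1, 0, 1), tx);
    var x11 = lerp(c(0, 1, 1), c(1, 1, 1), tx);
    return lerp(lerp(x00, x10, ty), lerp(x01, x11, ty), tz);
}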

Surfaces from noise

At least at the moment, I use the marching cubes algorithm for creating surfaces out of the noise. The algorithm has been explained far better than I could in so many places already that I’ll just encourage you to check out the Wikipedia page and Paul Bourke’s Polygonising a scalar field to start learning more about it.

If you think there’s still use for yet another explanation of it, please leave a comment or ping me on Twitter, and I might do a separate post about it.

Also feel free to copy and/or use my JS port of Paul Bourke’s code.

Shading the surfaces

In order for the geometry we’ve generated to look 3D, it needs shading. This means we need surface normals. There are two different approaches I know of for finding them.

  1. A gradient for a point in a field is the direction in which the field’s values change the most from the current position. Since the surfaces are generated to follow a density-level boundary, they are perpendicular to the gradients. This means the gradients are actually the same thing as the normals. We can approximate the gradient for a point by sampling the density values surrounding it.

Picture from the Wikipedia article about gradients. The blue arrows show the direction of the color gradient.


Here’s some sample code:

function getNormalForVertex(x, y, z, sampler, outObj) {
    // Central differences: sample the density field on both sides of the
    // point along each axis to approximate the gradient.
    outObj.x = sampler(x + dataStep.x, y, z) - sampler(x - dataStep.x, y, z);
    outObj.y = sampler(x, y + dataStep.y, z) - sampler(x, y - dataStep.y, z);
    outObj.z = sampler(x, y, z + dataStep.z) - sampler(x, y, z - dataStep.z);
}

dataStep.x/y/z needs to be some relatively small number. If it’s too big, the normal (gradient) approximation is too coarse and comes out wrong for small surface details. If it’s too small, it picks up changes that are too small, and big surfaces can get weird normals.

I was never able to get good-looking normals for both big and small features this way, so I abandoned it and tried another approach:

  2. For each triangle, get the triangle’s normal by taking the cross product of two edge vectors formed from the triangle’s vertices. These normals alone provide some basic shading, and the method is often called `face normals`. Since using only face normals would give sudden changes of color at triangle borders, we want to do a bit better than that. So for each vertex, we combine it with the other vertices that share its position, and take the average of the normals of all the triangles it belongs to, where each triangle’s normal is weighted by the triangle’s size. A sketch of this follows below.
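Here’s a rough sketch of that averaging step (not the project’s verbatim code, and it assumes vertices sharing a position have already been merged so they share indices). Handily, the raw cross product of two triangle edges has a magnitude proportional to the triangle’s area, so summing the unnormalized cross products gives the size weighting for free:

function computeVertexNormals(positions, indices) {
    // positions: flat [x0, y0, z0, x1, ...], indices: three per triangle
    var normals = new Float32Array(positions.length);
    for (var i = 0; i < indices.length; i += 3) {
        var a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
        // Two edge vectors of the triangle
        var e1x = positions[b] - positions[a], e1y = positions[b + 1] - positions[a + 1], e1z = positions[b + 2] - positions[a + 2];
        var e2x = positions[c] - positions[a], e2y = positions[c + 1] - positions[a + 1], e2z = positions[c + 2] - positions[a + 2];
        // Unnormalized face normal = e1 x e2 (length is twice the triangle area)
        var nx = e1y * e2z - e1z * e2y;
        var ny = e1z * e2x - e1x * e2z;
        var nz = e1x * e2y - e1y * e2x;
        var verts = [a, b, c];
        for (var j = 0; j < 3; j++) {
            var v = verts[j];
            normals[v] += nx; normals[v + 1] += ny; normals[v + 2] += nz;
        }
    }
    // Normalize the accumulated normals per vertex
    for (var v = 0; v < normals.length; v += 3) {
        var len = Math.sqrt(normals[v] * normals[v] + normals[v + 1] * normals[v + 1] + normals[v + 2] * normals[v + 2]) || 1;
        normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len;
    }
    return normals;
}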

This gets us some pretty decent-looking shading. IIRC it was also something like 10% faster than the gradient approach, but nothing drastic.

One bug, or problem, I haven’t yet had the time to solve with shading is that vertices are only combined within chunks (discussed below), so at chunk borders the shading still isn’t smooth.

Splitting the world into smaller pieces for loading them separately

These pieces are often called “chunks” in infinite map generators. By loading the world in chunks we don’t have to load everything at once, and can do tricks like prioritizing the loading of chunks closer to the player over farther away ones. Chunks also enable us to even out the CPU stress caused by loading new chunks over as many frames as we’d like.

My approach for the data structure to store these chunks in was to write a 3D circular buffer, which means old, far-away chunks eventually get overwritten. You can find my implementation here, and some Mocha tests for it here.

Further, my circular buffer, or wrapping array as I call it, uses “zones” which tell how many chunks away from the player a given bunch of chunks is, from 0 to WorldChunkCountPerAxis / 2 (since the player is set to be at the center).

An ugly 1D representation of the zone system

The zones are implemented as “zone functions”, which can be set to call a specific function for every chunk that enters a given zone, every time the player’s chunk position changes.

At the moment it also has a different function for the case where a chunk enters a zone because the player is coming closer to it, and another for when the chunk enters the zone by “falling behind”, i.e. moving farther away from the player. This separation could be used to unload a chunk or decrease LOD when a zone is entered backwards, and to load or increase LOD when it’s entered forwards. I’m probably removing this feature soon, though, and will instead use the same function for all chunks in a zone and handle all unloading etc. when an array index gets overwritten.

Here’s an example of setting up a zone function:

var size = 7; // world chunk count per axis
var zoneCount = Math.ceil(size / 2); // how many zones in the world
var wrappingArray = PROCED.wrappingArray(size); // initialize wrapping array

wrappingArray.setZoneFunction(zoneCount - 1, function (arrayCell, worldCoords) {
    queue.push({
        type: 'draw',
        arrayCell: arrayCell,
        worldCoords: worldCoords
    }, 1); //priority 1 for all chunk drawings for now
}, function() {});

The first argument of setZoneFunction is the zone (represented by a single integer) we want the following functions to be used for. At the moment, while the forward/backward separation is still there, the forwards function is the second argument and the backwards function is the third (an empty function in this case).

Now, every time one of the six direction functions of the wrapping array (x/y/z axis, +/- direction) is called (for example, triggered by the player moving over a chunk border in one of the directions), it calls the forwards function for every chunk that entered the zone at the edge of the world (which is zone zoneCount - 1).

In my current implementation this pushes a “draw” order into a work queue for each chunk that newly entered the zone. The order says the chunk should be placed in the cell arrayCell of our wrapping array (which is a 3D array flattened to 1D, see here), and that the chunk drawn there should have the non-wrapped, aka “world”, coordinates specified by worldCoords.
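For reference, the wrap-and-flatten mapping looks roughly like this (an illustrative sketch; see the linked implementation for the real thing):

function wrapIndex(c, size) {
    // Wrap a world chunk coordinate into the fixed-size array,
    // handling negative coordinates as well
    return ((c % size) + size) % size;
}

function arrayCellFor(worldX, worldY, worldZ, size) {
    // Flatten the wrapped 3D coordinates into a 1D array index
    var x = wrapIndex(worldX, size);
    var y = wrapIndex(worldY, size);
    var z = wrapIndex(worldZ, size);
    return x + y * size + z * size * size;
}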

Balancing loading

I use Async.js‘s priority queue implementation to queue the processing of chunks, with something like 1-2 workers, which means it takes 1 or 2 chunks from the queue per frame. When the processing orders come out of the queue, the processing is delayed once more by asking for free CPU time with requestAnimationFrame. I’m not sure it’s the best way to do it, but it at least seems to be faster than using setTimeout with a zero timeout. Please leave a comment if you know more about this 🙂
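Put together, the pattern looks roughly like this (a sketch, not the project’s exact code; processChunk stands in for the real chunk-building work):

var async = require('async');

// The worker defers the heavy part until the browser hands us a frame,
// so chunk processing gets spread across frames instead of stalling one.
var queue = async.priorityQueue(function (task, done) {
    requestAnimationFrame(function () {
        processChunk(task.arrayCell, task.worldCoords); // stand-in
        done();
    });
}, 2); // concurrency: roughly two chunks in flight at a time

// The zone functions push draw orders with a priority, as seen earlier:
// queue.push({ type: 'draw', arrayCell: arrayCell, worldCoords: worldCoords }, 1);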

I also haven’t yet had the time to investigate multicore processing using web workers; there could definitely be some performance gains to be found there.

Conclusion

A lot could be improved in the project in its current state, and it doesn’t even have textures or collision yet, but I hope this helps someone get started with building something procedural for the web!

I might write a follow-up post on this someday if I’ve come up with major improvements.

Thanks for reading!

An as-short-as-possible FizzBuzz JS one-liner

After a few iterations this is what I came up with:


Array.apply(null, {length: 100}).map((_, x) => {x++; return x % 15 ? x % 5 ? x % 3 ? x : 'Fizz' : 'Buzz' : 'FizzBuzz'})

  • Create the 100-element array on the same line we use it. Map’s callback’s first argument is in this case always undefined, so it’s not needed (the variable name is an underscore, in case it doesn’t render); the second one is the element’s index (0-99).
Array.apply(null, {length: 100}).map((_, x)
  • x has to be incremented to stay true to the 1-100 FizzBuzz and for the truthiness trick below to work
x++;
  • By nesting the ternary operations like this we can skip the === 0 checks, because non-zero remainders are truthy
x % 15 ? x % 5 ? x % 3 ? x : 'Fizz' : 'Buzz' : 'FizzBuzz'

It fits in a tweet!
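For comparison, here’s the same logic unrolled into something readable (equivalent output, no golfing):

var result = Array.apply(null, {length: 100}).map(function (_, i) {
    var n = i + 1;
    if (n % 15 === 0) return 'FizzBuzz';
    if (n % 5 === 0) return 'Buzz';
    if (n % 3 === 0) return 'Fizz';
    return n;
});
console.log(result.join('\n'));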

Download, minify & resize images on a different processor core in Node.JS

We’ll explore how to use cluster (from the standard library), GraphicsMagick, and streams to efficiently process images without slowing down or blocking the Node.JS event loop.

Skip to the end if you just want a link to the full source!

Background and motivation

When I first showed tsatter.com to the world, it slowed some people’s computers to a crawl, and some really big images even crashed people’s browsers. Huh. Apparently showing a bunch of original-file-sized, user-submitted gifs on the front page isn’t a good idea. Who would’ve thought.

Scaling up the server CPU enough to run both Node.JS and the image processing smoothly on the same processor core would require a ridiculously powerful processor. Actually, I’m not sure such a thing even exists, especially once there’s enough traffic for at least one image to be in the middle of processing at all times. Besides, multicore processors are everywhere nowadays.

With at least one core dedicated entirely to processing images, the image processor core can slow down all it wants without affecting the server in practically any way. This also has the benefit of completely separating the image processor code from everything else, so replacing the worker with something that runs on a completely different machine altogether would be almost trivially easy.

At the time of writing, the client-side code is still unfinished, but all the parts relevant to this blog post are done, so I thought I’d take a break from coding and get this post out of the way. Hopefully by the time you’re reading this, the front-end is ready as well and the whole feature set is deployed live.

Challenges we’ll face

Efficiency

Node.JS handles everything in a single thread by default, and processing images is processor-heavy and time-consuming. If we process the images in the same thread Node.JS uses for communicating with clients, then every time an image is being processed, the server will be slower to respond to requests. It might even stop responding for a while. This is unacceptable if we want the website to appear snappy from the user’s point of view: you will simply lose the user if the first page load takes 3 seconds.

Thankfully, the Node.JS standard library has a module called cluster that makes setting up a worker process and IPC surprisingly easy.

File types and sizes

We don’t want to waste any more time than we absolutely have to on URLs that don’t point to images. We also cannot just download the whole file and then check its size or type. What if it’s multiple gigabytes?

Blindly trusting the content-length header from the server’s response is not a good idea either; the header could be intentionally or unintentionally wrong.

Luckily, streams come to the rescue.

Downloading

So what we want to achieve in this chapter is

  • Find out the file type as soon as possible
  • Make sure we don’t download images that are too big
    Like I said earlier, we shouldn’t trust the content-length header alone for the size information. But that doesn’t mean we can’t use it at all. I think its best use is discarding some URLs before we even start a download.

By the way here’s the stackoverflow answer where I got the download size stuff from. I then added file type checking.

So let’s check the headers with a HEAD request using the always-useful request library. I promise we’ll get to the really interesting stuff soon.

var download = function(url, callback) {
    var stream = request({
        url: url,
        method: 'HEAD'
    }, function(err, headRes) {
        if(err) {
            console.log(err);
            return callback(err);
        }
        var size = headRes.headers['content-length'];
        if (size > maxSize) {
            console.log('Resource size exceeds limit (' + size + ')');
            return callback('image too big');
        }
...

Note that we haven’t started saving it to a file yet so no abort or unlink is necessary at this point.

As some of you might’ve guessed, I’m using the Node.JS callback style here, where the callback’s first argument is the error argument, which contains the error when there is one, and null when no error occurred.

We’ve decided to download, what’s next?

We should start keeping count of how much we’ve downloaded, and try to deduce the file type.

Deducing the file type is actually pretty easy using magic numbers. We just grab a bunch of file type signature magic numbers from, for example, here, and look for them in the first few bytes of the stream. If a match is found, we make a note of the file type and continue downloading. Otherwise we quit and remove the few bytes we’ve already downloaded.

var fileTypes = {
    'png': '89504e47',
    'jpg': 'ffd8ffe0',
    'gif': '47494638'
};
size = 0; //declared in the previous code block

//Generate a random 10 character string
var filename = getName();
var filepath = imagesPath + filename;

//Open up a file stream for writing
var file = fs.createWriteStream(filepath);
var res = request({ url: url});
var checkType = true;
var type = '';

res.on('data', function(data) {

    //Keep track of how much we've downloaded
    size += data.length;

    if(checkType && size >= 4) {
        //Check the first four bytes of the stream against the magic numbers
        var hex = data.toString('hex', 0, 4);
        for(var key in fileTypes) {
            if(fileTypes.hasOwnProperty(key)) {
                if(hex.indexOf(fileTypes[key]) === 0) {
                    type = key;
                    checkType = false;
                    break;
                }
            }
        }
        if(!type) {
            //If the type didn't match any of the file types we're looking for,
            //abort the download and remove target file
            res.abort();
            fs.unlink(filepath);
            return callback('not an image');
        }
    }

    if (size > maxSize) {
        console.log('Resource stream exceeded limit (' + size + ')');
        res.abort(); // Abort the response (close and cleanup the stream)
        fs.unlink(filepath); // Delete the file we were downloading the data to

        //imageTooBig contains a path to a placeholder image for bigger images.
        //Also set shouldProcess to false, we don't want to process the placeholder
        //image later on
        return callback(null, {path: imageTooBig, shouldProcess: false});
    }
}).pipe(file); //Pipe request's stream's output to a file.

//When download has finished, call the callback.
res.on('end', function() {
    callback(null, {filename: filename, shouldProcess: true, type: type});
})

I encourage you to read the comments for better info on what each line does. If something is still unclear, feel free to ask in the comments section at the end of the article.

File downloaded, let’s process it

The minifying function is pretty straightforward. As for how I came up with it: I googled the most common ways to reduce file size for all three image types (png, gif, jpg). Most of the results were about ImageMagick, so I looked up the GraphicsMagick equivalents, since GraphicsMagick is supposed to be faster in most operations.

For gifs I decided to just grab the first frame (hence the + ‘[0]’ appended to the path), since at tsatter.com I will be setting up a system where mousing over a gif starts playing the original.

I also decided to resize the images to 500x500 px; if you don’t want that, you can just remove the .resize(…) line from each case. By the way, the ‘>’ at the end of the resize line means it won’t resize the image if it’s already smaller than the wanted size.

var thumbnailDimensions = {
    width: 500,
    height: 500
};
var minifyImage = function(obj, callback) {
    //The downloaded original file was saved without extension
    //Here we save the new processed file with the extension.
    var origPath = imagesPath + obj.filename;
    var path = origPath + '.' + obj.type;
    var filename = obj.filename + '.' + obj.type;
    switch(obj.type) {
        case 'jpg':
            gm(origPath)
                .interlace('Plane')
                .quality(85)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .noProfile()
                .write(path, function(err) {
                    if(err) {
                        console.log('err');
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
        case 'png':
            gm(origPath)
                .colors(256)
                .quality(90)
                .bitdepth(8)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .noProfile()
                .write(path, function(err) {
                    if(err) {
                        console.log('err');
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
        case 'gif':
            gm(origPath + '[0]')
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .noProfile()
                .write(path, function(err) {
                    if(err) {
                        console.log('err');
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
    }
};

The result? Even without the resize, the file size usually drops by over 50% without too much decrease in quality. This could still be improved a lot, so if you happen to know more about this, please let me and others know in the comments below!

You could also use any of the gm library’s other features here. For example, I’ve been thinking about fusing a barely noticeable, see-through “play” button shape straight into the gif frame for tsatter.com. Or you could add your website’s watermark. Or maybe programmatically generate comics out of a bunch of images? I don’t know.

Putting it all together

Alright, for the last part we’re going to set up the functions for transmitting messages between the main process and the worker process. Luckily, with Node.JS and cluster this is really easy: both sides have a function for sending messages to the other, and both can set up a function that receives messages from the other. We send and receive normal JavaScript objects. I promised it was going to be easy!

Worker process

Let’s introduce the final piece of code that belongs on the worker side of the system. Remember, this last excerpt and all the ones before it go in a file of their own, dedicated to the worker process.

process.on('message', function(msg) {
    download(msg.url, function(err, obj) {
        if(err) {
            return;
        }
        var resObj = {
            src: msg.url,
            type: obj.type
        };
        if(obj.shouldProcess === true) {
            minifyImage(obj, function(filepath) {
                resObj.thumbnail = filepath;
                process.send(resObj);
            })
        }
        else {
            //Use the downloaded file as the thumbnail in this case
            resObj.thumbnail = obj.filename;
            process.send(resObj);
        }
    });
});
  • process.on(…) sets up a new listener function for messages from the main process.
  • process.send(object) sends messages to the main process.

Real simple.

Main process

The code in this section goes in the file or files that you run directly using node (or whatever is required or otherwise included in those files).

We require cluster and set which file to run as the worker process.

var cluster = require('cluster');
cluster.setupMaster({
     exec: __dirname + '/imageProcessWorker.js'
});

Next, we call cluster.fork() to spawn a new worker process. For this article we only really need one worker, but if you have more than two cores you could spawn more of them and set up a way to decide which worker gets the next job; that’s outside the scope of this post, though.

We also set up a message handler that receives objects from the worker process. As an example, in the message handler at tsatter.com I save the file paths I receive from the worker to a database and inform clients that a new media delivery is ready.

var worker = cluster.fork();

worker.on('message', messageHandler);

function messageHandler(msg) {
    //Do something with the information from the worker process.
}

And as the very last code excerpt, processUrl is the function that I call from my actual server code. worker.send(…) is then used to send an object to the worker process for processing.

var processUrl = function(url) {
    worker.send({
        url: url
    });
};

Conclusion: a short summary of the program flow

  1. processUrl is called with a URL. processUrl sends it to the worker process using worker.send(…)
  2. The URL is received in the worker process at process.on(‘message’, …)
  3. Downloading is considered and possibly attempted in the download function.
    • If it fails, or we just don’t want to download at all, we stop all processing and never notify the main process of anything.
  4. If the download succeeded and the shouldProcess variable is set to true, the minifyImage(…) function is called.
    • If the image doesn’t need processing, we skip the rest of the steps and just send it to the main process using process.send(…)
  5. After minifying is done, we send the results to the main process using process.send(…)
  6. The results are received in the messageHandler function, and something is hopefully done with them!

The End

Here are the promised links to the full source:

That should be it. Hopefully it’s of some use to someone. Thanks for reading!

My first demo! My first rant! PlayCanvas is really good!

Okay, a demo only in the sense that it’s not interactive. Not very long either. Perhaps a little childish. Whatever, check it out here, and its source here. It should run on any modern browser, although on mobile you’ll want to rotate to landscape.

PlayCanvas is cool. It’s like Unity3D, except the editor and the end results live in the browser. To my great surprise it’s really good, too. Importing assets is easy, scripting is easy, the scripting documentation is excellent, and things like creating custom events and programmatically changing materials work logically. A couple of this.entity.model.model.meshInstances-style object chains looked a bit awkward, but that was okay with the good docs.

The weirdest thing I encountered was a rather new example (from 2014; that’s new, right?) where a function the example used had been deprecated so long ago that it no longer gave a warning and just failed. Googling the function’s name yielded another function which had also already been deprecated, but at least that one still had a deprecation warning that mentioned the new function’s name. I’d never encountered a double deprecation before. It even sounds really dirty.

Anyway, the basic workflow I used was to upload my awesome self-made .fbx model to the editor, and on the code side I set up a GitHub repository and linked/synced it with the project. A cool thing about the editor is that it can run the project “locally”, meaning you can serve your scripts from localhost and edit them locally while still using the web-based editor. Neat. And when the project is finished you can just download a .zip with all the project files, ready to be served somewhere after a single unzip. There was a publish option too, which I haven’t tried yet; I’m assuming it needs less manual file moving.

PlayCanvas finally made me install Blender. Before today I always stuck to a weird idea(l) where I tried to keep to just coding. Half of it was hoping someone would someday start doing the modelling part of my (our?) graphics and games projects; the other half was thinking I’ll never be good at anything that has to do with drawing. Now that I’m learning to draw, too, there’s really no excuse…

Blender’s documentation seems to suck a bit. A couple of times I ended up on a page where, instead of the doc, I saw a number titled “file count” or something. I also met a useful “hotkey reference” that was a small info box with the text “Hotkey: see below”. Below it was a massive wall of text. I tried Ctrl+F-ing ‘hotkey’ and there was exactly one result:

note however that most Blender hotkeys you know in Edit mode do not exist for texts!

t-thanks for that. I never found the hotkey. All the other doc pages I found were mostly just huge lists of hotkeys with really weird descriptions.

The only real way to find info was to watch tutorial videos. And who the hell has time to watch tutorial videos? You can’t really skim them, which sucks. No one remembers everything on the first watch/listen/read, which is why you need to be able to refer back quickly, preferably through a bookmark or something (okay, no one uses bookmarks anymore; maybe history or another Google search). Luckily the videos were rather good, so it wasn’t as big a pain as it could have been.

Blender’s UI definitely sucks. There’s often little to no indication of what mode you’re currently in, or what it’s called, so that you could at least google how to get back to it when you forget. At some point I lost the wireframe mode and never found it again. Sometimes things like the overlay that draws the grid and coordinate axes disappear, and the recommended and easiest way to get it back was to open the default project and then reopen your project with a checkbox unchecked so it doesn’t load the UI settings from your project. Ugh.

But even with the somewhat crappy UI, Blender left me with the impression that it’s really powerful once you know the shortcuts. I’ve heard some schools here in Finland spend something like a year just teaching students how to use it. Makes sense.

Next time I play with PlayCanvas I think I’ll try out something procedural.