Programmatically capture audio playing on Windows


I wanted to do real-time audio visualization and didn’t want to fight with music streaming service libraries more than once (I’m looking at you, LibSpotify), so I thought I’d go with the most general solution — get the audio straight from the OS.

This post is written as a kind of information dump I would have wanted to read when I started figuring this all out.

I had wondered why there isn’t any software like a virtual audio cable that would also provide programmatic access to what’s running through the virtual device. So I took a look at how to write my own, and apparently it’s really time-consuming and difficult. Not going to start there, then.

Anyway, it turns out that in Windows there’s something called WASAPI that provides a solution: “loopback recording”.

In loopback mode, a client of WASAPI can capture the audio stream that is being played by a rendering endpoint device.

And there’s an almost-ready-to-use example for it! Although it was a bit of a weird, goto-heavy, let-us-put-almost-everything-in-the-same-function kind of thing.

In the code example in Capturing a Stream, the RecordAudioStream function can be easily modified to configure a loopback-mode capture stream. The required modifications are:

I wasted a lot of time trying to understand what the format of the data I was being delivered by default was, and how to change the format to PCM, but it turns out the beans are spilled right here.

Basically you fill a WAVEFORMATEX struct to describe the format, or modify the struct as it is returned from a call to IAudioClient::GetMixFormat that “retrieves the stream format that the audio engine uses for its internal processing of shared-mode streams.”

By the way, a format that keeps the same sample rate (e.g. 44.1 kHz) and channel count (2 for stereo) can often be provided by WASAPI directly, so you don’t have to do any actual conversion yourself.

Here’s how my system’s current configuration’s format (in hindsight it would be a better idea to just fill the struct…) could be changed to 16-bit PCM:

pwfx->wFormatTag = WAVE_FORMAT_PCM;
pwfx->wBitsPerSample = 16;
pwfx->nBlockAlign = pwfx->nChannels * pwfx->wBitsPerSample / 8; // bytes per frame (4 for 16-bit stereo)
pwfx->nAvgBytesPerSec = pwfx->nSamplesPerSec * pwfx->nBlockAlign;
pwfx->cbSize = 0; // no extra format information for plain PCM

IAudioClient::IsFormatSupported can be used to check whether the format you’d want to use will work, without having to call IAudioClient::Initialize and seeing if it fails.

One more thing: if you’re not familiar with COM code, before calling IAudioClient::Initialize you have to initialize COM, which in my case meant just calling CoInitialize(nullptr) once somewhere before initializing the audio client.

In the code I wrote to try all this out, I just wrote the captured data to a file which I then imported to Audacity to check for correctness.

Note that the count IAudioCaptureClient::GetBuffer returns to describe the amount of data is in frames. To get the byte (or char) count that, for example, ostream::write needs, we have to do something like this:

int bytesPerSample = m_bitsPerSample / 8;
unsigned int byteCount = numFramesAvailable * bytesPerSample * m_nChannels;

Anyway, here’s my example implementation you can check out if you get stuck with something.

Hope it’s of use to someone.

3D infinite terrain generation in JavaScript using Marching Cubes and PlayCanvas


Landscape kind of thing

Arrow keys move, mouse looks. Try it out here


Alien floating things and holes in the ‘ground’

Arrow keys move, mouse looks. Try this one out here.

I’ve also made a presentation, or “slides”, about the same subject, and you can check it out in video form below. It’s rather slow-paced because it’s supposed to have a talk to accompany it, but the visualizations might be helpful.

Let’s get technical — what goes into building something like this?

The main components of the system are:

  1. A coherent noise function from which we can sample a 3D field (array) of density values
  2. Marching Cubes algorithm for creating surfaces at the boundaries of a chosen density value
  3. Shading the resulting geometry
  4. Loading (and unloading) the infinite map in a bunch of small chunks
  5. Balancing the loading (and unloading) over multiple frames to have a smoother fps

The project’s full source can be found here:

Let’s go through the components in a little more detail.

Noise functions

Noise functions in this context are basically functions that take in coordinates in some number of dimensions, and give out a single floating point value. So a 3 dimensional noise function could look something like this:

function noise(x, y, z) {
    // ...compute a value from the coordinates...
    return value;
}

Now, if the return values of our noise function were completely random (say, results of a call to Math.random) and we tried to use those values as a base for our terrain generation, we would end up with something resembling a post-modern art piece, not rolling hills or mountains.

What we need is a coherent noise function. With coherent noise functions, there’s a promise that when the input values vary by a little bit, the output values vary only by a little bit as well. This means there are gradual transitions between spaces where there are solid things and where there are not.
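To make “coherent” concrete, here’s a minimal 1D value-noise sketch (purely illustrative; the function names and constants are my own, and the real thing uses a proper 3D noise function): fixed pseudo-random values at integer positions, smoothly interpolated in between, so nearby inputs give nearby outputs.

```javascript
// Deterministic pseudo-random value in [0, 1) for an integer position.
// The sine-hash is a common illustrative trick, not production-grade noise.
function hash(i) {
    var x = Math.sin(i * 127.1) * 43758.5453;
    return x - Math.floor(x);
}

// Eases the transition between lattice points so slopes are smooth
function smoothstep(t) {
    return t * t * (3 - 2 * t);
}

function valueNoise1D(x) {
    var i = Math.floor(x);
    var t = smoothstep(x - i);
    // interpolate between the two surrounding lattice values
    return hash(i) * (1 - t) + hash(i + 1) * t;
}

// Nearby inputs produce nearby outputs:
console.log(valueNoise1D(2.50));
console.log(valueNoise1D(2.51)); // close to the value above
```

The same idea extends to 3D by hashing the eight corners of a lattice cell and interpolating along each axis.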

Surfaces from noise

At least for now, I use the Marching Cubes algorithm for creating surfaces out of noise. The algorithm has been explained far better than I could in so many places already that I’ll just encourage you to check the Wikipedia page and Paul Bourke’s Polygonising a scalar field to start learning more about it.

If you think there’s still use for yet another explanation of it, please leave a comment or ping me on Twitter, and I might do a separate post about it.

Also feel free to copy and/or use my JS port of Paul Bourke’s code.

Shading the surfaces

In order for the geometry we’ve generated to look 3D, it needs shading. This means we need surface normals. There are two different approaches I know of for finding them.

  1. A gradient for a point in a field is the direction where the field’s values change the most from the current position. When surfaces are generated to go along a density level boundary, they are perpendicular to the gradients. This means the gradients are actually the same thing as normals. We can approximate a gradient for a point by sampling the density values surrounding it.


Picture from the Wikipedia article about gradients. The blue arrows show the direction of the color gradient.

Here’s some sample code:

function getNormalForVertex(x, y, z, sampler, outObj) {
    outObj.x = sampler(x + dataStep.x, y, z) - sampler(x - dataStep.x, y, z);
    outObj.y = sampler(x, y + dataStep.y, z) - sampler(x, y - dataStep.y, z);
    outObj.z = sampler(x, y, z + dataStep.z) - sampler(x, y, z - dataStep.z);
}

dataStep.x/y/z needs to be a relatively small number. If it’s too big, the normal (gradient) approximation is too coarse and is wrong for small surface details. If it’s too small, it takes into account changes that are too small, and big surfaces can get weird normals.

I was never able to get good looking normals for both big and small features this way so I abandoned it and tried another way:

  2. For each triangle, get the triangle’s normal by taking the cross product of two vectors formed from the triangle’s vertices. These normals alone provide some basic shading; the method is often called “face normals”. Since using only face normals would give sudden changes of color at triangle borders, we want to do a bit better than that. So for each vertex, we combine it with other vertices that have the same position and compute an average normal over all the triangles it belongs to, where each triangle’s normal is weighted by the triangle’s size.
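A sketch of that second approach (my own illustrative code, not the project’s exact implementation): note that leaving each cross product unnormalized weights it by triangle area automatically, since the cross product’s length is twice the triangle’s area.

```javascript
function sub(a, b) {
    return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}

function cross(a, b) {
    return {
        x: a.y * b.z - a.z * b.y,
        y: a.z * b.x - a.x * b.z,
        z: a.x * b.y - a.y * b.x
    };
}

// triangles: array of [v0, v1, v2] with vertices as {x, y, z}.
// Returns a map from vertex position to its averaged unit normal.
function computeVertexNormals(triangles) {
    var acc = {}; // accumulated (area-weighted) normal per unique position

    triangles.forEach(function(tri) {
        // unnormalized face normal; its magnitude is 2 * triangle area
        var faceNormal = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]));
        tri.forEach(function(v) {
            var key = v.x + ',' + v.y + ',' + v.z; // merge identical positions
            var n = acc[key] || (acc[key] = { x: 0, y: 0, z: 0 });
            n.x += faceNormal.x;
            n.y += faceNormal.y;
            n.z += faceNormal.z;
        });
    });

    // normalize the accumulated sums into unit normals
    Object.keys(acc).forEach(function(key) {
        var n = acc[key];
        var len = Math.sqrt(n.x * n.x + n.y * n.y + n.z * n.z) || 1;
        n.x /= len; n.y /= len; n.z /= len;
    });
    return acc;
}
```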

This gets us some pretty decent looking shading. IIRC it was something like 10% faster, too, but nothing drastic.

One bug or problem I haven’t yet had the time to solve with shading is that the vertices are only combined within chunks (discussed below), so at chunk borders the shading still isn’t smooth.

Splitting the world into smaller pieces for loading them separately

These pieces are often called “chunks” in infinite map generators. By loading the world in chunks we don’t have to load everything at once, and can do tricks like prioritizing the loading of chunks closer to the player over farther away ones. Chunks also enable us to even out the CPU stress caused by loading new chunks over as many frames as we’d like.

My approach for the data structure to save these chunks to was to write a 3D circular buffer. This means old and far away chunks eventually get overwritten. You can find my implementation here and some mocha tests for it here.

Further, my circular buffer, or wrapping array as I call it, uses “zones” which tell how many chunks away a given bunch of chunks is from the player, from 0 to WorldChunkCountPerAxis / 2 (since the player is set to be at the center).


An ugly 1D representation of the zone system
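In code, the zone of a chunk could be sketched like this (an assumption on my part, not the project’s actual implementation): with the player at the center cell, a chunk’s zone is its Chebyshev distance from the player’s chunk, i.e. the largest per-axis offset.

```javascript
// Zone of a chunk relative to the player's chunk (Chebyshev distance).
// Coordinates are chunk indices, e.g. {x: 3, y: 0, z: -1}.
function zoneOf(chunk, playerChunk) {
    return Math.max(
        Math.abs(chunk.x - playerChunk.x),
        Math.abs(chunk.y - playerChunk.y),
        Math.abs(chunk.z - playerChunk.z)
    );
}

// With a 7x7x7 world the player sits at zone 0 and the outermost
// chunks land in zone 3 (= Math.ceil(7 / 2) - 1).
```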

The zones are implemented as “zone functions”, which can be set to call a specific function for every chunk that enters a given zone, every time the player’s chunk position changes.

At the moment it also uses different functions for the two ways a chunk can enter a zone: one for when the chunk enters because the player comes closer to it, and another for when the chunk enters by “falling behind”, i.e. moving farther away from the player. This separation could be used to unload a chunk or decrease LOD when the zone is entered backwards, and to load or increase LOD when it’s entered forwards. I’m probably removing this feature soon, though, and will instead use the same function for all chunks in the zone and handle all unloading etc. when an array index gets overwritten.

Here’s an example of setting up a zone function:

var size = 7; // world chunk count per axis
var zoneCount = Math.ceil(size / 2); // how many zones in the world
var wrappingArray = PROCED.wrappingArray(size); // initialize wrapping array

wrappingArray.setZoneFunction(zoneCount - 1, function (arrayCell, worldCoords) {
    workQueue.push({ // push a "draw" order to the work queue (queue variable name assumed)
        type: 'draw',
        arrayCell: arrayCell,
        worldCoords: worldCoords
    }, 1); //priority 1 for all chunk drawings for now
}, function() {});

The first argument of setZoneFunction is the zone (represented by a single integer) we want the following functions to be used for. While the forward/backward separation is still there, the forwards function is the second argument and the backwards function is the third (an empty function in this case).

Now, every time one of the six direction functions of the wrapping array (x/y/z axis, +/- direction) is called (e.g. triggered by the player moving over a chunk border in one of the directions), it calls the forwards function for every chunk that entered the zone at the edge of the world (which is zone zoneCount - 1).

In my current implementation it pushes to a work queue a “draw” order for each of these chunks that newly entered the zone, where the chunk to be drawn should be placed in the cell arrayCell in our wrapped array (which is a 3D array flattened to 1D, see here), and the chunk drawn there should have the non-wrapped aka “world coordinates” specified by worldCoords.
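For reference, here’s how a 3D coordinate typically maps into a flattened 1D backing array (an illustrative sketch; the project’s exact axis order and wrapping may differ):

```javascript
// Map a 3D cell coordinate into a flat array of size * size * size cells
function flattenIndex(x, y, z, size) {
    return x + y * size + z * size * size;
}

// ...and recover the 3D coordinate from a flat index
function unflattenIndex(i, size) {
    return {
        x: i % size,
        y: Math.floor(i / size) % size,
        z: Math.floor(i / (size * size))
    };
}
```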

Balancing loading

I use Async.js‘s priority queue implementation to queue the processing of chunks. I use something like 1-2 workers for the queue, which means it takes 1 or 2 chunks from the queue per frame. When the processing orders come out of the queue, the processing is delayed once more by asking for free CPU time with requestAnimationFrame. I’m not sure it’s the best way to do it, but it at least seems to be faster than using setTimeout with a zero timeout. Please leave a comment if you know more about this 🙂

I also haven’t yet had the time to investigate multicore processing using web workers; there could definitely be some performance gains to be found there.


A lot could be improved in the project at its current state, and it doesn’t even have textures or collision yet, but I hope this helps someone in getting started with building something procedural for the web!

I might write a follow-up post on this someday if I’ve come up with major improvements.

Thanks for reading!

Home delivery is the best. Sushi is alright


Proof I’ve ordered sushi home!


When my girlfriend and I visited Turkey, we finally learned to like sushi. I’ve never had sushi as good as we had there.

But to the main topic of this post, home delivery!

I’ve cursed the fact for a long time that even though I live in the most populous area of Helsinki (Kallio / Sörnäinen), I couldn’t get practically anything other than pizza and kebab delivered. There I was, really hungry, willing to pay a bit more money than feels sensible just to get something delivered, and no one was willing to take my money. It felt like the stone age. There was one Chinese restaurant that delivered, but it always took really long, once something like two and a half hours, which makes it not really an option when you’re already hungry when ordering.

I wondered whether it was just too difficult to make the deliveries profitable, or maybe, for some weird unknown reason, restaurants weren’t okay with their food being delivered? I really don’t know.

But enter Wolt! In something like a month it went from not being able to order almost anything to being able to order everything from sushi to local hipster foods; even some real burgers were available. And I don’t even know all the other food on offer! 😀

Typing this, I ended up poking around the mobile application to see if they still deliver at 9 PM, and ordering was so easy I half-accidentally ordered a sandwich from Subway. It cost like 15 EUR, which is almost double the normal price of one 30 cm sandwich. But now I have more time for typing this. It might not be worth it every day, but at least the option is there.

This is the future! I love it.

An as-short-as-possible FizzBuzz JS one-liner

After a few iterations this is what I came up with:

Array.apply(null, {length: 100}).map((_, x) => {x++; return x % 15 ? x % 5 ? x % 3 ? x : 'Fizz' : 'Buzz' : 'FizzBuzz'})

  • Initialize the array with 0-99 on the same line we use it. map’s callback’s first argument is always undefined in this case, so it’s not needed (the variable name is an underscore, in case it doesn’t show); the second one is the element’s index.
Array.apply(null, {length: 100}).map((_, x)
  • x has to be incremented to stay true to the 1-100 FizzBuzz and for the truthiness trick below to work
  • By nesting the ternary operations like this we can skip the == 0 checks because > 0 remainders are truthy
x % 15 ? x % 5 ? x % 3 ? x : 'Fizz' : 'Buzz' : 'FizzBuzz'

It fits in a tweet!

Virtual Reality will not flop — traditional games might not work well, but you’re missing the point

My DK2


What I’m responding to with this post are articles like this:

They go on and on about people wanting to just relax on their sofa and play games without turning their heads. I find it funny that they seem to hate VR for some reason, yet only counter the easiest arguments. There are some difficult issues to be solved, mostly related to performance and resolution, but let’s come back to those later and start by getting the easy one out of the way.


The comfort battle of using a TV versus a VR headset is pretty quickly settled in favor of VR: with a headset, you can lie in your bed or on a sofa in any position you’d like, without having to keep your eyes pointed at the TV and in the same orientation as it. It’s kind of like having the TV do the moving for you, instead of you moving for the TV. To combat the nausea from having a still image taped to your face, I’m imagining an animation where, after you stop turning your head because you’ve found a good new position, the virtual screen in your watching application grows legs, walks over, and repositions itself in the direction you’re now looking.

This fact alone already enables you to ditch the awkward holy trinity of a TV, sofa, and a coffee table, and clear some space for the other nice stuff in your apartment (like more sofas). With the goggles you can set up a makeshift digital entertainment observing station anywhere you’ve placed something soft. Alternate between standing and standing on your head for all I care, this time the picture won’t be upside down half the time.

As for the weight of the goggles, they’re pretty light and well-balanced. Also luckily your head is made of bone so you can strap the headset pretty darn tight before it feels uncomfortable. This makes it feel like a part of your head instead of something hanging from your face. And if your head is already heavy, just try faceplanting into your bed once in a while.

Completely new ways to work

So far we’ve mostly talked about comfort when doing passive things like watching TV series. Let’s tackle some other applications. As someone who lives and breathes programming, that’s where I could use groundbreaking new stuff the most. So let’s try to think of ways VR could help programming.

As we know, as a programmer you usually have a bunch of browser windows and an editor open. What do you call that setup for a single project? Let’s call it a workspace. Now let’s draw inspiration from the old physical counterpart of a workspace. Let’s imagine you had a mansion in which you have a separate room for every project where tools and materials are stored and can be left where you last used them. Let’s bring that to VR.

Make the corner of your living room where you have that nice big comfy armchair next to the bar the place where you configure your project, because fighting with that custom DSL simply sucks. You can just leave the browser window, open to the page where that obscure program’s weird switches are documented, floating to the left, and the config file to the right. Configure the s*** out of that and, when you’re done, walk to your sofa. Looks like this is where you worked on the server backend and were a bit messy, plus you started to just watch Netflix at some point. Let’s throw the Netflix back next to your bed where it belongs and continue working. I could just keep going, but I think you get the point.

If you’re not a programmer, think about sculpting. Think about planning a city or a home where you can actually walk inside the model with things in their correct proportions.

We do have to step on the brakes for a moment, though; there are some technical challenges to overcome before this is a reality. The good news is: we are not far. From playing with my DK2, I think the biggest problem is that text isn’t really readable at current resolutions, which for the DK2 means 1920×1080 split between the two eyes. There already are 6″ 4K displays; it’s just a matter of time before they reach headsets. The first mobile phones with 1080p resolution reached the market sometime around 2012, and the DK2 was released in 2014.

Games and interactive entertainment

Graphical performance, especially for games is an issue. For this we have an old saying here in Finland among computer enthusiasts:


(okay it could be an international thing as well, I haven’t visited demoparties outside Finland)

For those not in the know, it’s what people in the demoscene shout randomly at demoparties because they like Amiga and they’re drunk.

What I’m getting at is that we’ve dealt with low pixel counts, slow performance, lack of bandwidth, too little memory, and other such problems for a while already. The only things it took to get past them were time and effort. Only this time around we’ve already done it once, so it’ll take less time. Also, this time we’ll have the resolution; we just won’t have the performance to put anything complex into those pixels. So we can do simple stuff while we wait for the performance to get where it needs to be; with enough style it already looks good. Minecraft was my favorite VR experience on the DK2, by the way.

By the way, AMD, for example, has made estimates of what is needed for truly immersive VR. And we’re talking about the level of immersion where you’re not sure what’s real and what’s VR anymore. The numbers don’t look that high. Unless we hit a really difficult roadblock, which I think we won’t, it’ll easily happen within our lifetimes.

One of the best write-ups of VR experiences I’ve read mentioned that the truly wow moment of the whole experience was when another person was mapped into the game and picked up a controller or something. The human-like swaying of the other player’s avatar looked disturbingly human. Somehow the author just knew it wasn’t just an animation anymore, and this came merely from someone holding an object in the air that was mapped into the game.


I’ll just finish with a mention of two things that could each use an entire post by themselves:

Adult entertainment — There are a lot of lonely people out there. And this is exactly the kind of thing where VR absolutely shines. I’ve checked out a few experiences for science and they’re promising. They’re not there yet, but they’re really, really promising already.

Arcade — Stuff a bunch of VR-goggled, position-tracked people into a labyrinth with plastic guns. Do you have any idea how many people go to amusement parks? And for these VR things, venues smaller than whole parks will suffice; you can just emulate the roller coaster.

Given the endless new possibilities VR can offer, can you really predict right now, given the unprecedented pace at which new inventions are made and put to use, that VR is still completely certain to fail this time around? That there is simply no way it could offer something promising enough as it is to prove its potential and make people push past the current technical limitations before we get to the truly groundbreaking stuff? If so, I don’t agree with your predictions.

Download, minify & resize images on a different processor core in Node.JS

We’ll explore how to use cluster (from the standard library), GraphicsMagick, and streams to efficiently process images without slowing down or blocking the Node.JS event loop.

Skip to the end if you just want a link to the full source!

Background and motivation

When I first showed the site to the world, it slowed some people’s computers to a crawl, and some really big images even crashed people’s browsers. Huh. Apparently showing a bunch of original (file) sized, user-submitted gifs on the front page isn’t a good idea. Who would’ve thought.

Scaling up the server CPU enough to run both Node.JS and the image processing smoothly on the same processor core would need a ridiculously powerful processor. Actually, I’m not sure such a thing even exists, especially once there’s enough traffic for at least one image to be in the middle of processing at all times. Besides, multicore processors are everywhere nowadays.

With at least one core dedicated entirely to processing images, the image processor core can slow down all it wants without affecting the server in practically any way. This also has the benefit of completely separating the image processor code from everything else, so replacing the worker with something that runs on a completely different machine would actually be almost trivially easy.

At the time of writing, the client-side code is still unfinished, but all the relevant parts for this blog post are done, so I thought I’d take a break from coding and write this thing out of the way. Hopefully by the time you’re reading this the front-end is ready as well and the whole feature set deployed live.

Challenges we’ll face


Node.JS handles everything in a single thread by default, and processing images is processor-heavy and time-consuming. If we process the images in the same thread Node.JS uses for communicating with clients, every image being processed will slow down how fast the server handles requests. It might even stop responding for a while. This is unacceptable if we want the website to appear snappy from the user’s point of view; you will simply lose the user if the first page load takes 3 seconds.

Thankfully, the Node.JS standard library has a module called cluster that makes setting up a worker process and IPC surprisingly easy.

File types and sizes

We don’t want to waste any more time than we absolutely have to on URLs that don’t point to images. We also cannot just download the whole file and then check its size or type. What if it’s multiple gigabytes?

Blindly trusting the content-length header of the server’s response is not a good idea either; the header could be intentionally or unintentionally wrong.

Luckily, streams come to the rescue.


So what we want to achieve in this chapter is:

  • Find out the file type as soon as possible
  • Make sure we don’t download images that are too big
    Like I said earlier, we shouldn’t trust content-length headers alone for the size information. But that doesn’t mean we can’t use them at all. I think the best usage for them is for discarding some URLs before we even start a download.

By the way, here’s the Stack Overflow answer where I got the download size handling from. I then added the file type checking.

So let’s check the headers with a HEAD request using the always useful request library. I promise we’ll get to the really interesting stuff soon.

var download = function(url, callback) {
    var stream = request({
        url: url,
        method: 'HEAD'
    }, function(err, headRes) {
        if(err) {
            return callback(err);
        }
        var size = headRes.headers['content-length'];
        if (size > maxSize) {
            console.log('Resource size exceeds limit (' + size + ')');
            return callback('image too big');
        }
        //Headers look OK; the actual download continues in the next code blocks

Note that we haven’t started saving it to a file yet so no abort or unlink is necessary at this point.

As some of you might’ve guessed, I’m using the Node.JS callback style here, where the callback’s first argument is the error argument, which contains the error when there is one, and null when no error occurred.

We’ve decided to download, what’s next?

We should start keeping count of how much we have downloaded, and try to deduce the file type.

Deducing the file type is actually pretty easy using magic numbers. We grab a bunch of file type signatures (magic numbers), for example from here, and look for them in the first few bytes of the stream. If a match is found, we make a note of the file type and continue downloading. Otherwise we quit and remove the few bytes we’ve already downloaded.

var fileTypes = {
    'png': '89504e47',
    'jpg': 'ffd8ffe0', //JFIF; other JPEG variants differ in the fourth byte
    'gif': '47494638'
};
size = 0; //declared in the previous code block

//Generate a random 10 character string
var filename = getName();
var filepath = imagesPath + filename;

//Open up a file stream for writing
var file = fs.createWriteStream(filepath);
var res = request({ url: url });
var checkType = true;
var type = '';

res.on('data', function(data) {

    //Keep track of how much we've downloaded
    size += data.length;

    if(checkType && size >= 4) {
        var hex = data.toString('hex', 0, 4);
        for(var key in fileTypes) {
            if(fileTypes.hasOwnProperty(key)) {
                if(hex.indexOf(fileTypes[key]) === 0) {
                    type = key;
                    checkType = false;
                }
            }
        }
        if(!type) {
            //If the type didn't match any of the file types we're looking for,
            //abort the download and remove the target file
            res.abort();
            fs.unlink(filepath);
            return callback('not an image');
        }
    }

    if (size > maxSize) {
        console.log('Resource stream exceeded limit (' + size + ')');
        res.abort(); // Abort the response (close and cleanup the stream)
        fs.unlink(filepath); // Delete the file we were downloading the data to

        //imageTooBig contains a path to a placeholder image for bigger images.
        //Also set shouldProcess to false, we don't want to process the placeholder
        //image later on
        return callback(null, {path: imageTooBig, shouldProcess: false});
    }
}).pipe(file); //Pipe request's stream's output to a file.

//When download has finished, call the callback.
res.on('end', function() {
    callback(null, {filename: filename, shouldProcess: true, type: type});
});

I encourage you to read the comments for better info on what each line does. If something is still unclear, feel free to ask in the comments section at the end of the article.

File downloaded, let’s process it

The minifying function is pretty straightforward. As to how I came up with it: I googled the most common ways to reduce file size for all three image types (png, gif, jpg). Most of the results were about ImageMagick, so I looked up the GraphicsMagick equivalents, since GraphicsMagick is supposed to be faster in most operations.

For gifs I decided to just grab the first frame (hence the + ‘[0]’ in the path), since I will be setting up a system where mousing over a gif starts playing the original one.

I also decided to resize the images to 500x500 px, but if you don’t want that, you can just remove the .resize(…) line from each case. By the way, the ‘>’ at the end of the resize call means it won’t resize the image if it’s already smaller than the wanted size.

var thumbnailDimensions = {
    width: 500,
    height: 500
};

var minifyImage = function(obj, callback) {
    //The downloaded original file was saved without an extension.
    //Here we save the new processed file with the extension.
    var origPath = imagesPath + obj.filename;
    var path = origPath + '.' + obj.type;
    var filename = obj.filename + '.' + obj.type;
    switch(obj.type) {
        case 'jpg':
            gm(origPath)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) { callback(err); }
                    else { callback(null, filename); }
                });
            break;
        case 'png':
            gm(origPath)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) { callback(err); }
                    else { callback(null, filename); }
                });
            break;
        case 'gif':
            //'[0]' picks only the first frame of the gif
            gm(origPath + '[0]')
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) { callback(err); }
                    else { callback(null, filename); }
                });
            break;
    }
};

The result? Even without the resize the file size usually drops by over 50% without too much decrease in quality. This could still be improved a lot, so if you happen to know more about this, please let me and others know in the comments below!

You could also use all other features of the gm library here. For example I’ve been thinking about fusing a barely noticeable and see through shape that looks like a “play” button straight into the gif frame for Or you could add your website’s watermark. Or maybe programmatically generate comics out of a bunch of images? I don’t know.

Putting it all together

Alright, for the last part we’re going to set up the functions for transmitting messages between the main thread and the worker thread. Luckily in Node.JS and cluster this is really easy. Both threads have a function for sending messages to the other, and both threads can set up a function that receives messages from the other. We send and receive normal JavaScript objects. I promised it was going to be easy!

Worker thread

Let’s introduce the final piece of code that belongs in the worker thread side of the system. Remember, this last excerpt and all the ones before it belong to their own file that is dedicated only for the worker process.

process.on('message', function(msg) {
    download(msg.url, function(err, obj) {
        if(err) {
            return console.error(err);
        }
        var resObj = {
            src: msg.url,
            type: obj.type
        };
        if(obj.shouldProcess === true) {
            minifyImage(obj, function(err, filepath) {
                if(err) {
                    return console.error(err);
                }
                resObj.thumbnail = filepath;
                process.send(resObj);
            });
        }
        else {
            //Use the downloaded file as the thumbnail in this case
            resObj.thumbnail = obj.filename;
            process.send(resObj);
        }
    });
});
  • process.on(…) sets up a new listener function for messages from the main thread.
  • process.send(object) sends messages to the main thread.

Real simple.

Main thread

The code in this section goes into the file or files that you run directly using node (or into whatever is required or otherwise included in those files).

We set up a reference to the worker process and specify which file to run as the worker process.

var cluster = require('cluster');

cluster.setupMaster({
    exec: __dirname + '/imageProcessWorker.js'
});

Next, we call cluster.fork() to spawn a new worker process. For this article we only really need one worker, but if you have more than two cores you could spawn several and set up a way to decide which worker gets each job, though that is outside the scope of this post.
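If you did spawn several workers, one simple scheme is round-robin dispatch. Here is a minimal sketch; makeDispatcher is a made-up helper, and the workers array would hold the objects returned by cluster.fork().

```javascript
//Hypothetical helper: hand each message to the next worker in turn.
//'workers' would hold the objects returned by cluster.fork().
function makeDispatcher(workers) {
    var next = 0;
    return function dispatch(msg) {
        workers[next].send(msg);
        //Wrap around to the first worker after the last one
        next = (next + 1) % workers.length;
    };
}

//Usage sketch:
//var workers = [cluster.fork(), cluster.fork()];
//var dispatch = makeDispatcher(workers);
//dispatch({ url: 'http://example.com/image.jpg' });
```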

Also, we set up a message handler that receives objects from the worker thread. As an example, in my message handler I save the file paths I receive from the worker thread to a database and inform clients about a new media delivery being ready.

var worker = cluster.fork();

worker.on('message', messageHandler);

function messageHandler(msg) {
    //Do something with the information from the worker thread.
}

And as the very last code excerpt, processUrl is the function that I call from my actual server code. worker.send(…) is then used to send an object to the worker thread for processing.

var processUrl = function(url) {
    worker.send({
        url: url
    });
};

Conclusion: A short summary of the program flow

  1. processUrl is called with a URL. processUrl sends it to the worker thread using worker.send(…)
  2. The URL is received at the worker thread at process.on(‘message’, …)
  3. Downloading is considered and possibly attempted in the download function.
     • If it fails, or we just don’t want to download it at all, we stop all processing and never notify the main thread of anything.
  4. If the download succeeded and the shouldProcess variable is set to true, the minifyImage(…) function is called.
     • If the image doesn’t need processing, we skip the rest of the steps and just send it to the main thread using process.send(…)
  5. After minifying is done we send the results to the main thread using process.send(…)
  6. Results are received in the messageHandler function, and something is hopefully done with them!

The End

Here are the promised links to the full source:

That should be it. Hopefully it’s of some use to someone. Thanks for reading!

My first demo! My first rant! PlayCanvas is really good!

Okay, it’s a demo only in the sense that it’s not interactive. Not very long either. Perhaps a little childish. Whatever, check it out here and its source here. It should run on any modern browser, although on mobile you’ll want to rotate to landscape.

PlayCanvas is cool. It’s like Unity3D except the editor and the end results live in the browser. To my great surprise it’s really good, too. Importing assets is easy, scripting is easy, the scripting documentation is excellent, stuff like creating custom events and programmatically changing materials works logically. A couple of this.entity.model.model.meshInstances style object chains looked a bit awkward, but it was okay with the good docs.

The weirdest thing I encountered was finding a rather new example (from 2014; that’s new, right?) where a function the example used had been deprecated so long ago that it didn’t give a warning anymore and just failed. Googling the function’s name yielded another function which had also already been deprecated, though at least this one still had a deprecation warning that mentioned the new function’s name. I’d never encountered a double deprecation before. It even sounds really dirty.

Anyway, the basic workflow I used was to upload my awesome self-made .fbx model to the editor, and on the code side I set up a Github repository and linked/synced it with the project. A cool thing about the editor is that it can run the project “locally”, which means you can serve your scripts from localhost and edit them locally, and still use the web-based editor. Neat. And when the project is finished you can just download a .zip with all the project files ready to be served somewhere with a single unzip. There was some sort of publish option, too, which I didn’t try yet; I’m assuming it needs less manual file moving.

PlayCanvas finally made me install Blender. Before today I always stuck to a weird idea(l) where I tried to keep to just coding. Half of it was about hoping someone would someday start doing the modelling part of my (our?) graphics and games projects, the other half was thinking I’d never be good at anything that has something to do with drawing. Now that I’m learning to draw, too, there’s really no excuse…

Blender’s documentation seems to suck a bit. A couple of times I ended up at a page where instead of the doc I saw a number titled “file count” or something. I also ran into a useful “hotkey reference”, which was a small info box with the text “Hotkey: see below”. Below it was a massive wall of text. I tried ctrl+f:ing ‘hotkey’ and there was one result, which was:

note however that most Blender hotkeys you know in Edit mode do not exist for texts!

t-thanks for that. I never found the hotkey. All the other doc pages I found were mostly just huge lists of hotkeys with really weird descriptions for them.

The only real way to find info was to watch tutorial videos. And who the hell has time for watching tutorial videos? You can’t really skim them, which sucks. No one remembers everything on the first watch/listen/read, which is why you need to be able to refer back quickly, preferably through a bookmark or something (okay, no one uses bookmarks anymore; maybe a history or another google search). Luckily the videos were rather good, so it wasn’t as big a pain as it could have been.

Blender’s UI definitely sucks. There’s often little to no indication of what mode you’re currently in or what it’s called, so you could at least google how to get back to it when you forget. At some point I lost the wireframe mode and never found it again. Sometimes things like the overlay that draws a grid and coordinate axes disappear, and the recommended and easiest way to get them back was to open the default project, and then reopen your project with a checkbox unchecked so that it doesn’t load the UI settings from your project. Ugh.

But even with the somewhat crappy UI Blender did leave me with the impression that it’s really powerful when you do know the shortcuts. I heard some schools here in Finland use something like a year just for teaching students how to use it. Makes sense.

Next time I play with PlayCanvas I think I’ll try out something procedural.