Monthly Archives: August 2015

Virtual Reality will not flop — traditional games might not work well, but you’re missing the point

My DK2

What I’m responding to with this post are articles like this:

They go on and on about people wanting to just relax on their sofa and play games without turning their heads. I find it funny that they seem to hate VR for some reason, yet bring only the easiest arguments to counter. There are some difficult issues to be solved, mostly related to performance and resolution, but let's come back to those later and start by getting the easy one out of the way.


The comfort battle between a TV and a VR headset is quickly settled in favor of VR: with a headset, you can lie in your bed or on a sofa in any position you like, without having to keep your eyes pointed at the TV and in the same orientation as the screen. It's like having the TV do the moving for you instead of you moving for the TV. To combat the nausea of having a still image taped to your face, I'm imagining an animation where, after you stop turning your head and settle into a new position, the virtual screen in your watching application grows legs, walks over, and repositions itself in the direction you're now looking.

This fact alone already enables you to ditch the awkward holy trinity of a TV, sofa, and a coffee table, and clear some space for the other nice stuff in your apartment (like more sofas). With the goggles you can set up a makeshift digital entertainment observing station anywhere you’ve placed something soft. Alternate between standing and standing on your head for all I care, this time the picture won’t be upside down half the time.

As for the weight of the goggles, they’re pretty light and well-balanced. Also luckily your head is made of bone so you can strap the headset pretty darn tight before it feels uncomfortable. This makes it feel like a part of your head instead of something hanging from your face. And if your head is already heavy, just try faceplanting into your bed once in a while.

Completely new ways to work

So far we've mostly talked about comfort during passive activities like watching TV series. Let's tackle some other applications. As someone who lives and breathes programming, that's where I could use groundbreaking new stuff the most. So let's try to think of ways VR could help programming.

As we know, as a programmer you usually have a bunch of browser windows and an editor open. What do you call that setup for a single project? Let’s call it a workspace. Now let’s draw inspiration from the old physical counterpart of a workspace. Let’s imagine you had a mansion in which you have a separate room for every project where tools and materials are stored and can be left where you last used them. Let’s bring that to VR.

Make the corner of your living room where you have that nice big comfy armchair next to the bar the place where you configure your project, because fighting with that custom DSL simply sucks. You can just leave the browser window floating to the left, open at the page where that obscure program's weird switches are documented, and the config file to the right. Configure the s*** out of that and when you're done, walk to your sofa. Looks like this is where you worked on the server backend and were a bit messy, plus you started to just watch Netflix at some point. Let's throw the Netflix back next to your bed where it belongs and continue working. I could keep going, but I think you get the point.

If you’re not a programmer, think about sculpting. Think about planning a city or a home where you can actually walk inside the model with things in their correct proportions.

We do have to step on the brakes for a while though; there are some technical challenges to overcome before this is a reality. The good news is: we are not far. From playing with my DK2, I think the biggest problem is that text isn't really readable at current resolutions, which for the DK2 meant 1920×1080 split between the two eyes. There already are 6″ 4k displays; it's just a matter of time before they reach headsets. For perspective, the first mobile phones with 1080p resolution reached the market sometime around 2012, and the DK2 was released in 2014.

Games and interactive entertainment

Graphical performance, especially for games, is an issue. For this we have an old saying here in Finland among computer enthusiasts:


(okay it could be an international thing as well, I haven’t visited demoparties outside Finland)

For those not in the know, it’s what people in the demoscene shout randomly at demoparties because they like Amiga and they’re drunk.

What I'm getting at is that we've dealt with low pixel counts, slow performance, lack of bandwidth, too little memory, and similar problems for a while already. The only thing it took to get past them was time and effort. Only this time around we've already done it once, so it'll take less time. Also, this time we'll have the resolution; we just won't have the performance to put anything complex into those pixels. So we can do simple stuff while we wait for the performance to get where it needs to be; it already looks good when you style it enough. Minecraft was my favorite VR experience on the DK2, by the way.

By the way, AMD, for example, has made estimates of what is needed for truly immersive VR, and we're talking about the level of immersion where you're no longer sure what's real and what's VR. The numbers don't look that high. Unless we hit a really difficult roadblock, which I think we won't, it'll easily happen within our lifetimes.

One of the best write-ups of VR experiences I've read mentioned that the truly wow moment of the whole experience came when another person was mapped into your game and picked up a controller or some other object. The human-like swaying of the other player's avatar looked disturbingly human. Somehow the author just knew it wasn't merely an animation anymore, and this came purely from the fact that someone held an object in the air that was mapped into the game.


I'll finish off with a mention of two things that could each use an entire post by themselves:

Adult entertainment — There are a lot of lonely people out there, and this is exactly the kind of thing where VR absolutely shines. I've checked out a few experiences for science. They're not there yet, but they're really, really promising already.

Arcade — Stuff a bunch of VR-goggled, position-tracked people into a labyrinth with plastic guns. Do you have any idea how many people go to amusement parks? And for these VR experiences, far smaller venues than whole parks will suffice; you can just emulate the roller coaster.

Given the endless new possibilities VR can offer, and the unprecedented pace at which new inventions are made and put to use, can you really predict right now that VR is completely certain to fail this time around? That there is simply no way it could offer something promising enough as it is to prove its potential and make people push past the current technical limitations before we get to the truly groundbreaking stuff? If so, I don't agree with your predictions.

Download, minify & resize images on a different processor core in Node.JS

We'll explore how to use cluster (from the standard library), GraphicsMagick, and streams to efficiently process images without slowing down or blocking the Node.JS event loop.

Skip to the end if you just want a link to the full source!

Background and motivation

When I first showed the site to the world, it slowed some people's computers to a crawl, and some really big images even crashed people's browsers. Huh. Apparently showing a bunch of original-sized, user-submitted gifs on the front page isn't a good idea. Who would've thought.

Scaling up the server CPU enough to run both Node.JS and the image processing smoothly on the same processor core would need a ridiculously powerful processor. Actually, I'm not sure such a thing even exists, especially once there's enough traffic that at least one image is in the middle of processing at any given time. Besides, multicore processors are everywhere nowadays.

With at least one core dedicated entirely to processing images, the image processor core can slow down all it wants without affecting the server in practically any way. This also has the benefit of completely separating the image processor code from everything else, so replacing the worker with something that runs on a completely different machine altogether would be almost trivial.

At the time of writing, the client-side code is still unfinished, but all the relevant parts for this blog post are done, so I thought I’d take a break from coding and write this thing out of the way. Hopefully by the time you’re reading this the front-end is ready as well and the whole feature set deployed live.

Challenges we’ll face


Node.JS handles everything in a single thread by default, and processing images is processor-heavy and time-consuming. If we process the images in the same thread that Node.JS uses for communicating with clients, every image being processed will slow down how fast the server handles requests; it might even stop responding for a while. This is unacceptable if we want the website to appear snappy from the user's point of view. You'll simply lose the user if the first page load takes 3 seconds.
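To see why this matters, here's a tiny hypothetical illustration (not from the original project): we simulate "image processing" with a synchronous busy loop and measure how long the event loop is tied up. While the loop spins, no timer, I/O callback, or incoming request can be handled.

```javascript
// Hypothetical illustration: a synchronous busy loop standing in for
// CPU-heavy image processing. Nothing else can run on the event loop
// until it finishes.
function busyWork(ms) {
    var end = Date.now() + ms;
    while (Date.now() < end) {
        // burn CPU; no callbacks can fire during this loop
    }
}

var start = Date.now();
busyWork(200);
var elapsed = Date.now() - start;
// Any request that arrived during those ~200 ms had to wait at least
// that long before the server could even look at it.
```

Multiply that by several images in flight at once and the server becomes unusable, which is exactly why the processing gets its own core below.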

Thankfully, the Node.JS standard library has a module called cluster that makes setting up a worker thread and IPC surprisingly easy.

File types and sizes

We don't want to waste any more time than we absolutely have to on URLs that don't point to images. We also can't just download the whole file and then check its size or type. What if it's multiple gigabytes?

Blindly trusting the content-length header in the server's response is not a good idea either; the header could be intentionally or unintentionally wrong.

Luckily, streams come to the rescue.


So what we want to achieve in this chapter is:

  • Find out the file type as soon as possible
  • Make sure we don't download images that are too big

Like I said earlier, we shouldn't trust the content-length headers alone for size information. But that doesn't mean we can't use them at all. I think their best use is discarding some URLs before we even start a download.

By the way, here's the Stack Overflow answer I got the download size handling from. I then added the file type checking.

So let's check the headers with a HEAD request using the always-useful request library. I promise we'll get to the really interesting stuff soon.

var download = function(url, callback) {
    var stream = request({
        url: url,
        method: 'HEAD'
    }, function(err, headRes) {
        if(err) {
            return callback(err);
        }

        var size = headRes.headers['content-length'];
        if (size > maxSize) {
            console.log('Resource size exceeds limit (' + size + ')');
            return callback('image too big');
        }

        //The headers look fine; the actual download continues in the next excerpt
Note that we haven’t started saving it to a file yet so no abort or unlink is necessary at this point.

As some of you might've guessed, I'm using the Node.JS callback style here, where the callback's first argument is the error: it contains the error when there is one, and null when no error occurred.
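As a toy illustration of that convention (a hypothetical helper, not part of the project's code), here's the shape every callback in this post follows: exactly one of the error and the result is set.

```javascript
// Hypothetical example of the error-first callback convention.
// On failure the callback gets an Error as its first argument;
// on success the first argument is null and the result follows.
function half(n, callback) {
    if (n % 2 !== 0) {
        return callback(new Error(n + ' is odd, cannot halve evenly'));
    }
    callback(null, n / 2);
}

half(10, function(err, result) {
    // err is null here, result is 5
});

half(3, function(err, result) {
    // err is an Error here, result is undefined
});
```

The `return callback(err)` idiom you see in the download code above is just a compact way of invoking the callback and bailing out of the function in one statement.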

We’ve decided to download, what’s next?

We should start keeping count of how much we have downloaded, and try to deduce the file type.

Deducing the file type is actually pretty easy using magic numbers. We grab a bunch of file type signatures (from, for example, here) and look for those magic numbers in the first few bytes of the stream. If a match is found, we make a note of the file type and continue downloading. Otherwise we quit and remove the few bytes we've already downloaded.

var fileTypes = {
    'png': '89504e47',
    'jpg': 'ffd8ffe0',
    'gif': '47494638'
};

size = 0; //declared in the previous code block

//Generate a random 10 character string
var filename = getName();
var filepath = imagesPath + filename;

//Open up a file stream for writing
var file = fs.createWriteStream(filepath);
var res = request({ url: url });
var checkType = true;
var type = '';

res.on('data', function(data) {

    //Keep track of how much we've downloaded
    size += data.length;

    if(checkType && size >= 4) {
        var hex = data.toString('hex', 0, 4);
        for(var key in fileTypes) {
            if(fileTypes.hasOwnProperty(key)) {
                if(hex.indexOf(fileTypes[key]) === 0) {
                    type = key;
                    checkType = false;
                }
            }
        }
        if(!type) {
            //If the type didn't match any of the file types we're looking for,
            //abort the download and remove the target file
            res.abort();
            fs.unlink(filepath);
            return callback('not an image');
        }
    }

    if (size > maxSize) {
        console.log('Resource stream exceeded limit (' + size + ')');
        res.abort(); // Abort the response (close and cleanup the stream)
        fs.unlink(filepath); // Delete the file we were downloading the data to

        //imageTooBig contains a path to a placeholder image for bigger images.
        //Also set shouldProcess to false, we don't want to process the placeholder
        //image later on
        return callback(null, {path: imageTooBig, shouldProcess: false});
    }
}).pipe(file); //Pipe request's stream's output to a file.

//When the download has finished, call the callback.
res.on('end', function() {
    callback(null, {filename: filename, shouldProcess: true, type: type});
});

I encourage you to read the comments for better info on what each line does. If something is still unclear, feel free to ask in the comments section at the end of the article.
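As a sanity check, the type detection above can also be factored into a small pure function and tried against known signatures. This is a hypothetical refactoring for illustration, not code from the project:

```javascript
// Hypothetical pure helper mirroring the magic-number check above.
var fileTypes = {
    'png': '89504e47',
    'jpg': 'ffd8ffe0',
    'gif': '47494638'
};

function detectType(buf) {
    //Compare the first four bytes of the buffer against each signature
    var hex = buf.toString('hex', 0, 4);
    for (var key in fileTypes) {
        if (fileTypes.hasOwnProperty(key) && hex.indexOf(fileTypes[key]) === 0) {
            return key;
        }
    }
    //Empty string means "not one of the image types we accept"
    return '';
}

detectType(Buffer.from([0x47, 0x49, 0x46, 0x38])); // 'gif' (the bytes "GIF8")
```

Pulling the check out like this makes it trivial to unit test without any network traffic, which is handy given how fiddly the streaming code around it is.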

File downloaded, let’s process it

The minifying function is pretty straightforward. As to how I came up with it, I googled the most common ways to reduce file size for all three image types (png, gif, jpg). Most of the results were about ImageMagick, so I looked up the GraphicsMagick equivalents, since it's supposed to be faster in most operations.

For gifs I decided to just grab the first frame (hence the + '[0]' in the path), since I will be setting up a system where hovering over a gif starts playing the original one.

I also decided to resize the images to 500×500 px, but if you don't want that, you can just remove the .resize(…) line from each case. By the way, the '>' at the end of the resize line means it won't resize the image if it's already smaller than the wanted size.

var thumbnailDimensions = {
    width: 500,
    height: 500
};

var minifyImage = function(obj, callback) {
    //The downloaded original file was saved without an extension.
    //Here we save the new processed file with the extension.
    var origPath = imagesPath + obj.filename;
    var path = origPath + '.' + obj.type;
    var filename = obj.filename + '.' + obj.type;
    switch(obj.type) {
        case 'jpg':
            gm(origPath)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) {
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
        case 'png':
            gm(origPath)
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) {
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
        case 'gif':
            //Only grab the first frame of the gif
            gm(origPath + '[0]')
                .resize(thumbnailDimensions.width, thumbnailDimensions.height + '>')
                .write(path, function(err) {
                    if(err) {
                        console.log(err);
                    }
                    else {
                        callback(filename);
                    }
                });
            break;
    }
};

The result? Even without the resize the file size usually drops by over 50% without too much decrease in quality. This could still be improved a lot, so if you happen to know more about this, please let me and others know in the comments below!

You could also use all the other features of the gm library here. For example, I've been thinking about fusing a barely noticeable, see-through shape that looks like a "play" button straight into the gif frame. Or you could add your website's watermark. Or maybe programmatically generate comics out of a bunch of images? I don't know.

Putting it all together

Alright, for the last part we're going to set up the functions for transmitting messages between the main thread and the worker thread. Luckily, with Node.JS and cluster this is really easy. Both threads have a function for sending messages to the other, and both can set up a function that receives messages from the other. We send and receive plain JavaScript objects. I promised it was going to be easy!

Worker thread

Let's introduce the final piece of code that belongs on the worker-thread side of the system. Remember, this last excerpt and all the ones before it go in their own file, dedicated solely to the worker process.

process.on('message', function(msg) {
    download(msg.url, function(err, obj) {
        if(err) {
            //Stop here; the main thread is never notified of failed downloads
            return;
        }
        var resObj = {
            src: msg.url,
            type: obj.type
        };
        if(obj.shouldProcess === true) {
            minifyImage(obj, function(filepath) {
                resObj.thumbnail = filepath;
                process.send(resObj);
            });
        }
        else {
            //Use the downloaded file as the thumbnail in this case
            resObj.thumbnail = obj.filename;
            process.send(resObj);
        }
    });
});

  • process.on(…) sets up a new listener function for messages from the main thread.
  • process.send(object) sends messages to the main thread.

Real simple.

Main thread

The code in this section goes into the file or files that you run directly using node (or files required or otherwise included by those).

We set up a reference to the cluster module and specify which file to run as the worker process.

var cluster = require('cluster');

cluster.setupMaster({
    exec: __dirname + '/imageProcessWorker.js'
});

Next, we call cluster.fork() to spawn a new worker process. For this article we only really need one worker, but if you have more than two cores you could spawn several and set up a way to decide which worker gets the next job, though that is outside the scope of this post.
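If you did want several workers, the dispatch decision can be as simple as a round-robin counter. Here's a minimal hypothetical sketch of just the selection logic (not from the original post); the workers themselves would still be created with cluster.fork() as shown below, and a URL would go to workers[pick()].send({url: url}).

```javascript
// Hypothetical round-robin picker: returns worker indices 0..n-1 in a cycle.
function makeRoundRobin(n) {
    var next = 0;
    return function() {
        var current = next;
        next = (next + 1) % n;
        return current;
    };
}

var pick = makeRoundRobin(3);
pick(); // 0
pick(); // 1
pick(); // 2
pick(); // 0 again
```

Round-robin ignores how busy each worker actually is; a fancier scheme could track outstanding jobs per worker, but for roughly equal-sized jobs the simple cycle is usually enough.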

Also, we set up a message handler that receives objects from the worker thread. As an example, in my message handler I save the file paths received from the worker thread to a database and inform clients that a new media delivery is ready.

var worker = cluster.fork();

worker.on('message', messageHandler);

function messageHandler(msg) {
    //Do something with the information from the worker thread.
}

And as the very last code excerpt, processUrl is the function I call from my actual server code. worker.send(…) is then used to send an object to the worker thread for processing.

var processUrl = function(url) {
    worker.send({
        url: url
    });
};

Conclusion: A short summary of the program flow

  1. processUrl is called with a URL, which it sends to the worker thread using worker.send(…)
  2. The URL is received in the worker thread at process.on('message', …)
  3. Downloading is considered and possibly attempted in the download function
  • If it fails, or we just don't want to download at all, we stop all processing and never notify the main thread of anything
  4. If the download succeeded and the shouldProcess variable is set to true, the minifyImage(…) function is called
  • If the image doesn't need processing, we skip the rest of the steps and just send it to the main thread using process.send(…)
  5. After minifying is done, we send the results to the main thread using process.send(…)
  6. The results are received in the messageHandler function, and something is hopefully done with them!

The End

Here are the promised links to the full source:

That should be it. Hopefully it’s of some use to someone. Thanks for reading!

My first demo! My first rant! PlayCanvas is really good!

Okay, it's a demo only in the sense that it's not interactive. Not very long either. Perhaps a little childish. Whatever, check it out here and its source here. It should run on any modern browser, although on mobile you'll want to rotate to landscape.

PlayCanvas is cool. It’s like Unity3D except the editor and the end results live in the browser. To my great surprise it’s really good, too. Importing assets is easy, scripting is easy, the scripting documentation is excellent, stuff like creating custom events and programmatically changing materials works logically. A couple of this.entity.model.model.meshInstances style object chains looked a bit awkward, but it was okay with the good docs.

The weirdest thing I encountered was finding a rather new example (from 2014; that's new, right?) where a function it used had been deprecated so long ago that it no longer gave a warning and just failed. Googling the function's name yielded another function which had also already been deprecated, though at least that one still had a deprecation warning mentioning the new function's name. I'd never encountered a double deprecation before. It even sounds really dirty.

Anyway, the basic workflow I used was to upload my awesome self-made .fbx model to the editor; on the code side I set up a GitHub repository and linked/synced it with the project. A cool thing about the editor is that it can run the project "locally", which means you can serve your scripts from localhost and edit them locally while still using the web-based editor. Neat. And when the project is finished you can just download a .zip with all the project files, ready to be served somewhere after a single unzip. There was a publish option too, which I haven't tried yet, and which I'm assuming needs less manual file moving.

PlayCanvas finally made me install Blender. Before today I always stuck to a weird idea(l) where I tried to keep to just coding. Half of it was hoping someone would someday start doing the modelling part of my (our?) graphics and games projects; the other half was thinking I'll never be good at anything that has to do with drawing. Now that I'm learning to draw, too, there's really no excuse…

Blender's documentation seems to suck a bit. A couple of times I ended up at a page where, instead of the doc, I saw a number titled "file count" or something. I also met a useful "hotkey reference", which was a small info box with the text "Hotkey: see below". Below it was a massive wall of text. I tried ctrl+f:ing 'hotkey' and there was one result, which was:

note however that most Blender hotkeys you know in Edit mode do not exist for texts!

t-thanks for that. I never found the hotkey. All the other doc pages I found were mostly just huge lists of hotkeys with really weird descriptions for them.

The only real way to find info was to watch tutorial videos. And who the hell has time for watching tutorial videos? You can’t really skim them, that sucks. No one remembers everything on the first watch/listen/read which is why you need to be able to refer back quickly, preferably through a bookmark or something (okay no one uses bookmarks anymore, maybe a history or another google search). Luckily the videos were rather good so it wasn’t as big a pain as it could have been.

Blender's UI definitely sucks. There's often little to no indication of what mode you're currently in or what it's called, so you could at least google how to get back to it when you forget. I at some point lost the wireframe mode and never found it again. Sometimes things like the overlay that draws a grid and coordinate axes disappear, and the recommended and easiest way to get it back was to open the default project and then reopen your project with a checkbox unchecked so it doesn't load the UI settings from your project. Ugh.

But even with the somewhat crappy UI Blender did leave me with the impression that it’s really powerful when you do know the shortcuts. I heard some schools here in Finland use something like a year just for teaching students how to use it. Makes sense.

Next time I play with PlayCanvas I think I’ll try out something procedural.

How computer experts (don’t) play music from their computers

It’s 9AM and you’ve just woken up and drank some possibly expired pomegranate juice because it’s the only drink you have. And you’re so thirsty you don’t even really taste it so it’s fine. Your head hurts. Hangover.

So first you slouch to the closest shop for frozen pizza, tons of orange juice, and sprite. At the till, the familiar clerk chuckles a bit, probably at the combination of what you’re buying, the fact that paying seems challenging for you today, and your hangovery face.

You get home, you’d like some mellow music so you boot up your desktop. The highlight of your day so far becomes nailing on the first try the difficult task of selecting Windows on the boot menu before it autoboots to your broken Linux installation. It’s such a great victory you decide to share it with someone.

While you type, Windows reboots the computer because of updates, and the autoboot drops you into the broken Linux after all. You sigh, press the power button, try again. The computer now freezes at the BIOS splash screen, and continues to do so despite multiple reboot attempts. It did serve without a hiccup for over 6 years, so that's like over 2000 days? But today it has had enough.

No worries, you have a laptop, you’ll use that. Right, so you only have Arch Linux on it because: reasons. You plug the USB cord in, hit play and… The sound is coming from the laptop’s speakers. And you couldn’t think of anything better to play than Wonderwall since someone sang it last night and it’s playing in your head. Great.

Somehow you can now taste the pomegranate juice you didn’t taste when you drank it an hour earlier.

Changing the audio output device is surprisingly difficult when you've installed every audio driver you could find, because you had no idea which one the buggy, obscure program you found earlier could use for MIDI playback. So you don't know which one is currently in use, and thus where the settings are. You feel, possibly are, stupid.

You decide today's (ad)ventures are asdgkdjfga enough to make them your second blog post, so you write that and head back to bed, hoping that beginning your day in the evening will work out better.

mkdir blog


After realizing I want to write about more things than just past programming projects, it’s time to try out something a bit more free-form.

This time I’m not going to artificially limit the topics, and I’ll just make this a place for me to pour out thoughts about everything I find interesting into something that is easier to read than the random monologues in IRC I’m used to doing.

About me

At the time of writing, I'm a 23-year-old computer science student from Finland. Fall 2015 will mark the beginning of my fourth year of studies at the University of Helsinki.

Those studies have proceeded rather slowly; I'm somewhere around halfway to a B.Sc. The reason for the slow progress is probably that I've spent more time on my own programming projects than on university courses. I'm hoping to change the ratio a bit at some point, but so far it has worked out rather nicely for me. For example, I'm more than happy with the jobs I've landed, the latest of which is…

A job at Automattic

The company behind WordPress, incidentally the very software I'm using to write this blog. Last week was my first week as an Automattician, so I'm currently in the middle of the 3-week support rotation everyone at the company does at the beginning. So right now you have the (probably) very rare chance to get your support requests answered by yours truly.

To condense the job into four points:

  1. 100% remote
  2. Working at the time of day that works for me
  3. Open source
  4. Awesome people

What’s next?

Just this evening I realized I want a website where I can enter the name of a city and get a map view, or at least a list of some sort, of internet cafés ranked and filtered by their suitability for working. I'd sometimes like to find, for example, a place with wall outlets available. And decent coffee wouldn't hurt. Someone else might like to compare coffee prices and find a place with good WiFi. I couldn't find a website that does this, so I'm thinking maybe I should make one. We'll see.

From the old blog I’ll probably move over at least the node.js multithreaded image processing post, so that’ll kick off the technical side.

My sleep schedule is effed once again. Time to sleep. #justfreeworkingtimeproblems