Procedural Mesh Generation from Live Audio in Unreal Engine 4

Source: https://github.com/Tsarpf/UE4-Procedural-Audio

Here are two example videos of it running live. At the beginning I just press play in Foobar on Windows, and the visualization begins. Spotify, YouTube, or any other audio source on Windows works straight away as well.

A word of warning though: it's only a proof of concept, so it is not stable! In-editor it generally works well, but sometimes crashes. The program also does not release all the memory it should. For a reason I haven't tracked down yet, the standalone version basically doesn't work at all.

Thanks to other open source projects

For the mesh generation part I got a lot of help from SiggiG's procedural UE4 project/tutorial, which lives here: https://github.com/SiggiG/ProceduralMeshes

Some of the code for figuring out frequencies from audio chunks is from eXi's sound visualization plugin, especially the original CalculateFrequencySpectrum function (which can be found here) and his use of the KissFFT library.

Because I'm proud that I managed to figure out the frequency calculation part myself as well, I want to add that I do have my own implementation of it (built with the help of the "ffft" library), but for this project I replaced it with eXi's solution to rule out bugs in that area.

A very brief and dense overview of how it works

On Windows we're in luck, because we've already done the audio capture part; now we just direct it towards UE4 instead of a file. An audio sink receives chunks of audio frames from the capturer, and the listener runs in its own thread within the visualizer process. Chunks of audio thus keep popping into a queue, from which we dequeue them on the UE4 main thread and calculate a sound spectrum for each chunk we receive. Finally, on each game tick we fetch the list of new spectra and, if there are any, add them to the mesh and move the camera forward to keep up.
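
The real code is C++ inside UE4, but the shape of the pattern fits in a few lines. Here's a heavily simplified sketch in JavaScript, where every name is an illustrative stand-in rather than anything from the project:

// Hypothetical stand-ins for the real FFT, mesh, and camera code:
const calculateFrequencySpectrum = (chunk) => chunk; // the real code uses KissFFT here
const addSpectrumRowToMesh = (spectrum) => {};
const moveCameraForward = () => {};

// The audio sink (a capture thread in the real code) pushes chunks in...
const chunkQueue = [];
function onAudioChunk(samples) {
    chunkQueue.push(samples);
}

// ...and each game tick drains them on the main thread
function tick() {
    while (chunkQueue.length > 0) {
        const spectrum = calculateFrequencySpectrum(chunkQueue.shift());
        addSpectrumRowToMesh(spectrum);
    }
    moveCameraForward();
}

// Simulate one chunk arriving and one tick passing:
onAudioChunk(new Float32Array(1024));
tick();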

Feel free to ask me on Twitter or in the comments if something is unclear!

Making it work on platforms other than Windows

On Linux, capturing the audio should be very easy, for example by directing arecord's standard output to the UE4 program's standard input and going from there, but I haven't gotten around to trying that yet. On OS X I would start searching for a solution with the help of the "Soundflower" project, but I'm not sure how easy that will be.

Thanks!

Maybe the proof of concept gives someone an idea for something awesome. Please make a new Audiosurf that takes in live audio and doesn't need to process the whole song from a file first. Or make the sound waves collidable and build some sort of game mechanic out of that?

3D infinite terrain generation in JavaScript using Marching Cubes and PlayCanvas

Landscape kind of thing

Arrow keys move, mouse looks. Try it out here

Alien floating things and holes in the ‘ground’

Arrow keys move, mouse looks. Try this one out here.

I've made a presentation, or "slides", about the same subject, and you can check it out in video form below. It's rather slow-paced because it's meant to be accompanied by a talk, but the visualizations might be helpful.

Let’s get technical — what goes into building something like this?

The main components of the system are:

  1. A coherent noise function from which we can sample a 3D field (array) of density values
  2. Marching Cubes algorithm for creating surfaces at the boundaries of a chosen density value
  3. Shading the resulting geometry
  4. Loading (and unloading) the infinite map in a bunch of small chunks
  5. Balancing the loading (and unloading) over multiple frames for a smoother frame rate

The project’s full source can be found here: https://github.com/Tsarpf/proced

Let's go through the components in a bit more detail

Noise functions

Noise functions in this context are basically functions that take in coordinates in some number of dimensions and give out a single floating point value. So a 3-dimensional noise function could look something like this:

function noise(x, y, z) {
    // ...
    return density; // a single float
}

Now, if the return values of our noise function were completely random, say the results of calls to Math.random, and we tried to use those values as a base for our terrain generation, we would end up with something resembling a postmodern art piece, not rolling hills or mountains.

What we need is a coherent noise function. With coherent noise functions, there’s a promise that when the input values vary by a little bit, the output values vary only by a little bit as well. This means there are gradual transitions between spaces where there are solid things and where there are not.
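
To make that concrete, here's a minimal, self-contained sketch of one kind of coherent noise, value noise. It's not the noise function this project actually uses, and the helper names are mine:

// Deterministic pseudo-random value in [0, 1) for an integer lattice point
function hash(x, y, z) {
    let h = Math.imul(x, 374761393) ^ Math.imul(y, 668265263) ^ Math.imul(z, 1103515245);
    h = Math.imul(h ^ (h >>> 13), 1274126177);
    return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

function lerp(a, b, t) {
    return a + (b - a) * t;
}

// Coherent noise: nearby inputs give nearby outputs, because we interpolate
// between random values that are pinned to the integer lattice points
function valueNoise3D(x, y, z) {
    const x0 = Math.floor(x), y0 = Math.floor(y), z0 = Math.floor(z);
    const tx = x - x0, ty = y - y0, tz = z - z0;

    // Trilinearly interpolate the eight corner values of the containing cell
    const x00 = lerp(hash(x0, y0, z0), hash(x0 + 1, y0, z0), tx);
    const x10 = lerp(hash(x0, y0 + 1, z0), hash(x0 + 1, y0 + 1, z0), tx);
    const x01 = lerp(hash(x0, y0, z0 + 1), hash(x0 + 1, y0, z0 + 1), tx);
    const x11 = lerp(hash(x0, y0 + 1, z0 + 1), hash(x0 + 1, y0 + 1, z0 + 1), tx);
    return lerp(lerp(x00, x10, ty), lerp(x01, x11, ty), tz); // in [0, 1)
}

Sampling this at two nearby points gives two nearby values, which is exactly the property the terrain generation relies on.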

Surfaces from noise

At least at the moment, I use the marching cubes algorithm for creating surfaces out of noise. The algorithm has been explained far better than I can in so many places already that I'll just encourage you to check the Wikipedia page and Paul Bourke's Polygonising a scalar field to start learning more about it.

If you think there's still use for yet another explanation of it, please leave a comment or ping me on Twitter, and I might do a separate post about it.

Also feel free to copy and/or use my JS port of Paul Bourke’s code.

Shading the surfaces

In order for the geometry we’ve generated to look 3D, it needs shading. This means we need surface normals. There are two different approaches I know of for finding them.

  1. A gradient at a point in a field is the direction in which the field's values change the most. Since the surfaces are generated to follow a density-level boundary, they are perpendicular to the gradients, which means the gradients are actually the same thing as the normals. We can approximate the gradient at a point by sampling the density values surrounding it.

Picture from the Wikipedia article about gradients. The blue arrows show the direction of the color gradient.

Here's some sample code:

// Approximate the gradient with central differences; dataStep holds the
// sampling offsets (small numbers, defined elsewhere). The result still
// needs normalizing before use as a shading normal.
function getNormalForVertex(x, y, z, sampler, outObj) {
	outObj.x = sampler(x + dataStep.x, y, z) - sampler(x - dataStep.x, y, z);
	outObj.y = sampler(x, y + dataStep.y, z) - sampler(x, y - dataStep.y, z);
	outObj.z = sampler(x, y, z + dataStep.z) - sampler(x, y, z - dataStep.z);
}

dataStep.x/y/z needs to be a relatively small number. If it's too big, the normal (gradient) approximation is too coarse and comes out wrong for small surface details. If it's too small, it picks up changes that are too small, and big surfaces can get weird normals.

I was never able to get good-looking normals for both big and small features this way, so I abandoned it and tried another approach:

  2. For each triangle, get the triangle's normal by taking the cross product of two vectors formed from the triangle's vertices. These normals alone provide some basic shading; the method is often called `face normals`. But since face normals alone give sudden changes of color at triangle borders, we want to do a bit better than that: for each vertex, we combine it with the other vertices that share its position, and compute an average normal over all the triangles it belongs to, where each triangle's normal is weighted by the triangle's area.

This gets us some pretty decent-looking shading. IIRC it was something like 10% faster than the gradient approach, too, but nothing drastic.
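
Here's a minimal sketch of that approach, assuming the duplicated vertices have already been merged so that triangles share indices; the function and argument names are illustrative, not from the repo:

// positions: a flat array [x0, y0, z0, x1, ...]; indices: three per triangle
function computeVertexNormals(positions, indices) {
    const normals = new Float32Array(positions.length);

    for (let i = 0; i < indices.length; i += 3) {
        const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;

        // Two edge vectors of the triangle
        const e1x = positions[b] - positions[a], e1y = positions[b + 1] - positions[a + 1], e1z = positions[b + 2] - positions[a + 2];
        const e2x = positions[c] - positions[a], e2y = positions[c + 1] - positions[a + 1], e2z = positions[c + 2] - positions[a + 2];

        // Unnormalized cross product: its direction is the face normal and its
        // length is twice the triangle's area, so simply summing it per vertex
        // gives the area weighting for free
        const nx = e1y * e2z - e1z * e2y;
        const ny = e1z * e2x - e1x * e2z;
        const nz = e1x * e2y - e1y * e2x;

        for (const v of [a, b, c]) {
            normals[v] += nx;
            normals[v + 1] += ny;
            normals[v + 2] += nz;
        }
    }

    // Normalize the accumulated sums to unit length
    for (let i = 0; i < normals.length; i += 3) {
        const len = Math.hypot(normals[i], normals[i + 1], normals[i + 2]) || 1;
        normals[i] /= len;
        normals[i + 1] /= len;
        normals[i + 2] /= len;
    }
    return normals;
}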

One bug (or problem) I haven't yet had the time to solve with shading is that the vertices are only combined within chunks (discussed below), so at chunk borders the shading still isn't smooth.

Splitting the world into smaller pieces for loading them separately

These pieces are often called "chunks" in infinite map generators. By loading the world in chunks we don't have to load everything at once, and we can do tricks like prioritizing chunks close to the player over ones farther away. Chunks also let us spread the CPU cost of loading new terrain over as many frames as we'd like.

My approach for the data structure to store these chunks in was to write a 3D circular buffer, which means old, far-away chunks eventually get overwritten. You can find my implementation here and some Mocha tests for it here.
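
The core trick is just modulo arithmetic: world chunk coordinates map onto a fixed-size 3D array (flattened to 1D), so a chunk that falls far enough behind gets its cell reused. A hedged sketch of the idea, not the repo's exact code:

const size = 7; // chunks kept in memory per axis

// JS's % can return negative values, so normalize into [0, size)
function wrap(n) {
    return ((n % size) + size) % size;
}

// Flatten wrapped 3D chunk coordinates into an index in the 1D backing array
function cellIndex(cx, cy, cz) {
    return wrap(cx) + wrap(cy) * size + wrap(cz) * size * size;
}

// Chunks (10, 0, 0) and (3, 0, 0) share a cell because 10 mod 7 === 3,
// so drawing the new chunk overwrites the one that fell behind
console.log(cellIndex(10, 0, 0) === cellIndex(3, 0, 0)); // true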

Further, my circular buffer, or wrapping array as I call it, uses "zones" which tell how many chunks away a given bunch of chunks is from the player, from 0 to WorldChunkCountPerAxis / 2 (since the player is set to be at the center).

An ugly 1D representation of the zone system

The zones are implemented as "zone functions": you can set a specific function to be called for every chunk that enters a given zone, every time the player's chunk position changes.

At the moment there is also a separate function for the case where a chunk enters a zone because the player is coming closer to it, and another for when the chunk enters the zone by "falling behind", i.e. moving farther away from the player. This separation could be used to unload a chunk or decrease LOD when a zone is entered backwards, and to load or increase LOD when it's entered forwards, but I'm probably removing the feature soon; instead I'll use the same function for all chunks in a zone and handle all unloading and the like when an array index gets overwritten.

Here’s an example of setting up a zone function:

var size = 7; // world chunk count per axis
var zoneCount = Math.ceil(size / 2); // how many zones in the world
var wrappingArray = PROCED.wrappingArray(size); // initialize wrapping array

wrappingArray.setZoneFunction(zoneCount - 1, function (arrayCell, worldCoords) {
    queue.push({
        type: 'draw',
        arrayCell: arrayCell,
        worldCoords: worldCoords
    }, 1); //priority 1 for all chunk drawings for now
}, function() {});

The first argument of setZoneFunction is the zone (represented by a single integer) we want the following functions to be used for. At the moment, while the forwards/backwards separation is still there, the forwards function is the second argument and the backwards function is the third (here an empty function).

Now, every time one of the wrapping array's six direction functions (x/y/z axis, +/- direction) is called (triggered e.g. by the player moving over a chunk border in one of those directions), it calls the forwards function for every chunk that entered the zone at the edge of the world (which is zone zoneCount - 1).

In my current implementation this pushes a "draw" order into a work queue for each chunk that newly entered the zone: the chunk should be drawn into the cell arrayCell of our wrapping array (which is a 3D array flattened to 1D, see here), and the chunk drawn there should have the non-wrapped aka "world" coordinates specified by worldCoords.

Balancing loading

I use Async.js's priority queue implementation to queue the processing of chunks, with something like 1-2 workers, which means it takes 1 or 2 chunks from the queue per frame. When the processing orders come out of the queue, the work is delayed once more by asking for free CPU time with requestAnimationFrame. I'm not sure it's the best way to do it, but it at least seems faster than using setTimeout with a zero timeout. Please leave a comment if you know more about this 🙂
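
Roughly, the setup looks like the sketch below. It assumes Async.js's priorityQueue API; processChunk is a hypothetical stand-in for the real noise-to-mesh work:

var processChunk = function (arrayCell, worldCoords) { /* noise -> marching cubes -> mesh */ };

// A worker with a concurrency of 2, i.e. roughly two chunks in flight at once
var queue = async.priorityQueue(function (task, callback) {
    // Defer the heavy work until the browser hands out a frame, instead of
    // running it the moment the task is dequeued
    requestAnimationFrame(function () {
        processChunk(task.arrayCell, task.worldCoords);
        callback();
    });
}, 2);

// Lower number = higher priority, so nearer chunks could be pushed
// with smaller numbers than distant ones
queue.push({ type: 'draw', arrayCell: 0, worldCoords: [0, 0, 0] }, 1);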

I also haven't yet had the time to investigate multicore processing using web workers; there could definitely be some performance gains to be found there.

Conclusion

A lot could be improved in the project's current state, and it doesn't even have textures or collision yet, but I hope this helps someone get started with building something procedural for the web!

I might write a follow-up post on this someday if I’ve come up with major improvements.

Thanks for reading!

My first demo! My first rant! PlayCanvas is really good!

Okay, it's a demo only in the sense that it's not interactive. Not very long either. Perhaps a little childish. Whatever, check it out here and its source here. It should run on any modern browser, although on mobile you'll want to rotate to landscape.

PlayCanvas is cool. It's like Unity3D, except the editor and the end results live in the browser. To my great surprise it's really good, too. Importing assets is easy, scripting is easy, the scripting documentation is excellent, and stuff like creating custom events and programmatically changing materials works logically. A couple of this.entity.model.model.meshInstances-style object chains looked a bit awkward, but with the good docs they were okay.

The weirdest thing I encountered was a rather new example (from 2014, that's new, right?) where a function the example used had been deprecated so long ago that it no longer even gave a warning, it just failed, and googling the function's name turned up another function which had also already been deprecated. At least that one still had a deprecation warning which mentioned the new function's name. I had never encountered a double deprecation before. It even sounds really dirty.

Anyway, the basic workflow I used was to upload my awesome self-made .fbx model to the editor, and on the code side to set up a GitHub repository and link/sync it with the project. A cool thing about the editor is that it can run the project "locally", meaning you can serve your scripts from localhost and edit them locally while still using the web-based editor. Neat. And when the project is finished, you can just download a .zip with all the project files, ready to be served somewhere after a single unzip. There was a publish option too, which I haven't tried yet; I'm assuming it needs less manual file moving.

PlayCanvas finally made me install Blender. Before today I always stuck to a weird idea(l) where I tried to keep to just coding. Half of it was hoping someone would someday start doing the modelling part of my (our?) graphics and games projects; the other half was thinking I'd never be good at anything that has something to do with drawing. Now that I'm learning to draw, too, there's really no excuse…

Blender's documentation seems to suck a bit. A couple of times I ended up on a page where, instead of the doc, I saw a number titled "file count" or something. I also met a useful "hotkey reference" which was a small info box with the text "Hotkey: see below". Below it was a massive wall of text. I tried ctrl+F:ing 'hotkey', and there was exactly one result:

note however that most Blender hotkeys you know in Edit mode do not exist for texts!

t-thanks for that. I never found the hotkey. All the other doc pages I found were mostly just huge lists of hotkeys with really weird descriptions for them.

The only real way to find info was to watch tutorial videos. And who the hell has time for watching tutorial videos? You can't really skim them, and that sucks. No one remembers everything on the first watch/listen/read, which is why you need to be able to refer back quickly, preferably through a bookmark or something (okay, no one uses bookmarks anymore; maybe history or another Google search). Luckily the videos were rather good, so it wasn't as big a pain as it could have been.

Blender's UI definitely sucks. There's often little to no indication of what mode you're currently in, or even what it's called so that you could at least google how to get back to it when you forget. At some point I lost the wireframe mode and never found it again. Sometimes things like the overlay that draws the grid and coordinate axes disappear, and the recommended and easiest way to get them back was to open the default project, then reopen your own project with a checkbox unchecked so that it doesn't load the UI settings from your project. Ugh.

But even with the somewhat crappy UI, Blender left me with the impression that it's really powerful once you do know the shortcuts. I've heard some schools here in Finland spend something like a year just teaching students how to use it. Makes sense.

Next time I play with PlayCanvas I think I’ll try out something procedural.