I’ve been neglecting the level editor a little bit. It never got fully updated when I moved to IMGUI, mainly because it was a mess. It had gone through the entity system development with only quick patches here and there, and it showed.
Last night I decided it was finally time to dig in. After an hour or two of cleanup and refactoring, things started looking much better. I wrote new entity & variant selection lists and fixed a few bugs related to Box2D. Overall line count went down, which is always a good sign. So much for custom UI components!
During a recent rewrite of the menu system, it occurred to me it would be nice to have a visual preview of the various animation easing curves available in the engine.
So I made a tool to do that. Pick an easing function, see the curve and an animation showing its effect on alpha, scale and position. 20 minutes, 150 lines of code.
This will make it much easier to pick the right function when creating new UI and in-game animations.
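The easing curves themselves are just small functions mapping normalized time to normalized progress. A sketch of a few common ones — these are the textbook definitions, not necessarily the exact set the engine ships with:

```python
# Standard easing functions: t in [0, 1] -> progress in [0, 1].

def linear(t):
    return t

def ease_in_quad(t):
    return t * t

def ease_out_quad(t):
    return t * (2 - t)

def ease_in_out_cubic(t):
    # slow start, fast middle, slow end
    return 4 * t ** 3 if t < 0.5 else 1 - (-2 * t + 2) ** 3 / 2

# Applying a curve to an animated property (alpha, scale, position...):
def animate(start, end, t, ease):
    return start + (end - start) * ease(t)
```

Dropping a different function into `animate` is all it takes to change the feel of a transition, which is exactly what the preview tool makes visible.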
Luckily I’m done with the menus for now. If I ever need to add more polish, I might attach this new interface directly to the actual element being animated. That would make this a real WYSIWYG tool.
This was a big one. I switched to Box2D for collision detection & physics.
My old system was nothing fancy: axis-aligned bounding boxes, a “sliding” mechanism for collision response, and sector-based space subdivision. It worked well for Superforce. However, there’s a lot more happening in Devastro 2: more bullets, more particles, more enemies, more complicated level design, and player motion needs to stay fluid around obstacles. Box2D seems like the right tool for that job.
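The “sliding” part of a system like that can be as simple as resolving each movement axis separately, so a blocked axis doesn’t cancel motion along the other one. A reconstruction of the idea (my sketch, not the actual Superforce code):

```python
# Axis-aligned boxes plus per-axis movement resolution ("sliding").

class AABB:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def overlaps(self, other):
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def slide_move(box, dx, dy, solids):
    # Try each axis independently; cancel only the component that collides,
    # so the entity keeps sliding along walls instead of stopping dead.
    moved = AABB(box.x + dx, box.y, box.w, box.h)
    if any(moved.overlaps(s) for s in solids):
        moved.x = box.x
    moved.y += dy
    if any(moved.overlaps(s) for s in solids):
        moved.y = box.y
    return moved
```

It breaks down exactly where Box2D shines: fast bullets, non-axis-aligned shapes, and smooth motion around corners.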
First I had to rip out large amounts of old code both in the game and the editor. Everything was completely broken for over a week. Fortunately the Box2D integration went well and the game is now back up and running.
The Box2D API is nice and there’s plenty of documentation & example code.
To improve rendering performance the game builds a texture atlas from all the images. This process is quick enough to run at load time. I wanted a clear picture of the atlas layout. Xcode has some support for viewing OpenGL textures, but it’s been crashing on me almost every time I’ve used that feature. So I did what I usually do when a third-party tool fails me: I built my own inspection tool to look at the OpenGL textures directly in-game.
Even in its initial simple form it has already helped fix a few issues.
I’m planning to extend it with zoom & pan and a way to track usage of the individual images in the atlas.
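For context, a load-time atlas doesn’t need a sophisticated packer; a basic “shelf” algorithm is usually good enough. A hypothetical sketch of that approach (not the game’s actual packer):

```python
# Shelf packing: fill the current row left to right, start a new row
# when the next image doesn't fit. Sorting by height keeps rows tight.

def pack_shelf(sizes, atlas_w):
    """sizes: list of (w, h). Returns per-image (x, y) positions and total height."""
    x = y = shelf_h = 0
    positions = [None] * len(sizes)
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i][1])
    for i in order:
        w, h = sizes[i]
        if x + w > atlas_w:     # row full -> start a new shelf
            x = 0
            y += shelf_h
            shelf_h = 0
        positions[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return positions, y + shelf_h
```

Seeing the resulting layout in-game is exactly what the inspection tool is for — wasted shelf space shows up immediately.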
As I keep adding assets to the game, the load time keeps increasing. Who would have thought? A 5-second startup may not sound like much, but it quickly adds up during development.
One way to fix it would be to load assets on demand. This might cause stuttering, but that’s OK for development. However, improving the load time would benefit the final game as well. Let’s do that!
I added a job system that allows the preload to happen on multiple threads. Load time: 1.5 seconds. Nice! Now that I’ve “jobified” the tasks that need to run, further improvements can be made with relatively little effort.
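The game itself isn’t written in Python, but the shape of the jobified preload is easy to sketch with a thread pool. The file names and the load function below are invented for the example:

```python
# Parallel asset preload sketch: fan the load jobs out across worker threads.
from concurrent.futures import ThreadPoolExecutor

def load_asset(path):
    # stand-in for decoding an image or sound file
    return (path, len(path))

def preload(paths, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves order; each job runs on whichever worker is free
        return dict(pool.map(load_asset, paths))

assets = preload(["player.png", "enemy.png", "boom.wav"])
```

The win comes from I/O and decoding overlapping across threads; once everything is expressed as independent jobs, adding priorities or on-demand loading later is straightforward.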
The main menu of the game is going to have an animated background. Currently I’m thinking it could be the end of the original trailer, put on a loop:
As a temporary playback solution I used a series of JPEG files. Not very efficient. When looking for a decent video playback library, I found this one:
A brand new “single-header” library for decoding MPEG1 video, by Dominic Szablewski. Simple API, easy to integrate. It’s great!
In fact, I spent more time looking for my .blend source file than writing code to use the library. In Blender, I switched render output from image to video and set the codec to MPEG1. It all worked the first time I ran the game.
Built a simple particle editor. I had been using Particle Designer previously, which is a nice app but its PEX format has some limitations I didn’t like. So I used PEX as a starting point to build my own editor and will extend it later.
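The runtime side of any such editor boils down to the same loop: spawn particles with randomized parameters, integrate them, and age them out. A minimal stand-in — not the editor’s actual model or the PEX parameter set:

```python
# Tiny particle system core: randomized spawn, Euler integration, lifetime cull.
import random

def spawn(x, y):
    return {"x": x, "y": y,
            "vx": random.uniform(-1, 1), "vy": random.uniform(-2, 0),
            "life": random.uniform(0.5, 1.0)}

def update(particles, dt, gravity=9.8):
    for p in particles:
        p["vy"] += gravity * dt
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        p["life"] -= dt
    # drop dead particles
    return [p for p in particles if p["life"] > 0]
```

An editor is mostly a UI over the spawn parameters: ranges for velocity, lifetime, color, size, and so on — which is where a custom format beats the limits of PEX.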
The new Blender 2.8 is great. Unfortunately it broke my batch render script.
Fixing the script and the rendering setup was quite a challenge due to the major UI changes in 2.8, but eventually I figured it out. I even learned a few new tricks along the way. How did I ever manage to write the original script without the built-in scripting console? So helpful!
My Blender export pipeline looks like this:
- master.blend file with fixed camera & lighting setup
- extra .blend file for each asset
- link assets into master
- for each asset, render object & shadow into separate PNG
First draft of the new script for 2.8:
# Requires Blender 2.8
# /Applications/blender.app/Contents/MacOS/Blender -b -P render28.py
# Node setup: see master.blend -> Compositing
# Render menu -> Film -> transparent
# Shadow catcher plane -> Object menu -> Visibility -> Shadow catcher
# Adding more objects: File -> Link... -> "Collection" from ext. file -> Move to Assets collection in master
import os
import bpy

basepath = os.path.normpath(os.path.dirname(os.path.abspath(__file__)))
master_blend = "blender28-batch-render-test-master.blend"
src = os.path.join(basepath, master_blend)
bpy.ops.wm.open_mainfile(filepath=src)

bpy.context.scene.render.resolution_percentage = 50
bpy.context.scene.render.resolution_x = 1280
bpy.context.scene.render.resolution_y = 960
bpy.context.scene.cycles.samples = 20

assets_name = "Assets"  # name of collection in master.blend
assets_collection = None
for collection in bpy.data.collections:
    if collection.name == assets_name:
        assets_collection = collection
if assets_collection is None:
    raise SystemExit("Error: %s collection not found in master" % assets_name)

# Save original filenames for output nodes
output_nodes = []
for node in bpy.context.scene.node_tree.nodes:
    if node.type == "OUTPUT_FILE":
        output_nodes.append((node, node.file_slots[0].path))

for target in assets_collection.objects:
    # Show only the current object
    for obj in assets_collection.objects:
        obj.hide_render = obj.name != target.name
    # Prefix output filenames with the object name
    for node, path in output_nodes:
        node.file_slots[0].path = target.name + "_" + path
    # Render one frame; the File Output nodes write the PNGs
    bpy.ops.render.render()
Master & test .blend files here: blender28-batch-render-test.zip
Additionally, I created a simple pipeline for rendering weapon silhouettes for the HUD. With a single click, it pulls in each weapon model, renders it from a side view, then uses just the alpha channel to write a single-color transparent PNG.
And a similar setup for pickup icons, with a subtle “glow” pass.
Took a moment to look at font rendering quality again. Finally managed to make SDF (signed distance field) rendering work properly. With SDF now directly supported by stb_truetype all I had to do was apply the right shader. The results look really nice.
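The shader side of SDF rendering is tiny: the texture stores distance to the glyph edge, and a smoothstep around the midpoint turns it into coverage. A Python stand-in for that math — the real thing is a few lines of GLSL, and the 0.5 edge and softness value here are typical choices, not the game’s exact constants:

```python
# SDF coverage: map a sampled distance value to an alpha with a soft edge.

def smoothstep(edge0, edge1, x):
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3 - 2 * t)

def sdf_alpha(distance, softness=0.05):
    # distance: sampled SDF value in [0, 1], where 0.5 is the glyph edge
    return smoothstep(0.5 - softness, 0.5 + softness, distance)
```

Because the edge is recovered per-fragment, one modest SDF texture stays crisp at any scale, which is the whole appeal over plain bitmap fonts.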
After experimenting with Cocoa-based editors, I’ve gone back to IMGUI. The panels I had built with Cocoa were beautiful (and I love native Mac apps), but it was too much work.
Building functional interfaces is a lot faster with IMGUI.
Luckily enough I get to use IMGUI at my day job now and I’ve learned a lot since my last attempt to integrate it into the game.
Turns out the problem of IMGUI and my own UI elements fighting for input events can be solved quite easily. I just need to process things in the right order and honor the “IMGUI wants focus” flags.
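Dear ImGui exposes this through the WantCaptureMouse / WantCaptureKeyboard flags on its io struct. The routing order can be mimicked in plain Python — the surrounding objects here are made up for illustration:

```python
# Input routing sketch: IMGUI sees the event first and may claim it;
# only unclaimed events reach the game's own UI.

def route_mouse_event(event, imgui_io, game_ui):
    imgui_io.process(event)             # IMGUI always gets first look
    if imgui_io.want_capture_mouse:     # ...and may claim the event
        return "imgui"
    game_ui.process(event)              # otherwise the game UI handles it
    return "game"
```

The same pattern applies to keyboard input with the corresponding capture flag; the key insight is simply to check the flags after IMGUI has processed the frame’s events, not before.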