New debug feature added to the game: recording inputs during gameplay and playing them back later.
This helps me reproduce a given scenario with a single button press. I can also do a Quake-style “timedemo” recording and use it as a performance benchmark.
To make replay work properly, I had to set a constant seed in my random number generator and make sure all entities were using it. The storage format is not optimal. Recording a long session would probably use more disk space than necessary. But we’re still talking a few kilobytes per minute, so it’s fine.
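The core idea can be sketched in a few lines of Python (hypothetical names; the game itself isn't written in Python): with a constant RNG seed and a recorded input stream, the simulation is deterministic, so feeding the same inputs back reproduces the exact same run.

```python
import random

def simulate(inputs, seed=1234):
    """Run a tiny stand-in 'game': position changes by recorded input
    plus seeded random jitter. Same seed + same inputs => same result."""
    rng = random.Random(seed)  # constant seed, shared by all entities
    position = 0.0
    for move in inputs:
        position += move + rng.uniform(-0.1, 0.1)
    return position

# Record a session: store the per-frame inputs as they happen
recording = [1, 0, -1, 2, 0, 1]
live_run = simulate(recording)

# Replay later: the recorded inputs reproduce the run exactly
replay_run = simulate(recording)
assert replay_run == live_run
```

If any entity pulled numbers from an unseeded source, the replay would diverge, which is why every entity has to share the seeded generator.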
I upgraded my home machine to macOS Catalina. Big mistake.
- Zero interesting new features
- Devastro 2 OpenGL renderer broken
- Left 4 Dead unplayable (oh no!)
My OpenGL window went black. Was I using OpenGL wrong? How? On my quest for the answer, I ended up rewriting significant parts of the rendering core, updating all my shaders, and creating a new SDL port. It took several nights to get things working again.
The good thing is that switching to SDL was one of my mid-term goals anyway. I’m going to need it on Windows. To complete the port, I added SDL event handling and an audio backend.
It was nice to get rid of the Cocoa boilerplate. One thing I missed in the SDL API was a function to set a fixed aspect ratio for my window, so I had to enforce it myself.
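One way to enforce a fixed aspect ratio is to letterbox: on each resize event, compute the largest aspect-correct rectangle that fits the window and use it as the viewport. A sketch of that computation in Python (the game code itself isn't Python, and the function name is made up):

```python
def letterbox(window_w, window_h, aspect=16 / 9):
    """Return (x, y, w, h) of the largest aspect-correct viewport
    centered inside a window of the given size."""
    w = window_w
    h = int(round(w / aspect))
    if h > window_h:  # window wider than the target aspect: bars on the sides
        h = window_h
        w = int(round(h * aspect))
    x = (window_w - w) // 2
    y = (window_h - h) // 2
    return x, y, w, h

# 1920x1200 window, 16:9 content -> bars on top and bottom
print(letterbox(1920, 1200))  # (0, 60, 1920, 1080)
```

The resulting rectangle can then be handed to something like glViewport after every resize event.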
Fixing Left 4 Dead was easy. Add a new APFS volume, install Mojave, install Steam, move data files over, tell Steam to use them → no need to re-download everything. The nice thing about APFS volumes is that they share space, so I don’t have to think about how big each partition should be.
I’ve been neglecting the level editor a little bit. It didn’t get fully updated with IMGUI, mainly because it was a mess. It had gone through the entity system development with only quick updates here and there and it showed.
Last night I decided it was finally time to dig in. After an hour or two of cleanup and refactoring, things started looking much better. I wrote new entity & variant selection lists and fixed a few bugs related to Box2D. Overall line count went down, which is always a good sign. So much for custom UI components!
This was a big one. I switched to Box2D for collision detection & physics.
My old system was nothing fancy: axis-aligned bounding boxes, a “sliding” mechanism for collisions, and sector-based space subdivision. It worked well for Superforce. However, there’s a lot more happening in Devastro 2: more bullets, more particles, more enemies, more complicated level design, and player motion needs to stay fluid around obstacles. Box2D seems like the right tool for that job.
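The old scheme is simple enough to sketch in Python (hypothetical names, not the actual engine code): resolve movement one axis at a time and clamp against any overlapping box, which is what produces the “sliding” feel along walls.

```python
def overlaps(a, b):
    """Axis-aligned bounding box overlap test; boxes are (x, y, w, h)."""
    return (a[0] < b[0] + b[2] and a[0] + a[2] > b[0] and
            a[1] < b[1] + b[3] and a[1] + a[3] > b[1])

def slide_move(box, dx, dy, solids):
    """Move an AABB by (dx, dy), resolving each axis separately so the
    box slides along obstacles instead of stopping dead."""
    x, y, w, h = box
    x += dx
    for s in solids:
        if overlaps((x, y, w, h), s):
            x = s[0] - w if dx > 0 else s[0] + s[2]
    y += dy
    for s in solids:
        if overlaps((x, y, w, h), s):
            y = s[1] - h if dy > 0 else s[1] + s[3]
    return (x, y, w, h)

wall = (5, 0, 1, 10)
# Moving right and up into the wall: x gets clamped, y still moves
print(slide_move((3, 2, 1, 1), 2, 1, [wall]))  # (4, 3, 1, 1)
```

This is fine for a modest number of movers; Box2D adds proper solvers, broad-phase culling, and shapes beyond boxes.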
First I had to rip out large amounts of old code both in the game and the editor. Everything was completely broken for over a week. Fortunately the Box2D integration went well and the game is now back up and running.
The Box2D API is nice and there’s plenty of documentation & example code.
To improve rendering performance the game builds a texture atlas from all the images. This process is quick enough to be done at load time. I wanted to have a clear idea about the atlas layout. Xcode does have some support for viewing OpenGL textures but it’s been crashing on me almost every time I used that feature. So I did what I usually do when a 3rd party tool fails me: I built my own inspection tool to look at the OpenGL textures directly in-game.
Even in its initial simple form it has already helped fix a few issues.
I’m planning to extend it with zoom & pan and a way to track usage of the individual images in the atlas.
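A load-time atlas build can be as simple as shelf packing: fill a fixed-width atlas row by row, starting a new “shelf” whenever the current one runs out of width. A minimal sketch (assumed sizes and names, not the game's actual packer):

```python
def pack_shelves(sizes, atlas_w):
    """Place (w, h) rectangles into rows ('shelves') of a fixed-width
    atlas. Returns the (x, y) per rectangle and the total height used."""
    placements = []
    x = y = shelf_h = 0
    for w, h in sizes:
        if x + w > atlas_w:  # shelf full: start a new row
            x = 0
            y += shelf_h
            shelf_h = 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

sprites = [(64, 64), (64, 32), (128, 64), (32, 32)]
print(pack_shelves(sprites, 256))
# ([(0, 0), (64, 0), (128, 0), (0, 64)], 96)
```

Sorting the images by height before packing tightens the shelves considerably; an in-game viewer makes wasted space like that easy to spot.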
As I keep adding assets to the game, the load time keeps increasing. Who would have thought? A 5-second startup may not sound like much, but it quickly adds up during development.
One way to fix it would be to load assets on demand. This might cause stuttering, but that’s OK for development. However, improving the load time would benefit the final game as well. Let’s do that!
I added a job system that allows the preload to happen on multiple threads. Load time: 1.5 seconds. Nice! Now that I’ve “jobified” the tasks that need to run, further improvements can be made with relatively little effort.
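The idea can be sketched with Python's standard library (the real engine isn't Python, and the job system there is custom): wrap each asset load in a job and let a pool of worker threads run them concurrently, so I/O-bound loads overlap instead of adding up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_asset(name):
    """Stand-in for decoding an image or sound; I/O-bound work like
    this overlaps well across threads."""
    time.sleep(0.05)
    return (name, len(name))

assets = ["player", "tank", "explosion", "music", "font"]

# Serial preload: total time grows with every asset
start = time.perf_counter()
serial = [load_asset(a) for a in assets]
serial_t = time.perf_counter() - start

# "Jobified" preload: same work, spread across a pool of workers
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(load_asset, assets))
parallel_t = time.perf_counter() - start

assert serial == parallel
print(f"serial {serial_t:.2f}s, parallel {parallel_t:.2f}s")
```

Once tasks are expressed as independent jobs like this, further wins (priorities, on-demand loading) come almost for free.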
A brand new “single-header” library for decoding MPEG1 video, by Dominic Szablewski. Simple API, easy to integrate. It’s great!
In fact, I spent more time looking for my .blend source file than writing code to use the library. In Blender, I switched render output from image to video and set the codec to MPEG1. It all worked the first time I ran the game.
Built a simple particle editor. I had been using Particle Designer previously, which is a nice app but its PEX format has some limitations I didn’t like. So I used PEX as a starting point to build my own editor and will extend it later.
The new Blender 2.8 is great. Unfortunately it broke my batch render script.
Fixing the script and the rendering setup was quite a challenge due to the major UI changes in 2.8, but eventually I figured it out. I even learned a few new tricks along the way. How did I ever manage to write the original script without the built-in scripting console? It’s so helpful!
My Blender export pipeline looks like this:
- master.blend file with fixed camera & lighting setup
- extra .blend file for each asset
- link assets into master
- for each asset, render object & shadow into separate PNGs
First draft of the new script for 2.8:
# Requires Blender 2.8
# /Applications/blender.app/Contents/MacOS/Blender -b -P render28.py
# Node setup: see master.blend -> Compositing
# Render menu -> Film -> transparent
# Shadow catcher plane -> Object menu -> Visibility -> Shadow catcher
# Adding more objects: File -> Link... -> "Collection" from ext. file -> Move to Assets collection in master

import os
import sys

import bpy

basepath = os.path.normpath(os.path.dirname(os.path.abspath(__file__)))
master_blend = "blender28-batch-render-test-master.blend"
src = os.path.join(basepath, master_blend)

# Open the master file so the settings below apply to its scene
bpy.ops.wm.open_mainfile(filepath=src)

bpy.context.scene.render.resolution_percentage = 50
bpy.context.scene.render.resolution_x = 1280
bpy.context.scene.render.resolution_y = 960
bpy.context.scene.cycles.samples = 20

assets_name = "Assets"  # name of collection in master.blend
assets_collection = bpy.data.collections.get(assets_name)
if assets_collection is None:
    print("Error: %s collection not found in master" % assets_name)
    sys.exit(1)

# Save original filenames for output nodes
output_nodes = []
for node in bpy.context.scene.node_tree.nodes:
    if node.type == "OUTPUT_FILE":
        output_nodes.append((node, node.file_slots.path))

for target in assets_collection.objects:
    # Show only the current object
    for obj in assets_collection.objects:
        obj.hide_render = obj.name != target.name
    # Set output filename, keeping the saved base path so each pass
    # gets prefixed with the object name
    for node, path in output_nodes:
        node.file_slots.path = target.name + "_" + path
    # Render; the File Output nodes write the PNGs
    bpy.ops.render.render()
Additionally, I created a simple pipeline for rendering weapon silhouettes for the HUD. With a single click, it pulls in each weapon model, renders it from a side-view, then uses just the alpha to write a single-color transparent PNG.
And a similar setup for pickup icons, with a subtle “glow” pass.
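The alpha-only trick, sketched in plain Python (hypothetical names; the real pipeline does this with Blender's compositor nodes): keep each pixel's alpha from the render and replace its color with one fixed HUD color.

```python
def silhouette(pixels, color=(255, 255, 255)):
    """Given RGBA pixels, keep the alpha channel and flatten RGB to a
    single color, producing a one-color transparent silhouette."""
    r, g, b = color
    return [(r, g, b, a) for (_, _, _, a) in pixels]

render = [(120, 30, 30, 255), (0, 0, 0, 0), (90, 90, 20, 128)]
print(silhouette(render))
# [(255, 255, 255, 255), (255, 255, 255, 0), (255, 255, 255, 128)]
```

Because only the alpha survives, the weapon's materials and lighting don't matter for the HUD render, which keeps the side-view setup trivial.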