I finally finished a game. I don't mean I played a game and completed it; I rarely do that, as I get too carried away looking at the scenery and lighting. No - I finished writing a game.
I reckon I've been trying to do this for a long time now, my early attempts being on the ZX Spectrum and then DOS and Windows PCs.
My most recent attempts stemmed from watching Handmade Hero - Casey Muratori's stream in which he develops a game from scratch, using no pre-written library code. It's been running for several years now and I've watched every episode.
He develops in C - a low-level language that I've been able to read for a long time but have rarely written in. I don't find it very readable - especially the all-important data definitions (modern C99 does help with some of the problems). It's barely a step up from machine language, though still better than its successor, C++, which I've always had the good sense to stay away from. Much of making C useful is getting it to talk to the hardware it runs on, and that means interfacing with Windows or other systems to get anything interesting done, such as drawing a triangle, playing a sound or reading a keyboard. That's where Handmade Hero comes in. It explains how to do that and, later, how to move code onto the graphics processor (GPU) using variants of C that can update portions of a large display in parallel.
I think games programmers are the best programmers. They have to handle large amounts of data on a range of hardware to produce immersive, multimedia experiences and there's little room for errors or hand-waving. Games usually update millions of pixels sixty times every second. The whole industry advances every few months, with paradigm shifts every few years (16-bit, graphics cards, 3D, 32-bit, internet, mobile, 64-bit, ray-tracing hardware, new console generations, VR etc). The number of layers and interacting parts is huge yet you can't be ignorant of the low-level stuff. You need a wide-ranging and deep understanding of algorithms, physics, light, sound, networking, memory, disk, CPUs and GPUs. And crucially, on top of all that, a game has to be fun.
I initially built an early handmade-hero style prototype in C, without a graphics library, to get familiar with building and rasterising 3D objects using just the CPU. It was painful.
I then managed to develop a bare-bones project to draw 3D shapes with a GPU and build it in Windows and Linux. The amount of knowledge involved is extraordinary. Simply opening a window in a cross-platform way was a project in itself. There are so many ways to fail or get side-tracked: which platforms should I support; which GPU API (OpenGL, DirectX, Metal, Vulkan); 2D, isometric, or 3D; which GUI library and build-systems to make abstracting a user interface bearable (GLFW/GLUT/GLEW, SDL, imgui etc); which language (C, C++, Zig, Nim, Odin, Lazarus, Rust, Go, Lua, etc). For anything like a modern game you also need to understand linear algebra, calculus, matrices, quaternions, sound encoding, GPU shaders, input devices, ray tracing, asset development tools, and the huge variety of implementations of all these things, and much more besides.
Not surprisingly, many people choose to use a game engine or framework to avoid having to implement everything, and to avoid deciding which combination of technology to use. Each of these has strengths and weaknesses and each comes with a different-sized learning/maintenance treadmill. I've looked at Unity, GameMaker, ThreeJS, Godot, Ebiten and others but for me they take away a large part of the reason to make a game in the first place.
After building a starting point in C with SQLite and OpenGL and imgui all hooked up, I decided I wasn't comfortable with C as a foundation for my game data, or as a productive base for me in general. So, probably as a way of putting off the game-related decisions, I built the platform again using other languages - each time trying to find one that felt right.
I had a couple of false starts with Python, my language of choice because it reads like pseudo-code and has excellent support for data definition built-in. Pushing the speed-sensitive grunt-work down to the GPU using OpenGL wrappers seemed to give the best of both worlds but it turned out that the wrappers only supported older OpenGL versions, and I now think such wrappers get in the way.
I rebuilt the prototype in Rust. Rust solves a lot of things I don't like about C in some novel ways but after a fair bit of use (which I enjoyed) I found the readability a problem. The mut and the & and the declaration vs the usage ended up being no clearer than C to me, once I'd left it alone for a while and come back to it. I think it's the syntax: mut s = String and &mut String, for example, made sense while I was writing it but didn't stick for me. Maybe I just need to use it for longer.
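To illustrate the kind of thing I mean - and this is only a small sketch, not taken from my actual project - mutability gets spelled differently at the binding, in the parameter type, and again at the call site:

```rust
// One value, three spellings of "mutable": `mut` on the binding,
// `&mut` in the parameter type, and `&mut` again when passing it.
fn shout(s: &mut String) {
    s.push('!');
}

fn main() {
    let mut s = String::from("hello"); // `mut` on the binding
    shout(&mut s);                     // `&mut` again at the call site
    println!("{}", s);                 // prints "hello!"
}
```

Each spelling is doing a distinct job (declaring mutability, borrowing mutably, lending mutably), which is exactly the kind of distinction that made sense while writing and then faded after time away.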
I ended up with enums everywhere, which seemed like a good use of that Rust feature but made things more indirect and complicated than they needed to be. The module/namespace system sounded simple on paper but I found it very confusing to use, and I ended up cargo-culting it to make it work - which I take to be a very bad sign.
Also, the language seems to change every few years. In itself that's not a bad thing, but I get the impression it's changing by committee, which I think is generally worse than one or two clever, benevolent dictators being in charge of a language. The online documentation and resources are muddied with the changes, which is a problem when trying to learn the latest version while still reading and importing code that spans earlier versions. Do I use the old idiom or the new one?
I then started a 2D prototype in Go. Like Rust, it also solves a lot of the issues with C and so far I think it is a much more readable language. It's simpler, has some superb, well-considered design decisions and doesn't change. If the garbage collection is ok for 60 frames/second then I think it could be the low-level language for me. By now it's hopefully far enough out of Google's hands to prevent them dropping it on a whim. Reader, I'm still bitter.
By this point I'm realising that making a game is getting further away, not nearer. Having a good idea for a game seems to be the tricky bit but, even with a good idea, it's converting it into something playable that's the real challenge.
I see an announcement for the Playdate, which looks interesting. It uses another language, Lua, which I think looks like Python, and so I try out yet another framework, Löve2D. I like it. It gets out of the way enough but provides just the right calls to let you put something on the screen quickly. I start building a game and get quite far. The limitations of the Playdate rule out lots of features and reduce the number of things to think about. This helps a lot. But the Covid pandemic postpones the Playdate launch and so I lose a bit of momentum.
Then (via Lua, I think) I discover Pico-8, a fantasy console - meaning no actual hardware ever existed, but it was designed as if it were a small console. It uses Lua and has lots of constraints: 128x128 pixels, 16 colours, 4 channels of sound, 32kB cartridges etc. These limits are what make building a game so easy. All the difficult, but ultimately non-game-related, decisions are removed. I've heard of constraints being useful in other areas, such as painting, and they really work here too. Without limitations the blank canvas is infinite - especially on a modern computer. Add a bunch of seemingly petty constraints and suddenly there are things to push against, to prove that something can be done.
Pico-8 has art, code and sound editors built in, which means getting started is simple, and it also takes care of building and deploying a game across a number of platforms. You can spend a few hours and have something playable in that time. And it runs comfortably on a Raspberry Pi. It feels less like using a game engine or framework and more like you're developing on a console at a fairly low level with a language that's on your side. Instead of a large, blank canvas you have a handy sketchpad which gives you the freedom to experiment and not care if a quick attempt fails: just throw it away and start another.
Now all I needed was a way to remove the last excuses for procrastination - the ultimate constraint: a deadline. Game jams provide this. A group of people decide to make games to an artificial deadline and it helps them to focus. I found a game jam just before Christmas: the Toy Box Game Jam 2 (it ran last year). It provided not only the two-week deadline I knew I needed but also a bunch of assets (tiles, animations, sound effects and music) so that I had no way to get side-tracked (by learning Blender, for example, ahem). The assets even helped to suggest the type of game. The only thing left to do was to write that game.
So I got my head down and wrote my game. In only a few, satisfying hours I went from an initial idea about game motion through to having an animated character performing those motions which then suggested the next steps. Such quick proofs-of-concept are vital for building new software: they prevent over-thinking and expensive dead-ends and the quick feedback keeps up the momentum and so creates a cycle of new ideas. A few days of coding and testing and the game was done. The public deadline meant I added game-over screens and sound effects and completed things rather than just leaving those important pieces undone.
The game is here:
I'm pleased with my game. So I'm not a games programmer, but I am now a game programmer. It's not the 3D game and hand-built engine with ground-breaking lighting effects that I imagined I'd write, but I think skipping over the engine-writing phase was the only way I was going to get something finished. Starting a game is easy; finishing one is difficult. There are so many things to get right even on such a simple game. I suppose it's like writing anything: it needs a beginning, a middle and an end, and getting to the end needs a surprising amount of focus and determination. For every finished game, how many must be left in varying states of incompleteness? I have so much more respect for people who do manage to finish one.