Creating a WebGL and Wasm application with Claude Code
Over the past few months, I’ve been exploring how to develop software using generative AI. It’s a powerful, fascinating, and rapidly evolving field. Development agents like Claude Code are becoming remarkably capable if you know how to use them effectively. Everyone is still figuring out the best practices.
To put Claude Code to a practical test, I decided to create a new demo application for my RSMotion C++ library. In short, it calculates optimal navigation paths between two car positions and orientations on a 2D plane, which is useful for game development and robotics applications.
The library itself is about six years old and still works fine, but the accompanying demo application had become extremely difficult to build and execute. When I recently tried to build it myself, I couldn’t even get it working!
I decided to scrap the entire demo and pursue a more modern approach: a web-based application that would visually demonstrate the library’s functionality. This would be cross-platform and, even better, reduce the execution barrier almost to zero.
The technical stack would involve Emscripten to compile C++ to WebAssembly (Wasm), JavaScript for the application logic, WebGL for visualization, and interop between the Wasm module and JavaScript application. The problem was that I’m not an expert in Emscripten, WebGL, or their interoperability. This seemed like the perfect use case for AI-assisted development!
The Development Process #
I started with some manual groundwork: creating directories, setting up CMake build and deploy scripts, and building a minimal JavaScript application to test Wasm module loading. This basic setup would compile the C++ simulation code to a Wasm module and load it in the web application but with no visualization or interop functionality yet.
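Before wiring up the Emscripten-generated glue, the raw browser API is enough to sanity-check that a module loads and that its exports are callable. Here is a minimal, self-contained sketch of that idea; the module bytes are a tiny hand-assembled `add` function, not the RSMotion module (in the real setup the bytes would come from fetching the Emscripten-compiled `.wasm` file):

```javascript
// A hand-assembled Wasm module exporting `add(i32, i32) -> i32`.
// In the real app these bytes would come from fetch()-ing the
// Emscripten-compiled .wasm file instead of an inline array.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
]);

// Synchronous compile + instantiate is fine for a tiny smoke test;
// real code would use WebAssembly.instantiateStreaming.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // → 5
```

Once a smoke test like this passes, swapping in the Emscripten module is mostly a matter of loading its generated JavaScript glue instead.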
Next, I asked Claude Code to generate a CLAUDE.md using the /init command and customized the instructions to my preferences.
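For context, a CLAUDE.md is a plain Markdown file of standing instructions that the agent reads at the start of each session. The contents below are a hypothetical sketch of the kind of customization I mean, not my actual file (the directory names are made up for illustration):

```markdown
# Project notes for Claude

- The C++ simulation code is compiled to WebAssembly with Emscripten.
- The web application uses plain JavaScript and WebGL 2.
- Prefer WebGL 2-compatible constructs; do not use desktop OpenGL features.
- Ask before changing the CMake build or deploy scripts.
```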
With everything in place, I was ready to let the AI agent take over. My first prompt was straightforward:
Create the initial setup for a render system in WebGL 2. Clear the background, setup an initial camera (zoom out a bit), and draw the checkerboard at the origin.
It worked flawlessly on the first attempt. Initially, it tried using OpenGL constructs not supported in WebGL, but quickly adapted to WebGL-compatible approaches. This alone saved me hours of setup work.
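The checkerboard itself is just geometry. A minimal sketch of generating its vertices might look like the following; the function name and vertex layout are my own illustration, not the code the agent produced:

```javascript
// Generate checkerboard vertices centered at the origin on the XZ plane.
// Each cell becomes two triangles; `shade` (0 or 1) alternates per cell
// so a fragment shader can pick one of two colors from it.
// Returns a flat array of [x, y, z, shade] per vertex.
function checkerboardVertices(cells, cellSize) {
  const verts = [];
  const half = (cells * cellSize) / 2;
  for (let i = 0; i < cells; i++) {
    for (let j = 0; j < cells; j++) {
      const x0 = i * cellSize - half, x1 = x0 + cellSize;
      const z0 = j * cellSize - half, z1 = z0 + cellSize;
      const shade = (i + j) % 2;
      // Two triangles per cell, counter-clockwise winding.
      for (const [x, z] of [[x0, z0], [x1, z0], [x1, z1],
                            [x0, z0], [x1, z1], [x0, z1]]) {
        verts.push(x, 0, z, shade);
      }
    }
  }
  return verts;
}
```

The resulting array would be uploaded once into a vertex buffer; only the camera matrix changes per frame.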
My next request was more complex:
The C++ example application currently has the simulation working but it requires visualization through the web application. Please think very hard about how to create the visualization for the example and create a plan. Take into consideration the interop between the JavaScript and the Wasm module. Think about querying the state of the simulation from the JavaScript web application. Also think about how to use WebGL to visualize the ground, car and path (line segments).
It achieved about 80% of the solution! I couldn’t see some elements moving initially, but that turned out to be an issue in the C++ code. After two more prompts, everything was working perfectly.
What made this particularly impressive was that the agent couldn’t actually see the results! It had to guess what would appear on screen since it was working with a WebGL context it couldn’t verify. An agent with visual feedback capabilities would be even more effective. In this case, I served as the human-in-the-loop, providing feedback on the visual output.
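The essence of the interop is that JavaScript can read the simulation state straight out of the Wasm module's linear memory each frame. A hedged sketch of that pattern using the raw WebAssembly.Memory API (Emscripten provides its own typed-array views for this); the pose layout of x, y, and heading is my own illustration:

```javascript
// Stand in for the C++ side: write a car pose (x, y, heading) into
// linear memory at a known offset. In the real app the Wasm module
// writes this during its simulation step and exposes the offset.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const poseOffset = 0;

const writer = new Float32Array(memory.buffer, poseOffset, 3);
writer.set([1.5, -2.0, Math.PI / 2]); // pretend simulation output

// JavaScript side: view the same bytes and read the pose each frame.
function readPose(mem, offset) {
  const [x, y, heading] = new Float32Array(mem.buffer, offset, 3);
  return { x, y, heading };
}

const pose = readPose(memory, poseOffset);
// pose.heading ≈ Math.PI / 2 (to float32 precision)
```

No copying or serialization is involved; both sides share one buffer, which keeps the per-frame query essentially free.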
Fine-tuning the Details #
I noticed the 3D car model (a simple box) wasn’t following the path correctly due to alignment issues:
Change the car model such that the rear axle is at the origin of the model. I.e., the whole box must be translated half of the length in the direction of the z axis.
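Moving the model origin to the rear axle is a one-line transform over the vertices: shift every z coordinate by half the body length. A sketch of that translation (the flat vertex layout is assumed, not the agent's actual code):

```javascript
// Translate box vertices so the rear axle, rather than the box center,
// sits at the model origin: shift every vertex by +length/2 along z.
// `vertices` is a flat [x, y, z, x, y, z, ...] array.
function shiftToRearAxle(vertices, bodyLength) {
  const out = vertices.slice(); // keep the input untouched
  for (let i = 2; i < out.length; i += 3) {
    out[i] += bodyLength / 2;
  }
  return out;
}
```

With the rear axle at the origin, positioning the car is simply applying the pose from the simulation, with no extra offset bookkeeping in the render code.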
Perfect. Then I wanted something more visually appealing:
Can you create the geometry of a simple car instead of a box? I.e. a box with a smaller box on top?
Excellent results. It's worth noting that Claude immediately started using the word 'cabin' for the 'box on top'; it knew exactly what I wanted even though I didn't use the right terminology.
I realized I’d forgotten something important:
Ah yes I can now see it, can you have a different color for body and cabin?
Then I had an idea for showing the destination more clearly:
I want to render also the car as it would be at the finish. Change the example such that two cars are always visible: the driving car and the static car that is always located and orientated at the finish spot.
I grabbed a cup of coffee, and when I returned, it was done. One final touch:
Can you make the finished car rendering 50% transparent?
Nice. Maybe not the prettiest demo but good enough for now, wrap it up!
The Results #
The entire project took just a couple of hours, compared to the days it would have required if I'd tackled it solo. Honestly, I probably wouldn't have attempted it at all without AI assistance; I simply don't have days to dedicate to side projects like this.
This experience highlighted something important: AI-assisted development doesn't just make developers more efficient, it enables exploration of projects that would otherwise remain undone. It opens up possibilities for side quests and experiments that time constraints would normally prohibit.