
What Breaks When You Put a 25-Year-Old RTS in a Browser Tab

April 15, 2026
OpenRA running in the browser — a Red Alert base with the sidebar visible, compiled to WebAssembly
TLDR: We ported OpenRA — a full C#/.NET real-time strategy engine with its own renderer, audio pipeline, Lua scripting VM, and networking protocol — to run in the browser, without an installer, from static files. Skirmish and campaign launch both work, and saves persist across reloads. To get there, we had to build real browser-specific test infrastructure, which turned out to be as important as any of the individual technical fixes. The last open problem is a multiplayer fog rendering bug that's unlike anything else we solved, and that difference is worth explaining.

Two decades of desktop assumptions, now in a browser tab

OpenRA is an open-source re-implementation of the classic Westwood RTS games — Command & Conquer: Red Alert, Tiberian Dawn, and Dune 2000. The real OpenRA engine is now running in a browser tab via WebAssembly: the same C#/.NET codebase that's been evolving for 20 years, compiled to Wasm, booting from static files, running campaigns with full Lua-scripted missions, and persisting saves across reloads. No installer, no launcher — just a URL.

Getting there took longer than expected, and the reason it took longer is the same reason it was worth writing about. A desktop game engine that's been in active development for two decades has had time to accumulate assumptions about its environment: that it has a native filesystem, that networking means raw TCP, that the renderer speaks OpenGL, that scripting means a native Lua binary, that the audio subsystem can write to hardware. None of those are true in a browser, and the game never had any reason to surface them as assumptions — on desktop, they were just facts.

This is a story about finding those assumptions, one by one, and what happens when you find one you can't fully replace.

The infrastructure that made everything else possible

Before getting to any of the individual technical problems, there's something worth saying about how you find 20 years of hidden assumptions in the first place.

Early in the project, testing the browser build meant manually reloading a tab, clicking around, and hoping. That approach has a ceiling. Once we started building proper browser-specific test infrastructure — harnesses covering static boot, shellmap startup, skirmish launch, mission discovery, audio runtime, save and reload, move-order synchronization, disconnect handling, and timed fog regression — the nature of the work changed. Bugs became reproducible, with shapes you could actually investigate.
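The harness layer itself doesn't need to be elaborate. As an illustration only (the names and structure here are assumptions, not our actual harness), the core of such a runner is just ordered, named checks and a failure report:

```typescript
// Minimal browser-test runner sketch: named async checks run in order,
// with failures collected instead of aborting the whole suite.
// Check names are illustrative stand-ins for real browser probes.

type Check = { name: string; run: () => Promise<void> };

async function runChecks(checks: Check[]): Promise<string[]> {
  const failures: string[] = [];
  for (const check of checks) {
    try {
      await check.run();
    } catch (err) {
      failures.push(`${check.name}: ${(err as Error).message}`);
    }
  }
  return failures;
}

// Stub checks standing in for real probes (static boot, skirmish launch, ...).
const suite: Check[] = [
  { name: "static-boot", run: async () => {} },
  { name: "skirmish-launch", run: async () => { throw new Error("tab hung"); } },
];

runChecks(suite).then((failures) => console.log(failures));
```

The value isn't in the runner; it's that every assumption described below got a named, repeatable check the moment it was found.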

By the time the harder problems arrived, we had structured diagnostics and automated regression checks rather than guesswork. That transition is what made the later phases tractable, and it's why the test infrastructure deserves to come first in this account rather than last.

The first assumption: the filesystem

Before anything else could happen, the game needed to boot reliably from static files. That sounds simple, but it wasn't.

The early browser build mixed browser-side asset staging with .NET-side file usage in a way that created races. The virtual filesystem (the layer OpenRA uses to abstract over its content files, mods, shaders, and settings) was being populated in parallel with the engine initializing against it. Sometimes things worked. Sometimes the engine reached for a file that hadn't been staged yet and hung silently, looking to the user like a dead tab.

The fix was straightforward once the problem was named: browser-side staging had to complete fully before engine initialization ran. Version file, shaders, settings, content, mods — all written to the virtual filesystem and validated before a single line of engine startup code executed. The build stopped behaving like a partially staged runtime and started behaving like a real static bundle.
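The ordering itself can be sketched in a few lines. All names here are hypothetical (the real staging layer lives in the browser/.NET glue); the point is the control flow: stage every file, validate, and only then hand control to the engine.

```typescript
// Sketch of strict staging-before-boot ordering. No engine code runs
// until every required file is staged and verified in the virtual filesystem.

interface VirtualFs {
  write(path: string, data: Uint8Array): Promise<void>;
  exists(path: string): Promise<boolean>;
}

async function stageAndBoot(
  vfs: VirtualFs,
  files: Map<string, Uint8Array>,   // version file, shaders, settings, content, mods
  bootEngine: () => Promise<void>,  // engine entry point; must run last
): Promise<void> {
  // Stage everything first...
  for (const [path, data] of files) {
    await vfs.write(path, data);
  }
  // ...then validate that nothing is missing...
  for (const path of files.keys()) {
    if (!(await vfs.exists(path))) {
      throw new Error(`staging incomplete: ${path}`);
    }
  }
  // ...and only then let the engine initialize against the VFS.
  await bootEngine();
}
```

Before this ordering was enforced, the write loop and engine startup effectively raced; the fix amounts to making the await chain explicit.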

It doesn't sound exciting, but every subsequent assumption was only reachable because this one got solved.

The second layer: rendering assumptions

Once the engine was booting deterministically, the next failures were rendering problems, and not subtle ones.

Menu text rendered with letters drifting vertically and inconsistent baselines, in a pattern that looked like alternating caps. The browser's canvas text API returns glyph metrics differently from a native font rasterizer, and the engine was reading bearings incorrectly, cropping glyph bitmaps to the wrong bounds, and stacking the results on screen. Mouse clicks were also landing in the wrong place (sometimes well off from where the cursor was) because the browser's coordinate system for input events doesn't automatically match the canvas coordinate system once any scaling or fit-to-container behavior is involved. OpenRA was reading raw event coordinates as if the canvas were always full-window.
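The input fix reduces to a small transform. A sketch, with names assumed (the real code sits in the engine's input layer): map client-space event coordinates through the canvas's on-page rectangle into the canvas's internal pixel space.

```typescript
// Map a mouse event's client coordinates into canvas pixel coordinates,
// accounting for CSS scaling (canvas internal size vs. displayed size).
// This mirrors the standard getBoundingClientRect correction.

interface Rect { left: number; top: number; width: number; height: number }

function toCanvasCoords(
  clientX: number,
  clientY: number,
  rect: Rect,          // canvas.getBoundingClientRect() result
  canvasWidth: number, // canvas.width  (internal pixels)
  canvasHeight: number // canvas.height (internal pixels)
): { x: number; y: number } {
  return {
    x: (clientX - rect.left) * (canvasWidth / rect.width),
    y: (clientY - rect.top) * (canvasHeight / rect.height),
  };
}
```

A 1280-pixel-wide canvas displayed at 640 CSS pixels doubles every offset; reading raw clientX, as the early build did, puts every click at half the distance it should be from the corner.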

What these bugs have in common is that the desktop version had never needed to think about them. Native font rasterizers return the metrics the engine expects. Native input systems give you coordinates in the right space. The browser does neither automatically. Finding each one meant reading the browser's actual behavior rather than inheriting the desktop's.

The third layer: scripting

Campaign missions in OpenRA are driven by Lua scripts. The desktop build uses a native lua51-style binding. The browser has no native runtime.

The fix was switching the browser path to MoonSharp (a managed Lua implementation written in C#) behind an Eluant-compatible API shim. From the engine's perspective, the scripting interface looked the same. Under the hood, Lua was now running entirely in managed code that could compile to WebAssembly without any native dependencies.
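The actual shim is C# and isn't shown here; purely to illustrate the pattern, here is the same shape in TypeScript, with a toy evaluator standing in for MoonSharp. The engine codes against one interface, and which Lua backend sits behind it is a build-time detail.

```typescript
// Adapter-pattern sketch of the Lua runtime swap. Names are illustrative.

interface LuaRuntime {
  doString(chunk: string): unknown;
}

// Desktop path: would wrap the native lua51-style binding (stubbed here,
// since no native runtime exists in this environment).
class NativeLua implements LuaRuntime {
  doString(chunk: string): unknown {
    throw new Error("native runtime unavailable");
  }
}

// Browser path: a fully managed implementation with no native dependency.
// A trivial "return a + b" evaluator stands in for a real Lua interpreter.
class ManagedLua implements LuaRuntime {
  doString(chunk: string): unknown {
    const m = chunk.match(/^return (\d+) \+ (\d+)$/);
    return m ? Number(m[1]) + Number(m[2]) : undefined;
  }
}

function makeRuntime(isBrowser: boolean): LuaRuntime {
  return isBrowser ? new ManagedLua() : new NativeLua();
}
```

Because mission scripts only ever see the interface, none of them needed to change when the backend did.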

This was a single phase with a clean before and after: campaigns either failed at script runtime startup, or they didn't. But it's worth noting because it shows the depth of what "porting to the browser" means for a mature engine — not just the renderer and the networking, but every native dependency, including the ones that look like minor implementation details until you're in a runtime where they don't exist.

The networking architecture

The multiplayer requirement was browser-to-browser play, no dedicated gameplay server, using the actual OpenRA game logic. The architectural question was how much of the existing network layer to keep.

The decision was to replace only the transport. OpenRA's game layer (its lobby, tick synchronization, and protocol) stays intact. What changes is the layer underneath: instead of raw TCP sockets, the browser build uses WebRTC data channels for the actual gameplay connection, with room discovery happening through a signaling server. The game engine doesn't see the difference between TCP and a WebRTC data channel — it just sees a connection.
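The shape of that swap, sketched under assumed names: the game layer codes against a minimal connection interface, and only the implementation behind it changes. An in-memory loopback shows the contract; a WebRTC-backed transport would implement the same interface by wiring send() to RTCDataChannel.send() and onMessage to the channel's "message" event.

```typescript
// Transport abstraction sketch: the game layer sees only send/onMessage.
// Names are assumptions, not OpenRA's actual types.

interface GameTransport {
  send(order: Uint8Array): void;
  onMessage(handler: (order: Uint8Array) => void): void;
}

// In-memory loopback pair: two connected endpoints, no network,
// useful for exercising the game layer above the transport.
function loopbackPair(): [GameTransport, GameTransport] {
  const handlers: Array<(o: Uint8Array) => void> = [() => {}, () => {}];
  const make = (self: number, peer: number): GameTransport => ({
    send: (order) => handlers[peer](order),
    onMessage: (h) => { handlers[self] = h; },
  });
  return [make(0, 1), make(1, 0)];
}
```

Swapping raw TCP for a data channel then touches only this layer; the lobby and tick-synchronization protocol above it never see the difference.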

This is harder to implement than rebuilding from scratch in some ways, because you're constrained by what the existing protocol assumes. But it's more defensible: you're not reimplementing 20 years of netcode, you're replacing one layer of it. Whether that layer holds up under the full range of real-world conditions is exactly what the last open problem is about.

The assumption that won't close

Every bug described so far had the same shape: a desktop assumption that broke in a new environment, an investigation that identified it, and a fix that replaced the broken assumption with browser-appropriate behavior. The fog of war rendering bug is different, and the difference is what makes it interesting.

During active multiplayer play, the shroud (the layer that hides terrain and units you haven't scouted) can visually corrupt. One player appears to see terrain they shouldn't — but it's not a clean map reveal. Tooltip text still reads "Unrevealed Terrain." Enemy bases don't appear. The underlying game state is correct while the visual output isn't.

The obvious culprits were ruled out: no reveal-map option accidentally enabled, no host/guest game-state ownership mismatch, no simple renderPlayer drift. What investigation confirmed is that this is a browser render-layer invalidation problem: the resource layer could leak ore deposits under hidden terrain, and the WebGL compositing pipeline was retaining stale or leaked visual content that the game logic had correctly marked as hidden.

On desktop, game state and render state are tightly coupled by design: if the engine marks something hidden, the renderer shows it hidden, with no layer in between that can disagree. In the browser, the WebGL pipeline has its own compositing and invalidation behavior sitting between the engine and the screen. The game can be right while the browser still shows something else. That gap doesn't exist on desktop, which is why two decades of development never produced a defense against it.

The automated fog regression tests pass at idle and under timed conditions (up to 300 seconds, in separate browsers and same-browser two-tab runs), which means the bug isn't simply "wait long enough." It requires an interaction trigger — camera movement, production sidebar activity, building placement, a focus change during play. Something specific is invalidating a render layer the game doesn't know about.

The earlier assumptions were hidden because the desktop had answered them before anyone had to ask. This one is harder: the desktop never had a compositing pipeline with its own cache invalidation logic between the game and the screen, so there was nothing to find. It's a browser-native failure mode, and closing it means understanding the browser's rendering behavior under interaction at a level of detail the rest of the port didn't require.

That's where this stands: the assumption archaeology got a long way, but the last assumption is the one the browser introduced rather than the one the desktop hid.

A note on the assets

There's a reason a second game, OpenHV, runs in parallel with the Red Alert build.

OpenRA's engine is open source. Red Alert's original content and assets are not. Distributing them the way the OpenRA project does is a tolerated risk, not a granted authorization, and hosting EA-derived assets on a commercial product domain is legally risky regardless of how the engine is licensed.

OpenHV is an entirely open-content standalone project built on the same engine. The browser port works for it too: boot, shellmap, skirmish, static export. For any path toward a publicly hosted product, it's the safer foundation.

The Red Alert build is the main technical achievement, and OpenHV is why it has somewhere to go.