August 16, 2006

[Gaming] Why Managed Code Works For Games

Over the last few weeks, I've been trying to bring my development skills back up to where they should be. They've atrophied a bit over the last year and a half because I've been so focused on testing that I've had little time to devote to code.

During this time, I've been working my way back up with C++, Visual Basic .NET and a bit of C#, and the more coding I do, the more I remember why managed languages rock for game development. For this discussion, I'm going to focus on the .NET Framework, as it is what I am most familiar with.

Performance is the biggest thing on the minds of most game developers. A consistent, smooth framerate is important for immersion and the overall player experience.

In C++, most performance optimizations fall into one of three buckets: algorithmic optimizations, disk access optimizations, or allocation optimizations. Nowadays, it's only when you're really pushing for that last 1% that you start worrying about hand-tuned assembly.

For algorithmic optimizations, it's things like using A* instead of Dijkstra's algorithm for pathfinding, properly constructing your vertex and index buffers instead of sending everything off one polygon at a time, and utilizing the vertex and pixel shaders on the video card instead of handling everything on the CPU.

For disk access optimizations, you're looking at packing your files into "packed" files to reduce file system and security check overhead, precooking your content so that it's closer to what it would be in memory, and making sure that files that are related are close to each other and preferably in order on the media.
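A pack file can be as simple as a table of offsets followed by the raw blobs themselves. Here's a minimal sketch of that idea; the layout, class name, and entry format are invented for illustration, not any real engine's format:

```csharp
using System.Collections.Generic;
using System.IO;

// Minimal "pack" reader sketch: the header is [int count] then, per entry,
// [string name][long offset][int length]; the raw blobs follow.
// This layout is invented for illustration, not any real engine format.
class PackFile
{
    struct Entry { public long Offset; public int Length; }

    readonly Dictionary<string, Entry> index = new Dictionary<string, Entry>();
    readonly Stream stream;

    public PackFile(Stream s)
    {
        stream = s;
        BinaryReader reader = new BinaryReader(stream);
        int count = reader.ReadInt32();
        for (int i = 0; i < count; i++)
        {
            string name = reader.ReadString();
            Entry e;
            e.Offset = reader.ReadInt64();
            e.Length = reader.ReadInt32();
            index[name] = e;
        }
    }

    // One seek and one read per asset -- no per-file open call and
    // no per-file security check from the operating system.
    public byte[] Read(string name)
    {
        Entry e = index[name];
        stream.Seek(e.Offset, SeekOrigin.Begin);
        byte[] data = new byte[e.Length];
        stream.Read(data, 0, e.Length);
        return data;
    }
}
```

Ordering the entries to match read order gets you the "sequential on the media" win on top of the reduced open/close overhead.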

For allocation optimizations, you're looking at handling your large allocations early and keeping them around, potentially writing your own custom allocator, minimizing allocations per frame, etc. Allocations and deallocations are not only the biggest enemy of performance, but also the biggest cause of horrible memory leaks.
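The "allocate early and keep it around" advice can be sketched as a trivial free-list pool. This is illustrative only; a real engine would layer lifetime tracking and growth policies on top:

```csharp
using System.Collections.Generic;

// Toy free-list pool: allocate everything up front, then Rent/Return
// inside the frame loop instead of allocating per frame. Illustrative only.
class Pool<T> where T : new()
{
    readonly Stack<T> free = new Stack<T>();

    public Pool(int capacity)
    {
        for (int i = 0; i < capacity; i++)
            free.Push(new T());   // all allocation happens here, once, at load time
    }

    // Pop throws if the pool is exhausted; a real pool would grow or fail gracefully.
    public T Rent() { return free.Pop(); }

    public void Return(T item) { free.Push(item); }
}
```

The same shape works in C++ (with placement new over a preallocated arena) or in managed code, where it keeps per-frame garbage off the GC's plate entirely.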

The nice thing is that for the most part, those are the same optimization tools that you'd use for writing anything using managed code.

You're still going to be performance-bound by your algorithms, and file access is still going to be a major overhead. You'll be swapping memory leaks for another fear, though: Gen-2 collections. Of course, the allocation tracking tools available for the .NET Framework rock for finding and eliminating the stray allocations that can trigger a Gen-2 collection.

Small, common allocations aren't feared as much in the managed world. The act of allocating memory in managed code is relatively cheap. (Consider it strongly-typed malloc()-lite.) Gen-0 collections happen often, and are rarely more expensive than a page fault. The biggest headache will be your first couple of Gen-1 collections while your long-term allocations (textures, vertex buffers, etc.) get promoted up to Gen-2. (Of course, anything over 85,000 bytes gets allocated from the large object heap, so it's pretty much Gen-2 to begin with.)
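You can watch the generations in action with GC.GetGeneration. On the .NET Framework, an array over the 85,000-byte threshold comes out of the large object heap and is collected with Gen-2 from the start:

```csharp
using System;

class GenDemo
{
    static void Main()
    {
        byte[] small = new byte[100];        // ordinary small-object-heap allocation
        byte[] large = new byte[100 * 1024]; // over 85,000 bytes: large object heap

        // A fresh small object starts life in Gen-0; the large array
        // reports as Gen-2 immediately, since the LOH is collected with Gen-2.
        Console.WriteLine(GC.GetGeneration(small));
        Console.WriteLine(GC.GetGeneration(large));

        GC.Collect(); // a full collection promotes the small survivor a generation
        Console.WriteLine(GC.GetGeneration(small));
    }
}
```

Running something like this under the CLR Profiler is a quick way to see which of your allocations are quietly headed for Gen-2.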

A lot of people fret about the small amount of extra overhead involved in making the call to native code. Of course, most of the things you'd call native code for are already in place in the framework (just a matter of making the framework work for you). However, if you're calling native code that often, ask yourself this: Do I really have to be that chatty with the API? Am I batching my drawing, or just sending it poly-by-poly? Am I polling the XInput controllers multiple times per frame, or am I getting the info once and caching it for the frame? It's the same things you'd be looking for when optimizing native code.
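The "poll once, cache for the frame" idea looks like this. PollController and PadState are invented stand-ins for whatever native call you'd actually make (for example, XInputGetState via P/Invoke):

```csharp
// Sketch of per-frame input caching. PollController() is a placeholder for
// a native call such as XInputGetState via P/Invoke; PadState is invented.
struct PadState { public bool APressed; public float LeftStickX; }

class InputCache
{
    PadState current;

    // Called exactly once, at the top of the frame.
    public void BeginFrame() { current = PollController(); }

    // Everything else in the frame reads the cached copy:
    // no extra managed-to-native transitions.
    public PadState State { get { return current; } }

    static PadState PollController()
    {
        PadState s;
        s.APressed = false;  // placeholder: a real version would P/Invoke here
        s.LeftStickX = 0.0f;
        return s;
    }
}
```

One transition per frame instead of one per query is exactly the batching discipline you'd apply to draw calls.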

So far, it sounds like things are pretty close to equivalent, but there are other benefits as well, most of which are focused on helping the developer. It's great to be able to get reliable stack traces because you don't have to worry about stack corruption, bad pointers, etc. You don't need to worry about buffer overflows, incorrect typecasts, goofy C++ function signature mismatches in libraries, and the list goes on.

Of course, the benefits have some drawbacks. If your object uses unmanaged resources (like file system handles or graphics resources), you have to remember to implement and use the IDisposable interface, because unlike C++, you can't rely on an object's destructor being called the moment it goes out of scope. (It actually won't be called until the object is collected by the GC, but if you properly dispose of the object and suppress the finalizer at the end of the Dispose call, there's no performance drop compared to C++.) Strings require some forethought if they are going to be integral to your codebase, because they can't be changed after they are created. Calling native APIs is discouraged, but can be done, and there are times when it is necessary (QueryPerformanceCounter and QueryPerformanceFrequency, for example). (P/Invoke is your friend...)
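The standard Dispose pattern described above looks like this. Texture and its native handle are invented stand-ins for any object wrapping an unmanaged resource:

```csharp
using System;

// The standard Dispose pattern. Texture and nativeHandle are invented
// stand-ins for any wrapper around an unmanaged resource.
class Texture : IDisposable
{
    IntPtr nativeHandle = new IntPtr(1); // pretend: a handle from native code
    bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // No finalizer work left to do, so the GC never has to promote this
        // object just to run one -- this is where the C++ parity comes from.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        nativeHandle = IntPtr.Zero; // release the unmanaged resource
        disposed = true;
    }

    ~Texture() { Dispose(false); } // safety net if Dispose is never called
}
```

Wrapping the object in a using block (`using (Texture t = new Texture()) { ... }`) guarantees Dispose runs at the end of the block, which is the closest managed analogue to a C++ destructor firing at scope exit.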

Expect a brief ramp-up if you decide to move to managed code. However, the ramp-up will be worth it. After all, if the team at GarageGames can port their best-selling Xbox Live Arcade title to managed code in three weeks and have similar performance running unoptimized ported code on unoptimized beta libraries, chances are other teams will have similar experiences.

1 comment:

Michael Russell said...

Given that .NET managed code is all JIT-compiled, the JIT compiler provides some benefits of its own, and the .NET Framework provides still more. This was just the first article in a series, so you'll see more benefits coming down the pike.