MVVM for Game Tools

The MVVM (Model-View-ViewModel) software pattern is a powerful architecture that can benefit game tool development, particularly in the area of usability. MVVM separates the UI from the underlying data through a kind of interface class, the view model. The view model handles all manipulation of the model (the data) and serves that data to the view (the UI) in a form the view can understand. Because of this added layer, the view can be written with the end-user in mind rather than in whatever way best fits the data.
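
As a rough sketch of that split (in plain Python rather than C#/XAML, purely to show the shape of the pattern; the Unit and UnitViewModel names are made up for the example), the view model wraps the raw model and exposes only what the view needs:

# Minimal MVVM-style sketch; class names are hypothetical.

class Unit:
    """The model: raw game data, stored however the engine prefers."""
    def __init__(self, name, hp, max_hp):
        self.name = name
        self.hp = hp
        self.max_hp = max_hp


class UnitViewModel:
    """The view model: exposes the data in the form the view wants to show."""
    def __init__(self, unit):
        self._unit = unit

    @property
    def display_name(self):
        # The view binds to this string; it never touches the model directly.
        return self._unit.name.title()

    @property
    def health_percent(self):
        # Presentation-friendly value for a health bar widget.
        return 100.0 * self._unit.hp / self._unit.max_hp

    def apply_damage(self, amount):
        # All manipulation of the model goes through the view model.
        self._unit.hp = max(0, self._unit.hp - amount)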

You might think that adding this middle layer would complicate development and stretch out schedules, but in fact it allows the tool's interface and the complex interactions behind it to be developed in parallel by the people best suited to each. Splitting the work between an engineer and a designer lets the engineer focus on the hard programming tasks while the design expert works out the hard usability and design problems; the two come together in the glue of the view model. This takes the time-consuming task of iterating on the UI with end-user feedback off the programmer's shoulders and puts it onto someone whose role is much closer to that of the end-users themselves.

Facilitating this relationship between tool designer and software engineer requires development tools that can bring the two worlds together. Fortunately, WPF, the most common framework for developing MVVM-based applications, has the tools to do so. WPF is a .NET UI library and a replacement for the older WinForms library. It uses a declarative, XML-based language called XAML. Because of its declarative nature, XAML can be created easily with a graphical editor such as Expression Blend, which is well suited to designers and works almost seamlessly with Visual Studio.

MVVM lets programmers and designers come together in a way they never have before, shortening development time and increasing tool quality by putting each task into the hands of the person best suited to it. Leveraging this power will give game developers the ability to create better interfaces for editing game data, and improve the development process overall.

Light Table is now on Kickstarter

Chris Granger's Light Table began as a concept video for a dynamic-language IDE, building on Bret Victor's ideas about making the coding experience more focused and more tightly integrated with the final product.

The project is now on Kickstarter, seeking $200k in funding to build an open-source IDE for Clojure and JavaScript. If it reaches $300k, the team will add plug-in support for Python, too. Very exciting stuff!

(Video: Light Table – a new IDE, from Chris Granger on Vimeo.)

Using XML Schema for Tools

XML is the de facto standard for game data, at least for intermediate data. Just about every programming language in common use has robust libraries for reading, writing, and manipulating XML data. There are also some interesting general-purpose tools for XML that not many people know very well, such as XSLT (which I’ve discussed previously) and XML schema.
XML schema is most often associated with error checking (AKA validation), and that is a very powerful feature. XML schema acts as a data definition language for well-formed XML data, so once your schema is defined you can test your XML files against it to make sure the data follows the definition. This is particularly useful where the data may be hand edited (a big no-no), where corruption can occur, or where the definition of the data is in flux.
When the data definition is changing while the data is being worked on (we’ve all been there), you could use a versioning scheme with versioned schema files to fix your data automatically, removing fields that are obsolete and adding new ones with the default values defined in the schema.
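As a sketch of what the validation step can look like in a pipeline script (Python using the third-party lxml package; the file names are hypothetical):

# Validate exported game data against a schema before it enters the pipeline.
# Assumes the lxml package is installed; units.xml and units.xsd are placeholders.
from lxml import etree

schema = etree.XMLSchema(etree.parse("units.xsd"))
doc = etree.parse("units.xml")

if not schema.validate(doc):
    # error_log collects every violation, not just the first one.
    for error in schema.error_log:
        print(f"{error.filename}:{error.line}: {error.message}")
    raise SystemExit("units.xml does not match units.xsd")
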
And that’s not all.
The second, and possibly more interesting, use for schema is to attach metadata to types with the appinfo element inside an annotation. The appinfo element can contain any well-formed XML and can be read directly by your application, which is useful for customizing your editor to display the right control for each data element being edited. For instance, you might have something like this:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:ra="http://www.roboticarmsoftware.com"
           targetNamespace="http://www.roboticarmsoftware.com">
  <xs:complexType name="Unit" abstract="false">
    <xs:annotation>
      <xs:appinfo>
        <attribute name="Name" editor="TextField"/>
        <attribute name="Type" editor="EnumList" values="Infantry,Artillery,Hovercraft"/>
        <attribute name="HP" editor="Slider" range="1,100"/>
      </xs:appinfo>
    </xs:annotation>
    <xs:attribute name="Name" type="xs:string" default="None"/>
    <xs:attribute name="Type" type="ra:UnitType" default="Infantry"/>
    <xs:attribute name="HP" type="xs:integer" default="0"/>
  </xs:complexType>
</xs:schema>

In our code, we’d read the appinfo and initialize the appropriate control for each property of the Unit type as defined above. That way, instead of showing generic property-editing controls, we can truly customize the editor to fit the data being edited. This is just a simple example, but you can easily come up with much more interesting (and hopefully more useful) applications of the concept. You can find more detailed information on the W3Schools and MSDN websites.
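Here is one way that appinfo-driven setup might look in a Python tool, assuming the schema above is saved as units.xsd; the widget choices are purely illustrative:

# Read editor hints from the schema's appinfo block and pick a control per attribute.
# The schema file name and the widget mapping are hypothetical.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def load_editor_hints(schema_path, type_name):
    """Return {attribute name: hint dict} for one complexType in the schema."""
    root = ET.parse(schema_path).getroot()
    hints = {}
    for ctype in root.iter(f"{XS}complexType"):
        if ctype.get("name") != type_name:
            continue
        for attr in ctype.iter("attribute"):  # plain elements inside xs:appinfo
            hints[attr.get("name")] = dict(attr.attrib)
    return hints

def build_controls(hints):
    """Map each editor hint onto a (hypothetical) widget in the tool's UI."""
    for name, hint in hints.items():
        if hint["editor"] == "Slider":
            low, high = map(int, hint["range"].split(","))
            print(f"{name}: slider from {low} to {high}")
        elif hint["editor"] == "EnumList":
            print(f"{name}: dropdown with {hint['values'].split(',')}")
        else:
            print(f"{name}: plain text field")

build_controls(load_editor_hints("units.xsd", "Unit"))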

Some Code Organization Patterns

Lately I’ve been settling into a new job over at Neversoft. There are some awesome folks over there, and I’m really enjoying it so far. Along with starting a new job comes learning a completely different codebase, which can be especially arduous for tools folks, since tools code typically sits atop a mountain of engine, pipeline, and foundation code.

In trying to wrap my head around an entirely new chunk of tech, I keep re-discovering patterns that make it easier to get your bearings on a lot of new code quickly. There are lots of patterns that studios follow when organizing their code, and following them can make it easier to dive in and start getting work done (or just make getting work done easier in general). Some or all of these may be obvious to experienced engineers, but I figure it never hurts to reinforce best practices, and you never know when someone will have the exact opposite opinion for really interesting reasons.

Maintain just a handful of high-level solutions so it’s easy to gain a grand perspective.

The lower the solution count in your project, the better, and ideally they should all live in the top-level folder of your code tree. The key here is to create awareness of the major chunks of technology in your project. I think most people agree that the bar should be low for any engineer to get in and look at tools, engine, or game code. The more you hide solutions within your code tree, the more arcane knowledge is required just to know who the major players in your codebase are.

Direct all compiler output to a single folder.

Nothing hurts broad searches more than having large binary files mixed in with the source you are trying to search; it’s probably the reason Visual Studio ships with preconfigured laundry lists of source-code file filters in its Find in Files tool. If you redirect all compiler output to its own root folder, broad searches get orders of magnitude faster, since they don’t have to wade through compiler data.

If your compiler output is directed to a separate, dedicated folder, then doing a clean build is simply a matter of destroying the output folder and re-running your build. Explicit cleans are slower, and it’s easier to delete a folder when scripting things like build-server operations, as sketched below.
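
A minimal sketch of that clean step, assuming the output folder is named build; the solution name and msbuild command line are placeholders for whatever your project actually uses:

# Clean build = delete the single output folder, then rebuild.
# "build" and the msbuild invocation below are placeholders for your own setup.
import shutil, subprocess
from pathlib import Path

output_dir = Path("build")
if output_dir.exists():
    shutil.rmtree(output_dir)          # the whole "clean" step

subprocess.check_call(["msbuild", "Tools.sln", "/m"])  # rebuild from scratch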

Code generated via custom build steps counts as compiler output, too! Add your output location as an include path and #include the generated code, even .c/.cpp files. Doing this keeps a very clear distinction between generated code and code that belongs in revision control (and hopefully you aren’t storing generated code in revision control!).

Keep 3rd party library code and solutions separate.

A big part of effectively searching through your codebase is being able to differentiate your code from external library code. Littering 3rd party libraries in with your own code can muddle search results.

Frequently it’s not necessary to clean-build both 3rd party code and your project code, so having separate solutions can save time. It also makes search-and-replace operations safer within solutions that contain only your project code (you don’t want to search and replace inside a 3rd party lib, do you!?).

Install large 3rd party SDKs directly onto workstations.

Revision control isn’t the only software delivery mechanism on the planet. Nobody should be making changes within the CellSDK, DirectX SDK, or FBX SDK, so those packages shouldn’t be checked into revision control. They tend to be very easy to script for unattended installation (msiexec), which makes it easy to write a simple SDK checkup script to ensure that any given client (even a build server) has the latest kit installed.

Most large SDKs set environment variables that make them easy to find on the system, and even if they don’t, you can typically assume where they should be installed. If one is missing, it’s a simple thing to track down and install it (even for junior or associate engineers). Also, it never hurts to add compile-time asserts to validate that the code is being built against the correct version of those libraries.
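
A minimal checkup script along those lines might look like this; the environment variable names and installer paths are illustrative, not a prescription:

# Check that required SDKs are present; kick off an unattended install if not.
# The variable names and the network-share installer paths are made up for
# illustration; use whatever your SDKs actually define.
import os
import subprocess

SDKS = {
    "DXSDK_DIR": r"\\fileserver\sdks\DXSDK.msi",       # hypothetical path
    "FBX_SDK_ROOT": r"\\fileserver\sdks\FBXSDK.msi",   # hypothetical path
}

for env_var, installer in SDKS.items():
    if os.environ.get(env_var):
        continue  # the installer already ran on this machine
    print(f"{env_var} not set; installing {installer} unattended")
    subprocess.check_call(["msiexec", "/i", installer, "/qn"])  # /qn = no UI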

If you happen to develop on a system with a package manager, those are awesome for pulling 3rd party libraries straight off the internet. Microsoft’s CoApp project aims to do just that on Windows.

Only check in binaries of what you cannot easily compile.

The fewer compiled binaries you check in, the better your revision control will perform, and everyone you work with is served better when revision control works well. Source code is much quicker to transfer and store on servers and peers. Not checking in compiled binaries means less waiting on transfers, less locking on centralized servers, and less long-term size creep in distributed repositories.

Checking in built versions of libraries will create a headache for yourself in the future when you want to deploy a new compiler or support a new architecture (which will require you to recompile using a bunch of crusty project files that haven’t been touched in months or years). It’s always worth a little extra time, when adding a new external library, to take command of its build configuration. Sometimes that means making your own project files instead of using the ones included with the library source. High-level build scripting tools like Premake, CMake, and boost::build are worth spending time to learn, and can make hand-creating IDE-specific projects seem archaic. If updating external libraries in your engine is easy, you will do it more often, and so reap the benefit of more frequent fixes and improvements you don’t have to write yourself.

This article was also posted to AltDevBlogADay.

Data Driven is Half The Battle

I was recently invited to do a talk at Game Forum Germany, and the talk I gave was called “Data Driven Is Half the Battle.” I’ve made the slides available on my website if you would like to take a look.

The purpose of the talk was to show that making game systems data driven is not the end of the road to making your game configurable, especially when you want the rest of your team to be able to edit those configuration files. Formats like XML and JSON are awesome, but by design they lack any context for the properties and values they control. This is a good thing from a programmer’s perspective, since it means we get to define the meanings of properties and their valid values, but a bad thing from the perspective of someone who has to edit those files. Either the system needs to be really well documented, or you need to create a tool that ensures editors can only supply valid values.

Maintaining these tools can become a huge pain in the ass, though, especially when features or data modules are being added frequently.

My proposal to fix this was to use reflection, either custom-coded in C++ or provided by the language you’re using; it’s the approach I have the most experience with and am most comfortable using. Interestingly, fellow Toolsmith Geoff Evans has an article in Game Developer this month about using reflection in Helium, which is worth checking out if you’re looking to implement this sort of behavior.
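
To give a feel for the idea in a language that ships with reflection, here is a small Python sketch; the Weapon class and its fields are invented for the example:

# Use the language's own reflection to drive a generic data editor: the editable
# fields, their types, and their defaults all come from the class itself, so new
# fields show up in the editor with no extra tool code. Weapon is a made-up example.
from dataclasses import dataclass, fields

@dataclass
class Weapon:
    name: str = "Unnamed"
    damage: int = 10
    reload_time: float = 1.5

def describe_editable_fields(cls):
    for f in fields(cls):
        print(f"{f.name}: type={f.type.__name__}, default={f.default}")

def set_field(obj, field_name, raw_value):
    """Coerce a string coming from the editor UI into the field's declared type."""
    field_types = {f.name: f.type for f in fields(obj)}
    setattr(obj, field_name, field_types[field_name](raw_value))

describe_editable_fields(Weapon)
w = Weapon()
set_field(w, "damage", "25")   # the editor hands us strings; reflection fixes the type
print(w)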

However, this is not the only solution, especially if you’re moving data between multiple systems and/or multiple languages. In that case a data definition system might be more worth your while, especially if you can use the data definition to dynamically load the class it specifies (which is possible in dynamic or duck-typed languages).
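
For instance, a Python sketch of that definition-driven loading might look like this; the Tank class and the definition are invented, and in a real pipeline the class would live in its own module:

# Instantiate whatever class a data definition names, then populate it from data.
# Tank is defined inline so the sketch is self-contained; a real tool would pull
# the class from another module with importlib.import_module().
import importlib  # used when the class lives in another module

class Tank:
    def __init__(self):
        self.hp = 0
        self.speed = 0

definition = {"class": "Tank", "fields": {"hp": 150, "speed": 4}}

def resolve_class(qualified_name):
    if "." in qualified_name:
        module_path, _, class_name = qualified_name.rpartition(".")
        return getattr(importlib.import_module(module_path), class_name)
    return globals()[qualified_name]  # inline stand-in for this sketch

cls = resolve_class(definition["class"])
obj = cls()
for field, value in definition["fields"].items():
    setattr(obj, field, value)

print(obj.hp, obj.speed)  # 150 4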

No matter what, the key takeaway of the talk was twofold: 1) make it easier for people to modify data, and everyone will be happier; and 2) make it easier for your programmers to do so, and they’ll do it more frequently, with fewer bugs, which also makes everyone happier.

Common Problems: Preserving Atomic Changes When Checking In Builds

One of the things I’d like the Toolsmiths to be is a place where we can discuss our common problems, and hopefully come up with common solutions. Toward that end, I’m starting a new series on the blog called “Common Problems”, and I’m kicking it off with something that I’ve seen as a common problem recently.

We all know the benefits of continuous integration and/or nightly builds. What I’ve found problematic, though, is when distributing that build to the rest of the team means checking it into source control, specifically into the same directory that other team members use to do their work. This setup is beneficial in many ways. This directory, which we’ll call the “data” directory, is basically a snapshot of the project: team members pull from it, and it contains the most recently compiled executable plus all the configuration, data, and art files needed to run the game. They can then easily change anything in the directory, test, and commit. It’s quick, easy, and painless, for the most part.

Generally artists and designers check out only the “data” directory, make their changes, and check back in so that everyone can partake. If they’re good artists and designers, they make sure their changes work before checking in, and everything they’ve worked on becomes an atomic commit in any modern source control system. Since they’re not editing the executable, these changes almost always remain atomic.

Coders, however, check out both the “data” and the “code” directories. They will frequently edit the code and the data to get something working and, after testing, check in both directories atomically. Here’s the problem, however: there is a window between when a coder checks in new code and when the build machine checks the executable built from that code into the data directory. During this time there is a disconnect between the executable and what’s in the data directory. In the best-case scenario this doesn’t affect the team in any significant way; worst case, the game crashes because expected data has changed or been removed. Again, the best case there is that someone realizes it’s just a disconnect in the data and waits for the next build; the worst case is that an erroneous bug gets filed and someone actually spends time trying to solve it.

I’ve tried to come up with possible solutions for this, but only have half answers:

  1. Do not build continuously, and instead have programmers check in builds whenever they change the executable. This can be accomplished by setting the build’s target directory to your data directory. The downside is that, on large teams, it would become a race to check in your executable before others do. In addition, a careless coder could stomp another’s executable changes; that would be hard to do, but not impossible.
  2. Hold check-ins to the data directory that accompany code changes until the build is complete, and then check them in. This can be problematic because, if the same data changes while the build is running, the source control server will reject the change. Furthermore, coders who pull during this window will get the code but not the data. This is also extremely hard to implement.

What are your solutions for this problem? Do you have this problem? Why or why not?

Premake 4.3

Industrious One has announced the availability of the next major release of its excellent build configuration tool, Premake. The announcement and download link are here. Premake is a BSD-licensed, open-source, Lua-based, cross-platform tool for generating IDE projects and Makefiles.

Premake lets you define common settings at the solution level and add configuration-specific settings based on wildcards. For example, I can define WIN32 as a common preprocessor symbol, but have UNICODE defined only for configurations whose names match “*Unicode”. Premake can be a huge help in managing the combinatorial explosion of build configuration settings (ASCII/Unicode, Debug/Release, Win32/x64).

Premake supports generating PS3 and Xbox 360 Visual Studio solutions, but version 4.3 is still missing a couple of things game developers need to handle every scenario: generation of projects that call out to make, and projects with custom build steps (for shaders, assembly, and code-generating scripts). Support for these is planned for subsequent releases, and there are already some patches to evaluate. Premake itself is simple to download and build (it’s hosted on BitBucket). If you do decide to take the plunge and switch to Premake, you will find starkos (the project’s maintainer) to be very courteous and responsive.

If you deal with build configuration at your studio, you owe it to yourself to evaluate Premake. It has vastly simplified managing our builds at WMD.