This ongoing series delves more deeply into each of the “six reasons your game development tools suck” as argued in my very first post.
A lot of clutter in a tool's user interface can be very confusing. When a user needs to scan the toolbar for a specific button to do something routine, that's time wasted. The search may also result in a context switch that causes the user to momentarily lose track of what he was doing beforehand, causing a further loss in productivity. Minimizing these effects should be a goal when designing a tool's interface, and there are at least two environments where this interface bloat tends to occur.
The first is the tool built on top of another tool. Building a tool on top of a 3D package like Max or Maya, for instance, leads to massive clutter. The host interface is already complex, and adding to it just compounds the problem.
To get around this issue in Max or Maya, you can edit a few scripts to remove some of the standard interface items that users of your tool will never use. If you're building on top of other packages, there may be customization options for removing elements there as well.
The second case is the uber-tool environment, in which all tools (outside of commercial packages) are built into the same interface. Editing UI layouts and AI behaviors in the same interface may not make the most sense, after all.
You can tackle the uber-tool issue in several ways. Try creating custom views that specify which tools are available to each user group. This is especially easy if the tools are all built on top of a plug-in architecture: simply install the correct set of plug-ins for each user. This also has the benefit of lower memory overhead, and possibly a quicker load time. On the other hand, if it's important for your organization to have a consistent interface for every user for the sake of collaboration, try creating different modes for each interface that are easy to move in and out of.
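As a rough sketch of the per-group plug-in approach, assuming one DLL per tool, a plain-text plug-in list per user group, and a hypothetical `RegisterTool` entry point that each plug-in exports:

```cpp
// Hypothetical per-group plug-in loader: one DLL per tool, one plain-text
// list of DLLs per user group, and a "RegisterTool" entry point each
// plug-in exports so it can add itself to the uber-tool's interface.
#include <windows.h>
#include <fstream>
#include <string>

typedef void (*RegisterToolFn)();

void LoadPluginsForGroup(const std::string& groupListFile) {
    std::ifstream list(groupListFile.c_str()); // e.g. "plugins_animators.txt"
    std::string dllName;
    while (std::getline(list, dllName)) {
        HMODULE module = LoadLibraryA(dllName.c_str());
        if (!module)
            continue;                          // missing tool: log and move on
        RegisterToolFn registerTool =
            (RegisterToolFn)GetProcAddress(module, "RegisterTool");
        if (registerTool)
            registerTool();                    // plug-in adds its own UI/menus
    }
}
```

Animators get `plugins_animators.txt`, designers get `plugins_designers.txt`, and so on; only the plug-ins a group actually uses ever load.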
In general, you should probably only add the most commonly used items to a toolbar and keep everything else in the menus. This will reduce clutter and let users do what they need to do quickly most of the time. Allowing more advanced users to customize the interface to their personal taste is also a good idea, since they'll have a better idea of what is easiest for them, and it keeps the default interface as simple as possible.
Industrious One has announced the availability of the next major release of its excellent build configuration tool, Premake. The announcement and download link are here. Premake is a BSD-licensed, Lua-based, cross-platform IDE project and makefile generation tool.
Premake lets you define common settings at the solution level and add configuration-specific settings based on wildcards. For example, I can define WIN32 as a common preprocessor symbol, but set UNICODE to be defined only for configurations whose names match "*Unicode". Premake can be a huge help in managing the combinatorial explosion of build configuration settings (ASCII/Unicode, Debug/Release, Win32/x64).
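Here's a sketch of that wildcard idea in a premake4.lua (the solution, project, and configuration names are mine, not from the release):

```lua
-- Solution-level settings are inherited by every project and configuration.
solution "Tools"
   configurations { "Debug", "Release", "DebugUnicode", "ReleaseUnicode" }
   defines { "WIN32" }                  -- common to everything

   -- Applies only to configurations whose names match the wildcard.
   configuration "*Unicode"
      defines { "UNICODE", "_UNICODE" }

project "AssetBaker"
   kind "ConsoleApp"
   language "C++"
   files { "src/**.cpp", "src/**.h" }
```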
Premake has support for generating PS3 and Xbox 360 Visual Studio solutions, but version 4.3 is still missing a couple of things game developers need to handle every scenario. These include generating projects that need to call out to make, and projects with custom build steps (for shaders, assembly, and code-generating scripts). Support for these is planned for subsequent releases, and there are already some patches to evaluate. Premake itself is simple to download and build (it's hosted on BitBucket). If you do decide to take the plunge and switch to Premake, you'll find starkos, who runs the project, to be very courteous and responsive.
If you deal with build configuration at your studio, you owe it to yourself to evaluate Premake. It has vastly simplified managing our builds at WMD.
One open source project I have been keeping an eye on is CoApp. Microsoft is currently paying Garrett Serack to develop an open binary and source package management platform for Windows. The goal is to provide the ease of use and flexibility of Linux-style package management on the Windows platform. This is exactly what Microsoft needs to do to keep its operating system competitive in the current climate. Anyone who has developed or broadly deployed an open source application on Windows knows the pain that could be avoided if this project succeeds.
A while back, after a talk I gave at our local IGDA chapter meeting, I got an email from a recruiter at a local game development company. He was looking to fill a position for a tools engineer and wanted to know what to look for. I never got back to him (sorry!), mostly because I didn't know what to tell him. What makes a good tools engineer? More importantly, what's going to make someone take a position in tools development over a position in, say, gameplay programming, or AI programming, or graphics programming? Is there a distinct difference?
I really wasn’t sure of the answers to these questions until recently, and it made me think a lot about… well… you’ll see.
The Fundamental Trait
In all that thinking, I believe I identified the fundamental trait that makes a good tools programmer, or at least a programmer who would rather be in tools than any other field of game development: the desire to fix things.
This sets us apart from most programmers who enter the game industry. Most game programmers want to create things. They want to point at a portion of a game and say, "You see that? I did that. I made that happen." As tools programmers, we can't really say that. I worked on Oblivion and Fallout 3, on the game team, but there's nothing in the game I can specifically point to as something I created. The closest I can get is pointing at the models and saying, "You see that model? I made the tool that made that model more memory efficient on the 360."
Game programmers want to create, and, in my experience, when you tell people inside or outside the industry that you're a game programmer, the first thing they'll ask is "So what did you do on the game?", expecting you to point to something specific. Gameplay programmers can say, "I made that system work." AI programmers can say, "I made that person move." Graphics programmers can say, "I made that thing render and look good."
Tools programmers can really only point at the team and say “I made that work better.” And honestly that’s how we want it. We take joy in oiling the machine.
The problem with the fundamental trait is that it's not what draws people to the game industry. A programmer who takes joy in oiling the machine can do so in almost any industry, and, unfortunately, the game industry offers some of the worst hours, worst pay, and worst return on emotional investment of any programming industry. So the only candidates you're going to get for tools positions are people attracted to the game industry in the first place, and to the problems it stands to solve. In my experience, this means creative people.
The Fundamental Dichotomy
Because they're creative people, game industry tools programmers not only want to oil the machine, they also want to take part in its construction. Like most good programmers, good tools programmers want to constantly try new things, learn new technologies, and generally broaden their knowledge, hopefully finding ways to improve the tool chain as a result.
The dichotomy is that, at some point, you need to stop these creative people from being creative, and earlier than you would the rest of the game team. They need to stabilize their tools for use by the team, so their code enters support mode a lot sooner than the rest of the team's, and support is the one thing creative individuals dread. It means the end of creative solutions and the start of debugging drudgery. Keeping a balance between the two is tough, but necessary, if you want to keep the best tools programmers on your team.
Achieving the Balance
In my opinion, there are a few things you can do to maintain this fundamental balance between oiling the machine and constructing it. It boils down to the roles your tools team plays during the different phases of a game's production.
During pre-production, give your tools programmers some time in full construction mode. Allow them to try new things, new pipelines, new technologies. Let them look into converting the build system to Erlang, rewriting your Win32 tool in C#, or improving the interfaces on several tools. This is a prototyping phase and should be labeled as such. Things made during this time don't need to be finished; they just need to prove that an approach will be more efficient than what's already available.
During code production, select the prototypes that make the most sense to implement, and implement them. Here, your tools programmers work very closely with the other teams to make sure all of their needs will be handled. They're still constructing tools and letting out creative energy, but the tools now have clients (the team), and the clients' needs have to be met.
As the game enters full production, the tools team needs to transition to support and minor improvements. At this point you're mostly looking at bug fixes, but all of your tools programmers should be researching how well the tools are doing "in the field" and looking for ways to improve interfaces and save other team members time, either now with minor changes or during the next phase. When code lockdown occurs, it should include the tools, and the team has fully entered support mode.
However, once code lockdown occurs, the tools programmers, provided they have time, should already be entering pre-production on their next tools. Now is the best time: the problems with their technologies are fresh in their minds, and if they fix them now, the core team can be productive with new tools the moment it moves on to pre-production.
This cycle helps keep your tools programmers creative, and helps them really point to the whole process and say “I made that work better,” which is exactly what they want.
What do others think?
Developing in-house game tools presents a myriad of debugging issues. You can't always nail bugs down to reproducible steps (if you even have QA resources to concentrate on that). Content creators will frequently complain about rare issues that force them to reboot the tools, or use bizarre workarounds when things go wrong. Remote debugging works in some of these scenarios, but it's mainly useful for crash bugs. Errant "drag and drop" or "click and drag" problems require sitting at the machine to deal with properly.
In these cases it's handy to be able to deploy a debugger onto the user's machine so you can dive in and see where your code is going wrong. To do this you need a few components: the debug symbols from the compile, the source code, and a debugger.
On Windows, debugging symbols are separate files from the executables. PDB files contain the information debuggers need to map the addresses of code and data in a running tool back to their source code counterparts. In Visual Studio, PDBs are only generated in the Debug configuration by default, so assuming you distribute something like a Release build to your users, you will need to turn on PDB generation in that configuration. It's under Linker… Debugging… Generate Debug Info; set it to Yes (/DEBUG). When you prepare and publish your tool set, make sure to include these PDBs with the executables (EXE and DLL files).
PDBs can get quite large, so it may be a good idea not to pull down PDB files every time users get the latest tools. Insomniac's tools retrieval script has command line flags to pull down PDBs and code only when we know we want to debug something on a user's machine. Using -pdb will get the executables and PDBs, and -code will get the executables, PDBs, and source (all from a label populated when the tools executables were checked in).
Once you have the PDBs and code on the machine, you just need a debugger to dig in with. On Windows you have a choice: the Visual C++ Express Editions or WinDBG (from Debugging Tools for Windows). Both are free to install, so you aren't bending any license agreements here. Visual C++ should work pretty much like you expect on your development box, but it can take a while to install and patch to the latest service pack. WinDBG, on the other hand, is very quick to install but takes a little getting used to. Typically you must show each piece of UI you want to use (callstack, memory, etc.), and you may have to set the PDB search path manually (from File… Symbol File Path). It's a very different experience, but it's so quick to deploy it may be worth checking out.
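For instance, once WinDBG is attached to the misbehaving tool, a few commands get you to a usable state: `.sympath+` adds the deployed PDBs to the symbol search path, `.srcpath` points the debugger at the matching source, `.reload` re-reads symbols using the new paths, and `k` dumps the current call stack (the paths below are illustrative):

```
.sympath+ C:\tools\pdbs
.srcpath C:\tools\src
.reload
k
```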
Many of the available source control solutions out there are great if you are a programmer. Both Subversion and Perforce handle the storage of assets adequately, but neither is very friendly to creative types. How often do "bad checkins" happen because some new and obscure file created on the user's machine didn't get added? Or because the user didn't get latest, merge the data, build the game, and test it one last time before checking everything in?
Team sizes are increasing, and so are the assets themselves. The more users stressing the system, the more fragile it becomes.
NxN's Alienbrain, for example, could generate previews of assets and show them right in its interface. It was a very slick feature and a selling point of the software. Finally, a user could see a preview of a model or texture (and many other asset types) without doing a get and opening the files in Maya or Photoshop, etc. That's a real time-saver when you don't remember the filename used for a specific asset: you can browse all the assets of that type and find the one you want pretty easily.
Like I said, though, NxN had its share of troubles. Still, I believe we can do better than the source control status quo. I imagine an asset database solution that integrates with every asset-generating tool as well as the build process, generates a preview for each asset (even if it's a bitmap that says "Preview Not Available"), and is searchable by its metadata: tags, creator, last modified, and so on.
The classic view of assets as a collection of files inside folders, with users having to know exactly which files need to be checked in and out of source control when changes are made, seems a little antiquated. Instead of searching through folders ten layers deep, how about using a tag cloud to find assets?
I imagine opening a web-based interface, searching for an animated character from an old project, and clicking a button to copy it to a new project, including all of its vertex, texture, and animation data, then using it as the starting point for a brand new character, or maybe just as a placeholder until a new character is created. How many walk cycles does one studio need to recreate every time a new project starts, anyway? Why not take something you have and modify it to fit a new character in a completely different game?
I really believe that asset databases are the wave of the future for game development. When the Xbox 360 and PS3 came along, team sizes doubled, and assets got bigger and more complex. What'll happen next time there's a hardware revolution? We need to streamline the way we manage assets, or it's going to bite us in the ass… even more.
In response to Dan’s post on when to rewrite vs. refactor existing tools, I wanted to point out what I felt was a key section:
Now comes the real decision point, though. Does a rewrite make sense for the current project, or should it be put off for a later time? If you're in beta, rewriting a tool now isn't going to help you get your game done. Consider how long a rewrite will take in man-days and calendar days. If you can get the new and improved tool into the hands of your developers fast enough to save them more time than it took to develop it, then I say go ahead.
The key point here is the suggestion that you "do the math" on the tool: figure out how much time it will take to rewrite versus refactor, and balance that against the time saved by the number of developers who use the tool.
But doing the math should be a key concept whenever you're trying to figure out anything tools related, including trying to convince higher-ups that you really need a dedicated tools team or process team. What you need to take to them is real data showing that you save more money with a tools team, or a tools refactor, than without one. So how do you accomplish this?
To answer this question, you need to know three pieces of data:
1. How many developers use your tool?
2. How much time does each developer waste because of poor design or poor implementation, or how much time would be saved if a new tool were implemented?
3. How much does each developer cost?
Numbers 1 and 3 are easy to know. Just take a quick head count, then compare experience levels against the averages for each discipline published in the Game Developer Salary Survey. Estimates here are usually fine. Using averages across the board (about $45k per year per developer, which comes out to about $22 an hour) and assuming roughly 250 working days a year, here are the numbers you'll come up with, per developer:
| Hours lost / developer / day | Cost / day | Cost / year |
|------------------------------|------------|-------------|
| 30 minutes                   | $11        | $2,750      |
| 1 hour                       | $22        | $5,500      |
| 2 hours                      | $44        | $11,000     |
You'll notice that at about four developers losing 2 hours per day, you've basically paid for another developer (4 × $11,000 ≈ $44,000 a year). Even if you have ten developers losing 30 minutes per day, you can afford an intern to fix that problem.
With that said, hours lost per day, or hours of productivity gained, is always going to be a best guess, and if you're trying to sell this concept to higher-ups, you're going to have to get that number right, or at least convince them that you got it right. The best way to do this is to have your tools gather metrics: how often they crash, time between key actions, build times, cook times, and turnaround times (a minimal logging sketch follows the list below). But that only helps if you already have a team and are looking to expand it. Otherwise, you have to rely on hearsay, but here are some techniques that should help:
- Ask other developers how much time they think they lose on a given day because of bad tool design or performance and average those numbers. Ask for comments about how tools could improve.
- Show the time lost by other developers who are spending half of their time (or less) working on tools. If you have a bug tracker, you can use its numbers to show the amount of time spent on tools bugs. Combine these with well-known figures on the cost of task switching to show the real cost of these support requests.
- Show an unfilled developer need. If you hear people having trouble with a specific issue, ask them how much time they think they could save (on average) if a tool were made to help them. Show that it would cost less to hire a tools developer than to leave the problem unfixed.
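As promised above, here's a minimal sketch of the instrumentation idea. Everything here (the function name, log format, and event names) is illustrative; the point is just that a timestamped event log is enough to mine crash counts, cook times, and time-between-actions later:

```cpp
// Illustrative tool-side instrumentation: append timestamped events to a
// log file that can be analyzed later for the metrics discussed above.
#include <ctime>
#include <fstream>
#include <string>

void LogMetricEvent(const std::string& event, const std::string& detail) {
    std::ofstream log("metrics.log", std::ios::app);
    log << std::time(0) << '\t' << event << '\t' << detail << '\n';
}

// Usage, sprinkled at key points in a tool:
//   LogMetricEvent("app_start",    "MeshBaker 1.2");
//   LogMetricEvent("export_begin", "level01.mesh");
//   LogMetricEvent("export_end",   "level01.mesh"); // the diff is the cook time
//   LogMetricEvent("crash",        callstackText);  // from a crash handler
```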
Of course, once you've convinced the higher-ups to create a tools team, don't stop there. Show them it was worth it. Too many people stop once they have what they want and never demonstrate the real, tangible benefits. These are not always obvious, especially to people who aren't in the "pits" (doing actual development), and especially when some developers are vocal only about their frustrations with a new tool, not their increased productivity. Show the productivity gained and the money saved. That will help prove to you and your boss the value a tools team can bring.
I got an email recently asking for my advice on bug fixing versus completely rewriting a broken tool. The email described the complexity of the tool in question as being caused by the addition of new features on top of an already shaky starting point.
This sort of problem always comes down to time and money. The perception among management may be that this is going to waste time. After all, why rewrite something that seems to work fine, and if there are issues, isn’t it easier (and cheaper) to fix a few bugs than to write something from scratch?
Well, that may be true, but not necessarily. After all, buggy tools waste the time of everyone using them. If ten people waste just ten minutes per day, that's 100 minutes a day; over the course of a year-long project, at roughly 250 working days, that's over 400 hours, or more than ten weeks' worth of work lost. The actual amount of time may be much greater, of course. I've worked in studios where the tools were so slow and buggy it wasn't uncommon for individuals to lose several hours in a single day.
I knew a programmer who would write almost every piece of code twice. He would completely scrap the first implementation in favor of his second try. The first was basically a learning experience, and once he figured out how to solve the problem, he could do it much more cleanly on the second go.
Rewriting a better tool may also go much faster than the initial implementation. The team has learned from its mistakes and may have a much clearer vision of how the tool should be designed and implemented. There may also be some reusable code, so not everything is wasted; individual modules may be salvageable.
Now comes the real decision point, though. Does a rewrite make sense for the current project, or should it be put off for a later time? If you're in beta, rewriting a tool now isn't going to help you get your game done. Consider how long a rewrite will take in man-days and calendar days. If you can get the new and improved tool into the hands of your developers fast enough to save them more time than it took to develop it, then I say go ahead.
One of the key issues in game tools development is how to improve asset turnaround time: how long it takes between when an artist, programmer, writer, level designer, sound designer, or even an executive makes a change and when the results can be seen in game, or at least in engine. Just as importantly, how many other people will be affected by that change? The goal in any organization should be to make asset turnaround times as short as possible, and to let developers make and test changes in isolation before shipping them out to the rest of the team.
There are a lot of approaches to this problem, but I'm going to narrow the solutions down to three that tend to be the most efficient and should be part of a mature tools pipeline: using in-game editors as opposed to stand-alone tools, implementing dynamic resource loading and unloading (through something like a developer console), and improving communication between the game and stand-alone tools.
Right now, I'm going to focus on the third. The use of game-embedded editors versus stand-alone tool sets is an ongoing argument in the tools community, and each side has its positives and negatives, but regardless of which way you go, some of your tools are not going to be game-embedded, and it's important that any stand-alone tools be able to communicate with your game. By creating even a simple communication library, you'll be able to issue commands to the game remotely, grab and analyze information without using game resources, and smartly organize, load, and save diagnostic information that might otherwise require large amounts of special-case code in your game. By creating a slightly more complicated communication system, you can dynamically run scripts, save and load resources, and even set up a system that communicates changes to running games in seconds. Talk about turnaround time.
The key to creating a good communication library is understanding the limitations of each console, including when the console (or running game) can initiate communication with a PC and vice versa. For things other than debug output (the topic of another article), you can assume that a running tool can communicate with a running game, but not the other way around. This means the tool must initiate the communication before the console can send the necessary information back. In addition, most communication libraries perform this communication on a background thread; if yours doesn't, you should design it to do so. The last thing to keep in mind is that some commands may require a lot of data to be sent back and forth between the tool and the game, and it's advantageous to split these commands into multiple packet sends, both from the tool and back from the game. A well-defined command system will be able to specify exactly how much data will be sent and how many packets it intends to split the data across.
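As an illustration of that last point, a fixed-size header on every packet is one way to specify it (the field names here are hypothetical):

```cpp
#include <stdint.h>

// Hypothetical fixed-size header carried by every packet, so each side knows
// exactly how much data is coming and how many packets it is split across.
struct PacketHeader {
    uint32_t commandId;    // which outstanding command this packet answers
    uint32_t packetIndex;  // 0 .. packetCount - 1
    uint32_t packetCount;  // total packets the payload is split across
    uint32_t payloadBytes; // payload bytes carried by *this* packet
};
```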
So how do we go about doing this? First, consult your console's documentation on communication. On the PC, your best bet is to use named pipes. From there, the high-level design uses command factories to create well-defined commands and issue responses. Here's the basic rundown (a code sketch follows the list):
- Have your game open a well-known named pipe, either public (if you want to communicate across PCs) or private (if you don't). The game can then sit in a wait state on the pipe, looking for commands from your tool. Remember, this is on a separate thread, so having it in a wait state shouldn't impact your game.
- Have your tool connect to the same named pipe, and issue a command string and parameters.
- Have the game, on receiving input, look up the command in a command map. This should point to either a command factory class or a command factory method (I prefer the latter for memory reasons; a class is usually overkill). The factory should return a class that inherits from a base command.
- Run the returned command with its parameters. The command should always generate some sort of simple response, be it as simple as Succeeded/Failed or as complex as Need More Data, Ready To Send Data, or Ready To Initiate Communication.
- Send this response back to your tool, which should display the result to the user.
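To make the rundown concrete, here's a minimal game-side sketch using the Win32 named pipe API. The pipe name, the reload_texture command, and all of the types are illustrative; a real library would add error handling, the packet framing described above, and a richer protocol:

```cpp
#include <windows.h>
#include <map>
#include <string>

// Every command returns a simple response ("OK", "FAIL", "NEED_MORE_DATA", ...).
struct Command {
    virtual ~Command() {}
    virtual std::string Run(const std::string& params) = 0;
};

struct ReloadTextureCommand : Command {
    std::string Run(const std::string& params) {
        // ... ask the engine's texture system to reload 'params' ...
        return "OK";
    }
};

// The command map points at factory *methods*, not factory classes.
typedef Command* (*CommandFactory)();
static Command* CreateReloadTexture() { return new ReloadTextureCommand; }

static std::map<std::string, CommandFactory> BuildFactoryMap() {
    std::map<std::string, CommandFactory> m;
    m["reload_texture"] = &CreateReloadTexture;
    return m;
}

// Runs on a background thread (e.g. via CreateThread at startup), so the
// blocking waits never stall the game.
DWORD WINAPI PipeListenerThread(LPVOID) {
    std::map<std::string, CommandFactory> factories = BuildFactoryMap();
    for (;;) {
        // Open the well-known pipe and wait for a tool to connect.
        HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\MyGameTools",
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            1, 4096, 4096, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE)
            return 1;

        if (ConnectNamedPipe(pipe, NULL)) {
            char buf[4096];
            DWORD bytesRead = 0;
            if (ReadFile(pipe, buf, sizeof(buf) - 1, &bytesRead, NULL)) {
                buf[bytesRead] = '\0';

                // The tool sends "command_name parameters".
                std::string msg(buf);
                size_t space = msg.find(' ');
                std::string name = msg.substr(0, space);
                std::string params =
                    (space == std::string::npos) ? "" : msg.substr(space + 1);

                // Look up the factory, then build and run the command.
                std::string response = "UNKNOWN_COMMAND";
                std::map<std::string, CommandFactory>::iterator it =
                    factories.find(name);
                if (it != factories.end()) {
                    Command* cmd = it->second();
                    response = cmd->Run(params);
                    delete cmd;
                }

                // Send the simple response back to the tool.
                DWORD bytesWritten = 0;
                WriteFile(pipe, response.c_str(),
                          (DWORD)response.size(), &bytesWritten, NULL);
            }
        }
        DisconnectNamedPipe(pipe);
        CloseHandle(pipe);
    }
}
```

On the tool side, the matching client is just a CreateFileA on the same pipe name, a WriteFile of the command string, and a ReadFile for the response.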
From here, the amount and type of communication is up to you, though it can become very complicated very quickly, as you're essentially creating your own network protocol. There are a few things you should keep in mind, however. First, as I said before, design your protocol to push multiple packets of information, usually of fixed size. This dramatically reduces your memory requirements game-side and improves responsiveness tool-side, since you can offer information to your users incrementally rather than waiting for one large response. Second, develop a system for communicating with persistent items, such as pieces of debug information or your AI. That way you don't have to search for the AI or object you're watching or manipulating on every command; it will just always be there.
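One simple way to get that persistence is a handle registry on the game side (all names here are hypothetical): a command registers the object of interest once, hands the tool back a handle, and every later command addresses the object by handle.

```cpp
#include <map>
#include <stdint.h>

// Hypothetical handle registry for persistent watch targets. A command
// registers an object once and returns the handle to the tool; later
// commands look the object up by handle instead of searching for it.
class WatchRegistry {
public:
    WatchRegistry() : m_nextHandle(1) {}

    uint32_t Register(void* object) {
        uint32_t handle = m_nextHandle++;
        m_objects[handle] = object;
        return handle;                 // sent back in a command response
    }

    void* Lookup(uint32_t handle) const {
        std::map<uint32_t, void*>::const_iterator it = m_objects.find(handle);
        return (it == m_objects.end()) ? 0 : it->second;
    }

    void Unregister(uint32_t handle) { m_objects.erase(handle); }

private:
    uint32_t m_nextHandle;
    std::map<uint32_t, void*> m_objects;
};
```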