Posted on 24 August 2009 by Dan Goodman

This ongoing series delves more deeply into each of the “six reasons your game development tools suck” as argued in my very first post.

Many game companies struggle with delivering tools quickly and cheaply.  Money is always an issue wherever you go.  After all, the bottom line is what keeps a company afloat and its employees employed.  No one wants their company to fail, to lose their job, or to have to lay off their workers.

Game companies are in an especially difficult position.  Balancing a workforce spread over multiple disciplines — art, design, programming and production — is hard enough, but when you consider that each of those disciplines contains its own specialties, the task becomes even more difficult.

The obvious solution is to cut corners wherever possible, and that oftentimes falls squarely on the shoulders of  the tools team.  Why?  Because most game companies don’t make money selling tools.  Tools programmers serve in a support role, and therefore (in the minds of most game execs) are less valuable than those working directly on the games.

Tools teams very rarely get the full support of management, and game teams can’t be stalled waiting for tools to be completed.  The unfortunate sentiment among those in power is that there’s no time for tool design.  Get it done and get it done now.

There is no time for design, or so the thinking goes, but what does that really mean?  Does it mean that the programmer implementing the tool charges blindly into development without thinking about how the tool needs to function?  Of course not.

The programmer has a vague idea of what to do and, without ever writing it down or validating his thoughts with the end users in any formal way, begins to implement the design from his own mind.  He still thinks about it a great deal.  Perhaps 75% of his time is spent thinking and only 25% typing.  There are probably still many unanswered questions, but as the tool begins to take shape, answers start to become obvious one by one.  The tool seems to practically design itself, but in reality, design is going on quite informally.

But wait!  What if one of those unanswered questions causes a serious problem?  What if the best answer to that question requires a rewrite of major portions of the current code base?  What if the other possible answers are so undesirable that the rewrite actually appears to be the best option?  Because the design was postponed until the code was already being written, redesign is now very expensive.  Code that has been written will go to waste, and new effort must be exerted to replace it.

If the programmer takes this problem to a (non-technical) manager concerned with the cost and speed of developing the tool, the manager may come to the very justifiable conclusion that a rewrite is not the way to go.  Instead, just make a workaround for this one problem: in other words, a hack.

As long as that’s the end of the story, then that’s probably okay.  Unfortunately, more issues may arise, with similar outcomes.  Also, once delivered, the end users will likely have feedback.  After all, without any formal design process, many of their needs/wants/concerns went unheard.  And now the real fun begins.

It’s already been established that the quickest solution is more desirable than better architecture and code, so as feature requests come in from the users, more and more workarounds are put in place to deliver the new tool quickly.  This leads to code that is difficult to maintain and potentially very buggy.

The end users are now saddled with a tool that does basically what they want but perhaps has stability or performance problems.  The difficulty of fixing those issues only increases over time as the code becomes more brittle and spaghetti-like.  Fixing one thing breaks something else, leading to a never-ending maintenance cycle that makes no net improvement whatsoever.