IGDA Tools SIG Roundtable

What a fantastic turnout we had at the Tools SIG Roundtable! I was expecting the worst with a 4pm time slot on the last day of GDC, but I was really blown away by the attendance. No fewer than 20 people showed up, and I think half of those folks are willing to commit a bit of time over the next year to help accomplish some projects. For each idea we gathered a show of hands to gauge how much participation there would be for that project.

Here are the project ideas that were presented during the roundtable. Each project will need to find its way into the hands of a Steering Committee member. The SIG doesn’t currently have a Steering Committee, so as the SIG Chair I am going to ask for volunteers and appoint a committee from those volunteers. Each project will then be adopted by a committee member, whose job will be to push for progress on it.

In descending order of participation:

Surveys

The biggest draw in terms of member participation was creating a survey to gauge tools usage across the industry (as well as other topics).

13 members expressed interest in undertaking this.

Tools Lectures at Conferences

There was a general perception that GDC 2013 had few tools-related lectures, and almost everyone in the SIG wants to work toward changing this. Tools-related talks were submitted for GDC 2013, but they were declined by the GDC Advisory Board. The SIG wishes to start a discussion with the GDC Advisory Board to advocate for more tools talks at GDC. The SIG will also do internal peer review on submissions to increase quality.

9 members expressed interest in undertaking this.

Wiki

A Tools SIG wiki would be a warehouse of information, constantly curated by the SIG, holding reference material about best practices in tools. These topics are perennial at the Technical Issues in Tools roundtables at GDC, and by having this information available on the website we can focus more attention on trends and changes in the state of the art instead of rehashing known solutions.

Also, the wiki would be an excellent resource for linking to tools-related lectures available on the GDC Vault (like the 2013 Crystal Dynamics Tools Postmortem), Twitch, YouTube, etc.

8 members expressed interest in undertaking this.

Software Projects

While networking with other attendees, I learned that both the IGDA Accessibility SIG and some game writers suffer from a serious lack of tooling. They expressed interest in getting help from the Tools SIG to implement tools that would support their work. I will be following up with some folks in the coming weeks to better define what these tools might look like.

5 members expressed interest in undertaking this.

Tools Video Tutorials/Reviews

This would take the form of Twitch or YouTube videos that review or evaluate tools.

2 members expressed interest in undertaking this.

News Coverage

The SIG should undertake journalistic reporting on tools-related announcements and analyze those announcements from a tools perspective.

Only a handful of attendees expressed interest, so any ground gained on this project will probably come via the Podcast.

Writing

As always, anyone can get access to the blog through Geoff. We can review drafts before they are published to make sure quality is high.

After Hours Development

In the midst of discussing the above topics, the question of how different companies view after-hours work came up. Some companies are more lax than others, and we discussed some of the laws in California that protect workers against companies claiming ownership of their after-hours work. SIG members expressed interest in advocating for, and sharing information about, where liability actually stands state to state. We want to get feedback from IGDA legal to explore how we might make progress on spreading information about this topic.

Thanks so much for everyone’s attendance! I will see you in the Discussion Groups :)

Technical Issues in Tools (Day 3)

Below are Aaron’s expanded notes from the Day 3 roundtable.

Content Creation Tools and Usability

  • Use controls and metaphors that are familiar to your content creators. For example, if you are a Maya shop, use Maya camera controls and key bindings for the 3D view of your editors.
  • An informal poll indicated that most teams are not creating art tools when they can use an off-the-shelf solution.
  • When considering a custom solution, make sure that the productivity gains over an off-the-shelf solution are enough of a win to justify the effort. This may not just be time savings; for factors like interruption to flow, a straight time comparison will not be the correct metric to use.
  • Consider open sourcing your tools as an attempt to push the industry in the direction of open source solutions.
  • Leverage your core competencies. If you are not a strong tool shop, consider other options like contracting out your tool development.
  • Consider procedural solutions to reduce the need to create content.
  • You have to answer this question for your situation: “When do you make vs. when do you buy?”

Communication Across the Industry

  • IGDA Tools SIG. Toolsmiths mailing list and Google group.
  • Open Source Tools
  • Sony ATF – Sony’s internal tools made open. https://github.com/SonyWWS/ATF
  • Nocturnal Initiative – Insomniac’s tools made open. https://github.com/nocturnal
  • Helium Project – Open source tools (Geoff’s project). https://github.com/HeliumProject/Helium
  • Seeders vs. leechers will be something to be aware of. This is probably more of a problem in larger corporate environments that evaluate studio performance on straight ROI, which doesn’t take into account the value of contributions to shared code.
  • A Microsoft employee reported a change in Microsoft’s position on open source code: it is now encouraging its use internally where appropriate.
  • Microsoft CodePlex is a place to look for tools code. https://www.codeplex.com/
  • It seems that the scaling issues of tools development (we need more and better tools to control content costs) are pushing these changes.
  • Having a culture of openness and honesty about the state of your tools will motivate investment in improvement.
  • Make Tools part of your studio’s story.

Features vs. Goals

  • As tool developers we frequently get feature specifications rather than a problem statement outlining what we are being asked to solve.
  • Keep asking “Why?” to drill down and determine core needs.
  • Empathize with your users’ workflows to understand what they are trying to accomplish.
  • Set reasonable per-iteration objectives and execute.
  • Don’t commit to doing the work until you get a goal to reach. Don’t give in and just blindly implement a request.

Referencing Assets

  • Use an asset database to map a unique identifier to the location of the data you are looking for.
  • GUID-based references allow each machine to generate unique ids without requiring a central repository (see the sketch after this list).
  • Many studios are doing local syncs and building a local database for query optimization.
  • Model your dependencies and allow for reverse dependency inspection to determine if and where something is used.
  • Consider using the file system as the authority and your DB as an optimization.
  • How do you do dependency tracking?
    • Scan changed files.
    • Use file system notifications to launch dependency generation.
    • Use offline automated systems to look for and purge cruft from your system.
    • Model your package dependencies.
  • Use a “don’t break, don’t block” model to keep your users running.
  • Can you make the game run with no data? This is ideal.
  • You can use known-good default assets as stand-ins for broken data as opposed to holding your users up.
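
To make the GUID-based referencing and “don’t break, don’t block” ideas above concrete, here is a minimal Python sketch. The AssetDatabase class, its SQLite schema, and the default-asset fallback are illustrative assumptions, not a description of any specific studio’s implementation.

```python
import sqlite3
import uuid

class AssetDatabase:
    """Maps GUIDs to file locations; schema and fallback behavior are assumptions."""

    def __init__(self, db_path, default_paths):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS assets (guid TEXT PRIMARY KEY, path TEXT, kind TEXT)"
        )
        self.default_paths = default_paths  # e.g. {"texture": "defaults/missing.dds"}

    def register(self, path, kind):
        # Each machine can mint GUIDs locally; no central id server is required.
        guid = str(uuid.uuid4())
        self.conn.execute("INSERT INTO assets VALUES (?, ?, ?)", (guid, path, kind))
        self.conn.commit()
        return guid

    def resolve(self, guid, kind):
        # "Don't break, don't block": fall back to a known-good default asset
        # instead of stopping the user when a reference is missing or broken.
        row = self.conn.execute(
            "SELECT path FROM assets WHERE guid = ?", (guid,)
        ).fetchone()
        return row[0] if row else self.default_paths.get(kind)
```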

Asset Organization

  • Organize any way the users want. It is typically not a good idea to enforce an organization strategy.
  • Implement a tagging system for your assets to allow them to be organized or queried in a logical way based on usage.
  • Beware of chaos in the tagging system.
  • Use case-insensitive tagging and tag synonyms to keep user-generated tags from fragmenting your data (see the sketch after this list).
  • Implement top-level search criteria to wrangle organization.
  • Tag objects with a unique identifier like user or machine name. GUIDs are frequently used in this situation.
  • Implement search tools. Help users find the content they want to use.
  • Allow the game to select assets (through some kind of in-game UI like a pointer) and open them in your tool. Think of hitting a button while playing a level and having that level open in your editor.
  • Push data from the game into tools. This can be used to identify problems to be fixed. You could have a system that implements workflows like change requests at the asset level.
  • Implement metadata tagging to remove ambiguity on errors.
  • Implement a system to find and remove duplicate assets.
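
Here is a minimal sketch of the case-insensitive tagging and tag-synonym idea above; the synonym table and helper names are invented for illustration.

```python
# Case-insensitive tags plus a synonym table, so user-generated tags like
# "Env", "environment", and "ENVIRO" all collapse to one canonical tag.
# The synonym map is invented for illustration.
TAG_SYNONYMS = {
    "env": "environment",
    "enviro": "environment",
    "char": "character",
}

def normalize_tag(tag):
    canonical = tag.strip().lower()
    return TAG_SYNONYMS.get(canonical, canonical)

def tag_asset(index, asset_guid, *tags):
    # index: dict mapping canonical tag -> set of asset GUIDs.
    for tag in tags:
        index.setdefault(normalize_tag(tag), set()).add(asset_guid)

def query(index, *tags):
    # Return assets that carry every requested tag.
    groups = [index.get(normalize_tag(t), set()) for t in tags]
    return set.intersection(*groups) if groups else set()
```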

Best Practices

  • Build in robust and automated crash reporting.
  • Use Perforce and use it well.
  • Support and use branching in both code and data. You have to make that E3 build, don’t you?
  • log4net as a logging option.

Technical Issues in Tools (Day 2)

Below are Aaron’s expanded notes from the Day 2 roundtable.

Dealing with Asset Repository Growth

  • 85% are using Perforce for dealing with content data.
  • Use max revisions to control repository size.
  • Consider using Linux as your Perforce Server platform as it handles scalability concerns better. If using Linux with Windows clients, use case insensitivity from the start to avoid issues.
  • Various user stories were presented on repository growth.
  • Offsite locations, forwarding replicas, and edge servers are the new Perforce hotness. Proxies still have their uses but are on the decline.

Dealing with Asset Repository Growth (Gen4)

  • This question was specifically targeted at dealing with the size of assets that are being generated for Gen4 (PS4/XBone) consoles.
  • If using Unreal, keep package sizes as small as possible.
  • Most people are conditioning as much as possible on the client and not checking in built data.
  • 10% check in built data.
  • 30% are doing full nightly cooks of all assets for validation or to prime a cook cache.
  • Consider pushing cooked data from the client to the cache, but beware of dependency issues!
  • One studio mentioned that it would take them 1 month to clean build the data for their game.
  • Build and keep your dependency tree in memory as much as possible. A DB would be an option here.
  • Write dependency data on the first build. Be sure to invalidate it appropriately (see the sketch after this list).
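
A minimal sketch of the “write dependency data on first build, invalidate appropriately” idea, assuming a simple JSON cache keyed by asset name and input-file content hashes; the function names and cache format are illustrative, not a description of any particular pipeline.

```python
import hashlib
import json
import os

def _hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def build_if_stale(asset, inputs, cache_path, build_fn):
    # Record each input's content hash on the first build; skip the rebuild
    # while every recorded hash still matches (i.e. dependencies are unchanged).
    cache = {}
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cache = json.load(f)

    current = {path: _hash_file(path) for path in inputs}
    if cache.get(asset) == current:
        return False  # Reuse previously cooked data.

    build_fn(asset, inputs)   # Cook the asset.
    cache[asset] = current    # Write dependency data after the build succeeds.
    with open(cache_path, "w") as f:
        json.dump(cache, f, indent=2)
    return True
```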

Build Tools for Data

  • 10% use 3rd party solution.
  • 80% use a home grown solution.
  • Ant, Grunt/JavaScript, CruiseControl with MSBuild, Unity, and IncrediBuild were called out as solutions being used.

Cross-platform Data

  • 60% say their data build system supports multiple platforms as a core concept.

Pipeline “Chugging”

  • How do you deal with the stalls and other implications of gating based on results of CI?
  • 60% or more have someone who wrangles their CI system.
  • If your CI system is crunching on data, make that apparent/inspectable by your users.
  • Make public or VERY PUBLIC who is breaking your builds.
  • If you find yourself in situations of repeated breakage, put your senior staff in charge. They can often apply the necessary leverage to communicate the importance of not breaking builds to your staff.
  • Beware of the implications of queuing your submits if you have some kind of automated validation on submit.
  • Very few teams are using a different data version for development than production. Many support single file loading as an iteration tool to mitigate this.
  • Reducing the quality of conditioned data was called out as one way to speed up iterative build times.

Continuous Deployment

  • Few do a full deployment of tools and game as a monolithic installer.
  • TeamCity generates artifacts that are web accessible.
  • A minority of teams have beta and release versions of tools.
  • One team does 2-week iterations, and no teams indicated that they run “live” tools that have not passed some kind of validation.

Automated Testing

  • Outside of core systems there were some questions related to test failures caused by changes to requirements that invalidate tests.
  • Test “things that don’t change”. I read this as “test things that have stable requirements”.
  • Make sure that the people writing the code are also maintaining the tests.
  • Have QA modify test scripts.
  • Are issues with test failure an issue with the developer or somewhere else? Make sure that your devs are running your tests.
  • Some teams do a pre-submit validation where tests are run and the submit is rejected on failure (see the sketch after this list). This introduces build queuing!
  • One team locked the entire Perforce depot on a broken build. They said they had 48 devs. A lot of raised eyebrows and questions on this one.
  • Use staged builds in your CI system.
  • Make the team responsible for build stability.
  • Use nightly functionality testing.
  • Block builds when critical path (does game boot?) breaks.
  • How do you deal with non-deterministic systems? Kill non-determinism in your code with fire.
  • Use Perforce file locks to indicate code that is under test, to inform other users and prevent submit contention.
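
For the pre-submit validation point above, here is a rough sketch of the kind of script that could run tests and reject a submit on failure. It assumes the script is wired up as a Perforce change-content trigger (where a non-zero exit code rejects the submit and anything printed is shown to the submitter); the exact trigger configuration varies per server and is not specified here.

```python
import subprocess
import sys

def main():
    # Run the test suite; a non-zero return from the trigger script rejects the submit.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", "tests/"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Echo the failures so the submitter can see why the submit was rejected.
        print("Submit rejected: tests failed.\n" + result.stdout)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```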

Scaling Automated Testing

  • What do you do when your platform has expensive dev hardware?
  • Use kits on the floor when they are idle.
  • Use the PC build to test the bulk of changes and test only what is necessary on target hardware.

Other Topics

  • Large Photoshop files.
  • Forwarding replica and edge servers (Perforce).
  • Compress files on the server if you have the horsepower.
  • Use different (smaller) data formats.
  • Almost 100% of teams store source PSD files despite their large size.

Technical Issues in Tools (Day 1)

The Technical Issues in Tools Development Roundtables continue to be one of the most popular roundtable sessions at GDC. Below are Aaron’s updated and expanded notes from the Day 1 session.

How do you develop or transition to new tools or frameworks?

  • Requirements gathering was reinforced as a primary need. We touched on the time-worn topic of goal vs. feature requests.
  • It appears that while many people still have tools and engine code running together (editor in runtime), many people have moved away from this. In a nutshell, the lack of abstraction between source data and runtime data that many engines suffer from is seen as a failure.
  • People encouraged developing your tools as close to your existing pipelines as possible. If it makes sense for data to be edited in Maya, for example, you should write your tools there.
  • Usability continues to be a primary concern for our users. Click reduction and other ways of reducing friction in your tools become more important with each iteration.
  • Teams that have good relationships with their users report developing better tools through this interaction. Developers need to “eat their own dog food”.
  • Larger teams with more open tools are starting to employ tech artists and users to create tools. This can be accomplished through extensibility points like “scripting” or plugin-based solutions. I thought of the macro recording system in VS as an example, and something that could be accomplished with a well-defined command architecture at the base level of the tools.
  • When teams allow for user created tools, there is no expectation of support, although there is consideration for useful tools and promoting them to fully supported versions.
  • Automation came up in this topic (more on it below), but design your tool with automation in mind from the ground up. Command-line driving (for batch/script files) or being able to launch the tool into a specific configuration (open a tool with a specific document or screen up) were given as examples (see the sketch below).
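
As a sketch of “design your tool with automation in mind”: the same entry point can open the UI for a user or run headless from a batch file or build script. The flag names and the stubbed export_level/open_editor helpers below are invented for illustration.

```python
import argparse
import sys

def export_level(level):
    # Placeholder for the real export pipeline (invented for this sketch).
    print(f"exporting {level} ...")
    return 0

def open_editor(document=None, headless=False):
    # Placeholder for launching the interactive editor (invented for this sketch).
    print(f"opening editor (document={document}, headless={headless})")
    return 0

def main(argv=None):
    # One entry point serves both interactive use and batch/script automation.
    parser = argparse.ArgumentParser(description="Level editor (sketch)")
    parser.add_argument("--open", metavar="DOCUMENT", help="open a specific document on launch")
    parser.add_argument("--export", metavar="LEVEL", help="export a level and exit (no UI)")
    parser.add_argument("--headless", action="store_true", help="run without a UI")
    args = parser.parse_args(argv)

    if args.export:
        return export_level(args.export)
    return open_editor(document=args.open, headless=args.headless)

if __name__ == "__main__":
    sys.exit(main())
```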

Automation

  • Using automation to reduce redundancy in testing was called out as a win for some studios.
  • When automating tests, target low-hanging fruit (game won’t launch) first. This reduces your overall testing overhead.
  • With properly automated tests, your QA focus can be on breaking the tool or game, not smoke testing.
  • Unit tests are employed, but mostly at the system library level. It was a mixed bag on whether this is “test first” development or validation testing.
  • One large studio ran into an issue where their testing data piled up before users were able to start analysis. This led to problems in responding to the testing, and they recommended getting eyes on test data as soon as it is available to avoid a pile-up.
  • If your tool supports automation, good user feedback is essential. Automating a dozen steps only to have it fail at the end with a generic error message is not productive.
  • While it may initially introduce multiple steps to accomplish an action, it is best practice not to overload functionality (big-red button). You can always automate multiple steps together later if necessary.
  • Most studios appear to be using a hybrid homegrown/3rd-party automation solution. Python->RPC was one solution (see the sketch below). One studio is using Jenkins CI to schedule the automation testing.
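
A minimal sketch of the “Python -> RPC” style of automation mentioned above, using the standard-library XML-RPC server so a test script or Jenkins job can drive the tool remotely. The command names and the ToolStub class are assumptions for illustration.

```python
from xmlrpc.server import SimpleXMLRPCServer

class ToolStub:
    # Stand-in for the real tool's command layer (invented for this sketch).
    def open_document(self, path):
        print(f"opening {path}")
        return True

    def export_all(self):
        print("exporting all documents")
        return True

def serve(tool, host="localhost", port=8000):
    # Expose a few tool commands over XML-RPC so automation can drive them.
    server = SimpleXMLRPCServer((host, port), allow_none=True, logRequests=False)
    server.register_function(tool.open_document, "open_document")
    server.register_function(tool.export_all, "export_all")
    server.serve_forever()

# A driving script (or a Jenkins job step) would then do something like:
#   import xmlrpc.client
#   tool = xmlrpc.client.ServerProxy("http://localhost:8000")
#   tool.open_document("levels/demo.level")
#   tool.export_all()
```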

Build It or Buy It?

  • As an industry we are still struggling to justify investment in tools development.
  • Communication and commitment scheduling are important as we scale to support multiple teams.
  • More testing and support were called out as pros for licensing software.
  • Bloat and stuff you don’t need were called out as cons.
  • One question presented that I found thought-provoking was “Is your pipeline so unique as to necessitate custom tools?”
  • If you license code, it is important to socialize the understanding of the code to avoid problems using unfamiliar code.
  • Studios that did not have buy in from the top on either side of this question encountered issues when problems arose.
  • You must understand the complete cost of licensing a solution when committing to it. Integration, maintenance, and extensibility are all aspects of development that must be accounted for.

Interop

  • 80% of attendees commented on having to deal with some level of interop.
  • You have to be very mindful at design time (architecture level) of the performance implications of interop as well as have a good strategy to deal with it.
  • Some studios are doing automation-assisted interop, either with automated code generation for marshalling or some other approach like runtime reflection.
  • A few studios called out using sockets or RPC for interop (see the sketch after this list). WCF has also been used. Most are doing some kind of C++/CLI with parallel interfaces. When I asked about orthogonal interfaces to abstract a different interface on the tools side from the runtime side, there were a lot of strange looks.
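
For the socket-based interop that a few studios mentioned, here is a rough sketch of one side of a length-prefixed JSON protocol between a tool and the game runtime. The framing, port, and message schema are assumptions; the runtime side would implement the matching framing in its own language.

```python
import json
import socket
import struct

def _recv_exact(sock, n):
    # Read exactly n bytes or raise if the connection drops mid-message.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

def send_command(command, args, host="localhost", port=9000):
    # Length-prefixed JSON request/response; the schema and port are assumptions.
    message = json.dumps({"cmd": command, "args": args}).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("<I", len(message)) + message)
        (length,) = struct.unpack("<I", _recv_exact(sock, 4))
        reply = _recv_exact(sock, length)
    return json.loads(reply.decode("utf-8"))

# Example: ask the running game which asset is currently selected.
#   info = send_command("query_selected_asset", {})
```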

Unit Testing

  • 40% of attendees were doing some kind of unit testing.
  • 20% were using it for compliance testing.
  • NUnit, Python and VS Unit testing were called out as things being done.
  • A few shops are doing Test First development for tools.
  • Some do testing on the build server as a commit blocking check.

Build Systems

  • 40% are using Jenkins.
  • TeamCity, BuildForge, CruiseControl, Cruise, Go, and homegrown solutions were called out.

OpenCL and Compute

  • Limited adoption of these technologies.
  • OpenCL is being used by one team in their collision generation pipeline.
  • When used, it was called out as providing real-time feedback to the user to avoid painful issues during development.

Version Control

  • 80% are using Perforce.
  • Some studios are using Git for code.
  • Mercurial, SVN, Plastic SCM, and Shotgun were called out as other solutions being used.
  • Object DBs were inquired about specifically for asset version control.
  • A few shops are still using some form of live data, but these seem to be either smaller shops or studios with a very different deployment methodology.
  • Bigger studios have started to use CI for their data and use a label or other method to sync the last good data. We avoid this by requiring items to build before submit.

How do you know when your tools are bad?

  • This question was asked at the start of the session but crammed into the last few minutes.
  • Listen to your users.
  • Callstacks in all crash reports.
  • Automated bug reporting.
  • Office hours to share the love of supporting the tools.
  • Eat your own dogfood/Follow your users. Schedule time every iteration to spend time with the users to gather data.
  • Automatic usage information from tools. Implement a system to record how your users use the tools to find bad practices and features that are not used; get rid of features that no one uses to reduce maintenance overhead (see the sketch after this list).
  • Features vs. Goals. Work with your users to solve problems, don’t just blindly implement requests.
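
A minimal sketch of the automatic usage-information idea above: a decorator that logs which tool commands run and how long they take, so unused features can be identified and retired. Writing JSON lines to a local file is an assumption; a real system might post to a central service instead.

```python
import functools
import json
import time

def track_usage(log_path="tool_usage.log"):
    # Decorate tool commands to append one JSON line per invocation.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                record = {
                    "feature": fn.__name__,
                    "duration_s": round(time.time() - start, 3),
                    "timestamp": int(start),
                }
                with open(log_path, "a") as f:
                    f.write(json.dumps(record) + "\n")
        return wrapper
    return decorator

@track_usage()
def export_selected_assets():
    pass  # Placeholder for a real tool command (invented for this sketch).
```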

We have launched a podcast!

The Toolsmiths is happy to announce we are starting a podcast! It’s hosted by Geoff Evans (SIG Chair and engineer at Kojima Productions), and David Lightbown (UX Director at Ubisoft Montreal). Our first guest is Dan Goodman from Robotic Arm Software.

You can view all the episodes on the web at thetoolsmiths.org/podcast

The full podcast feed is thetoolsmiths.org/?feed=podcast

GDC 2014 Roundtables

There are four tools-centric roundtables at GDC this year. As always, there is a Technical Issues in Tools roundtable each day of the main conference, but this year there is a dedicated roundtable to discuss Tools SIG issues! Plan on making it to at least one of the Technical Issues in Tools roundtables, and make sure to come and check out the SIG Roundtable on Friday! New projects are afoot in the Tools SIG, and we need your help to push for more sharing in the tools community!

Here are the session times:

Wednesday 3/19:
Technical Issues in Tools at 11am

Thursday 3/20:
Technical Issues in Tools at 5:30pm

Friday 3/21:
Technical Issues in Tools at 10am
Tools SIG Roundtable at 4pm