UX design for internal tools — a UX case study
Context
Tapps Games is a software company with a big focus on mobile gaming. They have fast development cycles and staggered game releases, with a portfolio of over 400 products.

The company uses Google Drive extensively in its asset pipeline — localization strings, databases, audio file information and other data are extracted from spreadsheets which are shared among internal teams and with external contractors.
Production must keep pace even as games grow more complex — at the same time, most of the portfolio needs to be kept current with new devices and new app store policies, meaning old releases are often revisited and readjusted.
Previous work
Individual Python scripts were available for a few common data import tasks. Due to time constraints, the scripts had no documentation, were stored locally and had no version control. Input data format and folder structure differed a lot from script to script.
I first charted the workflow, as presented below, in this hellish flowchart:

Bear in mind that not everyone had the know-how to fix game projects, much less the importers — non-programmers often had to manually correct data in .json files. That was extremely error-prone (missing commas, missing quotes), took a lot of their time and caused data to diverge between the spreadsheets and the released products.
Timesinks
I initially conducted interviews and did user testing with 10 different colleagues, a mix of Game Designers and Programmers. People took, on average, 3 minutes to complete the process in the flowchart — assuming there were no errors.
In cases where things did go wrong, it took at least 10 more minutes to complete the data import — in one of the cases we had to stop the interview to fix a problem in the database importer, which took a couple of hours.
Fragmentation
Since scripts were local and the development pace was brisk, it was common for changes made to an importer in one project to never make their way back to the central server. This caused a lot of confusion as to which version should be used, as well as a lot of duplicated effort.
Bugs would happen in one project but not another — some projects could handle certain languages, others could not. Some had bugs that could destroy databases, others didn’t. QA-ing the differences was extremely time consuming and difficult.
Mix this with time critical releases and you have a recipe for overtime and burnout.
Asking the right questions
What can be done at the users' technical level?
Game designers and programmers alike were comfortable using the command line, which meant that we could be really lean with the first attempt at solving the problem and grow from there.
What are the time constraints?
I had very little time to come up with an initial solution — it was the beginning of the year and production was picking up pace. I had, at most, one week. Given the previous work in Python, it made sense to iterate on it and improve the inner workings gradually while still offering a much improved workflow.
Who should be the source of truth?
The answer to this was centralizing all scripts within a single tool, versioned with Git to keep track of changes. This meant that we could test changes in different branches and collaborate with ease, without compromising things in production.
Who should be the source of knowledge?
I knew the intricacies of the tool but I didn’t want any dependencies on specific people. Documentation was essential — not only for improving and adding to it but to facilitate onboarding.
Should I leave for vacation, all the needed information would be available on Confluence and from within the tool itself.
How can we inform the user?
A lot of work went into making sure that any relevant action was clearly communicated to the user. Just because it is the command line doesn't mean that we can eschew graphic design.
Steps were color-coded, and ASCII art was used to create tables of information, charts and other visuals. This allowed people to easily copy something and share it with the team, since it was simply text.
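To give an idea of what this looked like, here is a minimal sketch of color-coded steps and a plain-text table; the helper names and the table contents are illustrative, not the tool's actual code:

```python
# Minimal sketch of color-coded, copy-pastable CLI output.
# Step names and table contents are made up for illustration.

GREEN = "\033[32m"
YELLOW = "\033[33m"
RESET = "\033[0m"

def step(message, ok=True):
    """Print a color-coded step marker; the text stays plain so it can be copied."""
    color = GREEN if ok else YELLOW
    print(f"{color}[{'OK' if ok else '!!'}]{RESET} {message}")

def table(headers, rows):
    """Render a simple ASCII table that survives being pasted into chat or email."""
    widths = [max(len(str(cell)) for cell in col) for col in zip(headers, *rows)]
    line = "+" + "+".join("-" * (w + 2) for w in widths) + "+"
    fmt = "| " + " | ".join(f"{{:<{w}}}" for w in widths) + " |"
    print(line)
    print(fmt.format(*headers))
    print(line)
    for row in rows:
        print(fmt.format(*row))
    print(line)

if __name__ == "__main__":
    step("Downloaded localization spreadsheet")
    step("1 key missing translations", ok=False)
    table(["Key", "Missing languages"],
          [["popup_vendor_bonus", "PT-BR, FR"]])
```

Because the output is nothing but text and ANSI escape codes, it reads well in the terminal and still pastes cleanly into Slack, email or a bug report.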
How should errors be handled?
The tool would perform verifications and inform the user of problems, like so:
Key popup_vendor_bonus is missing strings for: PT-BR, FR
With this, users no longer needed to launch the project and manually navigate, once per language, to the screen with “popup_vendor_bonus” — they already knew which strings were missing.
This allowed for much tighter iteration cycles, avoiding broken builds and decreasing QA times significantly.
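To make the verification idea concrete, here is a hedged sketch of such a check, assuming localization data keyed by string ID with one entry per language (the data structure and language list are assumptions, not the tool's real schema):

```python
# Sketch of a localization completeness check. Each key is assumed to map to a
# dict of language -> translated string; field names are illustrative.

REQUIRED_LANGUAGES = ["EN", "PT-BR", "FR", "DE"]

def find_missing_strings(localization):
    """Return {key: [languages with empty or absent translations]}."""
    problems = {}
    for key, translations in localization.items():
        missing = [lang for lang in REQUIRED_LANGUAGES
                   if not translations.get(lang, "").strip()]
        if missing:
            problems[key] = missing
    return problems

if __name__ == "__main__":
    data = {
        "popup_vendor_bonus": {"EN": "Bonus!", "DE": "Bonus!"},
        "btn_continue": {"EN": "Continue", "PT-BR": "Continuar",
                         "FR": "Continuer", "DE": "Weiter"},
    }
    for key, langs in find_missing_strings(data).items():
        print(f"Key {key} is missing strings for: {', '.join(langs)}")
```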
How should it be delivered?
Installation was as simple as cloning the tool repository and adding it to the user path. This was all documented in the company’s Confluence.
By using Git, updates were automatically installed once they were merged into the master branch, removing the need to ask around about whether you had the correct version.
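The case study doesn't spell out the update mechanism beyond Git, but one common pattern is for the tool to fast-forward its own repository on startup. A rough sketch, with the repository location and branch name as assumptions:

```python
# Rough sketch of a self-update check for a Git-delivered CLI tool.
# Repository location and branch name are assumptions for illustration.

import subprocess
from pathlib import Path

TOOL_REPO = Path(__file__).resolve().parent  # the cloned tool repository

def self_update(branch="master"):
    """Fast-forward the tool to the latest version of the given branch."""
    result = subprocess.run(
        ["git", "-C", str(TOOL_REPO), "pull", "--ff-only", "origin", branch],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("Warning: could not update the tool automatically:")
        print(result.stderr.strip())

if __name__ == "__main__":
    self_update()
```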
How will we measure improvement?
Part of selling changes in workflow is convincing others that they matter and that they will have a positive impact.
A unified tool allowed us to add analytics tracking, which, in turn, allowed us to calculate how much time was saved every week with the improvements we made.
Embedding analytics also allowed us to prioritize certain areas of the tool and to anticipate problems based on diverging hardware, software and workflows.
User happiness was also a big, albeit subjective, measure. People could submit feedback straight from the command line and user interviews were held in a before-after fashion, testing the new tool by itself and side by side with the old scripts.
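As a rough illustration of how command-level analytics and in-terminal feedback might be wired together (the log location, event fields and command names are assumptions, not the tool's real schema):

```python
# Sketch of lightweight usage analytics and a feedback command for a CLI tool.
# The log location, event fields and JSON-lines format are illustrative.

import getpass
import json
import time
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path.home() / ".importer_analytics.jsonl"  # assumed location

def log_event(event):
    """Append one JSON event per line so logs are trivial to aggregate later."""
    event["user"] = getpass.getuser()
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with LOG_FILE.open("a") as handle:
        handle.write(json.dumps(event) + "\n")

def tracked(command_name, func, *args, **kwargs):
    """Run a command while recording its duration and outcome."""
    start = time.monotonic()
    try:
        result = func(*args, **kwargs)
        log_event({"command": command_name, "status": "ok",
                   "seconds": time.monotonic() - start})
        return result
    except Exception as error:
        log_event({"command": command_name, "status": "error",
                   "seconds": time.monotonic() - start, "error": str(error)})
        raise

def submit_feedback(message):
    """Let users send feedback without leaving the terminal."""
    log_event({"command": "feedback", "message": message})
    print("Thanks! Your feedback was recorded.")

if __name__ == "__main__":
    tracked("import_localization", lambda: time.sleep(0.1))
    submit_feedback("The new importer saved my afternoon.")
```

Events this simple are enough to estimate time saved per week and to spot which commands, machines or workflows are causing trouble.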
What should the user be responsible for?
The user should only be responsible for ensuring that the tool configuration for a project is as they want it. Since this was usually stable — meaning a spreadsheet was used and improved upon for the life of a project — users seldom had to do anything different: they could just tell the tool to import data and be done with it.
When the configuration was missing, the tool would talk the person through the process of adding it, asking for each field and providing guidance.
All the file handling was automated.
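A hedged sketch of that configuration flow, assuming a per-project JSON file with a spreadsheet identifier and an output folder (the file name and fields are illustrative):

```python
# Sketch of per-project configuration with an interactive fallback.
# The file name and fields are assumptions made for illustration.

import json
from pathlib import Path

CONFIG_NAME = "importer_config.json"

def load_or_create_config(project_dir):
    """Load the project's import configuration, walking the user through
    creating it if it does not exist yet."""
    config_path = Path(project_dir) / CONFIG_NAME
    if config_path.exists():
        return json.loads(config_path.read_text())

    print(f"No {CONFIG_NAME} found for this project. Let's set one up.")
    config = {
        "spreadsheet_id": input("Spreadsheet ID (from its Drive URL): ").strip(),
        "output_folder": input("Folder to write imported data to: ").strip(),
    }
    config_path.write_text(json.dumps(config, indent=2))
    print(f"Saved configuration to {config_path}. You won't be asked again.")
    return config

if __name__ == "__main__":
    settings = load_or_create_config(".")
    print("Importing with:", settings)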
How can we grow our tools to serve our teams?
Our focus here was on making collaboration as easy as possible. By lowering the barrier to entry with documentation (API reference, tutorials, workshops, articles on motivations…) and making the code readily accessible and testable, we made it so people could look at and understand the tools from day one.
Aftermath
For properly configured projects, the process now took only 3 steps (versus 8) and the average time to completion dropped from 3 minutes to 20 seconds.

Having collected years of analytics and feedback, it is safe to say that the end result was not only good for production (estimates were that we saved about 48 hours per month) but also good for people.
My peers were finally comfortable creating new tools and improving what was already there. Those who were afraid of messing things up or dealing with hostile, zero-information systems were now feeling confident in their work. Beginners were adapting to it in a matter of minutes.
I knew we were on the right track when I got the following feedback:
I loved it! I was terrified of the task but the tool was super kind and I was like “What? It is done already?”, it was painless!
Highest praise I’ve received.