The project was to refresh a web tool for administrators. One of the major problems we had at the company was customer support costs. The more customers who called in, the more people we needed. So one of the tactics to deal with that was to reduce the friction in using the product in the first place. This particular tool allowed our users to assign training to their colleagues in a company. It was one of the oldest pieces of functionality in the software suite. The project was given a release cycle to complete.
Admittedly, both the product manager and I were pretty new to the company. It was one of the first projects we had. So we first wanted to learn about the tool. Simple things first: we just dove in and started clicking. Again, it was one of the oldest pieces of code in the system, and it even had the old UI.
We then started talking to developers to figure out how it was built. Turns out: terribly. It had seventeen years of features bolted on to it. Then we started talking to our clients, and our internal users. There were a lot of gripes about the tool. “Refreshing” the tool seemed more and more unlikely without some major surgery.
The first fork in the road. We could just apply some new visual styling, and be done on time. Or we could take the hard path: go to the higher-ups and ask for a project extension. We gathered the feedback we had so far. The PM took it up the chain of command, and then we waited.
A bit of a surprise, but we got the extension. So now the real work began.
We worked out a multi-pronged approach to research. Because as we dug in, we realized that the existing user research was rather lacking. A lot of the work would be just to get the basics.
Almost all of it was generative research. Open questions about their team structure, how they worked together, and recurring projects. Then how they used the existing tool today. The survey was broad and sent to almost all the admins. The goal was to get a snapshot of their day-to-day.
It was amazing to let them just talk about how they used the tool. One of the consistent things was the novel ways they “hacked” or exploited loopholes in the code to do the things they needed.
After looking over all the feedback and interviews, the PM and I started forming some early principles to stick to.
From this base of observations, I started sketching and thinking about overhauling the whole experience. Two key groups kept me grounded. The PM stayed conservative for “change management” (the cost of our users having to relearn a little or a lot), while the devs informed me of how the underlying code worked.
Constraint: We wouldn’t have enough time to completely overhaul the “engine” of the tool. That would have added months and months of testing for every edge case. So the devs would tell me when part of the code could not be touched at all. (Without adding months to the project.)
That said, all the features were documented, and I started shuffling them around: regrouping them into new categories, and hiding the ones that were only relevant to edge cases.
Ideally, this would have been more lo-fi, but we had a lot of non-visual stakeholders and this tool is uniquely complex. So we leveraged our pattern library and used InVision for clickable prototypes.
Major milestones or forks in the road meant I made a fresh prototype, so we could compare the before and after.
We showed the prototypes to clients to get feedback. It was refreshingly positive. That let us channel the feedback up to the stakeholders, to let them know the project extension was worth the investment.
Development went pretty smoothly. All the backend devs knew what was coming, and we had a tight understanding of what was feasible. As for the frontend, I tried to check in frequently to see what they needed or to make tweaks. The system is finicky about new UI, so we were never quite sure how the framework would react to the new stuff.
This is a thing I like to do, because visual quirks often get lost in the release scramble. A cross-team meeting is called, and we look at every page in the project. Often the way a page loads, or a little piece of UI, would be off, and the experience would suddenly look sloppy. It’s hard for non-visual team members to spot these things, so it’s good for me to see the project before it ships. It’s also a good chance for the team to estimate how hard each fix would be. Doing it together keeps the team aligned. (There is a time cost, however, so it doesn’t always scale to every project.)
This was a major tool that almost every client uses, so the team was nervous (but confident) about the release. When it launched, there was relative silence about it. Which is good. It meant our clients were actually using it, and not calling us to complain. After a few months, we informally checked with customer support. Calls about the tool were initially higher because it was so new, but eventually the call volume dropped below what it had been before.