Enterprise UX Design
Case study: Improving speed and accuracy of hotline support calls.
“During a User Observation session, I discovered the unnecessary number of steps Hotline had to take just to START problem-solving. We ended up saving our Hotline several minutes (and MANY clicks) every call. Over a week, month, or year, that adds up to a lot of hours freed up for something more useful!”
The Hotline department spent a big part of their time finding the right information before they could even start evaluating an issue. In a large system with a lot of data, users had many roads they could take to solve a problem.
Our goal was to 1) reduce the time spent finding the right data and 2) increase efficiency once they found it. My role was to research, synthesize, and create a solution to present to stakeholders and developers.
We knew customers call hotline with a large range of issues that require very different approaches. Our hypothesis was that there is a set of data that would cover 70% of the calls and greatly reduce the average time spent.
TOMRA Connect is the internal system used to maintain a fleet of more than 70,000 machines across 50 countries, with thousands of users. The hotline department in every market receives a lot of calls from customers about all kinds of issues with the machines. Each call has to be evaluated on its complexity: Is it solvable remotely? Do we need to send a technician? Does it require a technical expert?
Before that evaluation is possible, they need to find which area of the machine the issue is in. They could spend up to 30 minutes without locating the problem before sending it further down the line. After preliminary research across multiple markets showed the same unnecessary time expenditure, we decided to improve efficiency for hotline personnel.
- Hotline spends time looking for the correct data instead of analyzing it.
- A great percentage of calls go through without correct evaluation, or any at all, causing unnecessary pressure further down the line or unnecessary travel for technicians.
- Faster conclusions on where the issue originates and who is best suited to solve it.
- Make it more efficient to solve issues at the single-machine level as well as the fleet level.
- Synchronize the wording and problem-solving across markets to build a knowledge base.
- Combine data to calculate a health status automatically to catch the most common issues.
- Less time spent on calls will free up more time for hotline to solve issues before customers call.
- Increase efficiency in finding, fixing, and reporting issues.
- Decrease the risk of errors, as well as minimize the damage errors cause.
- Building a knowledge base will help spread and optimize problem-solving for all markets.
Finding the most value for the least effort
We extracted data listing the most common problems and their average duration, then compared the lists by country to find patterns in where the common problems were most frequent, looking for extremes.
To cover the most ground efficiently, we narrowed it down to the three countries with the highest concentration of the selected problems. We planned research at each location with 3-5 candidates.
I performed interviews, user tests, and observations with all candidates from each country to find similarities and to map their problem-solving processes.
Every country solves problems differently due to culture, regulations, capabilities, and geographical conditions. Some countries were bound to be better at certain problems than others, even if only by a few seconds here and there. With the 80/20 rule in mind, how could we catch these differences? I had to find a way to pick and choose.
With help from the team and technical experts, I was able to analyze all the problems to see which took the longest to find and solve. With that information I grouped the most common problems, checked which countries had the most of each group, and looked for extremes. That left a significantly smaller pool of candidates, the ones most likely to have found the quickest way.
I reduced it to 3 countries with hotline departments of 20-50 people each. My next step was to talk to the team leaders in these departments to find the best candidates for interviews and observations. We found 3-5 candidates in each selected country that I could observe, interview, and run user tests with, using predefined scenarios.
User tests & interviews
I spent 2 days in each office, spending half a day or a full day with each candidate. I would sit in on their calls, following their actions on screen and off. The notes people take reveal a whole lot about how they work! During downtime we ran tests on different scenarios, and they explained out loud what they did.
Connecting the dots all over the world
We were able to find the metrics they use to get clues on how to approach a problem, split into hardware and software. We grouped the metrics by a) level of support, b) complexity, and c) impact, then boiled the processes down to one per problem to test viability in our existing system.
One of the issues we faced was the difference in responsibility between countries. Some countries have less technical knowledge than others, and the metrics for solving complex problems require a broader knowledge of the machine than some countries have in their first-line support.
We discovered the need for temporary labels on machines, to prevent several people from investigating the same thing, e.g. when a machine goes offline and online repeatedly due to an issue in a store.
We also saw candidates in the same office using different identifiers: some use the IP address, some the serial number, and others the store name to find the correct machine.
After analyzing the data, I compiled clusters based on country requirements, severity of the problem, software, and hardware. I created large sets of flowcharts based on the research, and from those I could find clusters that explain how they view and understand the data.
They had found ways of determining how valuable a given metric was, down to the slightest difference, depending on the case. In a sense, they answered which data to look at first to rule out certain areas or to find areas to focus on.
To make more sense of the information, we grouped the processes into three categories:
- Level of support: Which support level can use the solution, depending on technical knowledge.
- Complexity: How likely the problem is to require a software update or a hardware check.
- Impact: Whether the risk is high or the machine becomes unavailable.
We selected the solutions with the most overlap to check how viable they were on a global scale. That gave us one process per problem to start testing in our existing system and to build prototypes from.
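As a loose illustration of this three-way categorization (not the actual TOMRA Connect data model; all names and example problems below are hypothetical), each problem-solving process can be tagged with the three attributes and grouped on them:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch only: field names, values, and problems
# are illustrative, not taken from the real system.

@dataclass(frozen=True)
class Process:
    problem: str
    support_level: int   # lowest support line able to use the solution
    complexity: str      # "software" or "hardware"
    high_impact: bool    # high risk, or the machine becomes unavailable

def group_processes(processes):
    """Group problem-solving processes by (support level, complexity, impact)."""
    groups = defaultdict(list)
    for p in processes:
        groups[(p.support_level, p.complexity, p.high_impact)].append(p.problem)
    return dict(groups)

procs = [
    Process("stuck conveyor", 1, "hardware", True),
    Process("label misread", 1, "software", False),
    Process("repeated reboot", 2, "software", True),
]
# e.g. group_processes(procs)[(1, "hardware", True)] == ["stuck conveyor"]
```

Grouping on all three attributes at once makes it easy to ask questions like "which problems can first-line support handle without a hardware check?"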
During the observations we made other interesting discoveries. The notes they took revealed a lot about what they felt they needed to know, like an IP address, dates, or a machine to check on later. Finding patterns among the notes gave strong indications of missing information or low discoverability of information. It turned out to be very common for Hotline employees to write down specific machines that showed suspicious behavior, either because they wanted to remember to check on them later or because something was about to happen to them, such as the store turning a machine off due to a temporary situation.
The difference in roles among countries made it difficult to compare the compiled data. Some have up to 5 lines of support, which created a sense of hurry for the first line, while countries with just 2 lines had more technical know-how and could go much deeper before sending an issue down the line. Both work well in their own areas, but it meant the problem-solving solution we were trying to create would ultimately be different and not unified.
Prototyping a problem-solver for problem-solvers
I created a series of prototype components with the problem-solving user flow and sets of data. I conducted user tests remotely in different markets and went through a couple of iterations before we started development.
We did not add new information; we made it more visual. It gave all users the important information at a glance and made in-depth analysis more accessible. The solution was greatly appreciated by all users and has increased efficiency for Hotline support as well as for technical experts.
We added a Label feature that lets users put a temporary message on each machine.
The right thing at the right place at the right time
We changed the structure of Identifiers to make the user flow unified.
We were able to combine and present the right data to users early in the problem-discovery phase, increasing efficiency from the moment a call comes in to the start of problem-solving. The solution is appreciated not only by hotline, but by everyone working on the machines. As a bonus, sales and marketing find it useful too!
By finding the best problem-solvers within the organization in different markets and looking at their quickest way to start analyzing an issue, we managed to find a set of data that caught most cases more effectively. Small changes in large-scale systems can have a huge impact, in both negative and positive ways.
Digitization of scribbles and important notes
Another simplified prototype
From the categories of the notes we could make out 2 large groups: Temporary messaging and Identifiers.
Temporary messaging was used to put a pin on a machine, either to check back later whether the problem had returned or to remind others of a situation not related to the machine itself. We added an option to label a machine with a note and a 2-color system: blue for "Information" and red for "Warning".
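A minimal sketch of such a label, assuming it carries an expiry so temporary notes do not linger (the expiry, field names, and example values are my own illustration, not the shipped feature):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

# Hypothetical sketch of the temporary machine label; the
# two-color system maps to the note's severity.

class Severity(Enum):
    INFORMATION = "blue"
    WARNING = "red"

@dataclass
class MachineLabel:
    machine_id: str
    note: str
    severity: Severity
    expires: datetime

    def is_active(self, now=None):
        """A label is shown only until its expiry passes."""
        return (now or datetime.now()) < self.expires

label = MachineLabel(
    machine_id="SN-001",
    note="Store powers the machine off nightly during renovation",
    severity=Severity.INFORMATION,
    expires=datetime.now() + timedelta(days=7),
)
```

Modeling the color as a severity enum rather than a free-form string keeps the blue/red meaning consistent across the UI.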
Identifiers were how users would follow or report machines: by store name and model, IP address, or serial number. Which identifier they used depended on how reports came in, how the stores talked about machines, or on misunderstandings. We added better structure to each identifier and clarified the meaning of each.
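To illustrate why a unified identifier structure helps, here is a small sketch of resolving a machine record from any of the three identifier types users actually reported (the records, field names, and lookup logic are hypothetical, not the real TOMRA Connect lookup):

```python
# Hypothetical machine registry; a real system would query a database.
MACHINES = [
    {"serial": "SN-001", "ip": "10.0.0.5", "store": "Main Street", "model": "T9"},
    {"serial": "SN-002", "ip": "10.0.0.6", "store": "Harbor Mall", "model": "T9"},
]

def find_machine(identifier):
    """Match a free-form identifier against serial number, IP address, or store name."""
    needle = identifier.strip().lower()
    for m in MACHINES:
        if needle in (m["serial"].lower(), m["ip"]):
            return m
        if needle == m["store"].lower():
            return m
    return None

# find_machine("sn-002") and find_machine("Harbor Mall")
# both resolve the same record.
```

Accepting any identifier at one entry point means two colleagues can talk about the same machine in different terms and still land on the same record.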