City of Seattle IT Service Hub
My team was commissioned by the City of Seattle to redesign and improve the language and user experience of their IT Service Management System (ITSM).
Our Client
City of Seattle IT Service
My Role
UX Researcher
The Project:
IT Service Hub Redesign
Project Duration:
2.5 Weeks
Project Team:
Peter Thompson - Project Manager
Julia Lu - Interaction Designer
Contribution
Stakeholder Interviews
Stakeholder Mapping
User Interviews
Competitive Analysis
Comparative Analysis
User Persona
UX Writing
Usability testing
Design Performance Metrics
Tool
Pen & Paper
Whiteboard
iMovie
Otter.ai
Word Cloud
Sketch
City employees without prior IT knowledge have a difficult time understanding the flow, content, and language of the homepage and Request Offerings (RO) on the IT Service Hub. As a result, users abandon the IT Service Hub and call the IT help desk for support instead.
“I am unable to find what I need!”
The challenge
Through interviews, usability testing, and iterations, I developed a voice and tone to make the language more user-centric. My team redesigned the homepage format and updated the icons, color palette, and filters. Together, these changes improved the average user experience score on the System Usability Scale by 49.3%.
User experience improved by 49.3%
Our results
My team was commissioned by the City of Seattle to evaluate and improve the language and user experience of their IT Service Management System (ITSM).
Background
Our client’s goals were to make the IT Service Hub homepage more intuitive, develop client-centric nomenclature that guides users in finding what they need, and explore new colors, icons, and filters that fit the mental models of its 13,000 users.
Objective
I utilized my experience in ethnography to develop my research strategy on this project. I wanted to explore the personal side of the users I was working with before I started with interviews, which I found allowed users to relax and open up.
Research
methodology
I applied two main methods of UX research—user interviews and usability testing—to explore and uncover the underlying issues and frustrations the City of Seattle staff members were facing while using the IT Service Hub.
“User interviews with 12 individuals from six different departments.”
To validate the UX research provided by the City of Seattle's internal UX team and explore further issues faced by staff members, I conducted 12 interviews with different individuals from six different departments.
Interview with Seattle Public Utility staff
From these interviews I uncovered four main questions that I needed to answer and validate through usability testing.
Where exactly do individuals struggle in using the Service Hub?
Why do people become frustrated with the IT Service Hub?
Do we need to consolidate Request Offerings, and if so, how?
What elements of the language are confusing?
Before rewriting the language, I researched and evaluated other competitive internal management system software. As these systems are internal-facing, I was limited to the homepage and service catalog. This exploration gave me a substantial glimpse into how other products have bridged the gap between IT services and their primary users.
Once the pain points of the users were established, I conducted focused interviews with the IT management team to simplify the IT-centric language of the top 15 most-used Request Offerings (ROs) in the Service Catalog. These interviews teased out the intent of each RO and helped me understand its meaning.
Our tone of voice offers a foundation of three core elements which will unify writing across the IT Hub portal. It is designed to inform the user in a conversational manner to assist them in accomplishing their tasks and goals intuitively. The intent of this voice is to bridge the gap between the formal language of the IT hub and the individual mental model of its users.
To establish a baseline user experience and gauge expectations, I screened five users with tangential experience using internal management systems.
To test the language simplification, we recruited users who had not been exposed to the Service Hub but had tangential experience with other internal management systems. Five individuals tested the original prototype, and one Amazon employee also tested the system, as they had used internal ticketing systems for work.
Baseline testing
I developed a brand voice informed by the comparative and competitive analysis of other internal management systems and customer facing help desks. As our users are feeling unheard and that the service hub is too hard to use, I wanted the language to establish trust, be clear, concise, helpful, and receptive. This allowed us to create a structure for rewriting the ROs to test with.
Developing
a voice
Thoughtful: A balance between formal titles and informative descriptions.
Simple & Direct: Writing is straightforward and easy to understand.
Deliberate & Consistent: Words are used intentionally and users can count on the language provided to inform them.
Core elements
of the voice
I then developed a research strategy with defined deliverables that fit within the scope of the project time frame.
By changing the platform’s language from IT-centric to client-centric, more users will be able to find what they need in the service catalog as the experience becomes more intuitive. This change should encourage adoption by increasing user familiarity and learnability.
Hypothesis
City employees without prior IT knowledge have a difficult time understanding the language of the Request Offerings (RO), which causes them to deviate from using the IT Service Hub.
Problem Statement
From our findings and the characteristics of our users, I developed a persona to humanize our users. This helped communicate our findings to the client as well.
Persona
Designed by Peter Thompson
I created a storyboard to illustrate the emotional arc users experience when using the IT Service Hub.
Storyboard
To establish a baseline with five users, I created two task scenarios based on the most commonly used Request Offerings. I wanted to explore the flow, expectations, and frustrations users face with the provided prototype.
Baseline
The primary findings for the IT Service Hub Homepage and Service Catalog were:
Inadequate Search Function
“I cannot find what I am searching for because I only know the ballpark of what I need.”
The search function was inadequate because the metadata in the search index was not built out and did not match users’ mental models when searching for what they need.
Usability testing validated that the language is not user-centric or intuitive enough for users to accomplish their tasks smoothly.
Users have different interpretations of the word “standard,” which causes confusion.
Ultimately, users cannot find the right service that they need and opt for calling.
Findings
“People get really confused with Purchase Standard Software, Request Software, and Request a Hardware or Software quote.”
Beyond the issues of language, users are facing several other challenges with the IT Services that they felt diminished their trust in the system. I felt it was important to outline these additional pain points:
Beyond
the Hub
When filling out service requests, users felt they didn’t have enough information up-front and would get deep into forms before discovering they lacked permissions or didn’t know what information was required.
Users don’t feel that they get feedback on the status of their ticket or where it is in the process, unless they have VIP access to the Hub.
Delayed, unfulfilled, or wrongly fulfilled requests, e.g., a network jack activated at the wrong cubicle.
“Nine times out of ten a new person can’t access their files because IT department didn’t pay attention to the details of what files that person needed access to.”
All of these things have made users lose faith in the system.
Prototype (wireframe)
To evaluate the changes in language, I wrote two task scenarios. The first task tested the language and users’ comprehension while finding the software they need. The second task addressed the confusion users faced when requesting access to the City Network while working remotely.
After the prototype was completed, I tested the language changes with nine users.
Usability testing
To understand how well we solved users’ problems with the IT Service Hub, we calculated the average success rate of our two task scenarios.
For the first task, we had a 25% success rate, meaning 75% of users were unable to finish their task or stopped trying.
For the second task, 37.5% of users were successful, which indicates we have a long road ahead of us. It is important to note that the prototype did not have an active search function and the filters were not built out. With these elements completed, users may have an easier time finding what they need.
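The per-task success rates above are simple completion ratios. A minimal sketch in Python, with hypothetical participant tallies (2 of 8 and 3 of 8 are assumptions chosen only to reproduce the quoted percentages, not figures from the study):

```python
# Task success rate: the share of participants who completed the task.
# The tallies below are hypothetical, chosen to match the quoted rates.

def success_rate(successes, participants):
    """Percentage of participants who completed the task."""
    return successes / participants * 100

task1 = success_rate(2, 8)     # 25.0 -> 75% failed or gave up
task2 = success_rate(3, 8)     # 37.5
average = (task1 + task2) / 2  # 31.25 across both scenarios
print(task1, task2, average)
```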
Measuring success
Final wireframes
To measure our success in improving the users experience we used the System Usability Scale.
We first established a baseline score with the five original users on the preliminary prototype. The baseline experience averaged 22.5%.
To give context to this score, on an academic grading scale it would be an F and ultimately not an acceptable experience.
After we implemented our changes to color, icons, and language and tested with seven users, the usability scores averaged 71.8%, which on an academic scale would be a high C and falls within the acceptable margin.
We have made great strides in changing the user experience, but it also means there is a long way to go.
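The scores above come from the standard System Usability Scale calculation, which converts ten 1–5 Likert responses into a 0–100 score. A minimal sketch in Python (the helper name is ours, not part of the project):

```python
# Standard SUS scoring: odd-numbered items contribute (answer - 1),
# even-numbered items contribute (5 - answer); the sum is scaled
# by 2.5 to map onto a 0-100 scale.

def sus_score(responses):
    """responses: list of ten Likert answers (1-5), item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd item)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A neutral respondent (all 3s) scores exactly 50.
print(sus_score([3] * 10))  # 50.0
```

Averaging these per-respondent scores across all participants gives the 22.5% baseline and 71.8% post-redesign figures reported here.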
Findings
After we implemented our changes
What we recommend
As we met with the client twice weekly, some of the recommendations were implemented early on in the research process. The first thing to go was the word “Standard” because it means different things to each department.
“I don't know what the distinguishing characteristics are between what is considered standard of the city and what is non standard.”
As established by the data from the task analysis, despite the language, icon, and color changes, users were still unable to distinguish which software ROs they needed. We recommend creating an overarching RO encompassing all software, then guiding users through the process of finding what they need.
To bridge the gap between users’ mental models and the search index, we recommend expanding ROs with targeted metadata to improve the search function.
Due to the time constraints of the project, we were unable to complete language edits for all 105 ROs. We recommend using the voice branding guide and more user testing to redefine the language on the remaining ROs.
The most important part of improving the user experience is to continue interviewing and iterating with users who have low to mid tech empathy.
Client Feedback
Our team and our clients, the City of Seattle IT management team!