Design Validation: Harnessing the power of user interviews to improve UX

Humera Fatima
Published in Zeta Design
7 min read · Feb 8, 2022

Illustration by Aakansha Menon

A good user experience is rarely created in a single shot. Designers mostly depend on insights from preliminary research and intuition when they start working on a project.

In addition, most design teams deliver under a time crunch, making it difficult to test and validate their work before it reaches real users.

This blog is about our recent experience conducting a research activity to validate our employee benefits app, which had been revamped with a completely new Information Architecture and a new approach to how users manage the benefits offered by their employer. When the product shipped last year, it went out without much validation from real users due to time constraints. In August last year, my teammate Bharat Apat and I had some time on our hands, so we decided to test it.

Kick-off

We were looking at the complete application with its various journeys, so it was obvious that a single validation method would not suffice. We read about different ways to validate designs in this article: Product Design Methods In A Mind Map. This is what made the project even more exciting. So, we decided to first explore all the different ways validation testing can be done.

We ideated a list of research methodologies: peer reviews, user interviews, and surveys and polls, to get a rudimentary understanding of how people feel about the product.

We spoke to some of our team members about research activities they had done in the past, their takeaways, and the processes they followed. Based on these discussions, we realized that we were not looking at metrics such as the number of clicks on a sign-up button or heat maps on a catalog page. We wanted to validate whether there was any problem to be solved at all. Our key metric was whether information was easy enough to understand for users to make the right decisions while contributing a sum from their salary to a benefit. It was at this point that we realized we could only get insights about this experience via user interviews. Converting feedback into actionable insights was the essence of this exercise, which we discuss in the later sections.

Action Plan

It was one of our first user research projects, and given the resources it takes to conduct such an activity and derive conclusive insights, we wanted to start small. We drew a timeline of activities to find the most efficient way of getting insights while avoiding too much disruption to the existing solutions.

It also allowed us to not dwell on feedback for too long.

For user interviews, we prioritized user flows based on the following factors (a scoring sketch follows the list):

  • Unconventional UI patterns: Components or interactions that are not OS-specific and are new to users, who may struggle with them
  • Low-affordance interfaces: Actions or clickable areas that do not give the user enough tap affordance
  • Known issues: Any known issues were also prioritized for validation
  • High-frequency flows: The most frequently used flows were prioritized and tested
  • Post-action feedback: Journeys where the system’s feedback after an action was not clear to users
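
To make this prioritization repeatable, the factors can be turned into a rough weighted score per flow. Here is a minimal sketch in Python; the flow names and factor scores below are hypothetical, not from our actual study:

    # Hypothetical: rate each candidate flow 0-3 against the five factors
    # above, then rank flows by total score. Flows and scores are made up.
    FACTORS = ("unconventional_ui", "low_affordance", "known_issues",
               "frequency", "unclear_feedback")

    flows = {
        "contribute_to_benefit": (3, 1, 2, 3, 2),
        "view_benefit_details":  (1, 2, 0, 3, 1),
        "update_nominee":        (2, 3, 1, 1, 3),
    }

    # Highest total score = first in line for testing
    for name, scores in sorted(flows.items(), key=lambda kv: -sum(kv[1])):
        print(f"{name}: total={sum(scores)}")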

The shortlisted flows consisted of 7 tasks, which confined each session to no longer than an hour, since candidates might lose interest beyond that. The tasks were context-based: the user would read a real-world jobs-to-be-done scenario and try to complete the tasks using the prototype.

User Interviews

To validate the journeys for new users, we were looking for candidates who had very little familiarity with the product. As Jakob Nielsen argues in his article “Why You Only Need To Test With 5 Users”, we decided to test our flows in 2 phases with 5 users per phase. Iterations were made based on the insights from phase 1 and tested in phase 2.
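
The math behind the five-user rule comes from the Nielsen/Landauer model: with n testers, the share of usability problems found is 1 - (1 - L)^n, where L is the fraction of problems a single user uncovers (about 31% in Nielsen's data). A quick illustrative sketch in Python:

    # Nielsen/Landauer model: fraction of usability problems found by n users,
    # assuming each user independently uncovers a fraction L of all problems.
    def problems_found(n: int, L: float = 0.31) -> float:
        return 1 - (1 - L) ** n

    for n in (1, 3, 5, 10):
        print(f"{n} users -> {problems_found(n):.0%} of problems found")
    # 5 users already surface ~84% of problems, which is why two iterative
    # 5-user phases tend to beat one big 10-user phase.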

Our process in each phase. Illustrations by Aakansha Menon

We created a script for the user interviews to help us run streamlined sessions. The objective was to initiate a casual conversation with the candidates so they felt comfortable asking questions, set the context for the activity before asking them to perform any task, and build a rapport so that they would not hesitate to give honest feedback.

Key takeaways from the user interviews

  • Make a note of all the places you want feedback on; this helps you steer the discussion toward those sections and keep the session within the expected time frame.
  • Ask open-ended questions like “What did you think of this flow?” or “What are your thoughts about the information on this page?” If the user feels lost, ask them “What do you think just happened?” or ask them to retrace their steps and make sense of what happened. Let the users dictate the direction; as an interviewer, observe and prompt with questions without giving away too much information.
  • Once the task is complete, ask users to walk you through their thought process: why they clicked on option A instead of option B. Ask them what they think would happen if they clicked on B. This helps you assess how your text and iconography help users navigate the app.
  • Notice events like long decision-making pauses or the user going back and forth in the journey.
  • Have follow-up questions prepared, and be ready for any suggestions you might get. You may or may not incorporate them into your designs, but be sure to let your candidates know that you are taking them seriously.
  • Iterate quickly. If, after only 2 out of 5 interviews, you realize one of your flows is not working, change it. Iterate on the design and then test the solution with the remaining candidates. It will help you save time and effort by giving you results quickly.
  • Once you have your feedback in hand, break it down into smaller chunks. Why? Because users can only show you the after-effect of a problem; most of the time, it is up to you to do the root cause analysis. Also, negative feedback could point to a design or product problem, but it could also arise from the limitations of the prototype you gave your users. Be mindful of these limitations and do not dwell on them while working on solutions. For example, a swipe action in a mobile prototype tested on a desktop will not work as smoothly as it would on a mobile device, and users will have difficulty with the swiping gesture. If your users can identify the affordance for the swipe action, you can look past this device limitation.
  • When your user says, “I did not expect this card to be present here,” ask them why. Frame the question so the user can tell you where the journey broke down, what they expected versus what happened, and whether that expectation matches the real-world use case.
  • Sometimes you will also find that flows you thought were a problem were not an actual problem for the users. Here you go, validation received!

Converting feedback into actionable insights

We listed the feedback from all participants and mapped it to the relevant user flows and screens. This helped us bring together multiple pieces of feedback about the same feature, identify patterns, and weed out individual biases from the candidates’ side.

Recording user feedback

We prioritized patterns that were common across 3 or more users.
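
In code terms, that grouping is a simple tally: key each piece of feedback by flow and issue, then keep the issues reported by three or more distinct users. A small sketch in Python with made-up records:

    from collections import defaultdict

    # Hypothetical feedback records: (participant, flow, observed issue)
    feedback = [
        ("user1", "contribute", "CTA label unclear"),
        ("user2", "contribute", "CTA label unclear"),
        ("user4", "contribute", "CTA label unclear"),
        ("user3", "homepage", "new feature hard to find"),
    ]

    users_per_issue = defaultdict(set)
    for user, flow, issue in feedback:
        users_per_issue[(flow, issue)].add(user)

    # Keep only the patterns reported by 3 or more distinct users
    priority = [k for k, users in users_per_issue.items() if len(users) >= 3]
    print(priority)  # [('contribute', 'CTA label unclear')]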

Breaking down feedback

Key takeaways

  • All users loved the new color scheme and component style.
  • Text on some CTAs was not clear enough to indicate what happens next.
  • We found that some of the new features added great value for users but did not have enough access points from the homepage.
  • Consistency in components across different use-cases was needed.

Additionally, we conducted a heuristic evaluation using Jakob Nielsen’s 10 Usability Heuristics across all flows and components. In parallel, we A/B tested UI styles and copy, maintaining those variants separately from the interview prototypes so we could carry out these activities independently.
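
If you track such an evaluation in a spreadsheet, the structure is essentially a flows-by-heuristics matrix. A tiny sketch in Python; the flow name and finding below are hypothetical:

    # Jakob Nielsen's 10 usability heuristics, abbreviated
    HEURISTICS = [
        "visibility of system status", "match with the real world",
        "user control and freedom", "consistency and standards",
        "error prevention", "recognition rather than recall",
        "flexibility and efficiency of use", "aesthetic and minimalist design",
        "help users recover from errors", "help and documentation",
    ]

    # Hypothetical findings keyed by (flow, heuristic)
    findings = {
        ("contribute_to_benefit", "visibility of system status"):
            "no confirmation shown after submitting a contribution",
    }

    for (flow, heuristic), note in findings.items():
        print(f"{flow} violates '{heuristic}': {note}")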

We also wanted to run an accessibility check across the design system, based on the feedback from interviews and some evident inconsistencies. But carrying out all three activities together would not have been very fruitful, so we deferred the design system revamp to a later stage.

SUS Scores

We calculated the SUS (System Usability Scale) score for both phases and reached an amazing 88 in Phase 2! 🙌🏻
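
For context, SUS is computed from ten 1-5 Likert responses: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to land on a 0-100 scale; per-participant scores are then averaged for the phase. A quick sketch, with invented sample responses:

    def sus_score(responses: list[int]) -> float:
        """Standard SUS scoring for ten 1-5 Likert responses."""
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        total = 0
        for i, r in enumerate(responses):
            # Items 1, 3, 5, 7, 9 (index 0, 2, ...) are positively worded
            total += (r - 1) if i % 2 == 0 else (5 - r)
        return total * 2.5

    print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0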

Conclusion

Final report for stakeholder presentation

The main goal of our research activity was to get these findings into the product backlog and have them implemented. So we invited product managers to our brainstorming sessions and included them in all design reviews to ensure we were not going out of scope. The final output and designs were presented to all product managers to prioritize action items in the backlog.

To summarise, this activity helped us understand that there are many ways to validate our design, and finding the right method is key to conclusive user research.


Shout out to Bharat Apat for conducting this research with me, and Kshitij Pandey for guiding us the entire way. Special mention to Preethi Shreeya for helping me out with this blog.
