Improving an AI smart symptom checker

Timeframe: 2 weeks - My role: Design researcher

  1. Problem 

  2. My process

  3. Findings

  4. Next steps

  5. What I learned

Problem

The AI team introduced a new feature in the hope of keeping users engaged: an autocomplete tool that helped users add more accurate symptoms to the chat. We measured its success by the number of completed consultations. Nobody expected it to have a negative impact, yet after the feature was implemented, drop-off was 13% higher than before, and the reasons were unclear.

My process

  • To understand the problem, I first set up a meeting with the medical lead to clarify the objectives, hypotheses, and time frames. Together we listed our assumptions and agreed on what a successful outcome would look like.

  • I also sat down with the data analyst to understand how the metrics differed before and after the feature was implemented (see the sketch after this list).

  • I chose user interviews on the Usertesting.com platform to gather qualitative data. Because of the tight time frame, I ran unmoderated tests, and I combined them with a questionnaire to capture quantitative data as well.

  • I used affinity mapping to analyse user feedback and observation findings, then revisited the questionnaire results.

  • Once I had my findings, the data analyst and I sat down again to compare them and look for patterns. This was my favourite part of the project.
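To make the before-and-after comparison concrete, here is a minimal sketch of the kind of check the data analyst and I walked through. The data, column names, and launch date below are hypothetical; the real analysis used the product's own event logs.

```python
# Minimal sketch with hypothetical data: comparing consultation drop-off
# before and after the autocomplete feature launched.
import pandas as pd

# Hypothetical consultation log: one row per started consultation.
consultations = pd.DataFrame({
    "started_at": pd.to_datetime([
        "2021-03-01", "2021-03-02", "2021-03-03", "2021-03-04",
        "2021-04-01", "2021-04-02", "2021-04-03", "2021-04-04",
    ]),
    "completed": [True, True, True, False, True, True, False, False],
})

LAUNCH_DATE = pd.Timestamp("2021-03-15")  # assumed launch date

def drop_off_rate(df: pd.DataFrame) -> float:
    """Share of started consultations that were never completed."""
    return 1 - df["completed"].mean()

before = consultations[consultations["started_at"] < LAUNCH_DATE]
after = consultations[consultations["started_at"] >= LAUNCH_DATE]

print(f"Drop-off before launch: {drop_off_rate(before):.0%}")
print(f"Drop-off after launch:  {drop_off_rate(after):.0%}")
print(f"Change: {drop_off_rate(after) - drop_off_rate(before):+.0%} points")
```

Framing the comparison as a drop-off rate (started but never completed) matched how we already measured the feature's success, which made the before-and-after numbers easy for everyone to read.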

Findings

What I found was eye-opening for the stakeholders. Earlier user research had shown that framing symptoms is a pain point for users, so an autocomplete tool seemed like the obvious solution. But the research and the data showed that the feature actually demanded more mental effort from users and made the whole journey longer, because they had to reframe their symptoms several times. In its current form, the autocomplete feature therefore didn't help users and caused the drop-off.

During this research I also identified fundamental user experience and usability issues with the smart symptom checker that contributed to the problems above. 

Here are some examples:

  • there was no guidance on how to use the tool,

  • users had no idea how long their journey would be,

  • and the tone of voice was not aligned with the persona.

Detailed findings based on user feedback

Main findings based on observation and data insights

Next steps

My recommendation was to investigate the reasons for the drop-off more deeply. For example, which symptoms were the most problematic? The data analyst created a list of recommended next steps to answer these questions.

Meanwhile, the medical lead, the product designer, and I identified a couple of quick wins:

  • copy changes to help users understand how to enter their symptoms. 

  • other copy changes to temporarily steer users away from the affected feature.

Then I recommended the following actions to improve the user experience:

  • review the whole journey against the “10 Usability Heuristics”,

  • do further research into users’ expectations,

  • run a more focused competitor analysis on selected parts of the journey, such as the autocomplete step.

What I learned

  1. Data analytics and research make each other more powerful.

  2. It is important to show stakeholders videos of users struggling with the tool, because seeing it convinces them of the severity of the problem.

  3. Working in a cross-functional team where different squads use different methodologies (e.g. agile versus waterfall) presents additional challenges, but ultimately leads to better communication.