Sales had been bugging the product team for years: make the questionnaire shorter. No one likes filling out forms, medical forms least of all, and a life insurance application may be the single activity people least want to spend time completing. Finally, the product team analyzed the completion rates of their life insurance application form. An astounding 55% of potential customers did not complete their application. The company’s most valuable product was being undercut by more than half.
The application process had three stages: requesting access to the online application form, completing the application, and the approval process. The badgering from sales led the product team to start with the second stage – completing the application – where about 10% of the applicants were dropping out prior to completion.
These types of projects are called diagnostic analytics. The purpose is to diagnose a problem and propose a course of action. The work itself is akin to orienteering in the dark: you have a compass, a flashlight, a map, and several other tools at your disposal, but you are still wandering through the forest at night. We did, however, highlight a couple of major actions we knew we would need to take:
The first visualization we created was a graph of how many people who started the application completed each individual question. After the first question, there was a small drop. After questions 6 and 7, there were noticeable but minor drops. After question 25, there was a significant drop of about 10%. Then came the most bizarre part of the graph: even the final question had been completed by 60% of the entire population.
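A funnel like this can be computed directly from the application log. Below is a minimal sketch, assuming the log is a table with one row per applicant and one column per question, where a missing value marks an unanswered question; the column and variable names are hypothetical, not the team's actual schema.

```python
import pandas as pd

# Toy data: 5 applicants, 3 questions; None marks an unanswered question.
responses = pd.DataFrame({
    "q1": ["a", "b", "c", "d", "e"],
    "q2": ["a", "b", None, "d", "e"],
    "q3": ["a", None, None, "d", "e"],
})

# Share of starters who answered each question, in questionnaire order.
completion_rate = responses.notna().mean()
print(completion_rate)

# Drop after each question relative to the previous one highlights
# exactly where applicants abandon the form.
drop_off = completion_rate.shift(1, fill_value=1.0) - completion_rate
print(drop_off)
```

Plotting `completion_rate` against question number produces the graph described above; spikes in `drop_off` are the candidate problem questions (or, as it turned out, page boundaries).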
This was not as much an insight as it was a shock. Something was wrong; very, very wrong.
As I mentioned in the solution section, I wondered if there was some sort of sampling bias. It would be the most logical explanation for such an odd disparity. Using the data from those who completed the questionnaire, the team built a simple model to predict which applicants were likely to be approved for life insurance. The model was reasonably accurate, so we ran it on the population who completed most of the questionnaire but did not submit it. The model approved 82% of the applicants who did not submit the application, a statistically insignificant difference from the population that did. As best we could tell, there was no medical difference between those who completed the application and those who did not.
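The check above can be sketched as follows. This is a hedged illustration, not the team's actual model: the data is synthetic, the feature and variable names are assumptions, and a plain logistic regression plus a two-proportion z-test stand in for whatever the team actually used.

```python
import numpy as np
from math import sqrt, erf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "medical answers" and approval labels for applicants
# who submitted the questionnaire.
X_submitted = rng.normal(size=(500, 5))
y_submitted = (X_submitted[:, 0] + rng.normal(scale=0.5, size=500) > -1).astype(int)

# Fit the approval model on the submitted population only.
model = LogisticRegression().fit(X_submitted, y_submitted)

# Non-submitters drawn from the same distribution, i.e. the null
# hypothesis that the two populations are medically identical.
X_dropped = rng.normal(size=(400, 5))

p1 = model.predict(X_submitted).mean()  # predicted approval rate, submitters
p2 = model.predict(X_dropped).mean()    # predicted approval rate, drop-outs

# Two-proportion z-test: is the difference statistically significant?
n1, n2 = len(X_submitted), len(X_dropped)
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"approval rates: {p1:.2f} vs {p2:.2f}, p-value = {p_value:.2f}")
```

A large p-value here is the "statistically insignificant difference" the team found: the model cannot tell the drop-outs apart from the completers on medical grounds.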
During the final presentation, I presented the SVP in charge of the product with three cohorts of people who should be the focus of marketing efforts to nudge them into completing the questionnaire. Together, the three cohorts represented an opportunity to re-engage about 50% of the lost customers, an estimated value of $6.1MM a year. For a $100,000 project, a 61x ROI has to be considered a raging success.
But it was a bit of a red herring. I did not believe these three cohorts were, in fact, the problem. Over the course of my analysis, I kept coming back to the idea that the large drop-offs came not after individual questions, but after page changes in the system. Then I began to think about the incentives of the customers. They say they want the product, yet 30% never enter the system. Another 15% drop out before they start the questionnaire. Finally, six of every ten lost applicants drop out after completing the questionnaire, and only 4% drop out after individual questions. These people want the product, but something is disincentivizing them from completing the questionnaire. The only rational explanation was a User Interface (UI) problem.
I presented my logic to the SVP, and he added a chatbot to collect data about the quality of the UI as well as the overall User Experience (UX). I was right. The UI was awful, and it was frustrating users to the point that, in many cases, they never even started the application. The ROI was not 61x; it was 350x. The UI problem was costing them $35MM annually.
In 2023, I saw some research focused on tracking the value of brainstorming. The consensus is that it is not very useful: people are too biased toward their own preconceived notions, and the audience locks onto one idea much earlier than you would expect. I agree with the basic premise of the research, and I have seen the bias and anchoring myself, firsthand. However, if you are aware of these two gravitational forces, the information can still be valuable.
My first exercise in leading this project was to brainstorm with the leaders on why they thought the drop-out rate was so high. I started with a basic idea: we needed big buckets of concepts that could be differentiated. I believe we created six buckets, forcing the audience to focus on one idea at a time. People were allowed to talk about their pet theories, but only for a limited time.
Second, I knew it was a biased list. That does not mean it was not valuable. During that meeting, I generated about 100 different ideas and starting points for “pulling on threads” inside the data. The leaders’ input during the brainstorming gave me a starting point. Furthermore, it gave me insight into the minds of the stakeholders. Specifically, it told me what questions I needed to be able to answer in order for the research to be taken seriously. If the group lingered on any idea, it meant I needed to spend time proving or disproving that hypothesis to ensure the team had credibility.