
Thought Leadership Series: Prove Yourself Wrong (I Dare You)


This blog post is part of our ResultsCX thought leadership series. Its author, Dr. Jim Sullivan, leads the Enterprise Business Intelligence operation at ResultsCX, which uses advanced analysis to identify and eliminate the root causes of client CX challenges.

Prove Yourself Wrong (I Dare You)

The information age has increased the quantity of data we have access to, and it has expanded our capacity to interpret and consume the resulting information. Much of this information has been created through one form of analysis or another. To understand information more fully, it is helpful to understand basic analysis first.

A primer on basic analysis

Let’s use the standard scientific approach as a framework for understanding analysis. The first step in any analysis is the creation or identification of a question. Next, we establish a hypothesis and determine what tests will need to be run. We run the tests, validate our results, reach our conclusions, and make recommendations. This, in a nutshell, is analytics. The specifics of the question, validity of the tests, and the context of the conclusions all impact our recommendations. We are testing our theory, and our expectation is that we will prove ourselves right.

Validation is essential

What if the results don’t show what we expected? One of the failings of most analysis practices is that they don’t validate their results: pass or fail, the analyst accepts whatever the first pass returns and moves forward. The real value of an analytical result is the ability to arrive at the same conclusion each time the same test is run, showing that we’re on the right path. Whether we prove the hypothesis right or wrong, accepting the first conclusion without that check is a common mistake.
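To make this concrete, here is a minimal sketch of one way to check whether a conclusion survives being re-run. It is a generic bootstrap-style stability check written in Python, not a specific ResultsCX method, and the two groups, their counts, and the 5% decision margin are invented placeholders for illustration only.

    # A minimal sketch: re-run the same test on resampled data and see whether
    # we reach the same verdict each time. All data below is hypothetical.
    import random

    random.seed(42)  # fixed seed so the sketch itself is repeatable

    # Hypothetical observations: 1 = outcome of interest occurred, 0 = it did not.
    group_a = [1] * 62 + [0] * 38
    group_b = [1] * 48 + [0] * 52

    def verdict(a, b, margin=0.05):
        """Our 'conclusion': group A's rate beats group B's by at least `margin`."""
        return (sum(a) / len(a)) - (sum(b) / len(b)) >= margin

    def resample(data):
        """Draw a bootstrap sample: same size, drawn with replacement."""
        return [random.choice(data) for _ in data]

    # Re-run the test many times on resampled data and count how often the
    # original conclusion holds.
    runs = 1000
    agreements = sum(verdict(resample(group_a), resample(group_b)) for _ in range(runs))
    print(f"Conclusion held in {agreements}/{runs} re-runs")

A conclusion that holds in nearly every re-run is worth carrying forward; one that flips depending on which sample it sees is exactly the kind of first-pass result we should be wary of accepting.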


When a hypothesis fails

When a repeated test still fails to prove the hypothesis, we tend to set aside our findings and move to the next question. Failing to prove the theory we’ve established often brings our analytical efforts to an immediate halt. When we’re wrong, we either move forward as if we were right (accepting our theory despite the results) or move on to something else (ignoring the results). We can spend weeks attempting to prove ourselves right and, when we fail, forget the work we’ve done and move on.

Learning from lack of proof

But what if we didn’t? What if we proved ourselves wrong, and went with those findings instead of ignoring them? When we fail to prove a hypothesis, we often succeed in finding something else. Unexpected findings can have unique value and create new questions. They may lead us to new opportunities, new tests, and new recommendations. We may find ourselves solving key problems we didn’t know we had or breaking into new markets we didn’t know existed. When we prove ourselves wrong, we often find a unique gem of an idea, because it is usually completely different from anything we would normally have considered. This is where true analytics, beyond anything else, can make a major difference in our business.

Analytics in action: an example involving carrots and drunk driving

Here’s an example of proving ourselves wrong and solving a problem we didn’t know we had. We’ll begin with a question: If people stopped eating carrots, would drunk driving disappear as a social issue? The hypothesis that stems from this question: A causal relationship exists between eating carrots and drunk driving. This means that we believe, based on our initial theory, that if people stopped eating carrots, there would be less drunk driving.

To test this theory, let’s look at the data on people who have been arrested for drunk driving and determine how many of them ate carrots. To rule out other possibilities, we’ll also test those who ate fruit, those who ate peanuts, and those who ate steak.
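As a rough sketch of what such a test could look like in code, the snippet below runs a chi-square test of independence on a 2x2 table of who ate each food versus who was arrested. Every count in the foods dictionary is a hypothetical placeholder invented for illustration, and even a tiny p-value here would only indicate correlation, never causation.

    # A minimal sketch of the carrot test. All counts below are hypothetical
    # placeholders; the structure of the test, not the numbers, is the point.
    from scipy.stats import chi2_contingency

    def association_test(ate_arrested, ate_free, not_ate_arrested, not_ate_free):
        """Chi-square test of independence on a 2x2 table of (ate food) x (arrested)."""
        table = [[ate_arrested, ate_free],
                 [not_ate_arrested, not_ate_free]]
        chi2, p_value, dof, expected = chi2_contingency(table)
        return chi2, p_value

    # Hypothetical counts: (ate & arrested, ate & not arrested,
    #                       didn't eat & arrested, didn't eat & not arrested)
    foods = {
        "carrots": (180, 8200, 20, 1600),
        "fruit":   (175, 8300, 25, 1500),
        "peanuts": (120, 2900, 80, 6900),
        "steak":   (90,  4100, 110, 5700),
    }

    for food, counts in foods.items():
        chi2, p = association_test(*counts)
        print(f"{food}: chi2 = {chi2:.1f}, p = {p:.4f}")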

It’s likely that, given the popularity of carrots, a very strong correlation exists between those who ate carrots and those who were arrested for drunk driving, but we fail to prove that eating carrots causes people to drive drunk. We have failed to prove our hypothesis, and after a second attempt we fail again, definitively proving ourselves wrong.


Unexpected data proves useful

However, during the testing an interesting data point is captured: people who eat salty peanuts get thirsty and tend to drink more because of the salt. We note that bars provide bowls of salted nuts to get people to buy and drink more beer. Our failure to prove our hypothesis offers a valuable data point that we can now use to make a recommendation: eliminating bar nuts could reduce the amount of beer consumed, the number of people who get drunk in bars, and the number of drunk drivers on the road. Thus, we can achieve our goal through another avenue.

The value of analysis when we prove ourselves wrong

This is what is most often missed when we overlook the value of proving ourselves wrong. Had we ignored the results of our testing and placed a ban on carrots, we would have had little, if any, impact on drunk driving. Had we set aside all our work as a complete failure, we would never have considered the bar nuts. By embracing our failure, we uncover a hidden gem of an opportunity that delivers far better results than banning carrots ever would.

The next time you believe you have the answer, test yourself. Dig deep, be honest, be persistent, and test your results. Be thorough and open-minded enough to accept your verified results. You may find that what you believed is absolutely true, and this will give you confidence to move forward on your planned path. However, you just might prove yourself wrong, and that could open an unexpected new path that is even more valuable. Next time you think you have the answers, prove yourself wrong.

I dare you.

About the Author

Dr. Jim Sullivan, Vice President, Information Systems and Enterprise Applications

Having joined the ResultsCX family in 2020, Dr. Jim brings more than thirty years of experience in technology, as well as advanced degrees in management and information systems. For the past two decades he has focused on the effective delivery of actionable intelligence as the cornerstone of information technology programs. His experience spans the BPO, consulting, manufacturing, media, and retail industries, as well as support for healthcare and government environments. Dr. Jim is a published author, experienced developer, system technician, and thought leader in data science and governance.


The following are some client improvements driven by analytics:

• Improved client retention via speech analytics and interaction modification

• 32% reduction in overall operational costs via process analytics and benchmark modification

• $10M cost avoidance via KPI review and analysis, and data quality program implementation