Bad data: how dodgy research can let down corporate thought leadership


November 1, 2025

We all know, deep down, that corporate thought leadership has an ulterior agenda. Although it often positions itself as quasi-independent research, usually based on surveys, the results can be far from objective. Findings are cherry-picked to suit an overarching narrative, and results that contradict that message are quietly buried.

Of course, the extent to which thought leadership does this varies. In 20 years of producing thought leadership on behalf of a variety of clients, I have worked with many who really value the independence of research and who are fairly relaxed about findings that do not support their message (up to a point, anyway). I have also worked with others who only want a specific conclusion that supports their goals, and who will ignore pretty much everything else.

In my experience, companies that are more relaxed about nuanced findings (and who show greater respect for the integrity of research) tend to have a better understanding of thought leadership and how it will be consumed. They recognise that audiences will rightly be sceptical of conclusions that align exclusively with a commercial agenda, and that they will place greater trust in those that paint a more complex picture.

Some general patterns hold, however. The closer thought leadership gets to the sales agenda, the more likely it is that the client will want the conclusions to support commercial goals. That creates a strong temptation to be selective about the message. After all, no cloud computing company wants a piece of thought leadership whose main conclusion is that 80% of cloud implementations fail.

The problem with some thought leadership is that it can portray itself as quasi-independent and rigorously researched when in reality it isn’t. Yes, money has been spent going to market and surveying hundreds of senior executives. Interviews have been conducted with experts who may include academics and independent third parties. Yet the process by which the data is collected and analysed can sometimes be far from scientific. A doorstop thought leadership report based on a survey of many hundreds of senior, relevant respondents can create a semblance of rigour. But scratch beneath the surface and the approach taken is often deeply flawed. Greater scale does not equal greater credibility.

The shortcomings of research in thought leadership would not be a major problem if producers were always honest about their intentions. Audiences know that most reports are, usually, a form of marketing, designed to highlight a particular message. But producers are rarely up-front about this. They cling to the illusion of rigour when it is clear to most people that this is just a smokescreen. My advice is to be transparent about intentions and research limitations. Acknowledging them does not take away from what can still be an interesting read that sparks debate and gets audiences thinking - but it is misleading to suggest that the research conclusions are watertight, and overclaiming ultimately undermines credibility and trust.

So, what are some of these limitations and where do companies most often fall down when it comes to the rigour of their research? In my experience, the following are some of the most common.

The dangers of self-reporting. Most thought leadership surveys ask executives for their opinions on a major business trend. For example, how much progress are they making with AI adoption? How effectively are those initiatives delivering real value? And how successfully are they personally managing this area of work in their role?

There are several issues with a self-reporting approach. First, we are all prone to acquiescence bias: we tend to agree with statements that are presented to us, even if our real feelings are more nuanced. Second, we have a tendency towards optimism bias. We think that we are better at things than we really are, and we think that bad things are more likely to happen to other people than to us. One famous study asked students whether certain events were more likely to happen to them than to their classmates. If the event was positive, such as getting a top job, students thought it was more likely to happen to them than to others. For negative events, such as having an early heart attack, the reverse was true. We see a similar phenomenon in surveys, where large majorities of respondents often report above-average progress or performance, when statistically that is impossible.
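
As a rough illustration, here is a minimal simulation (all parameters invented) of how a modest optimism shift in self-assessment produces the familiar “large majority above average” pattern:

```python
# A minimal sketch of optimism bias in self-reported performance.
# All parameters are invented for illustration.
import random

random.seed(42)
N = 500                                   # hypothetical respondents

# True performance: standard normal, so roughly half sit above the mean.
true_perf = [random.gauss(0, 1) for _ in range(N)]
mean_perf = sum(true_perf) / N

# Self-assessment = true performance + optimism shift + noise (assumed sizes).
self_view = [p + 0.8 + random.gauss(0, 0.5) for p in true_perf]

above_true = sum(p > mean_perf for p in true_perf) / N
above_self = sum(s > mean_perf for s in self_view) / N

print(f"Actually above average:        {above_true:.0%}")   # ~50%
print(f"Believe they're above average: {above_self:.0%}")   # ~75-80%
```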

Confusing correlation with causation. This is probably the biggest issue with corporate thought leadership surveys. It is just too tempting to conclude that performing a certain action - investing in a certain technology, in diversity and inclusion or in artificial intelligence, for example - leads to strong performance. If you are a company that sells services in any of those areas, of course you want to highlight that connection if you can. So surveys will look for links between the two variables and conclude that one leads to the other. The problem is that this is correlation, rather than causation. Apparent connections between unrelated variables are often spurious: they may be attributable to chance or a third factor, or reflect no real relationship at all. The website Spurious Correlations, produced by Tyler Vigen, has hundreds of hilarious examples: for instance, that the popularity of the first name Violet correlates with fossil fuel use in Equatorial Guinea (see chart). In a similar, albeit less extreme, way, thought leadership producers can demonstrate correlations between management activities, such as investment, and corporate financial performance, and yet the relationship between the two is spurious at best.

[Chart: popularity of the first name Violet vs fossil fuel use in Equatorial Guinea. Source: Tyler Vigen, Spurious Correlations]
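
To see how easily this arises, here is a minimal sketch (synthetic data; the drift and noise values are arbitrary assumptions) showing that two series generated entirely independently can still correlate strongly when both trend over time, as many business metrics do:

```python
# A minimal sketch of spurious correlation: two series generated completely
# independently can still correlate strongly when both drift over time.
import random
import statistics  # statistics.correlation requires Python 3.10+

def random_walk(n, seed, drift=0.3):
    rng = random.Random(seed)
    level, series = 0.0, []
    for _ in range(n):
        level += rng.gauss(drift, 1.0)    # mild upward drift, like many KPIs
        series.append(level)
    return series

a = random_walk(40, seed=1)   # e.g. "investment in X" over 40 quarters
b = random_walk(40, seed=2)   # e.g. an unrelated performance metric

print(f"Correlation between two unrelated series: {statistics.correlation(a, b):.2f}")
```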

Pro-social responses. This is a common issue with all surveys. Political pollsters know that survey respondents will often not reveal their true voting intentions. For example, in the run-up to the 2016 US presidential election, most polls were confident that Hillary Clinton would win. They were, of course, wrong. We see similar phenomena in thought leadership surveys. If respondents are asked about their intentions to prioritise sustainability and the energy transition, a high proportion will overstate those intentions because they think that is what the polling company wants to hear.

Self-selection bias. Imagine a message lands in your inbox asking you to contribute to a survey on the impact of sustainability on corporate performance. Will you take it, or dispatch the email straight to the trash? A lot will depend on how passionate you are about the topic of sustainability. If you are an enthusiastic supporter, you’ll be more likely to take the survey than someone who holds no strong views. You may also take the survey if you strongly dispute the connection between sustainability and performance and want your views to be heard. Either way, the point is that surveys are more likely to be completed by people who hold strong views on a topic. Those with fairly neutral views, who in many cases may comprise the majority of potential respondents, will be less likely to participate, and this leads to an inevitable skew in the results.
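
A minimal sketch of the mechanism, using invented response probabilities: if strong views make someone more likely to respond, the achieved sample’s average attitude drifts away from the population’s, even though nobody lies:

```python
# A minimal sketch of self-selection bias: strong views raise the chance of
# responding, so the achieved sample skews away from the neutral majority.
# All response probabilities are invented.
import random

random.seed(7)
population = [random.gauss(0, 1) for _ in range(10_000)]   # attitude, 0 = neutral

def responds(attitude):
    base = 0.05                          # neutral people rarely bother
    passion = 0.10 * abs(attitude)       # strong views in either direction help
    tilt = 0.05 * max(attitude, 0.0)     # enthusiasts keener still (assumed)
    return random.random() < base + passion + tilt

sample = [a for a in population if responds(a)]

print(f"Population mean attitude: {sum(population)/len(population):+.2f}")  # ~0
print(f"Respondent mean attitude: {sum(sample)/len(sample):+.2f}")          # skews positive
```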

Spurious precision. Thought leadership producers often pose really difficult questions to survey respondents. For example, they may ask for the change in their EBITDA percentage over the past three years, the percentage change in their headcount or the amount they are currently investing in technology. Sure, a small proportion of respondents will have these kinds of numbers at their fingertips, but most won’t. As a result, they guess. The company producing the thought leadership will faithfully report the aggregate findings, and yet many of the underlying data points may be wildly wrong.
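
Here is a minimal sketch of the effect, with invented parameters: if most respondents guess, anchoring on flattering round numbers, the aggregate still prints to one decimal place but drifts well away from the truth:

```python
# A minimal sketch of spurious precision: most respondents guess, anchoring on
# flattering round numbers, yet the aggregate is reported to one decimal place.
# The share who truly know (15%) and the guessing bias are assumptions.
import random

random.seed(3)
N = 400
true_vals = [random.gauss(6.0, 3.0) for _ in range(N)]   # e.g. real EBITDA change, %

def reported(true):
    if random.random() < 0.15:                     # the few who actually know
        return true
    return round(true + random.gauss(2.0, 4.0))   # a rounded, flattering guess

answers = [reported(t) for t in true_vals]

print(f"True mean:     {sum(true_vals)/N:.1f}%")
print(f"Reported mean: {sum(answers)/N:.1f}%")     # looks precise, runs high
```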

This problem is often compounded by some of the other challenges listed here. Companies will often seek out connections between certain actions and corporate performance: does investing in AI lead to higher profitability, for example? Here we may encounter several problems at once: weaknesses in the self-reporting of both investment and profitability; confusion of correlation with causation if the writers try to establish a link between the two variables; and spurious precision, because there’s a good chance that a large proportion of respondents won’t be able to state their profitability accurately (or may confuse gross profit with net margin). Added to that, respondents who consider themselves successful at the input (investing in AI, for example) are also likely to consider themselves good at the output (above-average profitability). This creates circular logic.
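
This circularity can be simulated directly. In the sketch below (synthetic data; the “optimism” halo factor and noise levels are assumptions), investment and profitability are independent by construction, yet their self-reported versions correlate clearly because the same optimism inflates both:

```python
# A minimal sketch of the circular-logic problem: a single "optimism" trait
# inflates both self-reported investment and self-reported profitability.
# The true variables are independent by construction; noise levels are assumed.
import random
import statistics  # requires Python 3.10+

random.seed(11)
N = 600
optimism  = [random.gauss(0, 1) for _ in range(N)]   # per-respondent halo
true_inv  = [random.gauss(0, 1) for _ in range(N)]   # independent of profit
true_prof = [random.gauss(0, 1) for _ in range(N)]

said_inv  = [i + o + random.gauss(0, 0.5) for i, o in zip(true_inv, optimism)]
said_prof = [p + o + random.gauss(0, 0.5) for p, o in zip(true_prof, optimism)]

print(f"True correlation:          {statistics.correlation(true_inv, true_prof):+.2f}")  # ~0
print(f"Self-reported correlation: {statistics.correlation(said_inv, said_prof):+.2f}")  # ~+0.4
```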

Representation problems. Conducting surveys for thought leadership campaigns is expensive. Companies therefore generally want to keep the number of respondents as low as they can - enough to be credible with the audience and the media, but not so many that they end up having to divert budget away from other crucial parts of a campaign. The problem is that these small samples are very often unrepresentative of the wider population, which can lead to highly skewed results or conclusions drawn from very limited data.
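
A minimal sketch of that instability, with an assumed true adoption rate of 40%: running the same hypothetical survey of 150 respondents ten times produces headline percentages that swing by ten points or more through sampling noise alone:

```python
# A minimal sketch of small-sample instability: the same population surveyed
# ten times at n = 150 yields headline figures that swing through noise alone.
# The 40% true adoption rate is an assumption.
import random

random.seed(5)
TRUE_RATE = 0.40
N = 150

estimates = []
for _ in range(10):                                   # ten hypothetical waves
    hits = sum(random.random() < TRUE_RATE for _ in range(N))
    estimates.append(hits / N)

print(", ".join(f"{e:.0%}" for e in estimates))       # e.g. roughly 32%...48%
```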

Narrative fallacy. This term, coined by the author Nassim Taleb, refers to the tendency to create simple stories to explain complex events. It is very common in thought leadership reports, because producers often want to provide an explanation for certain results. That is commendable, because interpretation is useful to the audience, but it becomes problematic if that interpretation makes connections that aren’t really there or ignores the effects of other important factors. The issue is compounded because thought leadership producers are constantly reminded of the importance of storytelling. In some respects that’s a good thing, if it makes content more engaging and memorable. But it’s a problem if the stories told are just plain wrong or based on a false understanding of the essential underlying factors.

So, how do we counter some of these issues? It’s unreasonable to expect most thought leadership producers to conduct research to scientific levels of rigour: the realities of timelines, budgets and marketing calendars make that impractical. But there are steps they can take to avoid the most common traps. Here are a few pointers:

  1. Be open about research limitations. Explain where the methodology might fall short, and why certain findings need further research before they can be treated as firm conclusions.
  2. Be aware of the biases that can affect survey findings. When you think that factors such as overconfidence, optimism or confirmation bias could distort the findings, consider rephrasing questions or using different formats to minimise them. For example, agree/disagree questions can lead to acquiescence bias, where people are more likely to agree than disagree with a statement. Seek out alternative formats to test the same hypothesis, or control for common biases.
  3. Explore different interpretations of findings and acknowledge that alternative explanations may be possible. Simple narratives are appealing, but they’re not much help if they are wrong.
  4. Don’t confuse correlation and causation. If two findings are correlated, by all means say so, but be open about the possibility that this could be due to a variety of factors, including chance. Correlations can be interesting, but they remain hypotheses rather than conclusions (see the sketch after this list for one simple check).
  5. Don’t expect too much from survey respondents. Avoid questions that demand precise figures or detailed knowledge of specific performance data or investment levels. Most people will not have these at their fingertips, and you want to avoid forcing respondents to guess. If you can replace self-reported data with real data, do: rather than asking respondents for their financial information, collect it from publicly available sources where feasible.
  6. Minimise sampling bias in recruitment. Look at the way that survey questionnaires are positioned. Are they more likely to attract respondents who have strong views about a subject, or particular demographic groups? If so, look at the language and make it more neutral to appeal to a wider sample.
  7. Finally, be humble. If there are limitations, don’t overstate the rigour of your research, and be clear that the findings are directional and should be tested further.
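
On point 4, one simple robustness check is worth sketching: before reporting a correlation, see whether it survives controlling for an obvious confounder such as company size. The example below uses synthetic data in which company size drives both variables, so the raw correlation looks like a finding but vanishes once size is removed:

```python
# A minimal sketch of a basic robustness check: does a correlation survive
# controlling for an obvious confounder (here, company size)? The data is
# synthetic: size drives both variables, which are otherwise unrelated.
import random
import statistics  # correlation/covariance require Python 3.10+

random.seed(9)
N = 500
size   = [random.gauss(0, 1) for _ in range(N)]            # confounder
invest = [s + random.gauss(0, 1) for s in size]            # driven by size
profit = [s + random.gauss(0, 1) for s in size]            # also driven by size

def residuals(y, x):
    """Remove the linear dependence of y on x; what's left is y's own variation."""
    beta = statistics.covariance(y, x) / statistics.variance(x)
    return [yi - beta * xi for yi, xi in zip(y, x)]

raw  = statistics.correlation(invest, profit)
ctrl = statistics.correlation(residuals(invest, size), residuals(profit, size))

print(f"Raw correlation:              {raw:+.2f}")   # looks like a finding (~+0.5)
print(f"Controlling for company size: {ctrl:+.2f}")  # ~0: the 'link' vanishes
```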


Ready to build B2B content with impact? Let’s make it happen.