B2B Quantitative Surveys: Design Them Like You Give A Damn

By Tricia Lindsey and Sean Campbell

This is the second blog in our multipart series where we assess what it means to have a high-quality panel, how to develop a great quantitative survey, and where panels and panel firms come from. This article focuses on developing the right research questions to create a strong quantitative survey. To read the first article in the series, click here.

You can’t write an effective B2B quantitative survey if you don’t care. Yet, unfortunately, that’s how many surveys are created today: by folks who don’t give a damn. Many of the surveys business professionals encounter look more like a dumped-out can of hash than something well thought out, carefully crafted, and full of empathy for the respondent.

Alissa Ehlers, one of our senior consultants, likes to put empathy at the forefront of quantitative survey design.

“Survey design requires a lot of empathy. Because you’re not talking face-to-face with a person, we need to put ourselves in their shoes. Where can we provide clarity in a sea of confusion? Are there any questions that might confuse respondents and skew results?” – Alissa Ehlers, Senior Quantitative Consultant

In sum, to create a valuable quantitative survey, you need to go beyond your needs and your client’s requirements. You also need to focus on the needs of the respondent.

Here are just a few critical things to keep in mind if you want to build surveys that show you care.

Beware Bias

Biases beset companies. One frequent bias occurs when a company fails to understand the words and phrases people actually use to refer to its products or services.

This kind of bias can torpedo any research effort if it’s not dealt with the right way. Specifically, this type of bias can hurt research efforts like message testing, which helps organizations determine precisely how to say the right things, at the right time, to the right people.

Research Realities: Removing Bias From Questions

Recently, a client of ours wanted to conduct a message testing study. Our first step was to conduct a few baseline in-depth interviews with current customers. During these conversations, it became clear that while our client was convinced users referred to its solution as “collaborative work management,” no real user did.

This is a perfect example of the power of qual before quant. If we had simply taken our client’s bias as a statement of truth, the survey instrument would likely have included many references to the phrase “collaborative work management.” At best, the phrase would have confused respondents; at worst, it would have produced survey results that didn’t line up with reality.

The Takeaway:

Using everyday language that everyone understands provides clarity for survey respondents and allows us to share findings that map to real-world experiences with products and services.

Don’t Force a False Choice

If you ask people to answer a true-or-false question like “I highly value customer support,” it’s likely that most will say they value customer support. But if you put “customer support” in a list of 10 options you’re trying to measure and ask participants to select the three they value most, you’re going to learn far more about what the customer really values. And customer support might not break into that top three.
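To make the difference concrete, here’s a minimal sketch in Python of how a pick-three question gets tallied. The response data and attribute names are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical answers to: "Select the three attributes you value most."
# On a true/false item, nearly everyone would mark "customer support"
# as valued; the forced choice reveals its actual rank.
responses = [
    ["price", "security", "integrations"],
    ["price", "customer support", "ease of use"],
    ["security", "ease of use", "integrations"],
    ["price", "security", "ease of use"],
]

top_three_counts = Counter(pick for picks in responses for pick in picks)
for attribute, count in top_three_counts.most_common():
    print(f"{attribute}: in the top three for {count} of {len(responses)} respondents")
```

In this toy data, customer support lands in only one respondent’s top three, even though all four respondents would likely have agreed with “I highly value customer support.”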

The Takeaway:

Oversimplifying response options can diminish the quality of the feedback you receive. By forcing respondents to weigh their options and make hard choices, you get closer to the truth.

Don’t Shoot With Both Barrels

One of the most common quantitative survey design mistakes involves a double-barreled question.

For example, if we asked respondents a double-barreled question about a “collaborative work management” tool, it might look something like, “Which of the following tools do you like the most and feel is the most user-friendly?”

At first glance, this question appears simple enough. If a researcher asked it during a qualitative interview, it would be trivial to ensure both parts of the question receive a complete answer. Yet this kind of easy back-and-forth doesn’t happen in quantitative research. Additionally, the question assumes that user-friendliness is correlated with favorability, which is not guaranteed.

Instead, a researcher might start by asking, “Which of the following tools do you like the most?” Leading with a narrow question leaves room to expand on a respondent’s answer with a more detailed follow-up, such as “What attributes do you associate with this tool?” or “Which tool do you feel is the most user-friendly?”
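If it helps to operationalize this during survey review, a rough heuristic can flag candidate double-barreled items. This is a toy sketch, not a substitute for human judgment; the conjunction check is our own simplification and will produce both false positives and false negatives:

```python
def maybe_double_barreled(question: str) -> bool:
    """Flag items that join two asks with "and"/"or" (a rough heuristic)."""
    lowered = question.lower()
    return " and " in lowered or " or " in lowered

questions = [
    "Which of the following tools do you like the most and feel is the most user-friendly?",
    "Which of the following tools do you like the most?",
    "Which tool do you feel is the most user-friendly?",
]

for q in questions:
    label = "REVIEW" if maybe_double_barreled(q) else "ok"
    print(f"[{label}] {q}")
```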

The Takeaway:

You can easily avoid double-barreled questions by keeping each question simple and easy to understand. Don’t cram two asks into one survey item. Instead, ask discrete questions and follow up appropriately after each one.

Don’t Lead the Witness

If you’ve spent any time watching courtroom dramas on TV, you might have heard the phrase “don’t lead the witness.” Researchers have to watch out for the very same thing when they design survey questions, even if the consequences aren’t about life, death, or imprisonment. Leading questions create bias and, in many cases, taint the survey results themselves. Let’s walk through an example.

“For what reasons does your organization prioritize migrating on-premise infrastructure to the cloud? Select all that apply.”

A. The cloud is more cost-effective.
B. The cloud provides us better scalability and flexibility.
C. The cloud is more secure.
D. The cloud allows for rapid application prototyping.

Interpreting the results from this leading question might produce a finding like, “60% of IT decision makers prioritize migration to the cloud due to its cost-effectiveness.” Unfortunately, this interpretation is misleading in a few ways.

1. We’re assuming migration to the cloud is a priority to the respondent.

In reality, some organizations might simply be reluctant to move their infrastructure off of on-premise servers. On the flip side, an organization may have already completed a major migration years ago, making the question irrelevant. The leading question forces an answer from respondents at both extremes and everyone in between, tainting the results.

2. We’re only stating positive reasons to move to the cloud.

Respondents might have reservations about moving to the cloud. By only giving them positive outcomes of a migration, we’re painting a perfect picture that might not be reality.

Here’s an example of a better way to handle the same scenario: “In the next two years, what are the top priority initiatives at your organization? Select up to two statements.”

A. Assess cybersecurity risk
B. Tech stack consolidation
C. Increase automation of manual processes
D. Virtualization
E. Migration of on-premise infrastructure to the cloud
F. Hiring and upskilling IT staff
G. Other – Write in: _____________

Next, we would show this follow-up question only to those who selected “E” in the previous question: “For what reasons does your organization prioritize migrating on-premise infrastructure to the cloud? Select all that apply.”

A. The cloud is more cost-effective.
B. The cloud provides us better scalability and flexibility.
C. The cloud is more secure.
D. Other – Write in: _____________

Taking this approach keeps us from leading the witness and allows us to drill down appropriately. For example, we might find that companies that prioritize migration are doing so because of cost, but that few organizations (in this survey) prioritize migration to the cloud in the first place.
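Survey platforms express this with skip (display) logic. Here’s a minimal Python sketch of the underlying idea, with hypothetical respondent data; the condition itself, “only show the follow-up if option E was selected,” is what matters:

```python
# Hypothetical "select all that apply" answers to the priorities
# question. Option "E" = migrating on-premise infrastructure to the cloud.
respondents = [
    {"id": 1, "priorities": {"A", "E"}},  # prioritizes cloud migration
    {"id": 2, "priorities": {"B", "F"}},  # does not
]

def should_show_followup(priorities: set) -> bool:
    # Ask "why do you prioritize migration?" only when migration
    # was actually named as a priority.
    return "E" in priorities

for r in respondents:
    action = "show" if should_show_followup(r["priorities"]) else "skip"
    print(f"Respondent {r['id']}: {action} the cloud-migration follow-up")
```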

The Takeaway:

Leading questions impact survey results and affect the accuracy of how one might interpret the data. Rather than asking questions that can sway respondents in a particular direction, remember to keep the language of your questions clear, concise, and unbiased.

Fight Against Survey Fatigue

Survey fatigue happens when respondents become uninterested or burnt out from answering too many survey questions. This fatigue results in respondents abandoning the survey partway through or failing to consider each question thoughtfully.

Respondent fatigue affects more than just the survey results. It affects business decisions. We know all of our clients want to maximize their survey investment. But simply stuffing the survey with more and more questions isn’t a way to do that.

At Cascade Insights, we counsel our clients that quantitative surveys should be no more than 20 questions long and should take no more than 15-20 minutes to complete. While there is no ‘right’ number of survey questions to ask, industry standards suggest that surveys stretching past 35 minutes see significantly more fatigue than short, concise ones.
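Those guidelines are easy to sanity-check while drafting. Here’s a back-of-the-envelope sketch; the 45-seconds-per-question average is our assumption for illustration, not an industry constant, and complex question types will run longer:

```python
SECONDS_PER_QUESTION = 45  # assumed average; tune by question type

MAX_QUESTIONS = 20  # guideline: no more than 20 questions
MAX_MINUTES = 20    # guideline: 15-20 minutes to complete

def estimated_minutes(question_count: int) -> float:
    return question_count * SECONDS_PER_QUESTION / 60

for n in (15, 20, 30):
    minutes = estimated_minutes(n)
    verdict = "ok" if n <= MAX_QUESTIONS and minutes <= MAX_MINUTES else "trim it"
    print(f"{n} questions, about {minutes:.0f} minutes: {verdict}")
```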

Additionally, we avoid putting tough survey questions at the end of the survey. For example, we might ask a multiple-choice question like, “Which of the vendors in this space are you aware of?” and then follow up with, “Of the vendors you selected, which do you consider the market leaders?” These types of questions force respondents to draw connections between responses and create the foundation for compelling insights. So, we try to place them early in the survey, while respondents’ minds are fresh.

The Takeaway:

Survey fatigue affects more than just the survey results. Be aware of how long a survey takes respondents to complete and how many questions are included. Remember, if a respondent becomes uninterested or burnt out, the results will reflect this.

Demographics Demand Attention

Most quantitative surveys should include demographic questions. These may cover topics such as a respondent’s job title, job function, current organization, career history, or age.

Demographic questions let our clients Act With Clarity™. If we simply tell a client that 80% of respondents thought that a brand was well known, we aren’t helping them drive decisions. However, if we say IT leaders were 90% aware of a given brand, but LoB executives were only 41% aware, we can provide a better roadmap for decision making.

Plus, it’s even better if we can say with confidence that CMOs are 48% aware, CROs are 32% aware, and CFOs are 43% aware. In this scenario, a client can focus their attention on the specific audiences that lack awareness, instead of simply peanut buttering their efforts across all different kinds of LoBs and hoping for the best.
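Mechanically, these cuts are just cross-tabulations of an awareness question against a demographic field. A minimal pandas sketch with made-up respondent-level data:

```python
import pandas as pd

# Hypothetical data: one row per respondent, with their role and
# whether they were aware of the brand in question.
df = pd.DataFrame({
    "role":  ["IT leader", "IT leader", "CMO", "CRO", "CFO", "CMO"],
    "aware": [True, True, False, False, True, True],
})

# Awareness rate by role: the cut that tells a client which
# specific audiences need attention.
awareness_by_role = df.groupby("role")["aware"].mean().mul(100).round(1)
print(awareness_by_role)
```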

The Takeaway:

Demographic questions help our clients Act With Clarity™. Job title, career history, and job function are just a few questions we like to ask respondents in our B2B quantitative surveys.

Creating Clear Connections

Researchers need to help clients see a clear connection between the questions asked and the business outcomes addressed. For example, a client might have an important business question such as, “Are we being outpriced?” This can lead to the assumption that the only data point that matters is exactly how much buyers paid for a competing solution.

This type of question isn’t useful in isolation, because customers can use a variety of tactics to arrive at a unique price. These include changing the number of licenses purchased, the type of access those licenses carry, the length of the agreement, or how payment is handled (monthly, yearly, etc.). Additionally, one customer may simply negotiate more effectively than another thanks to the strength of their purchasing team.

In sum, knowing the price one customer paid, or even the prices many customers paid, isn’t that relevant all by itself. Truly compelling pricing research takes those individual price points and marries them with an understanding of the knobs the customer turned to change the starting price into the final price.

With this knowledge in hand, we can counsel our client that the survey instrument really needs two areas of inquiry, not just one. The first focuses on exact prices paid; the second, on the circumstances or tactics the organization used to modify that price. In combination, these two questions help us identify the pricing model a solution provider might use.
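On the analysis side, marrying those two areas of inquiry might look like the sketch below: divide each reported price by the “knobs” respondents told us they turned (seat count, contract length) to get a comparable effective rate. The deal data is hypothetical:

```python
# Hypothetical deals combining both survey questions: the price paid
# plus the tactics used to modify it.
deals = [
    {"total_price": 54_000, "seats": 100, "term_years": 1},
    {"total_price": 90_000, "seats": 250, "term_years": 2},
]

# Normalizing to price per seat per year makes deals comparable and
# starts to expose the underlying pricing model (e.g., volume discounts).
for deal in deals:
    per_seat_per_year = deal["total_price"] / (deal["seats"] * deal["term_years"])
    print(f"${per_seat_per_year:,.2f} per seat per year")
```

In this made-up data, the larger, longer deal nets out to a much lower per-seat rate, the kind of pattern that points to a volume-and-term-based pricing model rather than a single list price.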

This pricing-model insight is much more useful than knowing the individual price points of a small (or even large) set of customers, simply because in B2B, organizations compete on pricing models more than they compete on individual price points.

The Takeaway:

It’s our job as researchers to help clients make a clear connection between the questions asked and the business outcomes addressed. Sometimes, clients ask questions that rest on hidden assumptions. So, it’s our job to get to the root of the question and ask survey questions that will lead to meaningful conclusions.

Respect Your Respondents

Overall, as an industry, it’s time for us to give a little R.E.S.P.E.C.T. to our respondents. By keeping B2B quantitative surveys tight, short, and focused, we can show that respect. We can also show respect by asking highly relevant questions that demonstrate we understand the respondent’s context.

Not only is showing respect the right thing to do, but making this choice leads to a win for everyone involved. Whether it’s higher response rates, better answers, or more business problems solved, respecting our respondents leads to good outcomes for clients, researchers, and respondents.


With 15 years of experience in B2B tech market research, Cascade Insights can help ensure your research is of the highest quality. Learn more about our B2B market research services, including quantitative surveys, here.

Special thanks to Alissa Ehlers, Senior Quantitative Consultant, for advising on this piece.
