Inside Our AI Test Kitchen: How We’re Really Using AI Tools

Authored by Raeann Bilow

At Cascade Insights, we’ve always sought tools that help us deliver stronger insights more efficiently. For more than a year, we’ve been rigorously testing and integrating AI across our workflow, from feasibility checks and survey writing to visual storytelling and final report creation. Over that time, we’ve seen certain tasks become easier, a few get trickier, and entirely new possibilities emerge thanks to rapidly evolving capabilities.

This has been an ongoing exploration, not a one-off experiment. Along the way, we’ve collected a number of wins, from shaving hours off prep time to unlocking new ways of synthesizing complex findings, while also learning where AI still falls short.

Critically, we only explore AI platforms that meet our strict data security standards. Client data is never fed into tools without proper control, transparency, and confidentiality, allowing us to innovate confidently without risking sensitive information.

Our internal #aitools Slack channel has become the test kitchen for exploring what AI can (and can’t) do. It’s where our team shares experiments, breakthroughs, and challenges as they incorporate new models and toolkits into their work.

Here’s a behind-the-scenes look at how we’re weaving AI into our workflows. While we’ve explored many different applications, these are the use cases we’ve found to provide the most value and make the biggest difference.

1. Interview Summarization & Thematic Analysis

Summarizing large volumes of qualitative data is where AI can offer enormous lift.

  • Rapid Summarization: ChatGPT condensed 21 in-depth interviews (IDIs) into slide-ready content in hours.
  • Theme Extraction: CoLoop and NotebookLM tagged verbatims, extracted themes, and surfaced early recommendations.
  • Comparative Analysis: AI identified gaps and shifts across multiple interview rounds.

Takeaway: NotebookLM is a standout for synthesis, while CoLoop accelerates theme-tagging. Manual review is essential to ensure quotes are accurate and extractions maintain their integrity.
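
To make the summarization workflow above concrete, here’s a minimal sketch of what a scripted transcript-to-summary pass might look like against an LLM API. In practice we work mostly in the chat interfaces themselves; the folder layout, prompt, and model name below are illustrative assumptions, not our production setup.

    # Minimal sketch: turn a folder of interview transcripts into slide-ready bullets.
    # Assumes transcripts are plain-text files, the OpenAI Python SDK is installed,
    # and OPENAI_API_KEY is set in the environment. The model name is illustrative.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "You are a B2B market research analyst. Summarize this interview transcript "
        "into 5-7 slide-ready bullets: key themes, notable verbatim quotes with the "
        "speaker's role, and anything surprising or contradictory."
    )

    def summarize(transcript: str) -> str:
        """Send one transcript to the model and return its bullet summary."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; swap in whichever model you use
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    for path in sorted(Path("transcripts").glob("*.txt")):
        print(f"--- {path.name} ---")
        print(summarize(path.read_text()))

Long transcripts may need to be chunked to fit a context window, and, as noted above, every quote still gets checked against the source recording before it lands in a deliverable.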

2. Deck Creation & Visual Ideation

We tested AI for a range of visual needs, from initial slide templates to full image generation and branded deck builds.

  • Slide & Deck Creation: Gamma and ChatGPT converted outlines into branded decks, with Gamma excelling in layout and color customization. AI helped structure content, suggest layouts, and even generate draft speaker notes.
  • Image & Infographic Generation: Firefly, Ideogram, and Canva Magic Studio produced concept images, icons, and simple infographics. Simple prompts generally yielded clean, usable results; complex scenes or multi-step data visuals sometimes suffered from odd artifacts, inconsistent styling, or gibberish labels. Infographics often worked best when generated in parts (e.g., icons and text separately) and then composed in tools like Illustrator or Figma.
  • Refinement: Final polishing in Photoshop, Illustrator, or Canva was needed for brand consistency and clarity.
  • Future Video Potential: We’ve experimented with a variety of new AI video tools (e.g., Runway, Veo 3, and Adobe’s emerging generative video features) and see strong potential to extend similar workflows to explainer videos, animated infographics, and short-form social content.

Takeaway: AI is excellent for breaking creative blocks, speeding iteration, and producing strong first drafts. Human refinement remains essential for brand alignment, clarity, and polish.

3. Writing Support & Report Drafting

AI is now standard in drafting proposals, reports, and professional communications.

  • Drafting & Structuring: ChatGPT co-authored white paper intros, summarized transcripts, and refined report sections.
  • Tone Polishing: Claude and Gmail’s AI tools offered quick tone adjustments, each with distinct styles.

Takeaway: ChatGPT helps structure and refine content. Claude adds depth. Custom GPTs show potential for standardizing narrative elements. Human oversight remains essential for ensuring flow, tone consistency, and final polish.

4. Structuring Open-Ends & Quant Data

AI can structure both open-ended and quantitative data (though limitations remain).

  • Open-Ends: Claude categorized 157 responses with multi-tag logic and counts. ChatGPT created code frames and thematic tallies but sometimes missed subtle overlaps. CoLoop’s quant tools allowed only one tag per response; gaps often needed manual QA or tools like CodeIt.
  • Quant Data: Claude and ChatGPT handled basic charts and single-select summaries well. Multi-select and matrix questions were more error-prone, requiring fact-checking to prevent hallucinated labels or incorrect stats.

Takeaway: AI is very helpful for exploratory or draft-level quant analysis; final numbers require validation and domain expertise.
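
For a sense of what the multi-tag coding above looks like when scripted, here’s a rough sketch that asks a model to assign tags from a predefined code frame and then tallies the counts. The code frame, prompt, and model alias are illustrative assumptions; this isn’t how Claude, CoLoop, or CodeIt work internally.

    # Minimal sketch: multi-tag coding of open-ended responses, then tallying counts.
    # Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
    import json
    from collections import Counter

    import anthropic

    client = anthropic.Anthropic()

    CODE_FRAME = ["pricing", "integration", "support", "security", "ease_of_use"]

    def tag_response(text: str) -> list[str]:
        """Ask the model for every tag in the code frame that applies (may be none)."""
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model alias
            max_tokens=200,
            messages=[{
                "role": "user",
                "content": (
                    f"Code frame: {json.dumps(CODE_FRAME)}\n"
                    f"Response: {text}\n"
                    "Reply with only a JSON array of the tags that apply."
                ),
            }],
        )
        # A real pass would validate this output and spot-check it against the text.
        return json.loads(msg.content[0].text)

    responses = ["Setup was painless, but the price per seat is steep.",
                 "Their support team answered within an hour."]
    counts = Counter(tag for r in responses for tag in tag_response(r))
    print(counts)

The overlaps a model misses, and the occasional malformed output, are exactly where the manual QA mentioned above comes in.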

5. Survey Writing, KIQs & Question Design

AI is especially effective at drafting and refining research questions.

  • Survey & Guide Creation: Claude consistently produced thoughtful, nuanced questions for surveys and moderator guides. 
  • Brainstorming KIQs: ChatGPT, Claude, and Gemini helped generate key intelligence questions from call transcripts.
  • Question Reframing for Different Formats: One project involved adapting preliminary research findings into buyer persona questions with ChatGPT. Another saw an in-depth interview guide transformed into a focus group script with activities and follow-ups.

Takeaway: Claude is particularly effective at phrasing sophisticated and nuanced questions. ChatGPT and Gemini are strong for brainstorming, reformatting, and adapting questions to different research modes. Expert review remains essential to ensure questions are accurate, unbiased, and aligned with the research objectives.

6. Automation & Workflow Hacks

Some team members took AI a step further, using it to automate workflows:

  • Custom Workflows: A Zapier + Lindy setup pushed Fathom transcripts to Google Sheets, summarized calls, and prepped persona-specific follow-ups.
  • CRM Analysis: Claude prioritized outreach based on engagement and spend history.
  • Unexpected Perks: ChatGPT merged PDFs, saving manual effort.

Takeaway: AI-driven automation delivers serious time savings but requires the right use case and technical setup.
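
The Fathom-to-Sheets flow above was built with no-code tools, but the same pattern is easy to see in a short script: summarize a transcript, then append a row to a tracking sheet. The sheet name, credentials file, prompt, and model are all illustrative assumptions here, not a description of the actual Zapier + Lindy setup.

    # Minimal sketch: a scripted stand-in for the transcript-logging automation.
    # Assumes gspread with a Google service-account credentials file and the
    # OpenAI Python SDK; every name below is illustrative.
    from datetime import date

    import gspread
    from openai import OpenAI

    client = OpenAI()
    sheet = gspread.service_account(filename="service_account.json").open("Call Log").sheet1

    def log_call(title: str, transcript: str) -> None:
        """Summarize one call and append a row: date, call title, summary."""
        summary = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{
                "role": "user",
                "content": "Summarize this call in three sentences, then suggest one "
                           "persona-specific follow-up question:\n\n" + transcript,
            }],
        ).choices[0].message.content
        sheet.append_row([date.today().isoformat(), title, summary])

    log_call("Acme discovery call", open("acme_call_transcript.txt").read())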

7. Deep Research & Project Kickoffs

AI deep research features are giving teams a faster start on market analysis, competitive tracking, and project preparation.

  • OSINT Research: Team members used Deep Research to investigate competitor positioning, pricing, and product updates. ChatGPT adhered closely to prompts, provided structured control, and favored concise, U.S.-only sources. Gemini generated wide-lens narratives and summaries but often drew from mixed-quality sources. Perplexity Pro accelerated discovery with fast source tracing and citations, though it sometimes ignored geographic limits, paraphrased quotes as direct speech, or linked to company blogs presented as independent reviews.
  • Project Kickoff Preparation: By quickly surfacing competitors’ positioning, updated pricing, and recent announcements, these tools reduced ramp-up time and allowed analysts to start discovery calls and secondary research with sharper context.

Takeaway: AI deep research features are helping to accelerate secondary research and prepare for project kickoffs. ChatGPT delivers strong, structured insights, while Gemini and Perplexity can add color. Expert context is essential to verify accuracy, filter out low-quality sources, and ensure findings are reliable.

8. AI-Moderated Interviews

The team tested AI interviewer platforms like Strella, Versive, and Listen Labs:

  • For simple, structured B2C topics, these platforms performed reasonably well. They could ask predefined questions, stay on script, and maintain a neutral tone throughout. In scenarios where the goal was straightforward data collection, such as product preferences or usability feedback, their ability to stay consistent and efficient was a clear strength.
  • However, for complex B2B interviews, performance fell short. AI moderators struggled to:
    • Follow up meaningfully based on nuanced or jargon-heavy answers.
    • Adjust pacing in real-time, sometimes rushing responses or lingering awkwardly.
    • Recognize subtle cues that a human interviewer would use to pivot or dig deeper, especially around sensitive or high-stakes topics.

Despite these limitations, AI moderators showed promise as training aids, helping new team members rehearse question sets, test different phrasing, or simulate edge-case scenarios. They also offered value for internal dry runs before live interviews with C-suite participants or technical stakeholders.

Takeaway: AI moderators aren’t ready to replace human researchers in B2B settings. The nuance, improvisation, and context-awareness required for high-quality qualitative interviews still demand a human touch. That said, there’s real potential for AI to support behind-the-scenes tasks like recruitment and prep, freeing up researchers to focus on the conversations that matter.

AI Takeaways: What We’ve Learned

After more than a year of teamwide experimentation, a few consistent themes have emerged:

AI Speeds Up the First Draft

Whether it’s summarizing 20 interviews or building the first draft of a deck, AI dramatically reduces time spent on rote tasks, freeing our consultants to focus on interpretation and storytelling.

Tool Choice Matters

  • ChatGPT: Best for summarizing, writing, and structured outputs.
  • Claude: Excellent for deep reasoning and quantifying open-ends.
  • NotebookLM / CoLoop: Great for transcript handling and synthesis.
  • Gamma / Canva / Firefly: Useful for design inspiration and layout scaffolding.

Human Oversight Is Non-Negotiable

AI is a co-pilot, not an autopilot. It needs critical thinking, subject-matter knowledge, and ethical oversight to deliver trustworthy insights. Across the board, human oversight remains critical for refining tone, eliminating bias, and aligning question design with research goals.

Continuous Experimentation is Key

AI capabilities change almost daily. Our ongoing exploration over the past year has uncovered new AI capabilities that simplify complex tasks and lead to more efficient workflows. We’ll continue to adapt and evolve our methods as the technology advances.

AI’s Role in Research: We’re Just Getting Started

From refining custom GPTs to experimenting with ChatGPT-4.5’s image capabilities, we’ve only scratched the surface of what’s possible, and we’re moving fast to stay ahead.

Our team continues to stretch the boundaries of AI’s role in research. We’re now exploring:

  • Agentic workflows: Running full research tasks autonomously, like parsing and summarizing thousands of comments to extract sentiment, themes, and contradictions across a dataset too large for manual analysis.
  • AI-powered tone and sentiment detection: Using AI to identify tone, attitude, and implied emotion across transcripts and open-ends, while preserving nuance.
  • Mid-Coding Passes (MCPs): Testing AI-assisted workflows for early reads on patterns within qualitative data, speeding time to insight without losing the depth that human analysts bring.
  • Layered pipelines: Chaining tools together. For example, using one GPT to extract structured fields, another to validate information against original sources and provide citations, and a third to generate executive-ready summaries or slide content (a rough sketch of this pattern appears just after this list).
  • AI as a research co-pilot: Not just writing or formatting, but identifying gaps in a discussion guide, flagging conflicting data, or proposing new directions mid-project.
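
As promised above, here’s a rough sketch of what a layered pipeline could look like, with each stage as a separate model call. The prompts, model name, and extracted fields are illustrative assumptions rather than a finished design.

    # Minimal sketch of a layered pipeline: extract -> validate -> summarize.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY; a real pipeline might use
    # a different model (or a different tool entirely) at each stage.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """One model call; each pipeline stage gets its own prompt."""
        out = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return out.choices[0].message.content

    def run_pipeline(source_text: str) -> str:
        # Stage 1: extract structured fields from the raw source material.
        fields = ask(
            "Extract company, product, pricing, and stated pain points as JSON:\n\n"
            + source_text
        )
        # Stage 2: validate each field against the source and cite the evidence.
        validated = ask(
            "For each field below, quote the sentence in the source that supports it, "
            f"or mark it UNSUPPORTED.\n\nFields:\n{fields}\n\nSource:\n{source_text}"
        )
        # Stage 3: turn only the supported fields into an executive-ready summary.
        return ask("Write a five-bullet executive summary from these validated "
                   "fields:\n\n" + validated)

    print(run_pipeline(open("kickoff_notes.txt").read()))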

We’re not stopping at enhancement; we’re actively redefining how work gets done. AI isn’t just saving time; it’s unlocking entirely new workflows. As the tools evolve, so will our approach, always guided by strategic thinking, methodological rigor, and data security.

So we’ll keep testing. Keep iterating. And keep sharing what actually works.
