DIY Research: Who (or What) Is Taking Your Survey?

by: Deirdre Hart

I get it. Choosing between in-house and outside research is typically a budget conversation.


Research budgets are shrinking, DIY platforms keep getting cheaper, and the output looks the part. Clean decks. Tidy charts. N=500. On paper, it looks like a decision can be made.

But there’s one thing those decks never show you: who (or what) actually answered. And determining whether your data is trustworthy is a different conversation entirely.

I’ve been in this long enough to know when data is trying to tell you something. You can’t always name it right away. It’s a pattern that doesn’t sit right, a shift that doesn’t match anything else you’re seeing. You feel it before you can prove it, and you can’t ignore it. When I’ve dug into these issues, I’ve found that the problem rarely starts with the questions we asked; it starts with fraudulent or low-quality data.

Fraud is one of the least visible risks in research right now. I’ve heard figures quoted at industry conferences suggesting as much as 30% of online survey data may be fraudulent or low quality. I don’t know whether that number is precisely right, but the direction is clear: the AI-driven share of survey fraud is growing fast, and it isn’t always obvious even to experienced teams.

Part of what makes this hard is that the problem isn’t always easily detectable in the responses themselves. For years, our first line of defense was scrutinizing open-ends: nonsensical answers, copy-pasted text, responses with nothing to do with the question. That still matters. But now it’s also about AI, bots, click farms, and professional survey takers.

Today’s fraudulent responses are often coherent and on-topic: AI can be trained not to speed through surveys, bots can be programmed not to straight-line grids, and professional survey takers know how to move through a survey without triggering a fraud alert. The old screening methods weren’t designed for these new types of fraud.

My concern isn’t that DIY tool users lack capability. Most are sharp and serious about quality. The harder problem is infrastructure. Catching today’s fraud requires things most DIY setups don’t have built in: device fingerprinting, behavioral anomaly detection, duplicate suppression, questionnaire traps, real-time field monitoring, and someone with enough fraud-detection experience to recognize when the data starts behaving strangely. That means someone watching constantly while the study is in the field, not just reviewing a file after it closes.
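To make two of those layers concrete, here is a deliberately simplified sketch, not any platform's actual method, of what duplicate suppression and a crude behavioral (speed) check might look like. The field names ("fingerprint", "seconds") and the half-of-median speed cutoff are illustrative assumptions; real systems layer many more signals than this.

```python
import statistics

def flag_suspect_responses(responses, speed_fraction=0.5):
    """Toy quality screen: flag duplicate device fingerprints and
    completes faster than a fraction of the median completion time.
    Purely illustrative; production fraud detection uses many more layers."""
    seen, flagged = set(), set()
    # Duplicate suppression: same device fingerprint appearing twice.
    for r in responses:
        if r["fingerprint"] in seen:
            flagged.add(r["id"])
        seen.add(r["fingerprint"])
    # Crude speed check: flag completes under half the median time.
    cutoff = statistics.median(r["seconds"] for r in responses) * speed_fraction
    for r in responses:
        if r["seconds"] < cutoff:
            flagged.add(r["id"])
    return flagged

sample = [
    {"id": 1, "fingerprint": "a1", "seconds": 480},
    {"id": 2, "fingerprint": "b2", "seconds": 510},
    {"id": 3, "fingerprint": "a1", "seconds": 495},  # duplicate device
    {"id": 4, "fingerprint": "c3", "seconds": 60},   # suspiciously fast
]
print(sorted(flag_suspect_responses(sample)))  # [3, 4]
```

Even this toy version shows why the work is ongoing rather than one-time: each check has to run against the full field as it accumulates, which is why real-time monitoring matters more than a post-close file review.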

Nobody has this fully solved. It’s an arms race and the threat is evolving faster than the defenses. But there’s a real difference between teams that have built layered controls into their process and those that haven’t, and that gap shows up in the data. And when bad data produces a clean-looking report, no one questions it. The decision gets made. The product launches. The campaign runs. The strategy gets set. And somewhere in those numbers are responses from people, or things, that never represented your customer.

So before your organization fields another study, make sure whoever is running it, whether that’s an internal team or an outside partner, can tell you specifically how they’re handling fraud detection, field monitoring, duplicate suppression, and behavioral checks. If their answers are vague, that’s worth paying attention to. The standards exist: industry bodies like the Insights Association, ESOMAR, AAPOR, and GDQ have raised the bar on data integrity standards for exactly this reason. And to be clear, we're not perfect. This is something our industry is still working through together.

Don’t wait for something in the data not to add up. The right partner can help you avoid fraud from the start.

Author

Deirdre Hart

Senior Vice President


Deirdre Hart has spent her entire career in market research designing and delivering strategic research for a range of clients across many categories. Her expertise lies within customized quantitative methodologies including extensive experience in new product development; brand, product, and packaging optimization; tracking studies; customer satisfaction; brand extendibility; image research; and experiential on-site research. Deirdre’s commitment to delivering insightful and implementable findings has made her a trusted partner for businesses navigating consumer preferences and market trends.

Copyright © 2026 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.