How AI is Enabling Fraudulent Respondents
7/30/25 / David Kennedy
For as long as there has been surveying, there has probably been survey fraud. This may seem strange to an outside observer: who on earth would want to take surveys over and over again? The reasons are numerous, from wanting to influence the results to simply wanting to collect the incentives.
Online survey fraud, like other online security threats, has long been a technological arms race. The fraudsters find a new gap in the defenses. The defenders identify the patterns and put up new defenses. Repeat for eternity. But with AI now being used to take surveys, the playing field has fundamentally shifted. AI is a real leap in what fraudsters can do, and our approaches to data integrity need to evolve at pace.

The Continuum of Fraudsters
First, a quick summary of how fraudulent responses may show up in survey data:
- An individual bad actor. Perhaps the oldest and most basic type of fraudulent respondent, this is typically a person who misrepresents themselves to try to take as many surveys as possible. They’ll learn which screening criteria tend to get them into a survey (e.g., young males, high-income earners, doctors) and answer accordingly. Their reach is limited to the number of surveys they can complete themselves.
- A group of bad actors. Similar to an individual, but an organized group working together to find and participate in surveys, sharing what they learn to gain access to even more.
- Automated bots. Once you know how to game the survey system, you’ll want to do it faster and more often. Enter the bots, programmed to take surveys on your behalf. These are fairly rudimentary and easier to detect if you know what to look for.
- Artificial intelligence. Think of it as a smart bot that more easily gets around the common checks you may throw at it. Luckily, there are still some signs we can watch for.
The Old Tricks Don’t Work
Many of our old defenses, specifically tailored to bad actors and bots, are no longer up to the challenge of AI, at least not on their own. These tactics may still be used in conjunction with newer defenses, but as AI adoption increases in this space, their effectiveness will likely decrease. Some of these include:
- Checking for speeding. Individuals who are just trying to finish as quickly as possible will race through a survey, as will bots that can answer faster than any human. Adding a timer to the survey lets us detect this, but AI can easily be instructed to slow down and behave in a more human-like way. (A sketch of this check, along with the honeypot check below, follows this list.)
- Attention check questions. One common trick is to tell the respondent to check the third option, select a certain shape, or pick the animal in a list of options. These work when someone, or a bot, is flying through a survey and indiscriminately checking answers without reading the questions. AI will read the question and do as instructed.
- Hidden questions. Another, more advanced trick is to program hidden questions (sometimes called honeypots) that only an automated bot would see, because the bot reads the survey’s underlying code rather than the rendered survey on screen. If that question gets answered, a real person couldn’t have taken the survey. AI that reads the screen just as a person does will miss these.
- CAPTCHA or other image recognition. Similarly, image recognition checks (e.g., which animal is in the photo?) will fail, as AI can often identify the image better than humans can.
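To make the speeding and honeypot checks concrete, here’s a minimal sketch of what the server-side logic might look like. Everything in it is hypothetical for illustration: the field names, the 90-second cutoff, and the flag labels aren’t from any particular survey platform.

```python
# Minimal sketch of server-side speeding and honeypot checks.
# Field names and the 90-second cutoff are hypothetical examples.

MIN_SECONDS = 90  # fastest plausible completion time for this survey

def integrity_flags(response: dict) -> list[str]:
    """Return the integrity checks a completed response fails."""
    flags = []

    # Speeding: completion time below any plausible human minimum.
    if response["end_time"] - response["start_time"] < MIN_SECONDS:
        flags.append("speeding")

    # Honeypot: a question hidden from view with CSS, so no human sees it.
    # A bot parsing the raw survey code may answer it anyway.
    if response.get("honeypot_answer"):
        flags.append("honeypot")

    return flags

# Example: a 45-second completion that also filled in the hidden question.
print(integrity_flags({"start_time": 0, "end_time": 45, "honeypot_answer": "B"}))
# -> ['speeding', 'honeypot']
```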
New Tactics for Detecting AI
AI can be very good at consistency. Program it to respond as a young man and it will answer questions as a young man might, including marking the demographics appropriately. Tell it to take the same survey over and over as a young man, and it will respond in a similar, though not identical, fashion.
It’s also very good at all the typical survey question types: grids, single response, open-ends, and so on.
This is a quickly evolving space, and our defenses will need to evolve along with it. (So, if you are reading this in the future, these may already be outdated.) But below are some tactics to consider.
- Open-ended responses that are too perfect. AI currently tends to reply in complete, grammatically correct sentences, a decidedly un-human trait. Some people do write complete, well-spoken sentences, but flawless prose can be used as a flag to take a closer look at that respondent.
- Answering very technical questions. A survey question about some arcane programming language or obscure bit of historical knowledge will likely get a very accurate answer from AI. Again, the occasional person will know the answer, but most won’t. AI will likely have “read” a book on the topic in its training, and it can’t seem to resist answering a question it knows the answer to.
- AI isn’t that creative. In one test project conducted by other researchers, when asked for its favorite ice cream flavor, the AI said vanilla or chocolate 96% of the time. Clearly inhuman. 🙂
- IP addresses. We screen for these now, though smarter bad actors know to use VPNs to mask their activity. AI traffic will often come from data-center IP addresses and, at the very least, won’t mask its location.
- Zip codes. Interestingly, in one test the AI consistently said its zip code was 90210, which is of course perhaps the most famous zip code in America. At least that model didn’t understand the relationship between zip code and geography.
- Browser metadata. This gets a little more technical, but in the same test, the browser type, screen resolution, and so on were identical across responses. The responses also came from headless browsers (browsers driven by a program, with no visible window).
- Time zones. Similarly, the reported time zones were all the same. (A sketch combining these last few metadata signals follows this list.)
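Here’s a sketch of how those metadata signals might be checked in practice. Again, the field names, the data-center IP ranges, and the duplicate-fingerprint cutoff are stand-ins for illustration; a real check would use a maintained list of hosting-provider ranges.

```python
from collections import Counter
from ipaddress import ip_address, ip_network

# Hypothetical data-center ranges (documentation addresses used here);
# in practice, check against a maintained list of hosting providers.
DATACENTER_NETS = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def metadata_flags(resp: dict, fingerprint_counts: Counter) -> list[str]:
    """Flag suspicious metadata on one response; field names are illustrative."""
    flags = []

    # Headless browsers often announce themselves in the user agent string.
    if "headless" in resp["user_agent"].lower():
        flags.append("headless_browser")

    # Traffic from data-center IP space rather than a consumer ISP.
    if any(ip_address(resp["ip"]) in net for net in DATACENTER_NETS):
        flags.append("datacenter_ip")

    # Many "respondents" sharing one browser/resolution/time-zone fingerprint.
    fingerprint = (resp["user_agent"], resp["screen_resolution"], resp["timezone"])
    if fingerprint_counts[fingerprint] > 5:  # cutoff is arbitrary here
        flags.append("duplicate_fingerprint")

    return flags

# fingerprint_counts is tallied once over the whole sample, e.g.:
# fingerprint_counts = Counter(
#     (r["user_agent"], r["screen_resolution"], r["timezone"]) for r in responses
# )
```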
One cautionary note with any data check: humans fail tests, too. It’s one of the things that makes us human. Therefore, we often set thresholds for the number or type of checks a respondent must fail before being removed from our sample.
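A minimal sketch of that threshold idea, with the flags from the earlier sketches feeding in. The weights and cutoff are invented for illustration; in practice they’d be tuned for each study.

```python
# Hypothetical weights: strong signals count more than weak ones, because
# real humans occasionally trip the weaker checks.
WEIGHTS = {
    "honeypot": 3,              # humans essentially never trigger this
    "headless_browser": 3,
    "datacenter_ip": 2,
    "duplicate_fingerprint": 2,
    "speeding": 1,              # real people rush sometimes
    "too_perfect_open_end": 1,  # some people just write well
}

REJECT_AT = 4  # the cutoff is a judgment call, tuned per study

def should_reject(flags: list[str]) -> bool:
    """Remove a respondent only once enough weighted evidence accumulates."""
    return sum(WEIGHTS.get(flag, 0) for flag in flags) >= REJECT_AT

# One weak signal alone survives; a weak plus a strong signal does not.
print(should_reject(["speeding"]))              # False
print(should_reject(["speeding", "honeypot"]))  # True
```

The weighting captures exactly the cautionary note above: no single failed check should doom a respondent, but a pile of them is hard to explain innocently.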
Other Strategies
All of these points relate most to online panels: companies that pre-recruit pools of potential research respondents that we can then tap into, screen, and ask to participate in our work. This is where we see most of the issues with fraudulent respondents, but they can show up in other closed samples, such as a customer list. Even on off-panel surveys, a certain level of vigilance should be maintained.
There are some newer panels out there that are working to validate respondents, though they are smaller in scale and reach right now. These panels are using some of the same identity technologies that financial service companies use to make sure you are who you say you are. Hopefully these panel options grow in scale and ability.
Beyond online surveys, remember that there are other data collection methods out there. We still rely on mail recruiting, where we send invites to known addresses, or even in-person intercepts. Perhaps as we continue to question what is “real” online, we’ll come full circle in research and return to in-person surveying. What better way to verify that someone is human than to do the survey in person? At least until some AI droid is invented.
