Catfish, Bots and Spam: How TA Can Push Back Against Candidate Fraud

We spent years making it easy to apply, and now we’re drowning in the consequences. How do we shut down the fraudsters while giving serious talent a real shot? 

The application used to be a promise between an employer offering a job and a candidate genuinely interested in doing it. But for most recruiters today, the process has started to feel like a DDoS attack. After a decade of stripping away every bit of friction to create the “one-click” experience, we’ve opened the door to AI-powered auto-applies, bot swarms and deepfakes, where an application doesn’t even guarantee a candidate’s existence, let alone their interest. It has effectively turned the top of the hiring funnel into a legitimate corporate security risk.

To make sense of this arms race, we sat down for a roundtable with two leaders who have spent their careers at the messy intersection of hiring, tech and security. Andrew Gadomski, MD of Aspen Analytics, brings the InfoSec perspective. Joining him was Jason Roberts, Sr VP of Technology and Analytics at Cielo Talent, who is currently leading the shift toward agentic operations and transformation. Our topic: the resume is perfect, the interview was seamless, but is the person on the other side of the screen actually who they say they are?

What Actually Counts as Candidate Fraud?

Well, it’s not adding a fancy title the candidate never held or rounding their GPA up a few points. That’s the old-school stuff—annoying, but containable. Today’s fraud has gone high-tech, and it’s moving at the speed of an LLM.

Gadomski suggests we stop thinking about candidates “massaging the truth” and start thinking in terms of information security. He breaks fraud into three tiers of messy data currently clogging the pipes:

  • Misinformation: This is the “oops” tier. It’s when a candidate asks ChatGPT to polish their resume and the bot decides they also graduated from MIT. The candidate hits submit without checking, and the hallucination becomes a lie by proxy.
  • Disinformation: This is a deliberate attempt to game the system using stolen work samples, fake degrees or resume padding designed specifically to trigger every keyword in your ATS.
  • Malinformation: This is the one that should keep the C-suite up at night. These are bad actors—often state-sponsored or organized criminal groups—looking for a backdoor into your physical and digital infrastructure. They aren’t trying to get a job; they’re trying to get access to a secure network.

The Four Questions Every Recruiter Needs to Ask

At a practical level, you’re usually dealing with four separate questions about each candidate:

  1. Are they who they say they are?
  2. Are they where they say they are?
  3. Have they really done what they claim?
  4. Why are they actually here?

Who and where is this candidate?

The first two are your classic catfish problems. It might look like proxy interviewing, where one person aces the Zoom call and a completely different person shows up on day one. Or it could be a candidate presenting as US-based, US-eligible talent, only for IP data to later show they were never in the country in the first place. That’s if you’re dealing with a novice fraudster—sophisticated cyber actors use residential proxies to make it look like they’re sitting in a suburb of Chicago when they’re actually thousands of miles away.

Some employers have tried to fight back by requiring candidates to hold their IDs up to the camera during a live call, but even that is becoming vulnerable to real-time deepfake overlays. As Gadomski noted, we have to start looking at the “provenance” of an identity long before we let someone into our internal systems. Where did this application come from? Was it submitted via a known bot farm IP? Is the email address 10 years old or 10 minutes old? These are questions an IT department asks every day, but recruiters are only just beginning to learn how to ask them.
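Those provenance questions can be expressed as a simple triage heuristic. Here is a minimal sketch in Python; the field names, thresholds and `KNOWN_BOT_RANGES` list are all invented for illustration, and a real implementation would pull these signals from IP-reputation and domain-age services rather than hard-coded values:

```python
from dataclasses import dataclass

# Hypothetical IP prefixes flagged as bot-farm or proxy infrastructure.
KNOWN_BOT_RANGES = [
    ("203.0.113.", "known bot farm"),
    ("198.51.100.", "residential proxy pool"),
]

@dataclass
class Application:
    source_ip: str        # IP the application was submitted from
    email_age_days: int   # from a mailbox/domain-age lookup service
    geo_claimed: str      # location stated on the application
    geo_observed: str     # location inferred from the source IP

def provenance_flags(app: Application) -> list[str]:
    """Return human-readable red flags for a recruiter to review."""
    flags = []
    for prefix, label in KNOWN_BOT_RANGES:
        if app.source_ip.startswith(prefix):
            flags.append(f"submitted from {label} IP range")
    if app.email_age_days < 30:
        flags.append("email address created in the last 30 days")
    if app.geo_claimed != app.geo_observed:
        flags.append(f"claims {app.geo_claimed} but IP resolves to {app.geo_observed}")
    return flags
```

A clean application returns an empty list; anything else gets routed to a human for a closer look rather than auto-rejected, since any one signal on its own can have an innocent explanation.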

Do they have the skills?

The third is familiar territory: responsibilities that grew in the telling, employers that never existed, skills added just to match the requisition. Recruiters often think that AI embellishment is fraud. But if anything, there’s a strong argument that a candidate who isn’t using AI to customize their resume is not ready for a modern work environment. With some data suggesting only 0.5% of applicants are actually hired, the logical (if desperate) response is to apply to 1,000 jobs, with an AI assist, to guarantee a hit.

The real problem is when the human is removed from the loop entirely. High-volume bots can now “swipe and apply” to hundreds of jobs in seconds such that, by the time that candidate reaches an interview, they can’t explain the experience listed on their own resume because they’ve outsourced their professional identity to an algorithm.

We also have new tools hitting the market that are capable of sitting invisibly between the candidate’s screen and what the interviewer can see. They listen to questions in real time and feed suggested answers back to the candidate in a layer that never shows up on the shared screen. From the interviewer’s perspective, the candidate is responding naturally, but they’re not—they’re actually reading from a dynamic answer key, which is clearly cheating. It matters because it undermines the one part of the process that people still assume is “real.” 

What’s the intention?

But it’s the fourth question—why are they actually here?—that talent teams are often least prepared for, because it’s about understanding the intention of the person. Assessing intent means looking for red flags that you can’t pick up in a background check. The panel pointed to a recent example where someone with a legit ID made it through hiring and then caused real‑world damage, up to and including setting a warehouse on fire, which is a long way from “white lie on a resume” territory.

Once you separate those buckets clearly, you realize a candidate using AI to tighten up a resume is not the same as a candidate using a deepfake to get through a video ID check. Treat both as identical and you end up over‑reacting to honest candidates who are simply using the tools on offer, while under‑reacting to the actors who could actually hurt the business.

Is Analog the Way to Push Back? 

You might want to sit down for this one. After decades of relentless digital advancement, the answer to high-tech fraud might just be a fax machine.

Yes, really. It sounds like a joke, but the roundtable panel pointed out that in a world of high-speed digital deception, the most effective solutions are becoming “old school” by necessity. Take Google and Amazon; even the tech monoliths still use physical mail to verify that a business actually exists before listing its profile. Recruiting is beginning to use that same logic.

The specific “analog” examples shared by the panel feel like a return to 1995:

  • Sending a physical postcard with a unique verification code to a candidate’s home address. If they can’t provide the code, they aren’t where they say they are.
  • Requiring a verification call to a registered physical landline, bypassing the disposable digital numbers favored by bot farms.
  • Requiring a candidate to walk into a local branch office, store, hotel, etc.—even if they’ll eventually work remotely—just to verify they are a biological human with a matching ID.

So after years of stripping the friction from the application process, we’re now talking about re‑introducing a few healthy hurdles where the risk justifies it. 

Three Fixes Possibly Hiding in Your ATS

When the Candidate Experience Awards were created 15 years ago, the message was clear: improve the candidate experience. Make it easier, make it better, cut the friction. As Gadomski says, “We got everything we wanted! We got the one click apply and swipe right and all the stuff. Now the question is, why haven’t we made changes as recruiting organizations to compensate for that?”

Because the truth is that some of the most effective fixes are already sitting inside your ATS config screen, and almost nobody is using them.

  • Set up automatic shut‑off. If you only need to hire one person, do you really need 5,000 applications? Limiting the pool to the first 200 qualified candidates forces a “first-come, first-served” reality that discourages bot spamming.
  • Limit candidates to one active application at a time. If someone applies for Role A at noon and Role B at 1pm, keep only Role A active until it’s dispositioned. That makes it harder for AI tools to flood the ATS with duplicate applications.
  • Close and reopen contaminated jobs. If a posting is flooded with questionable applicants, you don’t have to keep grinding away at it just to protect your time‑to‑fill metric. The panel’s take is that you close it, tell candidates you didn’t get the applicants you needed, and reopen a clean version. Time‑to‑fill should measure the distance between opening a job and an accepted offer; if there was no offer, it doesn’t belong in the average.
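The first two rules amount to simple state checks at submission time. The sketch below illustrates the logic in Python; the class and method names are invented for the example, since real ATS platforms expose these as configuration options rather than code:

```python
class Requisition:
    def __init__(self, cap: int = 200):
        self.cap = cap           # automatic shut-off threshold
        self.applications = []   # candidate IDs, in arrival order
        self.open = True

class Pipeline:
    def __init__(self):
        self.active = {}         # candidate_id -> requisition they're live in

    def submit(self, req: Requisition, candidate_id: str) -> bool:
        # Rule 1: automatic shut-off once the pool hits the cap.
        if not req.open:
            return False
        # Rule 2: one active application per candidate at a time.
        if candidate_id in self.active:
            return False
        req.applications.append(candidate_id)
        self.active[candidate_id] = req
        if len(req.applications) >= req.cap:
            req.open = False     # first-come, first-served
        return True

    def disposition(self, candidate_id: str):
        # Frees the candidate to apply to their next role.
        self.active.pop(candidate_id, None)
```

The third rule, closing and reopening a contaminated job, would simply mean retiring one `Requisition` and creating a fresh one, with the failed posting excluded from time-to-fill.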

Where AI and Agents Should Actually Fit

For all the focus on bots and fraud, the panel wasn’t calling for a return to paper files and manual screening. Quite the opposite. There was a lot of optimism about where “agentic” models go next, as long as they’re used in the right places.

On the employer side, one example was AI voice agents that call applicants back right after they submit a short application. These agents can explain the job, outline the work, salary and benefits, and ask structured questions over a 20–25 minute call. If the person loses interest once they hear the reality, they can simply hang up. If they stay on the line and complete the conversation, you’ve learned something useful about their intent. Used well, AI voice agents can stop the “spamalition” and solve the problem of ghosting—still candidates’ biggest gripe—without punishing genuine candidates.

On the candidate side, the most ambitious idea was a genuine candidate agent that behaves more like a recruiting coach. Instead of spamming anything that moves, it would home in on roles that line up with the candidate’s skills and the direction they want to take their career. The agent would handle the searching and applying, but under clear rules: no invented skills, no fabricated employers, no hallucinated career history just to bump up interview counts. Gadomski even suggested this kind of tool could sit inside outplacement packages, funded for a period after a downsizing instead of traditional in‑person services.

The throughline is that none of this removes humans from the parts of hiring that carry real risk. Automation can deal with first contact and basic screening. But the decision to trust someone with access to your systems or your sites still sits with hiring managers and talent leaders. The ideal state is machines talking to machines as much as possible in the messy middle, and humans stepping in at the moments where intent, risk and fit really need to be understood.

Why TA Can’t Own This Alone

As our conversation drew to a close, one thing was obvious: you can’t treat candidate fraud as a neat little TA housekeeping issue. It bleeds straight into cyber risk and physical security, and that means every CHRO or Head of Talent Acquisition should know their Chief Information Security Officer by first name. Your company likely already pays for tools to verify customer identities or detect fraudulent transactions; those same tools can be applied to the talent pipeline.

Ultimately, stopping candidate fraud means moving from a processing mindset to a protection mindset. You’ll need active defenses against automated deception, moving past “trust but verify” to establish identity, location and intent as the mandatory first steps. “We have to stop measuring how fast we can process a pile of garbage and start measuring how well we can protect the gateway to the organization,” Gadomski says.

Watch the full roundtable with Andrew Gadomski and Jason Roberts here.