CVs, Receipts and Red Flags: How AI is Helping Candidates Cheat the System

Artificial Intelligence has landed in the workplace — but not always in the ways we expected.

While headlines focus on how AI might replace jobs, a more immediate threat is quietly growing: candidates using AI to lie their way into jobs and defraud employers. From AI-generated CVs to fake receipts and even deepfaked interviews, some individuals are turning cutting-edge tools into weapons of deception.

For employers, the risks are real — and rising. A 2024 study by the recruitment platform HireVue found a 25% increase in suspicious interview activity involving voice or video manipulation. Meanwhile, expense management platform Ramp recently launched a tool specifically to detect AI-generated receipts, after spotting a surge in fraudulent claims.

It’s no longer a question of if AI will be used to defraud businesses — it’s already happening. The question is: what can you do to stay ahead?

How AI-Driven Job Fraud Is Already Happening

It’s tempting to think of AI-generated CVs and fake documents as a future problem — but they’re already being used to mislead employers right now. And the tools are easier to access than ever.

What once required a professional fraudster can now be created in seconds with free or low-cost AI tools. From chatbots that generate convincing CVs to image generators that produce fake receipts or IDs, the barrier to entry has all but disappeared.

Here’s how it’s playing out across the hiring process:

[Image: a receipt created with the AI-powered image generator DALL·E from a single prompt]

  • Fake CVs and cover letters: AI tools like ChatGPT, Claude, and Gemini can produce CVs filled with convincing-but-fabricated work history and qualifications. These documents are often polished enough to pass initial screening — especially when checks are light. A 2023 report by Fleximize noted the growing difficulty employers face in spotting fraudulent applications written by AI.

  • Deepfake interviews: In remote recruitment, candidates have been caught using voice-changers or AI-generated avatars to impersonate more qualified individuals. A 2024 study by HireVue reported a 25% increase in suspicious interview activity involving voice or video manipulation.

  • Synthetic candidates: Some individuals go even further, creating entirely fake digital identities. In late 2023, cybersecurity researchers flagged a rise in "synthetic applicants" — complete with AI-generated LinkedIn profiles, faked certificates, and proxy interview setups.

  • Faked receipts and expense fraud: It’s not just the hiring process at risk. Generative image tools like Midjourney and DALL·E can now create photo-realistic receipts and invoices. In 2024, US-based expense platform Ramp released a tool designed to detect AI-generated receipts after seeing a rise in suspicious submissions.

  • Doctored documents: Payslips, ID cards, and utility bills can all be edited with AI-based design tools, making it easier to forge documentation that appears authentic. This poses a serious compliance risk in sectors that rely on rapid onboarding — especially care, logistics, and education.

The consequences? Financial losses through fraud, data breaches due to unauthorised access, and reputational harm — especially if a falsified hire is placed in a position of trust.

This isn’t theoretical. It’s happening right now — and if your screening process hasn’t adapted, your business could be next.

Why Traditional Screening Isn’t Enough Anymore

Most hiring processes are still built around trust — but trust alone no longer cuts it.

In a world where anyone can generate a believable CV or mock up a glowing reference letter with AI, relying on surface-level checks is becoming a liability. Yet many businesses still depend on manual reviews, paper-based documents, and the assumption that candidates are who they say they are.

But AI-generated fraud is designed to exploit exactly that — the grey areas.

Here’s where traditional screening often falls short:

  • CV checks rarely include cross-verifying past employment or checking for recycled job descriptions.

  • Reference letters are often accepted at face value, despite being one of the easiest things to fake with AI.

  • ID checks may involve little more than glancing at a passport photo, especially in sectors with high-volume hiring or tight turnaround times.

  • Remote interviews can make it easier for candidates to misrepresent themselves — especially when there's no in-person interaction to raise red flags.

And while none of these issues are new, AI makes them faster, cheaper, and harder to spot. A single candidate can now apply to hundreds of roles using auto-filled AI applications, boosting their chances of slipping through.

Put simply: the playing field has changed. And unless your vetting process has changed with it, your business could be at risk — often without even knowing it.

What Employers Can Do: Practical Steps for Defence

The good news? You don’t need to overhaul your entire hiring process to stay one step ahead. But you do need to evolve it.

Here are some steps businesses can take to reduce the risk of being duped by AI-enabled fraud:

Strengthen Identity Checks

Move beyond the basic passport scan. Use digital ID verification tools that combine document analysis, facial recognition, and live selfie checks to confirm a candidate’s identity — especially in remote hiring. This adds a layer of security that AI alone struggles to fake convincingly.
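
To make "layered checks" concrete, here is a minimal Python sketch of the decision logic a digital identity check might apply: document analysis, facial recognition and a live selfie check all have to pass before a candidate is approved. The field names, thresholds and IdentityCheckResult structure are illustrative assumptions for this article, not the workings of any particular verification product.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real verification providers tune these values
# and combine many more signals (device checks, data-source lookups, etc.).
FACE_MATCH_THRESHOLD = 0.90   # similarity between the ID photo and the live selfie
LIVENESS_THRESHOLD = 0.85     # confidence the selfie came from a live person

@dataclass
class IdentityCheckResult:
    document_valid: bool      # document analysis: security features, expiry, tampering
    face_match_score: float   # facial recognition: ID photo vs. selfie
    liveness_score: float     # live selfie check: not a photo of a photo or a deepfake

def approve_candidate(result: IdentityCheckResult) -> bool:
    """A candidate passes only if every layer passes, so a single convincing
    fake (for example, a forged document) is not enough on its own."""
    return (
        result.document_valid
        and result.face_match_score >= FACE_MATCH_THRESHOLD
        and result.liveness_score >= LIVENESS_THRESHOLD
    )

if __name__ == "__main__":
    # A plausible forged document still fails when the selfie check does.
    suspect = IdentityCheckResult(document_valid=True,
                                  face_match_score=0.95,
                                  liveness_score=0.40)
    print(approve_candidate(suspect))  # False
```

The design point is simple: no single signal is trusted on its own, which is exactly what makes this kind of check harder for AI-generated forgeries to beat.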

Go Deeper on References

Don’t rely on emailed reference letters. Make direct contact with referees by phone or video, and confirm employment history via trusted third-party screening partners. AI can fake a letter, but it can’t hold a convincing conversation.

Audit Expense Processes

If you allow employee reimbursements, introduce software that checks for duplicates, flags altered images, and tracks suspicious patterns. Periodic audits, even on a randomised basis, can also act as a deterrent.
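
As a rough illustration of what "checking for duplicates" can mean in practice, the Python sketch below flags receipt images that are byte-for-byte identical and claims that repeat the same amount and date. The file layout and field names are assumptions made for the example; commercial expense tools go much further, with image forensics and pattern analysis on top.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Hash the raw bytes of a receipt image so an exact re-submission of the
    same file can be spotted, even under a different filename."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_duplicate_claims(claims: list[dict]) -> list[tuple[dict, dict]]:
    """Each claim is a dict with 'employee', 'amount', 'date' and 'receipt'
    (a path to the image). Returns pairs of claims that look like duplicates."""
    seen_hashes: dict[str, dict] = {}
    seen_details: dict[tuple, dict] = {}
    flagged = []

    for claim in claims:
        digest = file_hash(Path(claim["receipt"]))
        details = (claim["amount"], claim["date"])

        if digest in seen_hashes:          # the identical image submitted twice
            flagged.append((seen_hashes[digest], claim))
        elif details in seen_details:      # same amount and date, different image
            flagged.append((seen_details[details], claim))

        seen_hashes.setdefault(digest, claim)
        seen_details.setdefault(details, claim)

    return flagged
```

Even a basic check like this catches the lazier fraud, such as the same receipt being claimed twice, and it costs very little to run alongside randomised audits.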

Train Hiring Managers to Spot Red Flags

Give your team the knowledge to recognise signs of AI misuse: too-perfect CVs, suspicious delays during interviews, or documents with subtle formatting glitches. If something feels “off,” it probably is.

Use Tools That Keep Up with the Threat

From fraud detection software to secure background screening services, there’s a growing range of solutions built specifically to combat modern risks. Look for providers who are updating their tech as fast as the fraudsters are.

The rise of AI-generated fraud isn’t something businesses can afford to ignore. From fake CVs to falsified expense claims, the threats are already here, and they’re evolving fast. But with the right checks in place, you can stay ahead.

At Personnel Checks, we help businesses make safer recruitment decisions — decisions built on verified identities, genuine references, and real qualifications. Our screening services are designed to give you confidence, not just compliance, whether you’re hiring for frontline care or head office roles.

🔐 Want to protect your business from AI-enabled fraud?

Talk to us about identity checks and screening solutions that go deeper than a CV. Get in touch or explore our pre-employment services here.
