A growing number of job candidates are using AI tools such as deepfake software to fabricate identities, posing significant challenges for companies during the hiring process.
In one recent high-profile case, Pindrop Security, a voice authentication startup, discovered that a candidate named Ivan, who seemed perfect for a senior engineering role, was actually using deepfake software to fabricate his appearance during a video interview.
The fraud came to light when the recruiter noticed a slight misalignment between Ivan’s facial expressions and his words, according to CNBC.
Pindrop’s CEO, Vijay Balasubramaniyan, warned that the rise of generative AI is blurring the lines between human and machine, with scammers now utilizing fake identities, photo IDs, fabricated resumes, and even AI-generated voices to land jobs.
The trend is accelerating: Gartner predicts that by 2028, one in four job candidates will be fake due to AI manipulation.
Cybersecurity and tech firms have seen a surge in these fraudulent applicants, particularly for remote roles that allow bad actors to disguise their locations.
Experts say that while some impostors simply aim to collect a salary, others could pose serious security threats by stealing sensitive data, installing malware, or even conducting espionage.
In fact, a recent Justice Department case revealed that North Korean-linked workers had infiltrated U.S. companies to fund the regime’s weapons program, demonstrating the potential scale of the problem.
This trend is not isolated to North Korea. Experts report a widening pool of fake job seekers from Russia, China, Malaysia, and South Korea, making it increasingly difficult for employers to distinguish genuine applicants from impostors.
The sophistication of deepfake technology and stolen identities has led to cases where fraudulent workers not only pass background checks but perform their roles well, leaving employers unaware until something goes wrong.
For companies like Pindrop, countering these threats requires advanced technologies, such as video authentication systems, that can spot deepfakes and identify fraudsters.
Balasubramaniyan noted that as deepfake quality improves, it will become harder for companies to rely on traditional methods like video interviews or background checks to identify fakes.
With cybercriminals increasingly targeting the hiring process, employers must adopt stronger identity verification tools and remain vigilant against the growing risk of AI-powered fraud.