The growing sophistication of so-called deepfake videos that can mimic a specific person is adding to the arsenal of cybercriminals, with almost 5% of fraud attempts last year making use of the artificial intelligence-driven technology.
Deepfakes are now able to imitate real people in real time, meaning that tech-based con artistry has moved from “simple manipulation to full-scale infiltration,” according to the Citi Institute, part of US investment bank Citigroup.
Instances of such fraud have climbed nearly fifty-fold in the past two years in areas such as recruitment and finance, the researchers said, calling for existing assumptions of trust in communication to be questioned and for employees to be trained to “never trust, always verify.”
Deepfake attempts, which use AI to generate audio and video imitations of people, “are spreading across recruitment, financial operations and executive impersonation,” the researchers warned.
Citi found that one firm reported almost half of its job applications were fake, and warned that around a quarter of applications across all industries could be fake within three years.
“Deepfakes are also being used to impersonate senior executives to green-light multi-million-dollar transfers,” the researchers said, warning that the fakes have been fashioned into “powerful tools of manipulation and fraud, marking a new era in financial crime.”
Deepfakes can not only “convey emotions like joy, anger, empathy and sadness,” Citi explained, but the AI models that churn them out “can now learn and imitate emotional tones from human speech, making these synthetic voices even more convincing.”
Some versions can be configured to change pitch and timbre mid-conversation, and even to switch accents, according to Citi.
