It starts with a simple prompt: “Write me a resume for a marketing analyst with five years of experience in tech.”
In seconds, a generative AI like ChatGPT produces a polished, keyword-optimized resume that ticks every box for an applicant tracking system (ATS). The font is professional. The phrasing is slick. The action verbs are powerful. The bullet points sparkle with strategic ambiguity: specific enough to sound impressive, vague enough to avoid scrutiny.
That resume is then fed into a machine: yet another AI, this one trained to screen and rank applications. It scans for the right combination of skills, experience, and phrasing.
If it likes what it sees, it might assign the resume a score of 87 out of 100. That’s high enough to get shortlisted for a first-round interview — possibly with a human, though maybe not yet.
And so, the dance continues. AI writing resumes. AI reading resumes. Both optimizing for each other in an invisible duel. The human, meanwhile, waits in the wings, often unaware that their job prospects are increasingly shaped not by personal connection or real-world relevance, but by algorithmic compatibility.
It’s efficient. It’s scalable.
It’s also deeply unsettling.
A Race to the Middle
The use of AI on both sides of the hiring process has moved far beyond the sporadic: 83% of companies say they will use AI to review resumes in 2025, and 65% of job candidates report using AI in the application process.
What this means is simple: A growing number of job seekers are not writing resumes for people; they’re writing them for machines. But if machines are the intended audience, then what happens to authenticity? To creativity? To the idiosyncrasies that make a candidate — and a human being — memorable?
The system rewards sameness. AI-generated resumes often follow a near-identical structure. They prioritize the most commonly searched keywords. They rely on templates that have been optimized for machine readability. Ironically, the more people use AI to “stand out,” the more they end up sounding the same.
Ghosts in the Machine
For job seekers, the temptation is clear. Crafting a resume can be stressful, especially when studies show recruiters spend an average of just 7.4 seconds scanning each one.
AI offers a solution to that anxiety. It promises to “beat the bots” and get your resume past the digital gatekeepers. It even offers to tailor each resume to the job description — in milliseconds.
But something is lost in the process. Perhaps it’s the sense of ownership, or the subtle emotional nuances that come from describing your own journey in your own voice. When a resume is written by AI, it might say all the right things, but does it mean any of them?
And when the employer’s AI system reads that resume, is it really evaluating the person behind it, or just scanning for pattern compliance?
What we’re left with is a strange kind of professional ghosting: resumes not written by humans, not read by humans, and not evaluated with the complexity that human intuition can offer.
A False Sense of Meritocracy
One of the great promises of AI in hiring was fairness. Machines, we were told, could reduce bias. They wouldn’t care about a candidate’s name, gender, or zip code — only qualifications.
But real-world studies have shown otherwise.
In 2018, Amazon scrapped its experimental AI recruiting tool after it was found to penalize resumes from female candidates. In 2023, the Equal Employment Opportunity Commission (EEOC) raised concerns about algorithmic hiring bias. Other systems have been shown to rank candidates on factors as superficial as word choice or resume length, proxies that have little to do with ability.
In effect, AI often mirrors the biases embedded in its training data. It can also punish applicants for being too creative, too verbose, or even too honest: deviations from the expected pattern that lower algorithmic scores.
The Human Cost
Efficiency is a seductive goal. It’s easy to see why companies love AI screening: faster decisions, lower costs, fewer resources spent on hiring. The average corporate job opening receives 250 resumes, according to Glassdoor.
AI can process them in minutes, but at what cost?
Candidates report feeling alienated and frustrated, unsure of whether their applications are being read by anyone at all. Ghosting is rampant. Feedback is rare. The process feels cold, impersonal, and disheartening.
The irony is that companies say they want creative, empathetic, emotionally intelligent employees, yet the hiring process increasingly seems designed to filter out precisely those traits.
Rehumanizing the Hiring Process
We are not arguing for a return to paper resumes or fax machines; technology has its place, but perhaps the pendulum has swung too far.
If we want workplaces to be truly human-centric, then hiring must reflect that. That means:
- Reintroducing human reviewers at key stages of the process, even if just for a sample set.
- Redesigning applicant tracking systems to prioritize diversity of thought, not just keyword hits.
- Being transparent with candidates about how their resumes are evaluated and what tools are used.
- Encouraging applicants to share personal stories, motivations, and projects, not just tidy bullet points.
The Paradox of Progress
We built these tools to help us: to save time, to reduce bias, to make better decisions. But in our pursuit of speed and scalability, we’ve started to erase the very thing that makes work meaningful: human connection.
If we let machines speak for us — and then let other machines decide whether we’re worth listening to — we risk turning the job market into a cold, transactional loop of data-passing.
AI is here to stay. The problem begins when we let it decide who is worth talking to.
