AI is speeding up output across the workplace, but employees are starting to notice what’s slipping through.
A Zety survey of 1,000 U.S. workers highlights the rise of “workslop”—AI-generated work that looks finished but falls apart on accuracy or depth. The bigger issue is that it’s coming from managers.
More than half of employees (55%) say they’ve received workslop from a supervisor. For many, that changes how they see leadership: 85% say it hurts their trust, and 74% say it lowers their confidence in the person’s overall work quality.
When Fast Work Starts to Backfire
What looks like a small shortcut sends a larger message. When leaders pass along unchecked AI output, it signals low standards.
That shift is hard to contain. Employees start second-guessing direction, spending more time fixing mistakes, and questioning how decisions are being made in the first place.
It is also changing how people view AI itself. About 45% say workslop has made them more cautious about using AI at work. The tools are getting better, but trust in how they’re used is moving in the opposite direction.
No Rules, No Training, No Consistency
Part of the problem is simple: most employees are figuring this out on their own.
Nearly a quarter say they’ve received no AI training at all, and another 45% say guidance has been limited. Only 31% report getting real, ongoing support.
That leaves teams guessing what “good” looks like. The result is uneven work, more errors, and extra time spent cleaning things up.
Employees are not rejecting AI, but they are asking for basic structure—clear standards, better training, and actual review before work gets passed along.
The Real Risk Isn’t AI
The issue isn’t the technology itself; it’s how casually it is being used.
When leaders rely on unreviewed output, the damage goes beyond any single task and starts to chip away at their credibility.
AI may be making work faster, but in many workplaces, it is also making weak leadership easier to spot.