Generative AI is already creating a new layer of corporate waste. BetterUp and Stanford's research on "workslop" found that 40% of U.S. desk workers received AI-generated junk in the past month, and BetterUp's cost breakdown estimates that employees spend about 1 hour and 51 minutes cleaning up each incident, a hidden tax that can top $9 million a year at a 10,000-person company.
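The headline figure can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes roughly one incident per affected employee per month and a fully loaded labor cost of about $100 per hour; both are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope estimate of the annual cost of workslop cleanup.
# incidents_per_month and hourly_cost are hypothetical round numbers,
# not values reported by the BetterUp/Stanford research.

headcount = 10_000          # company size from the article
affected_share = 0.40       # 40% of desk workers received workslop
hours_per_incident = 1.85   # about 1 hour 51 minutes of cleanup
incidents_per_month = 1     # assumption: one incident per affected worker
hourly_cost = 100           # assumption: fully loaded cost per hour, USD

annual_cost = (
    headcount
    * affected_share
    * incidents_per_month
    * 12
    * hours_per_incident
    * hourly_cost
)
print(f"${annual_cost:,.0f} per year")  # ≈ $8.9 million, near the article's $9M
```

Under these assumptions the total lands just under $9 million, which is how a per-incident time cost quietly compounds into a line-item-sized loss.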
The problem is that companies keep putting AI into workflows no one truly owns. When output arrives polished but context-free, coworkers become editors, fact-checkers, and cleanup crews.
Workslop Is a Context Problem
Workslop passes for progress because many AI rollouts are still run like procurement exercises. McKinsey’s 2025 report on AI in the workplace says 92% of companies plan to increase AI investment, yet only 1% consider themselves mature in deployment. That gap explains a lot.
Buying a vendor copilot is easy; redesigning work is hard. An outside tool can draft a summary or generate a slide, but it rarely understands the exceptions in your claims queue, the approval logic in your finance process, or the tone your customers trust.
BetterUp’s definition of workslop gets to the point: content that looks good, lacks substance, and pushes the real thinking onto someone else.
The People Closest to the Work Should Build the First Tools
The strongest case for employee-built AI is operational, not ideological. In the NBER study on generative AI in customer support, researchers found a 14% productivity gain on average and a 34% gain for novice workers when AI was embedded in a real workflow. The lesson is simple: AI becomes useful when it carries local knowledge, not when it floats above the job.
MIT Sloan research on worker voice in generative AI makes the same point from another direction, arguing that bottom-up involvement improves the odds that AI helps both organizations and workers.
That is why companies should stop defaulting to turnkey vendor apps and start letting teams build narrow, owned tools on approved no-code and low-code stacks. Recruiters can create interview brief generators tied to their scorecards. Finance can build variance explainers tied to approved data. Operations can create exception handlers that route messy cases to the right human. With low-code automation tools such as Power Automate and similar platforms, employees can shape prompts, data sources, and handoff rules without waiting for a giant software roadmap.
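As a concrete illustration of the exception-handler idea, here is a minimal sketch in plain Python. Everything in it, from the routing rules to the team names and the Case fields, is hypothetical; a real version would live on an approved low-code platform and pull its rules from governed data.

```python
from dataclasses import dataclass

@dataclass
class Case:
    amount: float
    category: str
    flagged_by_ai: bool

# Hypothetical handoff rules, owned and maintained by the operations team.
def route(case: Case) -> str:
    """Route a messy case to the right human instead of auto-resolving it."""
    if case.amount > 50_000:
        return "senior-reviewer"        # high-value cases always get a person
    if case.flagged_by_ai and case.category == "claims":
        return "claims-specialist"      # AI flags are a signal, not a verdict
    return "standard-queue"

print(route(Case(amount=75_000, category="claims", flagged_by_ai=True)))
# prints "senior-reviewer"
```

The point of the sketch is the ownership model, not the code: the routing thresholds encode local knowledge that no vendor copilot ships with, and the team that wrote them can change them the day the business changes.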
Ownership changes behavior. People are less likely to spray out slop when they have to maintain the tool, answer for the output, and live with the consequences.
Low-Code Governance Turns AI Into a Real System
None of this works as a free-for-all. Microsoft’s guidance on low-code governance is explicit that citizen development needs policy, security, approval flows, and IT oversight. Deloitte’s framework for citizen developers makes the same argument: low-code speeds delivery, but organizations still need training, guardrails, and a way to prevent sprawl. PMI’s citizen developer model frames it correctly: employees should build alongside IT, not in a silo.
That is the real alternative to vendor dependence. Companies should buy platforms, not finished answers; set shared rules for data access, testing, and human review; and give every internal AI tool a named owner, a business metric, and a retirement date.
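The owner, metric, and retirement-date rule is easy to make mechanical. The sketch below shows one hypothetical way to register internal AI tools and flag any that are past their retirement date; the tool names, fields, and dates are all illustrative.

```python
from datetime import date

# Hypothetical registry: every internal AI tool carries a named owner,
# a business metric, and a retirement date, per the rule above.
tools = [
    {"name": "interview-brief-generator", "owner": "recruiting-lead",
     "metric": "time-to-brief (hours)", "retire_on": date(2026, 6, 30)},
    {"name": "variance-explainer", "owner": "finance-lead",
     "metric": "close-cycle days", "retire_on": date(2025, 12, 31)},
]

def overdue(registry, today):
    """Return tools past their retirement date that need review or shutdown."""
    return [t["name"] for t in registry if t["retire_on"] < today]

print(overdue(tools, date(2026, 3, 1)))  # prints ['variance-explainer']
```

A retirement date forces a periodic decision to renew, rebuild, or kill each tool, which is exactly the review step that prevents low-code sprawl.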
A vendor can provide infrastructure. It cannot provide accountability. The best defense against workslop is a culture where the person using AI still owns judgment, and the people closest to the work have the power to improve the tool.
Conclusion
Workslop spreads when AI feels rented. It shrinks when AI feels owned. The companies that get this right will not win because they bought the flashiest assistant. They will win because employees built systems that fit the work, improved them in public, and stayed accountable for the result. That is how AI stops performing productivity and starts producing it.