This article is based on the recent Future of Work® Podcast episode How Empathy and AI Together Will Redefine the Future of Work with Sophie Wade. Click here to watch or listen.
The corporate rush to deploy AI is moving faster than most organizations’ ability to explain how work actually happens inside their walls. Tools are being introduced into teams with unclear roles, inconsistent processes, and vague expectations — conditions that make any technology adoption harder to evaluate, govern, or scale.
Sophie Wade, a renowned workforce innovator and founder of Flexcel Network, whose courses on future-of-work skills have reached over 650,000 professionals worldwide, joined us on the Allwork.Space Future of Work® Podcast to explain why AI is surfacing organizational confusion that leaders have ignored for years.
Wade was advising organizations on workforce transformation long before AI became a board-level obsession. What she keeps returning to, especially in recent conversations, is that AI is exposing chaos rather than creating it.
Most organizations, she argues, don’t actually know how work gets done. Roles are vaguely defined, responsibilities sprawl, and performance is often measured by presence rather than outcomes. When AI tools are introduced into this environment, the confusion becomes impossible to ignore.
Leaders ask employees to “use AI” without specifying for what, to what extent, or with what expectations. Employees respond by experimenting unevenly, often in isolation, with no shared understanding of what good usage looks like.
Productivity gains stall not because AI underperforms, but because the work itself was never designed with clarity.
Task-Level Thinking Is the Missing Step
A core point Wade makes is that AI adoption fails when organizations think in terms of jobs instead of tasks.
Most roles are made up of dozens of activities: researching, drafting, reviewing, coordinating, deciding, communicating. AI can assist with some of these and is inappropriate for others. But many companies skip the hard work of breaking roles down, leaving employees to guess where automation is encouraged and where it may be risky.
Wade emphasizes that without this task-level clarity, employees either overuse AI — creating quality, trust, or compliance issues — or underuse it, fearing mistakes or judgment. In both cases, the organization learns nothing because usage isn’t structured, visible, or comparable across teams.
Why Gen Z Is Behaving Differently — and Why That Matters
Wade pays close attention to how different generations interact with new tools, not as a cultural curiosity, but as a signal.
She notes that Gen Z workers tend to test AI across a wider range of activities. They are less attached to fixed role definitions and more focused on capabilities, which helps them prepare for a labor market where adaptability matters more than tenure.
More experienced workers often apply AI selectively, improving efficiency in tasks they already understand well. Wade argues that organizations need both approaches — experimentation and judgment — but most fail to create environments where the two inform each other.
Instead of learning from generational differences, many leaders default to control, restricting usage or issuing vague guidelines that satisfy legal teams but don’t help employees work better.
Training Isn’t Enough if the Job Stays the Same
Another point Wade repeatedly raises is that AI training is being treated as a checkbox. Employees attend sessions on prompts or tools, then return to roles that haven’t changed at all.
She argues that capability building only works when roles evolve alongside skills. If AI removes certain tasks, something else needs to replace them — higher-value work, deeper analysis, better decision-making. Without this redesign, employees either feel underutilized or overwhelmed, expected to do the same amount of work plus “AI on top.”
This mismatch is where frustration sets in, particularly for high performers who quickly see what could be done differently but lack authority to change how work is structured.
Leadership Behavior Is the Constraint
Wade is blunt about leadership being the limiting factor in AI adoption.
Many executives endorse AI publicly while privately avoiding it themselves. Employees notice. When leaders can’t explain how they use AI in their own work, it undermines credibility and increases anxiety. Workers are left navigating unclear expectations with real career risk.
She stresses that effective leaders do three things differently: they define boundaries for experimentation, they acknowledge uncertainty instead of pretending to have answers, and they focus on outcomes rather than monitoring behavior.
What Organizations Are Actually Being Asked to Do
From Wade’s perspective, AI is demanding that companies become more intentional.
That means:
- Defining what good work looks like in an AI-enabled role
- Redesigning responsibilities instead of stacking tools onto old jobs
- Allowing space for learning without penalizing missteps
- Treating workforce design as a leadership responsibility, not an HR project
AI makes weak structures visible. Organizations can either address that reality or let it compound.
The Real Question AI Forces Leaders to Answer
The question, Wade suggests, is whether leaders are willing to examine how work actually functions inside their organizations.
AI exposes assumptions about productivity, trust, expertise, and control. Companies that confront those assumptions have a chance to build more resilient, more human systems of work.
Those that don’t will keep adding tools and wondering why nothing improves.