AI is removing the work that used to develop people.
Entry-level roles were never only about productivity; they were where people figured out how work actually functions. How decisions get made, how mistakes unfold, how judgment takes shape over time.
That form of apprenticeship is vanishing.
Organizations are replacing it with tools that assume employees already know how to think, then expecting them to oversee work they were never trained to perform.
For decades, expertise was built the same way everywhere. People started at the bottom, learned through repetition, absorbed context through proximity, and gradually took on more responsibility. Over time, they developed the ability to recognize patterns, weigh trade-offs and make decisions with confidence.
AI systems are now handling the work that once lived in those early career roles. Entry-level finance analysts no longer build spreadsheets from scratch. Junior lawyers do not manually review thousands of pages. HR coordinators are not drafting job descriptions line by line. Software engineers are not writing code from the ground up.
AI is transforming how people work. But it is also reconfiguring how people learn, acquire skills and develop professional depth over time.
We tell junior employees to collaborate with AI, delegate tasks to agents, exercise judgment on outputs, iterate, and spot issues, assumptions and biases. In practice, we are asking them to manage a team of digital workers without having built the context and understanding that comes from hands-on experience.
How are professionals supposed to learn any of this if they have never performed the work? How will they build professional depth, judgment and leadership when the traditional pathway that developed those capabilities no longer exists?
What Happens When The Training Ground Disappears
Judgment has always developed through exposure. You start with narrow tasks, then broaden scope. You observe how decisions unfold. You witness consequences. You compare scenarios. You make small errors that do not cost millions.
Over time, you internalize patterns that let you repeat the process at a higher level of complexity. You supervise what you once performed. Experience and leadership grow from technical mastery and tenure.
When entry-level roles are compressed or redesigned around overseeing tools rather than executing tasks, they no longer provide that exposure. Junior employees find themselves approving forecasts without understanding the assumptions behind them, validating analysis without seeing how errors surface, and communicating conclusions they never constructed. When someone has never performed the underlying work, developing the depth required to question what they see becomes extremely difficult.
This is the challenge employee development must now address when people no longer learn by performing the work themselves.
In a recent conversation on The Future of Less Work, Arya Bolurfrushan, Founder and CEO of AppliedAI, offered a perspective that reveals an unexpected advantage. When people are no longer trained through repetition, they are also less shaped by how things have always been done. Rather than being molded by legacy processes, they approach outputs with fresh eyes, often asking sharper questions and spotting gaps more quickly.
At the same time, AI is making visible what was previously hidden. As Bolurfrushan described it, knowledge moves from residing in someone’s mind into being institutionalized, where it can then be examined.
What once existed as tacit knowledge in experienced professionals’ heads now has to be documented so systems can execute it. That shift creates a new kind of learning environment. Expertise is no longer passed along only through proximity or tenure. It becomes encoded, accessible and open to scrutiny.
A Different Route To Professional Depth
If lawyers depend less on mastering case law, engineers on writing code from scratch and marketers on building campaigns manually, professional depth can no longer rest on step-by-step execution alone. It must come from engaging with how work is performed, understanding where it breaks down and knowing when to push back.
Microsoft’s 2025 Future of Work Report draws a clear distinction between three types of expertise required in AI-mediated work: domain expertise, expertise in working with AI, and expertise in managing AI systems. The research argues that real effectiveness comes from the interaction between them, not from any single dimension in isolation.
The findings highlight a behavioral pattern among experts. Experienced professionals do not hand everything to AI; they selectively delegate routine tasks while deliberately staying close to higher-order work such as interpretation, synthesis, and decision-making.
This suggests that seasoned professionals understand they need to preserve and continue building expertise, continuously calibrating their reliance on AI. That calibration happens when people can observe how the system reaches its outputs, what assumptions it makes, and what trade-offs are embedded in its reasoning.
In other words, expertise develops through interaction, comparison, and adjustment rather than execution alone.
If AI systems are designed as black boxes that deliver polished, final answers, they may improve short-term productivity while eroding long-term expertise. People can use the system without ever developing the capacity to question it. These are choices being made today that will determine how expertise develops tomorrow.
Building domain experts in an AI world requires designing systems and workflows that demand continuous judgment. That includes making room for disagreement with AI, requiring human validation at critical points, and creating feedback loops where people see the consequences of their decisions relative to what the system suggested.
This is the new apprenticeship.
It no longer comes from performing the work. It comes from learning how to ask, how to interpret, how to challenge, and when and why to trust the machine.
A professional in 2030 will not be valued for producing more drafts with the help of technology. They will be valued for identifying the problem worth solving and the distinctly human contribution that ensures judgment, context, and values remain present.
They will be able to recognize when a draft is misleading. They will be responsible for integrating multiple tools, assessing bias, validating outputs and communicating implications clearly to stakeholders.
This demands a new curriculum inside organizations. Data literacy cannot be optional. Systems thinking must become a core capability. Ethical risk assessment needs to be woven into everyday practice, not delegated to compliance departments.
In other words, depth moves from manual execution to interpretive authority.
Designing The New Career Ladder
The path from novice to expert must be reimagined to develop humans who can see what the tools cannot.
Instead of learning by repetition alone, employees will need to learn through supervised oversight. A junior analyst might compare AI-generated insights with historical cases to uncover blind spots. A new HR partner might audit algorithmic hiring recommendations against diversity goals and cultural fit. A young product manager might test edge cases the system failed to anticipate.
This supervised oversight builds the judgment that repetition once developed. But it does not happen automatically. Organizations need to design roles, workflows and development paths that deliberately expose people to assumptions, trade-offs and consequences.
That means creating space to question outputs, requiring justification for decisions, and making the reasoning behind both human and machine choices visible.
Without this, expertise risks becoming shallow, dependent on tools rather than grounded in understanding. With it, organizations can build a new form of apprenticeship, one that develops depth not through performing the work, but through learning how to see, interpret and challenge it.