Organizations are accelerating plans to deploy artificial intelligence agents across business functions. A Deloitte survey of IT and business leaders in 24 countries shows that most expect AI agents to play a regular role in operations within the next two years.
By 2027, 74% of respondents expect at least moderate use of AI agents. This includes 23% expecting extensive use and 5% expecting full integration into core business processes.
Governance systems remain underdeveloped
Only 21% of organizations report having mature governance models for agentic AI.
Most respondents indicate gaps in basic controls, including defined limits on autonomous decision-making, real-time monitoring of agent activity, and full audit records that track actions across systems.
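The three control gaps named above can be made concrete. As a purely illustrative sketch (not drawn from the survey; all class and function names here are hypothetical), an organization might wrap an agent's tool calls in a gate that enforces an action allowlist and records every attempt:

```python
# Hypothetical sketch of basic agentic-AI controls: defined action limits
# plus an audit record of every attempted action. Not a production design.
import json
import time

class AgentActionGate:
    """Wraps an AI agent's tool calls with an allowlist and an audit log."""

    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)  # defined limits on autonomy
        self.audit_log = []                          # record of all attempts

    def execute(self, agent_id, action, payload, handler):
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "payload": payload,
        }
        if action not in self.allowed_actions:
            entry["result"] = "blocked"  # out-of-scope action: log and refuse
            self.audit_log.append(entry)
            return None
        entry["result"] = "executed"
        self.audit_log.append(entry)
        return handler(payload)

# Usage: allow read-only lookups; block anything that moves money.
gate = AgentActionGate(allowed_actions={"lookup_order"})
gate.execute("agent-7", "lookup_order", {"id": 42}, lambda p: {"status": "shipped"})
blocked = gate.execute("agent-7", "issue_refund", {"id": 42}, lambda p: p)
print(json.dumps([e["result"] for e in gate.audit_log]))
```

Real-time monitoring would then read from the same audit log, so blocked actions can be escalated to a human reviewer rather than silently dropped.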
Operational risks linked to limited oversight
Without structured controls, AI agents can produce errors that are difficult to detect, trigger conflicting actions across systems, expose sensitive data, create customer-facing issues, or increase exposure to cyber threats.
These risks compound when early deployments are expanded into production environments without additional safeguards.
Controlled rollout linked to steadier performance
Organizations reporting more consistent outcomes tend to introduce AI agents through limited, lower-risk use cases before expanding usage. Governance systems are built in parallel rather than after deployment.
Common structures include coordination across IT, legal, compliance, and business teams to define policies, track performance, and manage escalation processes.
Planning pressure in future-of-work systems
AI agents are being introduced into workplace workflows across functions, changing how tasks are assigned between automated systems and human oversight.
The Deloitte findings suggest the pace of planning for agent use is outpacing the development of governance structures needed to manage them at scale.