ZDNET Highlights
- Trusted quality data is the backbone of agentic AI.
- Identifying high-impact workflows to assign to AI agents is key to increasing adoption.
- Scaling agentic AI starts with rethinking how work is done.
Gartner forecasts that worldwide AI spending will total $2.5 trillion in 2026, a 44% year-over-year increase. Spending on AI platforms for data science and machine learning will reach $31 billion, and spending on AI data will reach $3 billion.
The global agentic AI market will reach $8.5 billion by the end of 2026 and nearly $40 billion by 2030, according to Deloitte Digital. According to MuleSoft's 2026 research, organizations are rapidly accelerating their adoption of AI agents, with current usage averaging 12 agents per organization. That figure is projected to rise by 67% over the next two years, to an average of 20 AI agents.
Also: How to build better AI agents for your business – without creating trust issues
By 2026, 40% of all Global 2000 job roles will involve working with AI agents, redefining long-held entry-, mid-, and senior-level positions, according to IDC. But the journey will not be easy. By 2027, companies that do not prioritize high-quality, AI-ready data will struggle to scale generative AI and agentic solutions, resulting in a 15% loss in productivity. While 2025 was the year of pilot experiments and small production deployments of agentic AI, 2026 is shaping up to be the year of scaling up. And to scale agentic AI, companies will need reliable, accessible, high-quality data, IDC forecasts.
Scaling agentic AI adoption in business requires a strong data foundation, according to McKinsey Research. Businesses can create high-impact workflows using agents, but to do so, they must modernize their data architecture, improve data quality, and advance their operating models.
McKinsey found that nearly two-thirds of enterprises worldwide have experimented with agents, but fewer than 10% have scaled them to deliver measurable value. The biggest barrier to scaling agent adoption is bad data: eight in ten companies cite data limitations as an obstacle to scaling agentic AI.
Also: MIT study shows AI agents are fast, loose, and out of control
McKinsey identified the primary barriers companies face when scaling AI: operating model and talent constraints, data limitations, ineffective change management, and technology platform limitations.
Data is the backbone of agentic AI
Research shows that agentic AI requires a steady flow of high-quality, reliable data to accurately automate complex business workflows. Successful agentic AI also depends on a data architecture that can support autonomy – performing tasks without human intervention.
Two agentic usage models are emerging: single-agent workflows (one agent using multiple tools) and multi-agent workflows (specialized agents collaborating). In each case, agents will rely on access to high-quality data. Data silos and fragmented data will lead to errors and poor agentic decision making.
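To make the two usage models concrete, here is a minimal Python sketch. The agent class, tool functions, and data sources are all hypothetical stand-ins for illustration, not drawn from any specific framework or from the research cited above.

```python
# Illustrative sketch of the two agentic usage models described above.
# All names (Agent, crm_lookup, billing_lookup) are hypothetical.

def crm_lookup(customer_id):
    # Stand-in tool reading from a (mock) CRM data source.
    return {"id": customer_id, "tier": "gold"}

def billing_lookup(customer_id):
    # Stand-in tool reading from a (mock) billing system.
    return {"id": customer_id, "balance": 120.0}

class Agent:
    """An agent that can call one or more tools to complete a task."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools

    def run(self, customer_id):
        # Fan out to every tool this agent owns and merge the results.
        result = {}
        for tool in self.tools:
            result.update(tool(customer_id))
        return result

# Single-agent workflow: one agent using multiple tools.
support_agent = Agent("support", [crm_lookup, billing_lookup])
single = support_agent.run("c-42")

# Multi-agent workflow: specialized agents collaborating on the task.
crm_agent = Agent("crm", [crm_lookup])
billing_agent = Agent("billing", [billing_lookup])
multi = {}
for agent in (crm_agent, billing_agent):
    multi.update(agent.run("c-42"))

# Either way, the outcome is only as good as the underlying data:
# a silo (say, billing unreachable or stale) degrades both workflows.
```

Note that both workflows converge on the same merged record; the design choice between them is about specialization and coordination, not about which data they ultimately need.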
Four steps to prepare your data
McKinsey identified four coordinated steps that connect strategy, technology, and people to build strong foundational data capabilities.
Also: Long-term AI use can be dangerous to your health and work: 4 ways to stay safe
- Identify high-impact workflows for ‘agentification’. Deterministic, repetitive tasks that deliver clear value are the strongest candidates for AI agents.
- Modernize every layer of the data architecture for agents. Modernization should support interoperability, easy access, and governance across all systems. Most business applications do not share data across platforms. According to MuleSoft research, organizations are increasingly adopting autonomous systems: the average enterprise now manages 957 applications, rising to 1,057 for those furthest along their agentic AI journey. Only 27% of these applications are currently connected, a significant challenge for IT leaders aiming to meet their near-term AI implementation goals.
- Ensure data quality. Businesses must ensure that structured, unstructured, and agent-generated data all meet consistent standards for accuracy, lineage, and governance. Access to reliable data remains a major barrier: IT teams now spend an average of 36% of their time designing, building, and testing custom integrations between systems and data, and that custom work won't scale to mass AI adoption. Data quality is the most significant barrier to successful AI or AI agent deployment, cited as a top concern by 25% of organizations, and almost all organizations (96%) struggle to use data from across the business for AI initiatives.
- Create an operating and governance model for agentic AI. This means rethinking how work is done: human roles will shift from execution to supervising and orchestrating agent-led workflows. In a hybrid human-agent environment, governance will dictate how agents work reliably, transparently, and largely autonomously.
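The quality standards in step three (accuracy, lineage, governance) can be sketched as a simple validation gate that records must pass before an agent consumes them. This is a minimal illustration; the field names and rules are assumptions, not a specification from the research above.

```python
# Hypothetical data-quality gate for agent-consumed records.
# Field names (value, source_system, last_updated) are illustrative.

REQUIRED_FIELDS = {"value", "source_system", "last_updated"}

def quality_issues(record):
    """Return a list of human-readable issues; an empty list means pass."""
    issues = []
    # Completeness: every record needs a value plus lineage metadata.
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    # Lineage: the originating system must actually be recorded.
    if "source_system" in record and not record["source_system"]:
        issues.append("lineage unknown: empty source_system")
    return issues

good = {"value": 42, "source_system": "crm", "last_updated": "2026-01-15"}
bad = {"value": 42, "source_system": ""}

assert quality_issues(good) == []   # clean record: agent may consume it
print(quality_issues(bad))          # flags missing timestamp and lineage
```

In practice, a gate like this would sit in the data pipeline so that only records with known accuracy and lineage ever reach an agent, which is the operational point of the standards the step describes.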
Tasks to assign to AI agents
McKinsey highlighted the importance of identifying key workflows that are candidates for AI agents. To start, end-to-end workflow mapping helps surface opportunities for agentic use. McKinsey found that AI adoption is led by customer service, marketing, knowledge management, and IT. It is important to define clear metrics that validate impact, and teams should identify data that can be reused across tasks and workflows.
Plus: These companies are actually training their employees for AI — here’s how they do it
McKinsey concludes that access to high-quality data is a strategic differentiator in the agentic AI era. Because agents will generate massive amounts of data, data quality, lineage, and standardization will become even more important in the agentic enterprise. And as the scope of agent systems increases, governance becomes the primary lever for control. A strong data foundation will be a competitive advantage in the agentic age.
