OpenAI and Cisco Back Standards to Scale Agentic AI
Right alongside the immense promise of agentic AI is the issue of trust. CFOs, according to PYMNTS Intelligence, are gaining it. So are consumers. As trust concerns persist, the more useful question is how to overcome them.
Agentic AI is moving from early experiments to real tests inside large companies. According to CIO's reporting, Gartner expects 40% of enterprise software to include task-specific AI agents by the end of 2026. Yet most organizations are moving carefully, limiting agents to lower-risk functions while they study performance. Leaders remain uneasy about handing decisions to software that can be hard to explain and may not always produce consistent results. A McKinsey survey of 2,000 enterprises found that 62% are experimenting with AI agents, but two-thirds have not begun meaningful rollouts. Companies want stronger governance and closer human oversight before expanding deployments.
The CIO article argues that governance and security must come first. Collibra’s survey of more than 300 data and AI leaders found that 86% expect positive returns from agentic AI in the medium term. At the same time, 60% said governance and compliance training is a priority, even though fewer than half have formal processes in place. Vendors are responding by building in guardrails, observability tools, and clearer controls over how agents access data and external systems. The formation of the Agentic AI Foundation by OpenAI, Anthropic, Block and other major technology firms signals a push toward shared standards. The piece draws a parallel to the early internet, arguing that open standards and stronger security were essential before eCommerce could scale. Agentic AI now stands at a similar starting line.
Cisco’s View
In an article published by the World Economic Forum and written by Cisco President and Chief Product Officer Jeetu Patel, the case for agentic AI is framed as a matter of national competitiveness. The piece argues that autonomous AI agents can drive productivity, innovation and economic growth. But realizing those gains requires decisive action from business and government leaders. Patel identifies three barriers that stand in the way: infrastructure limits, a lack of trust and gaps in data strategy. Without addressing these issues, agentic AI will remain experimental rather than transformational.
The first challenge is infrastructure. Agentic AI systems demand far more computing power, energy and network capacity than traditional software. Multi-agent systems, where many AI agents collaborate in real time, require modern data centers, low-latency networks and secure architectures that can operate across cloud and edge environments. The second challenge is trust. Because AI models can behave unpredictably, organizations must embed security, identity controls and real-time validation across every layer of the technology stack.
The third challenge is data. As public data sources diminish and privacy rules tighten, enterprises must turn to machine-generated and synthetic data to train and improve AI systems. The article concludes that leaders who invest in modern infrastructure, build security into every deployment and rethink how data is created and governed will be best positioned to lead in the next phase of AI.
“The future belongs to those who understand both the immense opportunities of AI and how to navigate the obstacles ahead,” Patel says.
Insights for Credit Unions
In an article published by CUInsight, Chad Davis of F5 argues that the future of agentic AI in financial services depends on one core factor: account holder trust. Agentic AI systems can act, adapt and collaborate with limited human input. They promise more personalized service, faster decisions and stronger fraud detection. But Davis makes clear that none of this progress will matter if customers do not believe the systems are fair, secure and transparent. Financial institutions must ensure that automated decisions are explainable, compliant with regulations and open to human review when needed.
The article highlights three current use cases where trust is essential: personalized financial advice, automated small-dollar lending decisions and enhanced fraud detection. In each case, institutions must clearly explain how decisions are made and how customer data is protected.
Looking ahead, Davis sees potential for autonomous financial advisors and streamlined micro-lending models. However, he stresses that institutions must first solve governance and technical challenges, including observability, decision traceability, integration with legacy systems and strong policy guardrails. The message is direct: responsible oversight and clear communication are not optional; they are the foundation for sustainable innovation in agentic AI.
“Transparent, explainable, and compliant agentic AI is not just a regulatory necessity,” Davis says. “It’s essential for future sustainability and success.”