City governments have spent years publishing open data, but much of that information remains difficult for residents to use. Boston is now testing whether artificial intelligence agents could change that by turning static datasets into conversational interfaces. If the model works, city data could become one of the first large environments where AI agents interact directly with public systems and citizens.
From Dashboards to Dialogue
Boston Chief Information Officer Santi Garces is experimenting with AI agents that connect to public datasets using emerging standards such as the Model Context Protocol. The protocol allows AI systems to interface with external data sources and tools, enabling agents to retrieve structured information directly from databases.
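To make the mechanism concrete, here is a minimal sketch of the kind of exchange the Model Context Protocol standardizes: an AI agent calls a named tool over JSON-RPC 2.0 (MCP's wire format), and a server answers from a structured dataset. The permits dataset, the `query_permits` tool, and the records themselves are illustrative assumptions, not Boston's actual implementation.

```python
import json

# Hypothetical stand-in for a city open dataset of building permits.
PERMITS = [
    {"address": "12 Beacon St", "type": "electrical", "issued": "2025-03-04"},
    {"address": "12 Beacon St", "type": "plumbing", "issued": "2024-11-18"},
    {"address": "70 Milk St", "type": "renovation", "issued": "2025-01-22"},
]

def query_permits(address):
    """Return all permit records matching an address (illustrative tool)."""
    return [p for p in PERMITS if p["address"] == address]

def handle_request(request):
    """Dispatch a JSON-RPC 2.0 'tools/call' request, the method MCP uses
    when an agent invokes a server-side tool."""
    if request.get("method") != "tools/call":
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    params = request["params"]
    if params["name"] == "query_permits":
        records = query_permits(**params["arguments"])
        # MCP tool results are returned as typed content blocks.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text",
                                        "text": json.dumps(records)}]}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "error": {"code": -32602, "message": "Unknown tool"}}

# An agent asking "What permits have been pulled on this property?"
# would be translated into a call like this:
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "query_permits",
                      "arguments": {"address": "12 Beacon St"}}}
response = handle_request(request)
```

The design point is that the agent never scrapes a portal or parses a spreadsheet; it issues a structured call against a published interface, which is what lets the same agent reach across multiple government databases while each one keeps its own access controls.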
In an interview, Garces described open data as a practical testing ground for AI agents because the information is already public. The goal is to make city data easier to access without requiring residents to navigate complex portals or spreadsheets.
The traditional model for city open data has been publication without engagement. Governments release datasets and assume residents will find and interpret them. Most do not. The interface problem has proved nearly as significant as the data availability problem.
AI agents offer a different architecture. Instead of locating the right dataset and interpreting rows of permit data or transit schedules, a resident could ask a plain-language question and receive a direct answer. Queries such as “Is my block’s street sweeping schedule changing?” or “What permits have been pulled on this property?” could be handled instantly.
Boston’s work suggests the city is thinking about interoperability at the infrastructure layer, not just building standalone AI applications. By connecting agents directly to public datasets through standardized protocols, cities could allow AI systems to retrieve information across multiple government databases while maintaining governance controls.
Boston has long experimented with civic technology through initiatives run by New Urban Mechanics, including neighborhood pilot programs such as Beta Blocks.
The city has also modernized its service infrastructure. Boston recently deployed a new 311 platform, replacing a legacy system that had operated for more than a decade. According to reporting by Government Technology, the system uses predictive machine learning to recommend next steps in service workflows and generative AI to support natural language interactions.
Signal From Massachusetts
Boston’s experiments are unfolding alongside broader AI initiatives across Massachusetts.
In February, Gov. Maura Healey announced that the state would deploy ChatGPT across the executive branch, covering roughly 40,000 state employees. The rollout uses a secure environment designed to protect government data and prevent employee inputs from training public AI models.
The initiative reflects growing interest in using generative AI to modernize government operations. But it also highlights a broader division in how public sector organizations are approaching AI deployment.
Some systems are designed primarily for internal productivity, helping government workers summarize documents, draft reports or analyze policy data. Others aim to improve how residents interact with public services.
Both categories are expanding. But citizen-facing systems could have a greater long-term impact by reshaping how people access government information.
Boston’s initiative also highlights a larger possibility for agentic AI. While most early deployments focus on enterprise workflows, government systems present another environment where AI agents could operate at scale.
Public-sector interest in AI is expanding beyond local governments. At the federal level, agencies are beginning to deploy artificial intelligence to automate administrative work, analyze large datasets and support decision-making across departments, as reported by PYMNTS.