Solving Real-World Problems Is Key to Building Trust in AI
There is huge potential for AI to transform our world for the better. From enabling early disease detection and accelerating drug discovery, to addressing critical environmental challenges by discovering sustainable new materials, AI is already advancing progress on some of society’s toughest problems.
As AI reshapes our world, public skepticism persists, and leaders face a critical challenge: How do we responsibly harness AI’s potential?
Successful product implementation demands more than technical excellence. It requires a fundamental shift in how organizations approach innovation, stakeholder engagement, and solutions. In order to earn people’s trust, leaders must collaborate with local communities, operationalize corporate responsibility, and focus on creating real-world solutions.
The most impactful innovations are built in partnership with the communities they’re meant to serve. To do that, organizations must move beyond traditional stakeholder management to create authentic collaboration channels with expert voices, from ethicists and academics to local populations.
When we bring outside voices into the development process early, we create tech that better reflects the breadth and depth of the human experience. For example, my team collaborates with academic researchers, and to amplify the real-world impact of our scientific breakthroughs, we created a dedicated impact accelerator to nurture these partnerships and enable their work.
Leaders should consider creating teams to amplify real-world impact through academic and community partnerships. Our events and conversations with the public have led to more locally relevant and useful applications.
Alongside stakeholder engagement, it’s vital to create internal processes that ensure the highest possible standard of technology development. This isn’t about being cautious to the point of paralysis; rather, it’s about building robust processes that enable us to innovate and iterate responsibly.
The success of AI projects often hinges on how well organizations operationalize their commitment to care. A best practice is to develop frameworks that embed ethical considerations and safety measures in the fabric of any research-and-development process as fundamental building blocks—not bolted-on afterthoughts. Successful implementation requires close collaboration with those who deeply understand your product and your audience. These experts can highlight potential pitfalls and opportunities and ensure that your product seamlessly integrates into people’s daily lives.
At Google DeepMind, this manifests in various ways: from our cross-functional leadership council, which provides ongoing feedback on research, to our comprehensive frameworks for AI development. These structures aren’t bureaucratic hurdles; they’re essential tools that enable us to build AI systems while maintaining alignment with human values.
The antidote to apprehension around AI is to build products that solve real problems, and then highlight those solutions. Organizations can bring stakeholders in early and establish internal processes to operationalize care, but they still need to earn people’s trust.
When AI is perceived as adding clear value, people are more likely to embrace it. AI already powers technology that allows phone batteries to last longer, improves movie and song recommendations, and enables more effective maps and translation. Recently, Google DeepMind announced GenCast, an AI model that can deliver accurate 15-day weather forecasts. That’s the kind of AI we should be building. It’s not just a tool that helps people better respond to extreme weather events like storms; it’s also a practical solution that improves everyday life.
None of us have all the answers about AI’s future. But ensuring that our technological advances serve humanity’s best interests is a business and moral imperative.