AI Snake Oil: Separating Hype From Reality in Artificial Intelligence
By Arvind Narayanan & Sayash Kapoor
(Princeton University Press, 360 pages, $21.52)
Artificial intelligence has become the proverbial elephant in the room. We vacillate between the fear that the mammoth creature will trample us into intellectual oblivion and the gut feeling that we have been duped by a mirage of our own making. The ongoing hype and hyperbole about AI’s ability either to fix every problem or to usher in a dystopian new world order only exacerbate the situation.
Artificial intelligence is not a figment of our imaginations, nor a giant beast programmed to destroy us by surveilling our behavior, censoring our free speech, or stealing our livelihoods. It is also not a “great and powerful Oz” capable of granting our every wish. A new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, aspires to separate the reality from the fantasy. The authors, Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, have an amazing ability to describe complex technology in simple and engaging language. AI Snake Oil provides a comprehensive history of AI as well as a detailed explanation of the differences among generative AI, predictive AI, and AI-driven content moderation. The book also serves as a blueprint for vetting AI systems in terms of their accuracy, reliability, and broader social and commercial applications, and it weighs in on the perennial question of whether AI is an existential threat.
The concept of a manufactured entity with the ability to execute programmed tasks has been the subject of fiction for hundreds of years, with Mary Shelley’s Frankenstein (1818) being perhaps the archetypal example. While it may seem like ChatGPT catapulted artificial intelligence from the domain of the imagination to that of the quotidian virtually overnight, the road to AI started over 80 years ago. The authors do a masterful job of illustrating the breadcrumb-strewn path from the early days of AI research to the present. In 1943, neuroscientist Warren McCulloch and logician Walter Pitts developed a mathematical model of a neuron’s operation that would become the precursor to the earliest prototypes of machine intelligence. McCulloch and Pitts postulated that it was possible to replicate the way one neuron signals another within the brain by building a mechanical version of a single neuron. Psychologist Frank Rosenblatt achieved this objective in the late 1950s by building a custom computer, the perceptron, integrated with a camera system. The perceptron was revolutionary in that it could differentiate between shapes without having them manually programmed into its memory. It could execute only binary classifications, meaning it could distinguish between just two categories of input. Yet Rosenblatt’s machine still represented a giant step forward for artificial intelligence.
The book’s title, AI Snake Oil, is apt: it highlights AI’s trajectory from intellectual concept to laboratory test kitchen to commodity product, and it reinforces the importance of honesty and reasoned expectations from product development to marketing to user experience. Much contemporary writing about AI advances the narrative that once a technology is created, it automatically separates from the human minds that conceived it, like the monster escaping Frankenstein’s laboratory. The authors push back on this perception, reinforcing the notion that AI’s capacity for good or evil is intrinsically linked to its developers, merchants, and users. As users of AI, we need to be cognizant of the technology’s capabilities and wary of misleading or exaggerated marketing.
Narayanan and Kapoor acknowledge that their book champions the potential of generative AI (AI that can create new content: text, images, audio, etc.) but recommend that society approach predictive AI with measured caution. The authors reference “early studies [that] show the potential of generative AI for assisting writers, doctors, programmers, and many other professionals.” They additionally assert that generative AI has the capability to significantly transform the lives of subsets of the population, including people with disabilities.
Although Narayanan and Kapoor argue that predictive artificial intelligence, which uses “machine learning and statistical analysis to predict future events,” has merit in certain circumstances, they maintain that there are significant limitations and risks to its broader application. These limitations stem from a variety of causes, including a scarcity of available data, the measurement of features that do not improve predictability, weak feedback loops, and failure to act strategically on the data collected. And, of course, there will always be unpredictable occurrences that arise from randomness or unique circumstances. Moreover, the authors warn that the deployment of inaccurate predictive models could have serious and potentially harmful results. AI Snake Oil is filled with examples of the unforeseen consequences of unreliable artificial intelligence.
A compelling one is that of the Epic sepsis model. In 2017, Epic, a U.S.-based healthcare company that maintains healthcare records on 250 million people in the United States, launched an AI product to detect sepsis. By using the sepsis model, hospitals no longer had to dedicate equipment or staff to assessing the likelihood that a patient would develop sepsis, effectively saving significant money and resources. The model, implemented in hundreds of hospitals across the country, was widely praised for being a “plug and play” tool that required no customization. However, Epic claimed that its model was a “proprietary trade secret” and consequently never released its underlying logic. Furthermore, there were no independent peer-reviewed studies of its efficacy until 2021, four years after its release, when the University of Michigan published the first independent study. Its findings were startling. Epic had boasted a relative accuracy rate between 76 and 83 percent, but the study revealed that the model’s relative accuracy was significantly lower, at only 63 percent. It was later discovered that Epic was paying hospitals millions of dollars in credits, with one of the conditions being implementation of the sepsis prediction model. Epic pulled its model from the marketplace in 2022.
Although Narayanan and Kapoor expound upon the dangerous pitfalls of present-day predictive artificial intelligence, they remain hopeful that predictive models will improve as developers gain more experience in building them. The authors also write about the role of artificial intelligence in content moderation, particularly on social media platforms. While they are optimistic that AI content moderation will increasingly approximate human content moderation, they note the challenges of developing moderation strategies that allow for cultural differences, enforce multiple policies, and mitigate moderation evasion. They further assert that content moderation should adapt to continuous social change. The authors also reinforce the need for a firewall on social media platforms between the functional areas that generate revenue and those that oversee content. They cite the Reddit model, which engages volunteer users for content moderation instead of employees or contractors, as one to consider emulating, while acknowledging the inherent challenges of building a business that relies on volunteers for a core function.
I highly recommend AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t and How to Tell the Difference. Arvind Narayanan and Sayash Kapoor have done a marvelous job of demystifying artificial intelligence by explaining the technology’s strengths and weaknesses as a content generator, outcome predictor, and content moderator. Furthermore, they remind us to be wary of attributing too much power to AI. After all, the technology is only a reflection of the humans behind it.
The post AI Snake Oil: Separating Hype From Reality in Artificial Intelligence appeared first on The American Spectator | USA News and Politics.