
The robots who predict the future

To be human is, fundamentally, to be a forecaster. Occasionally a pretty good one. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival. Indeed, as the tools of divination have changed over the centuries, from tea leaves to data sets, our conviction that the future can be known (and therefore controlled) has only grown stronger. 

Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. As I write this sentence, algorithms on some remote server are busy trying to guess my next word based on those I have already typed. If you’re reading this online, a separate set of algorithms has likely already served you an ad deemed to be one you are most likely to click. (To the die-hards reading this story on paper, congratulations! You have escaped the algorithms … for now.)

So how did all this happen? People’s desire for reliable forecasting is understandable. Still, nobody signed up for an omnipresent, algorithmic oracle mediating every aspect of their life. A trio of new books tries to make sense of our future-­focused world—how we got here, and what this change means. Each has its own prescriptions for navigating this new reality, but they all agree on one thing: Predictions are ultimately about power and control.

The Means of Prediction: How AI Really Works (and Who Benefits)
Maximilian Kasy
UNIVERSITY OF CHICAGO PRESS, 2025

In The Means of Prediction: How AI Really Works (and Who Benefits), the Oxford economist Maximilian Kasy explains how most predictions in our lives are based on the statistical analysis of patterns in large, labeled data sets—what’s known in AI circles as supervised learning. Once “trained” on such data sets, algorithms for supervised learning can be presented with all kinds of new information and then deliver their best guess as to some specific future outcome. Will you violate your parole, pay off your mortgage, get promoted if hired, perform well on your college exams, be in your home when it gets bombed? More and more, our lives are shaped (and, yes, occasionally shortened) by a machine’s answer to these questions.
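The supervised-learning pattern Kasy describes can be sketched in a few lines. The example below is purely illustrative: the "loan repayment" scenario, the feature names, and the numbers are all invented, and the toy 1-nearest-neighbor classifier stands in for the far larger models companies actually use. The mechanics, though, are the same: memorize labeled past data, then guess an outcome for new input based on which past cases it resembles.

```python
# Toy supervised learning: "train" on labeled examples, then predict an
# outcome for unseen input. All data here is invented for illustration.

def train_nearest_neighbor(examples):
    """For 1-nearest-neighbor, 'training' is just memorizing labeled data."""
    return list(examples)

def predict(model, features):
    """Return the label of the closest known example (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(model, key=lambda ex: dist(ex[0], features))
    return closest[1]

# Hypothetical labeled data: (years_employed, missed_payments) -> loan outcome.
training_data = [
    ((10, 0), "repaid"),
    ((8, 1), "repaid"),
    ((1, 5), "defaulted"),
    ((2, 4), "defaulted"),
]

model = train_nearest_neighbor(training_data)
print(predict(model, (9, 0)))   # resembles the "repaid" examples -> repaid
print(predict(model, (1, 6)))   # resembles the "defaulted" examples -> defaulted
```

The unsettling part is not the math, which is simple, but the deployment: the same resemble-the-past logic, scaled up, is what answers questions about parole, promotion, and mortgages.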

If the thought of a ubiquitous, mostly invisible predictive layer secretly grafted onto your life by a bunch of profit-hungry corporations makes you uneasy … well, same here. This arrangement is leading to a crueler, blander, more instrumentalized world, one where life’s possibilities are foreclosed, age-old prejudices are entrenched, and everyone’s brain seems to be actively turning into goo. It’s an outcome, according to Kasy, that was entirely predictable. 

AI adherents might frame those consequences as “unintended,” or mere problems of optimization and alignment. Kasy, on the other hand, argues that they represent the system working as intended. “If an algorithm selecting what you see on social media promotes outrage, thereby maximizing engagement and ad clicks,” he writes, “that’s because promoting outrage is good for profits from ad sales.” The same holds true for an algorithm that nixes job candidates “who are likely to have family-care responsibilities outside the workplace,” and the ones that “screen out people who are likely to develop chronic health problems or disabilities.” What’s good for a company’s bottom line may not be good for your job-hunting prospects or life expectancy.

Where Kasy differs from other critics is that he doesn’t think working to create less biased, more equitable algorithms will fix any of this. Trying to rebalance the scales can’t change the fact that predictive algorithms rely on past data that’s often racist, sexist, and flawed in countless other ways. And, he says, the incentives for profit will always trump attempts to eliminate harm. The only way to counter this is with broad democratic control over what Kasy calls “the means of prediction”: data, computational infrastructure, technical expertise, and energy.  

A little more than half of The Means of Prediction is devoted to explaining how this might be accomplished—through mechanisms including “data trusts” (collective public bodies that make decisions about how to process and use data on behalf of their contributors) and corporate taxing schemes that try to account for the social harm AI inflicts. There’s a lot of economist talk along the way, about how “agents of change” might help achieve “value alignment” in order to “maximize social welfare.” Reasonable, I guess, though a skeptic might point out that Kasy’s rigorous, systematic approach to building new public-serving institutions comes at a time when public trust in institutions has never been lower. Also, there’s the brain goo problem. 

To his credit, Kasy is a realist here. He doesn’t presume that any of these proposals will be easy to implement. Or that it will happen overnight, or even in the near future. The troubling question at the end of his book is: Do we have that kind of time?

Reading Kasy’s blueprint for seizing control of the means of prediction raises another pressing question. How on earth did we reach a point where machine-mediated prediction is more or less inescapable? Capitalism might be Marx’s pithy response. Fine, as far as it goes, but that doesn’t explain why the same kinds of algorithms that currently model climate change are for some reason also deciding whether you get a new kidney or I get a car loan.

The Irrational Decision: How We Gave Computers the Power to Choose for Us
Benjamin Recht
PRINCETON UNIVERSITY PRESS, 2026

If you ask Benjamin Recht, author of The Irrational Decision: How We Gave Computers the Power to Choose for Us, he’d likely tell you our current predicament has a lot to do with the idea and ideology of decision theory—or what economists call rational choice theory. Recht, a polymathic professor in UC Berkeley’s Department of Electrical Engineering and Computer Science, prefers the term “mathematical rationality” to describe the narrow, statistical conception that stoked the desire to build computers, informed how they would eventually work, and influenced the kinds of problems they would be good at solving. 

This belief system goes all the way back to the Enlightenment, but in Recht’s telling, it truly took hold at the tail end of World War II. Nothing focuses the mind on risk and quick decision-making like war, and the mathematical models that proved especially useful in the fight against the Axis powers convinced a select group of scientists and statisticians that they might also be a logical basis for designing the first computers. Thus was born the idea of a computer as an ideal rational agent, a machine capable of making optimal decisions by quantifying uncertainty and maximizing utility.

Intuition, experience, and judgment gave way, says Recht, to optimization, game theory, and statistical prediction. “The core algorithms developed in this period drive the automated decisions of our modern world, whether it be in managing supply chains, scheduling flight times, or placing advertisements on your social media feeds,” he writes. In this optimization-­driven reality, “every life decision is posed as if it were a round at an imaginary casino, and every argument can be reduced to costs and benefits, means and ends.”
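Recht's "imaginary casino" is a compact description of expected-utility maximization, the core move of the decision theory he critiques. A minimal sketch, with invented options and payoffs, shows how the framework reduces any choice to probability-weighted costs and benefits:

```python
# Mathematical rationality in miniature: score each option by its
# expected utility (sum of probability * payoff), then pick the maximum.
# The options and numbers are invented for illustration.

def expected_utility(outcomes):
    """Probability-weighted average payoff over possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Each hypothetical option maps to (probability, payoff) pairs.
options = {
    "safe bet":  [(1.0, 50)],
    "risky bet": [(0.5, 200), (0.5, -80)],
}

# The "ideal rational agent" simply takes the maximizing option.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best, expected_utility(options[best]))  # risky bet wins: 60 > 50
```

Everything that resists this reduction, such as intuition, moral weight, irreversibility, is exactly what Recht argues the framework quietly discards.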

Today, mathematical rationality (wearing its human skin) is best represented by the likes of the pollster Nate Silver, the Harvard psychologist Steven Pinker, and an assortment of Silicon Valley oligarchs, says Recht. These are people who fundamentally believe the world would be a better place if more of us adopted their analytic mindset and learned to weigh costs and benefits, estimate risks, and plan optimally. In other words, these are people who believe we should all make decisions like computers. 

It’s a ridiculous idea for multiple reasons, he says. To name just one, it’s not as if humans couldn’t make evidence-based decisions before automation. “Advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950,” Recht writes. “From the late 1800s to the early 1900s, we had world-changing scientific breakthroughs in physics, including new theories of thermodynamics, quantum mechanics, and relativity.” We also managed to build cars and airplanes without a formal system of rationality and somehow came up with societal innovations like modern democracy without optimal decision theory. 

So how might we convince the Pinkers and Silvers of the world that most decisions we face in life are not in fact grist for the unrelenting mill of mathematical rationality? Moreover, how might we demonstrate that (unquantifiable) human intuition, morality, and judgment might be better ways of addressing some of the world’s most important and vexing problems?

Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI
Carissa Véliz
DOUBLEDAY, 2026

One might start by reminding the rationalists that any prediction, computational or otherwise, is really just a wish—but one with a powerful tendency to self-fulfill. This idea animates Carissa Véliz’s wonderfully wide-ranging polemic Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI.

A philosopher at the University of Oxford, Véliz sees a prediction as “a magnet that bends reality toward itself.” She writes, “When the force of the magnet is strong enough, the prediction becomes the cause of its becoming true.” 

Take Gordon Moore. While he doesn’t come up in Prophecy, he does figure somewhat prominently in Recht’s history of mathematical rationality. A cofounder of the tech giant Intel, Moore is famous for his 1965 prediction that the density of transistors in integrated circuits would double every two years. “Moore’s Law” turned out to be true, and remains true today, although it does seem to be running out of steam thanks to the physical size limits of the silicon atom.
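The arithmetic behind Moore's prediction is simple compounding: doubling every two years means density grows by a factor of 2 raised to (years elapsed / 2). A quick sketch, using an invented round starting count just to show the scale of 20 doublings over 40 years:

```python
# Moore's Law as arithmetic: doubling every `doubling_period` years
# compounds to 2 ** (years / doubling_period). The starting count of
# 1,000 transistors is an invented round number for illustration.

def moores_law(start_count, years, doubling_period=2):
    """Project a transistor count after `years` of steady doubling."""
    return start_count * 2 ** (years / doubling_period)

# 40 years = 20 doublings: a 1,000-transistor chip projects to ~1 billion.
print(moores_law(1_000, 40))  # 1000 * 2**20 = 1,048,576,000
```

That a back-of-the-envelope exponential held for decades is exactly what makes Véliz's "magnet" framing so apt: the curve persisted in part because an industry organized itself around hitting it.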

One story you can tell yourself about Moore’s Law is that Gordon was just a prescient guy. His now-classic 1965 opinion piece “Cramming More Components onto Integrated Circuits,” for Electronics magazine, simply extrapolated what computing trends might mean for the future of the semiconductor industry. 

Another story—the one I’m guessing Véliz might tell—is that Moore put an informed prediction out into the world, and an entire industry had a collective interest in making it come true. As Recht makes clear, there were and remain obvious financial incentives for companies to make faster and smaller computer chips. And while the industry has likely spent billions of dollars trying to keep Moore’s Law alive, it’s undoubtedly profited even more from it. Moore’s Law was a helluva strong magnet. 

Predictions don’t just have a habit of making themselves come true, says Véliz. They can also distract us from the challenges of the here and now. When an AI boomer promises that artificial general intelligence will be the last problem humanity needs to solve, it not only shapes how we think about AI’s role in our lives; it also shifts our attention away from the very real and very pressing problems of the present day—problems that in many cases AI is causing.

In this sense, the questions around predictions (Who’s making them? Who has the right to make them?) are also fundamentally about power. It’s no accident, Véliz says, that the societies that rely most heavily on prediction are also the ones that tend toward oppression and authoritarianism. Predictions are “veiled prescriptive assertions—they tell us how to act,” she writes. “They are what philosophers call speech acts. When we believe a prediction and act in accordance with it, it’s akin to obeying an order.”

As much as tech companies would like us to believe otherwise, technology is not destiny. Humans make it and choose how to use it … or not use it. Maybe the most appropriate (and human) thing we can do in the face of all the uninvited daily predictions in our lives is to simply defy them. 

Bryan Gardiner is a writer based in Oakland, California.
