Shannon Vallor says AI does present an existential risk — but not the one you think

Vox

You may have heard the idea that AI is a “stochastic parrot,” mechanistically repeating our words back to us without actually understanding them. But Shannon Vallor, a philosopher of technology at the University of Edinburgh, thinks there’s a better metaphor: AI, she says, is a mirror.

After all, a parrot is another mind — not exactly like ours, but a sentient mind nevertheless. A large language model like ChatGPT is not. It reflects back to us our own images, words, and whatever else we’ve put into its training data. When we engage with it, we’re a lot like Narcissus, the mythical boy who sees his beautiful reflection in the water and becomes transfixed by it, thinking it’s another person. 

In her new book The AI Mirror, Vallor argues that it’s our tendency to misperceive AI as a mind — and to think it may even have the capacity to be more moral than us because it’s more “objective” and “rational” — that poses a real existential risk to humanity, not anything the AI itself might do.  

I talked to Vallor about what exactly she thinks that risk is, what she believes it’s doing to human agency, and how it feeds into transhumanism, the movement that says humans should proactively use technology to augment and evolve our species. Here’s a transcript of our conversation, edited for length and clarity. 

You’re not the sort of person who’s kept awake at night by fears that AI will become a conscious mind that maliciously decides to enslave us all. But you do argue that there is a real existential risk posed by AI. What is it? 

The risk I talk about is existential in the philosophical sense of really striking at the core of human beings and our ability to give meaning to our existence. One of the fundamental challenges of being a human being, but also the thing that we often treasure, is that we are not locked into a set of reflexes or mindless responses — that we can, in fact, use cognition to break away from our habits and the social scripts we’re accustomed to, and choose to move in new directions. We can choose new moral patterns for society, new political structures or norms. 

At every point in human history we see moments where individuals or communities chose to alter the pattern. But that requires confidence in ourselves, in one another, in the power of human agency to do this. And also a sort of moral claim on our right to have that power. 

One thing I hear in every country that I travel to, when I speak about AI, is: Are humans really any different from AI? Aren’t we, at the end of the day, just predictive text machines? Are we ever doing anything other than pattern matching and pattern generation?

That rhetorical strategy is actually what scares me. It’s not the machines themselves. It’s the rhetoric of AI today that is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom. That’s the existential threat, because that’s what will enable humans to feel like we can just take our hands off the wheel and let AI drive.

And in some quarters the rhetoric is not only that we can, but that we should let AI do the hard thinking and make the big decisions, because AI is supposedly more rational, more objective.

Exactly — and that you’re somehow failing to be efficient, that you’re failing to cooperate with progress, that you’re failing to enable innovation, if you don’t go along with this. 

When you write about this loss of confidence in human agency, you draw on the existentialists, who argue that there’s no intrinsic meaning in life — it’s something humans need to choose how to create. You especially draw on José Ortega y Gasset, a Spanish philosopher of the early 20th century, and his notion of “autofabrication.” Why is that a key idea for you in the context of AI?

Ortega thought about the core problem of human meaning, which is that we have to make it ourselves. And that’s what he meant by autofabrication, which literally just means self-making. He said that this is the fundamental human condition: to make ourselves over and over again. The job never stops, because our cognitive equipment has the ability to take us into a realm of self-awareness such that we can see what we’re doing and actually decide to change it. 

That freedom is also, from an existential standpoint, kind of a burden, right? Autofabrication is something that takes a fair amount of courage and strength, because the easier thing is to let someone else tell you that the script is final and you can’t change it, so you might as well just follow it, and then you don’t have to burden yourself with the responsibility of deciding what the future looks like for yourself or anyone else.

So what that rhetoric around AI is telling us is to surrender our human freedom, and to me that’s such a profound violation of what is good about being human. The idea that we should give that up would mean giving up the possibility of artistic growth, of political growth, of moral growth — and I don’t think we should do that. 

One of the ways this rhetoric shows up is in the field of “machine ethics” — the effort to build moral machines that can serve as our ethical advisors. Transhumanists are especially bullish about this project. The philosopher Eric Dietrich even argues that we should build “the better robots of our nature” — machines that can outperform us morally — and then hand over the world to “homo sapiens 2.0.” What’s your read on that?

I’ve been skeptical about the moral machines project, because it usually ends up just trying to crowdsource moral judgments [and train AI on those human intuitions] — but the whole point is that the crowd isn’t always right! So it’s a very dangerous thing to crowdsource moral judgments. If you were using a crowdsourced moral machine that was aggregating moral judgments in Nazi Germany, and then tried to automate decisions elsewhere with it, you would be contributing to the expansion of a morally criminal enterprise.

Crowdsourcing does seem like a problematic approach, but if we’re not going off what the general population thinks, what are we doing instead? Are we proposing following a few philosopher-kings, in which case there may be concerns about that being undemocratic? 

I think there always has been a better route, which is to have morality remain a contested territory. It has to be open to challenge. Understanding what it is to live well with others and what we owe to one another — that conversation can’t ever stop. And so I’m very reluctant to pursue the development of machines that are designed to find an optimal answer and stop there. 

Right — just operating within what people say about moral norms today seems very different from what you call “standing in the space of moral reasons.” Spell out what you mean by that. 

The “space of reasons” was a concept developed by the philosopher Wilfrid Sellars. It’s the realm in which we can explore each other’s reasons for believing something, where we can justify and seek justification from one another. Other philosophers later adapted his idea of the logical space of reasons to be able to think about the moral space of reasons, because we do this in morality too: when we make moral claims upon one another, especially if they’re new and unfamiliar, we have to justify them. Our reasons have to be accessible to one another, so we can figure out what we jointly recognize and accept. 

I think if we had a truly moral machine, it would be able to stand in that space with us. It would be able to articulate reasons and appreciate our reasons, and negotiate those reasons with us in a way that wasn’t just mirroring the consensus that we’d already reached. Because any machine that’s just going to mirror the familiar moral pattern can get into trouble if we end up in a situation where the environment has changed or is new in some way. 

This reminds me of a particular virtue you write about a lot: practical wisdom, or phronesis, to use the ancient Greek term. What is that and why is it so crucial?

Aristotle wrote that we build up the virtues, like honesty, by practice and habituation. Lying is much easier and gets you what you want, but once you get in the habit of telling the truth, you can actually build up a character where being truthful is what comes easily, and you might even have to struggle in the rare case when you need to tell a lie. 

But there are moments where relying on those habits that you’ve built up can actually lead to harm, because something in the situation is new, and your old habit might not be well adapted to the current situation. Wisdom is the intellectual virtue that allows you to recognize that and change your cultivated response to something better. For example, in the civil rights movement, people were able to say: normally, following the law is the moral thing to do, but right now we realize that it isn’t, and in fact civil disobedience in this context is morally necessary. 

Practical wisdom is built up by practice just like all the other virtues, so if you don’t have the opportunity to reason and don’t have practice in deliberating about certain things, you won’t be able to deliberate well later. We need a lot of cognitive exercise in order to develop practical wisdom and retain it. And there is reason to worry about cognitive automation depriving us of the opportunity to build and retain those cognitive muscles. That’s the risk of intellectual and moral deskilling. It’s already happening and I think we have to resist it. 

When I try to give a charitable read on the transhumanist trend we’re seeing, I think the core emotion underlying it is shame about the human condition. And after two world wars, the use of nuclear weapons, a climate crisis, and so on, it kind of makes sense that humanity would be feeling this shame. So I can psychologically understand how there might be an impulse to run away from all that humanness and move toward machines we think will be more objective, even though I don’t agree with it. And I think the place where I struggle is, how do we know to what extent it does make sense to transform ourselves using technology, without sinking into a profound anti-humanism?

There’s a kind of emptiness in transhumanism in that it doesn’t know what we ought to wish for; it just wishes for the power to create something else — to create freedom from our bodies, from death, from our limitations. But it’s always freedom from, never freedom for. Freedom for what? What is the positive vision that we want to move toward?

There’s a deep optimism I have about the human condition. I think morality is not just driven by fear — it’s driven even more by love, by the experience of mutual care and solidarity. Our very first experience of the good is being cared for by another, whether that’s a mother or father or nurse. To me everything else is about just trying to pursue that in new and more expansive forms. So for me there is a freedom for, and it’s rooted in what it is to be a human animal.

Could there be other creatures who are better than we are? I actually think that’s a question that doesn’t make sense. Better than what? They might be better at being what they are, but I think morality is rooted in a particular form of existence that you have. We exist as a particular kind of social, vulnerable, interdependent animal with a lot of excess cognitive energy. All those things factor into what it is to be moral as a human. For me, this abstraction — the idea of some pure universal morality that creatures who are completely unlike us could somehow do better than we can — I think that just fundamentally misunderstands what morality is.
