Plans must be made for the welfare of sentient AI, animal consciousness researchers argue

Computer scientists need to grapple with the possibility that they will accidentally create sentient artificial intelligence (AI) — and to plan for those systems’ welfare, a new study argues.

The report, published on Thursday, comes from an unusual quarter: specialists in the frontier field of animal consciousness, several of whom were signatories of the New York Declaration on Animal Consciousness.

As The Hill reported in April, that declaration argued that it was “irresponsible” for scientists and the public to ignore the growing evidence of widespread sentience across the animal kingdom.

The AI welfare report builds on a moral and intellectual framework similar to that of April's animal consciousness declaration: the idea that humans tend to perceive sentience only in their own image, creating risks for both the beings they live among — or create — and themselves.

Data suggesting sentience in birds and mammals — and even crabs and shrimp — far outweighs any evidence for self-awareness in the cutting-edge machine tools humans are developing, acknowledged Jeff Sebo, a bioethicist at New York University who co-wrote both the AI welfare report and the animal consciousness declaration.

But while the probability of creating self-aware artificial life over the next decade might be “objectively low,” it’s high enough that developers need to at least give it thought, Sebo said.

While it is generally assumed that consciousness in humans — or, say, octopuses — arose by accident, humans are actively tinkering with AI in a way deliberately intended to mimic the very characteristics associated with consciousness.

Those include “perception, attention, learning, memory, self-awareness” — abilities that may have gone hand-in-hand with the evolution of consciousness in organic life.

Consciousness research is currently the site of fierce debate over what the preconditions of consciousness really are: whether it requires squishy cells made of chains of carbon molecules, or a physical body.

But Sebo said there is little we currently understand about consciousness that forecloses the possibility that AI developers could create conscious systems accidentally, in the process of trying to do something else — or intentionally, because “they see conscious AI as safer or more capable AI.”

In some cases, the work of developing these systems is a literal attempt to mimic the structures of likely-sentient organic life. In findings published in Nature in June, Harvard and Google’s DeepMind created a virtual rat with a simulated brain that was able to emulate the flesh-and-blood rodents’ “exquisite control of their bodies.”

There is no particular reason to believe that the digital rat — for all the insight it provided into how vertebrate brains function — was self-aware, though DeepMind itself has a job posting for a computer science PhD able to research “cutting-edge social questions around machine cognition [and] consciousness.” 

And sentience, as both animal researchers and parents of infants understand, is something entirely separate from intelligence.

But in a sense, this is the problem Sebo and his coauthors are raising in a nutshell. They contend that developers — and the public at large — have evolutionary blind spots that have set them up poorly to deal with the age of possibly-intelligent AI.

“We’re not really designed by evolution and lifetime learning to be perceiving or tracking the underlying mechanisms,” said Rob Long, a coauthor of Thursday’s paper and executive director at Eleos AI, a research group that investigates AI consciousness.

Over billions of years, Long said, our lineage evolved “to judge the presence or absence of a mind based on a relatively shallow set of rough and ready heuristics about how something looks and moves and behaves — and that did a good job of helping us not get eaten.”

But he said that brain architecture makes it easy to misattribute sentience where it doesn’t belong. Ironically, Sebo and Long noted, that makes it easiest to attribute sentience to those machines least likely to have it: chatbots.

Sebo and Long argued this paradox is almost hard-wired into chatbots, which increasingly imitate a defining characteristic of human beings: the ability to speak fluently in language, a characteristic that companies like OpenAI have bolstered with new models that laugh, use sarcasm and insert “ums” and other vocal tics.

Over the coming decades, “there will be increasingly sophisticated and large-scale deployments of AI systems framed as companions and assistants in a situation where we have very significant disagreement and uncertainty about whether they really have thoughts and feelings,” Sebo said.

That means humans have to “cultivate a kind of ambivalence” towards those systems, he said: an “uncertainty about whether it feels like anything to be them and whether any feelings we might have about them are reciprocated.”

There is another side to that ambivalence, Sebo said: the possibility that humans could deliberately or accidentally create systems that feel pain, can suffer or have some form of moral agency — the ability to want things and try to make them happen — capacities that, he argued, sit poorly alongside the things that computer scientists want those systems to do.

In the case of animals, the consequences of under-ascribing sentience are clear, Sebo noted. “With farm animals and lab animals, we now kill hundreds of billions of captive farmed animals a year for food, and trillions of wild-living animals per year — not entirely but in part because we underestimated their capacity for consciousness and moral significance.”

That example, he said, should serve as a warning — as humans try to “improve the situation with animals” — of what mistakes to avoid repeating with AI.

Sebo and Long added that another major problem for humans trying to navigate the new landscape, aside from a species-wide tendency to see sentience in — but only in — that which looks like us, is a pop-culture landscape that wildly mischaracterizes what actually sentient AI might look like.

In movies like Pixar’s WALL-E and Steven Spielberg’s A.I. Artificial Intelligence, sentient robots are disarmingly human-like, at least in some key ways: they are single, discrete intelligences with recognizably human emotions who live inside a body and move through a physical world.

Then there is Skynet, the machine intelligence from the Terminator series, which serves as a magnet for AI safety conversations and thereby constantly draws popular discourse around emerging computer technologies back toward the narrative conventions of a 1980s action movie.

None of this, Sebo argued, is particularly helpful. “With AI welfare, truth could be stranger than fiction, and we should be prepared for that possibility,” he said.

For one thing, digital minds might not be separate from each other in the way that human and animal minds are, Sebo said. “They could end up being highly connected with each other in ways that ours are not. They could have neurons spread across different locations and be really intimately connected to each other.” 

That form of consciousness is potentially more akin to that of an octopus, which has a central brain in its head and eight smaller, semi-independent brains in its arms.

AI, Sebo said, could bring “an explosion of possibilities in that direction, with highly interconnected minds — and questions that arise about the nature of self and identity and individuality and where one individual ends and where the next individual begins.”

No matter what form potential AI consciousness may ultimately take — and whether it is possible at all — Sebo, Long and their coauthors argued that it is incumbent on AI developers to begin acknowledging these potential problems, assessing how they fit into the tools they are building and preparing for a possible future in which those tools are some flavor of sentient.

One possible idea of what this could look like has been offered by the University of California, Riverside philosopher Eric Schwitzgebel, who has argued for a policy of “emotional alignment,” in which the degree of sentience an AI program presents should be directly related to how sentient it is likely to be.

If humans someday design sentient AIs, Schwitzgebel has written, “we should design them so that ordinary users will emotionally react to them in a way that is appropriate to their moral status. Don't design a human-grade AI capable of real pain and suffering, with human-like goals, rationality, and thoughts of the future, and put it in a bland box that people would be inclined to casually reformat.”

And, by contrast, “if the AI warrants an intermediate level of concern — similar, say, to a pet cat — then give it an interface that encourages users to give it that amount of concern and no more.”

That is a policy, Sebo acknowledged, that would force the chatbot and large language model industry into a dramatic U-turn. 

Overall, he said, he and the new article's other co-authors wrote it to force conversation on an issue that must be confronted before it becomes a problem. “And we think that it would be good for people building these extremely capable, complex systems to acknowledge that this is an important and difficult issue that they should be paying attention to.”
