
What’s more likely to be sentient: an ant or ChatGPT?

Vox
Sentience is hot these days. | Pete Gamlen for Vox

Sentience is hot these days. Partly because of the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient? 

While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the capacity to have conscious experiences that are valenced, meaning they feel bad (pain) or good (pleasure). It matters for ethics, because a lot of people think that if an entity is sentient, it deserves to be in our moral circle: the imaginary boundary we draw around those we consider worthy of moral consideration. 

While our moral circle has expanded over the centuries to include more people and more nonhuman animals, there are some edge cases we’re collectively unsure about. Should insects have moral rights? What about future AI systems that could potentially become sentient? 

The philosopher Jeff Sebo is an expert on this; he literally wrote a book called The Moral Circle. And he argues that it’s helpful to investigate all potentially sentient beings — from bugs to future AIs — in broadly similar ways. After receiving a lot of reader questions about both bugs and AIs, and responding to both in recent installments of my Your Mileage May Vary advice column, I reached out to him to talk about how we assess sentience, whether it’s hypocritical to worry about AI welfare while killing insects without a second thought, and why he developed a thought experiment called “the rebugnant conclusion.” Our conversation, edited for length and clarity, follows.

How can we go about assessing whether some creature — say, an insect — is sentient?

Our understanding of insect sentience is still limited, in part because we still lack a settled theory of sentience. But we can make progress through “the marker method.” 

The basic idea [for this method] is that we can look for features in animals that correlate with feelings in humans. For example, behaviorally, we can ask: Do other animals nurse their wounds? Do they respond to analgesics like we do? And anatomically, we can ask: Do they have systems for detecting harmful stimuli and carrying that information to the brain? 

This method is imperfect — the presence of these features is not proof of sentience, and the absence is not proof of non-sentience. But when we find many of these features together, it can count as evidence.

What do we find when we look for these features in insects? In at least some insects, there are systems for detecting harmful stimuli, pathways for carrying that information to the brain, and brain regions for integrating information and supporting flexible decision-making. For example, some insects become more sensitive after an injury, and they also weigh the avoidance of harm against the pursuit of other goals. Some insects also engage in play behaviors — you can find cute videos of bumblebees playing with wooden balls — suggesting that they may be able to experience positive states like joy. Again, none of this is proof of sentience. None of it establishes certainty. But it does count as evidence.

You’ve said that you think insects are about 20-40 percent likely to be sentient. How do you personally deal with bugs that come into your home? 

For me, taking insect welfare seriously means reducing harm to insects where possible. If I find a lone insect in my apartment, I try to safely relocate them if possible. In cases where killing them is genuinely necessary, I at least try to reduce their possible suffering, for example by crushing rather than poisoning them. And, in cases where harmful methods like poisoning seem genuinely necessary, I take this as a sign that structural changes are needed, such as infrastructure changes that reduce human-insect conflict or humane insecticides that kill insects with less suffering.

Caring for individual insects is valuable not only because of how it affects the insects, but also because of how it affects us. 

When I take a moment out of my day to help insects, it conditions me to see them as potential subjects, not mere objects. And if enough people take a moment out of their day to do this, it can contribute to a broader norm of seeing insects this way. This might lead not only to more care for individual insects but also more attention for insect welfare research and policy.

You’ve written that, hypothetically, we could end up determining that large animals like humans have greater capacity to suffer but that small animals like insects have more suffering in total, because there are just so many of them (1.4 billion insects for every person on Earth!). 

Utilitarianism says we have a moral obligation to maximize aggregate welfare, which would imply that we should prioritize insect welfare over human welfare. But most of us would balk at that conclusion. Would you? 

Here we need to distinguish what utilitarianism says in theory and what it says in practice. In theory, utilitarianism says that if a large number of insects experience more happiness in total than a small number of humans, then the welfare of the insects carries more weight, all else being equal. 

This is related to what philosophers like Derek Parfit call “the repugnant conclusion.” They observe that if what matters is total welfare, then it may be better to create a large number of individuals whose lives are barely worth living than a small number of individuals whose lives are very much worth living, as long as it adds up to more happiness overall. I use the term “the rebugnant conclusion” to refer to this idea as it applies in the multi-species context.

In practice, though, utilitarian reasoning is more complex. Yes, we should promote welfare, but we should also respect rights, cultivate virtuous characters, cultivate caring relationships, uphold just political structures, and so on — since this kind of pluralistic thinking tends to do more good than trying to promote welfare by itself would do.

Utilitarianism also says that we should work within our limitations. We currently have greater knowledge, capacity, and political will for helping humans than for helping insects, and this shapes how much care we can sustain. I think this makes sense, and for me, the upshot is we should gradually increase care for insects while building the knowledge, capacity, and political will we need to do more.

To me, the “rebugnant conclusion” is a reductio ad absurdum that shows how utilitarianism falls short as a moral theory. I just don’t think we can expect humans to care more for insects than they do for themselves and other humans; it ignores the fact that we are biologically hardwired to ensure our own survival and thriving, and that’s an inextricable part of our nature as human moral agents. I’d argue it makes more sense to reject utilitarianism than to ignore that. But it seems like you’d rather keep utilitarianism and just accept the rebugnant conclusion that comes from it — why?

I disagree that this is a reductio for utilitarianism, for at least a couple of reasons. First, I think that this conclusion is more plausible than it might initially appear.

Think about our duties to other nations and future generations as an analogy. Their interests carry more weight than ours do, all else being equal. But we can still be warranted in prioritizing ourselves to an extent for a variety of relational and practical reasons, all things considered. The question is how to strike a balance between impartial and partial reasoning in everyday life. Here, I think that considering the welfare stakes for distant strangers can be a helpful corrective, since it can lead us to care for them more than we otherwise might, while still tending to relational and practical realities. My view is that we should approach our duties to other species in the same kind of way, and this seems like a plausible enough takeaway to me.

Second, every major ethical theory can seem implausible in at least some cases. Suppose that we share the world with a large number of insects and a small number of advanced AIs. Now, suppose that the insects have more welfare in total, the AIs have more on average, and humans fall somewhere in between. To the extent that welfare matters for decision-making, whose interests should take priority, all else equal? 

If total welfare is what matters, we should say the insects. If average welfare is what matters, we should say the AIs. Either way, this implication will conflict with our default stance of human exceptionalism. 

But part of the point of ethics is to correct for our biases, and this may be what we should do here. In retrospect, we should not have expected the interests of 8 billion members of one species to carry more weight than the interests of quintillions of members of millions of species combined. 

When writing about the possibility of bug sentience, you’ve also written about the possibility of AI sentience. And you’ve said that future AI minds might have a lower chance of being sentient than biological minds, but “even if they do, the astronomically large size of a future artificial population could be more than enough to make up for that.” If we end up in a scenario with a gigantic population of AI minds, do you think we should prioritize their welfare over human welfare? Or is it unreasonable to demand that kind of impartiality from humans?

This is a great question. In my answer to the previous question, I considered a scenario where AIs have the most welfare on average but the least in total. But we can also imagine scenarios where AIs are so complex and so widespread that if they have a realistic possibility of being sentient at all, then they have the most welfare both on average and in total. 

In that situation, insofar as welfare impacts are a factor in moral decision-making at all, as I think they clearly ought to be, a range of reasonable views might converge on the conclusion that the AIs merit priority, all else being equal. 

Of course, as I emphasized in my previous answers, whether we should prioritize them, all things considered, in that scenario is a further question, and it depends on a lot of further relational and practical details. But we should at the very least extend them a great deal of care in that scenario, as we should for other animals.

With that said, a complication is that if we do eventually share the world with a large number of advanced AIs, which currently seems quite likely, then we may not be the only agents who determine what happens. After all, as AIs become more advanced and widespread, they may start to make decisions with us or even for us. In my view, it can help to consider how AIs should treat humans and other animals in these hypothetical future scenarios. And if we think that they should treat us with respect and compassion during their time in power, perhaps this is a sign that we should treat them with respect and compassion during our time in power — not only because how we treat AIs now might affect how they treat us later, but also because thinking about how we would feel in a position of vulnerability can help us better understand how we should behave in our current position of power.

What do you think is more likely to be sentient today: an ant or ChatGPT? I think it’s definitely the former, so it seems bizarre to me that some people spend a lot of time worrying about whether current AI systems may be sentient, while at the same time killing insects without a second thought or eating animals from factory farms. Why do you think this is happening — and is it hypocritical? 

I agree that an ant is more likely to be sentient than ChatGPT today. But I also think that near-future AIs will be more likely to be sentient than current ones. Companies are racing to build AIs with advanced perception, attention, memory, self-awareness, and decision-making. We have no way of knowing for sure if the companies will succeed, or if these capacities suffice for sentience. But we also have no way of ruling it out at this stage, and even a realistic possibility warrants taking the issue seriously now.

At minimum, I think that means acknowledging AI welfare as a serious issue, assessing models for welfare-relevant features, and preparing policies for treating them with appropriate moral concern. Otherwise, we risk repeating the mistake we made with animals: scaling up industrial uses of them in ways that make it harder to treat them well once the evidence of sentience is stronger.

With that said, I agree that caring a lot about AI welfare while not caring at all about animal welfare can involve a kind of hypocrisy. There are real differences between animals and AI systems, but there are also real similarities. In both cases, we have to make decisions that affect nonhumans without knowing for sure what, if anything, it feels like to be them. I think it helps to assess these issues in broadly similar ways while acknowledging the differences.
