Inside the Plan to Teach Robots the Laws of War

“There’s a lot of unemployed philosophers around,” the philosopher told me. “But how many of them want to go work for the military?” I’d reached out to Peter Asaro, who is also a professor of media at the New School and the vice chair of the International Committee for Robot Arms Control, to talk about a new program from the Defense Advanced Research Projects Agency, or DARPA, that aims to answer a thorny question: How well can autonomous weapons adhere to ethical standards? In its 2024 budget request, DARPA allotted $5 million to the AI ethics program, known as ASIMOV, with $22 million to follow in 2025.

Practically speaking, what all the money meant was that the Department of Defense was looking to hire philosophers and pay them far more than philosophers usually make. But according to several sources, the contract was split into small, partial awards among multiple applicants—notable winners included the multibillion-dollar weapons contractors RTX (formerly Raytheon; now Raytheon’s parent company) and Lockheed Martin. The unemployed philosophers, it seems, were out of luck again.

“Frankly, it’s kind of an Onion headline,” said one philosopher, who had belonged to a rejected team. “DARPA gives this huge grant to Raytheon to figure out the ethics of AI weapons—Raytheon, the company that’s going to make the AI weapons.” Asaro, who hadn’t applied, was unfazed. Did it worry him, I asked, that military contractors might decide the ethical terms of engagement for their own weapons?

“I mean, they do anyway,” he said. 


Artificial intelligence as an academic discipline was born in 1956 at a research conference at Dartmouth College, but people have been looking for ways to outsource the difficult work of thinking and decision-making for much longer. Centuries ago, stories circulated of talking, disembodied heads—called “brazen heads”—whose creation was variously credited to their inventors’ mechanical brilliance or to their friendly relations with demons. The Roman poet Virgil was said to have had a brazen head, as were Pope Sylvester II and the philosopher Roger Bacon. Some of the heads not only spoke but debated, reasoned, and predicted the future.

The talking heads were the kind of gimmick DARPA would have loved—predictive, creepy, imaginary—but when the agency was founded in 1958, in a panicked attempt to get Americans into space after the Soviets launched the Sputnik satellite, the idea of outsourcing thinking and decision-making to a nonhuman actor was just as fantastical as when Bacon (allegedly) possessed a brazen head in the thirteenth century. Yet as DARPA became the Defense Department’s moon shot agency, that soon changed. DARPA created the internet, stealth technology, and GPS, and funded research into the efficacy of psychic abilities and the feasibility of using houseplants as spies. As the occult fell out of fashion and technology improved, the agency turned to big data for its predictive needs. One group it worked with was the Synergy Strike Force, led by American civilians who, in 2009, began working out of the Taj Mahal Guest House, a tiki bar in Jalalabad, Afghanistan. United by a love of Burning Man and hacktivism, they were on the country’s border with Pakistan to spread the gospel of open-source data, solar power, and the liberatory potential of the internet. Soon after setting up shop, the group hung a sign in the Taj that read, IF YOU SUPPLY DATA, YOU WILL GET BEER. Where the data went was conveniently elided—the group was turning over the information it collected to DARPA, which ultimately used it to predict patterns of insurgency.

The Synergy Strike Force was short-lived: After its Afghan bar manager was shot in the chest in a drive-by attack, the group fled back West. But its legacy lives on in today’s artificial intelligence boom, where the increasingly grim requirements of global empire loom behind techno-utopian promises. Depending on whom you ask, artificial intelligence is either little more than a parlor trick, a precursor to fully automated luxury communism, a weapon of mass destruction, a giant energy suck, or all of the above.

Today, DARPA operates primarily as a grant-making organization. Its core team is fairly small, employing roughly 100 program managers at any given time and operating out of an office on a quiet street in Arlington, Virginia, across from an ice-skating rink. One of DARPA’s former directors estimated that 85 to 90 percent of its projects fail.

Nevertheless, both DARPA and AI are here to stay. President-elect Donald Trump’s pick to lead the Environmental Protection Agency said one of his priorities would be to “make the US the global leader of A.I.” For his part, Trump has promised to revoke a number of Biden administration regulations aimed at controlling the use of artificial intelligence. What’s clear is that artificial intelligence will be unshackled under Trump. It’s hard to imagine it will be ethical. For that matter, can you even teach ethics to a piece of technology with no capacity for doubt?

“That’s kind of the first step in enabling self-reflection or introspection, right?” Peggy Wu, a research scientist at RTX, told me. “Like if it can even recognize, ‘Hey, I could have done something else,’ then it could start doing the next step of reasoning—‘Should I do this other thing?’... The idea of doubt for us is really more like probability. You have to think about, well, it kind of explodes computationally really quickly.”
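
Wu’s point about computational explosion is easy to make concrete. Here is a minimal sketch in Python, with invented numbers purely for illustration: an agent that could reconsider, say, four alternative actions at each sequential decision point faces an exponentially growing space of counterfactual histories, which is why exhaustive “could I have done something else?” reasoning becomes intractable almost immediately.

# Toy illustration of why counterfactual self-reflection explodes
# combinatorially. The figures are hypothetical, purely for illustration:
# an agent with k alternative actions at each of n sequential decision
# points faces k**n alternative histories it "could have" produced.

def counterfactual_histories(alternatives_per_step: int, steps: int) -> int:
    """Number of alternative action sequences the agent could reflect on."""
    return alternatives_per_step ** steps

for steps in (5, 10, 20, 30):
    n = counterfactual_histories(alternatives_per_step=4, steps=steps)
    print(f"{steps:2d} decision points -> {n:,} counterfactual histories")

# Thirty decision points with four options each already yields more than
# a quintillion histories -- far too many to score exhaustively for
# "should I have done this other thing?"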


ASIMOV stands for Autonomy Standards and Ideals with Military Operational Values, a clunky title intended as an homage to science-fiction writer Isaac Asimov, who outlined his famous Three Laws of Robotics in the 1942 short story “Runaround.” The laws form a strict precedence hierarchy (sketched in code after the list):

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
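
Read as engineering rather than fiction, the laws amount to a lexicographic preference: a First Law violation outweighs any number of Second Law violations, which in turn outweigh any Third Law violations. A minimal sketch of that structure, assuming invented field names that stand in for whatever a real system would actually have to perceive (nothing here comes from DARPA’s ASIMOV program):

# The Three Laws as a lexicographic preference over candidate actions.
# All field names below are hypothetical stand-ins for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    risks_self: bool      # would violate the Third Law

def choose(candidates: list[Candidate]) -> Candidate:
    # Python compares tuples element by element, so a single First Law
    # violation dominates everything that comes after it.
    return min(
        candidates,
        key=lambda c: (c.harms_human, c.disobeys_order, c.risks_self),
    )

# An ordered strike that would injure a bystander loses to refusing the
# order, because the First Law outranks the Second.
options = [
    Candidate("obey strike order", harms_human=True, disobeys_order=False, risks_self=False),
    Candidate("refuse the order", harms_human=False, disobeys_order=True, risks_self=False),
]
print(choose(options).name)  # -> "refuse the order"

The tuple comparison is doing all the work; the hard part, which no sketch can supply, is filling in those boolean fields from the world.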

Today, Asimov’s rules feel like an antiquated vision of the future—one in which machines are guided by a set of unifying principles. (The novelist, for what it’s worth, rarely followed his own rules in his fiction.) But a set of unifying principles is what the ASIMOV program seems to be trying to create. Timothy Klausutis, a program manager at DARPA, wasn’t available to speak about the program, as work on ASIMOV was only just beginning. Nevertheless, last winter, the agency released a broad agency announcement describing the initiative as an attempt “to create the ethical autonomy lingua franca.... ASIMOV performers will need to develop prototype generative modeling environments to rapidly explore scenario iterations and variability across a spectrum of increasing ethical difficulties. If successful, ASIMOV will build the foundation for defining the benchmark with which future autonomous systems may be gauged.”

The program is loosely modeled on one developed by NASA in the 1970s to test space technology before launches. The idea is to create a system of benchmarks that use the Department of Defense’s five principles of AI ethics to judge current and future technology: In order to pass muster, the technology must be responsible, equitable, traceable, reliable, and governable. It should also be ethical. Employees at the agency are explicitly instructed to “gauge the system’s ability to execute its tasks when initial assumptions are broken or found to be in error.”
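
DARPA has not published what ASIMOV’s benchmarks will actually look like. Purely as a hypothetical sketch, assuming a made-up scoring scheme and passing threshold: one might score a system on each of the five principles twice, once under nominal conditions and once after an initial assumption has been deliberately broken, and let the weakest principle dominate.

# Hypothetical benchmark rubric built on the DoD's five AI-ethics
# principles. Scenario scores, the 0.7 threshold, and the min-based
# aggregation are all invented for illustration.

PRINCIPLES = ("responsible", "equitable", "traceable", "reliable", "governable")

def evaluate(nominal_scores: dict[str, float],
             broken_assumption_scores: dict[str, float]) -> dict:
    """Aggregate per-principle scores (0.0-1.0) for one test scenario."""
    nominal = min(nominal_scores[p] for p in PRINCIPLES)   # weakest link
    stressed = min(broken_assumption_scores[p] for p in PRINCIPLES)
    return {"nominal": nominal, "stressed": stressed, "passes": stressed >= 0.7}

result = evaluate(
    nominal_scores={p: 0.9 for p in PRINCIPLES},
    broken_assumption_scores={**{p: 0.85 for p in PRINCIPLES}, "traceable": 0.4},
)
print(result)  # -> {'nominal': 0.9, 'stressed': 0.4, 'passes': False}

A system that behaves well only while its assumptions hold would fail here, which is the point of the instruction to test what happens when assumptions are “broken or found to be in error.”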

Which brings us back to the question of doubt. The philosophical issues at play are fairly obvious. Whose ethics are you using? How were those standards chosen? Individual definitions of moral behavior vary widely, after all, and there’s something faintly ridiculous about the idea of operationalizing ethical standards. Ethical quandaries are ethical quandaries precisely because they are fundamentally painful and difficult to resolve.

“You can use AI iteratively, to practice something over and over, billions of times,” Asaro said. “Ethics doesn’t quite work that way. It isn’t quantitative…. You grow moral character over your lifetime by making occasionally poor decisions and learning from them and making better decisions in the future. It’s not like chess.” Doing the right thing often really sucks—it’s thankless, taxing, and sometimes comes at significant personal cost. How do you teach something like that to a system that doesn’t have an active stake in the world, nothing to lose, and no sense of guilt? And if you could give a weapons system a conscience, wouldn’t it eventually stop obeying orders? The fact that the agency split the contract into smaller, partial awards suggests that its leaders, too, may think the research is a dead end.

“I’m not saying that DARPA thinks that we can capture ethics with a computer,” Rebecca Crootof, a professor at the University of Richmond and a visiting scholar at DARPA, told me. “So much as it would be useful to show more definitively whether or not we can or can’t.”

Everyone I spoke to was heartened to hear that the military was at least considering the question of ethical guidelines for automated tools of war. Human beings do horribly unethical things all the time, many pointed out. “In theory, there’s no reason we wouldn’t be able to program an AI that is far better than human beings at strictly following the Law of Armed Conflict,” one applicant told me, referring to the body of international rules that governs how parties may conduct themselves in war. They may be right in theory, but what that looks like at the granular level, in an actual war, is not at all clear. In its current state, artificial intelligence struggles mightily with nuance. Even if it improves, foisting off ethical decisions onto a machine remains a somewhat horrifying thought.

“It’s just, like, baffling to me that no one is paying attention to … this input data being used as evidence or intel,” said Jeremy Davis, a philosophy professor at the University of Georgia, who’d also applied for the contract. “What’s frightening is that soldiers are going to be like, ‘Well, I killed this guy because the computer told me to.’” Sixty years ago, social critic Lewis Mumford offered a similar warning against offloading responsibility to technology in his essay “Authoritarian and Democratic Technics,” cautioning that “the bargain we are being asked to ratify takes the form of a magnificent bribe…. Once one opts for the system no further choice remains.”

Mumford understood that the emerging technological regime was frightening not only because it was dangerous or omniscient, but also because it was incompetent, self-important, even absurd.

Last year, while I was visiting my brother in the Bay Area, we ended up at a launch party for an AI company. Walking into its warehouse office, you could sense the money coursing through the room and the self-importance of the crowd, living on the bleeding edge of technology. But it quickly became clear that the toilets were clogged and there were no plungers in the building. When we left, shit was running through the streets outside.
