
Why we need an AI safety hotline

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. Evaluations also suffer from limited scope—current tests are unlikely to reliably uncover the full set of risks posed by any one model, let alone all the risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools.

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should be able to trust that those colleagues can be counted on to act on them. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 current and former workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law.

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing unsafe or illegal corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment.

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediate, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation, as is how to recruit and retain volunteers. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few people step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline.

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government.

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.
