Gemini fallout: Former Google employee warns of ‘terrifying patterns’ in company’s AI algorithms

A former high-level Google employee said "terrifying patterns" were discovered in Google's core products and hypothesized how bias may have entered the Gemini artificial intelligence (AI) chatbot.

Whenever there is a problem inside a Google product, the company has a reporting system called "Go Bad" that can be used to document potentially harmful content, according to the source.

For example, if a user was on Google Image Search and found a result offensive, they could file a report, either internally or externally, which would then be routed to the appropriate trust and safety team.

In 2020, this system did not differentiate between different types of issues, such as identity, diversity, inclusion and discrimination, the former employee claimed. There was no index to find one type of issue versus another.

DOES GOOGLE WANT PEOPLE TO BE 'WOKE'? FORMER EMPLOYEE REVEALS COMPANY RESPONSE TO TRUMP, BIDEN AND BLM

"So, all of the reports about violence, child sexual abuse material (CSAM) and things that were like racist or sexist or, you know, anti-Christian or anti-Jewish or anti-Muslim were all in the same pot," the former Google employee, who spoke with Fox News Digital on the condition of anonymity, said.

Google stated that they have always triaged and treated issues differently and said claims that they would treat something like CSAM in the same way as hate speech are "nonsensical" and "wrong."

From 2020 to 2022, Google built out a data schema that could track and identify those individual concerns.

Then, in 2022, a massive data analysis was done across all of those "Go Bad" reports for upwards of 30 Google products, including Voice Assistant and News.

The former employee said a report about the results was later filed, and after "terrifying patterns" were found, concerns were raised with Google about how the algorithm behind these products fundamentally operates.

The former employee said Google's core algorithms power many different products, such as YouTube, Google Search, Google Image Search and Google News, with alterations for each use case. Internal products also utilize variations of this algorithm, making it, as the former employee described it, the company's "money maker."

Google disputed this account, saying it mischaracterizes a routing improvement to one of the company's many channels for product feedback. They also said there is no single algorithm underlying all Google products.

Google has made substantial changes to its algorithm throughout the years. In June 2019, the company announced new changes for Google Search designed to provide "more site diversity" in results. The change meant that users usually wouldn't see more than two listings from the same site in top results. Other ranking changes have also been made.

Google said these updates were made because people wanted to see a broader range of sites in their results. 

In May 2023, Google announced it was using generative AI to add new ways to "find and explore diverse perspectives on Search." The changes added a "Perspective" filter that shows information people share on social media platforms and discussion boards.

FORMER GOOGLE EMPLOYEE: HOW A 'CODE RED' MEETING AND CHATGPT LED EXECS TO TAKE 'SHORTCUTS' IN GEMINI AI LAUNCH

The company also said the AI capabilities would help them evaluate "experience" as an element of helpful content, and that Google would continue to focus on information quality and critical attributes such as authoritativeness, expertise and trustworthiness.

The former employee expressed concern that many of the claims of increased diversity and the particulars of how information is ranked on Google were extremely general. The information given publicly lacked detail about how the algorithm behind the products works.

Google pushed back on the assertion that changes were not transparent, saying it makes thousands of improvements to Search every year and maintains a public website with detailed information about how Search works and how it approaches changes. 

Earlier this year, Google faced intense backlash when the image generation feature in the Gemini AI chatbot produced "diverse" examples of historical figures, including the Founding Fathers and Popes. Google later apologized and paused the feature, but how did this happen? 

Over the last several years, Google created and scaled two different AI research teams, Media Understanding for Social Exploration (MUSE) and Skin Tone, which were adopted into over 50 Google products.

According to Google, MUSE "builds AI-enabled technology designed to understand patterns in how people are portrayed in mainstream media and to inspire more equitable content."

The skin tone palettes used in gadgets and apps to promote "image equity" were announced in May 2022.

Google says the new scale was designed to be more "representative" of all skin tones and found that the new model is more inclusive than the current tech industry standard, "especially" for people with darker skin tones.

FORMER GOOGLE CONSULTANT SAYS GEMINI IS WHAT HAPPENS WHEN AI COMPANIES GO 'TOO BIG TOO SOON'

The former employee said the implementation of this scale and other features to promote diversity makes sense in certain situations. For example, if you searched for bridal outfits, you would want a diversity of people wearing the clothing to accommodate all groups.

Now, suppose the diversity model is built into the search algorithms and is not turned off in a generative model. In that case, the AI will not be able to account for historical context and will provide inaccurate results. The former employee suggested that this is likely what happened with Gemini.

"If I say show me the Pope and you show me a Black guy, I'm going to be pissed. But if I say show me an image of like, you know, a group of people playing and it shows only White people, I'm also going to be pissed," the source said.

Google pushed back on the assertion that this had anything to do with the Gemini images issue and pointed to a blog post put out in February that explained what they believe the issue was. They also said the skin tone work had no overlap with Gemini image generation and is intended to do things like improve the lighting in the cameras on phones.

"Once again, these are the rehashed opinions of a single former employee who mischaracterizes how our processes work and who clearly didn't have visibility into how decisions were made. Our business is built on users trusting us to provide accurate information, so we have a clear business objective in keeping our products free of bias," a Google spokesperson told Fox News Digital.

Bias can also enter an AI system in other ways. Because massive amounts of text are plugged into these large language models (LLMs) to train them, companies like Google do not have the tools or time to review all the data.

According to the former employee, the absence of certain terms, identities, or words can mean that the model has a negative relationship with them.

If the model is given a bunch of stories about families, but there are no examples of single parents or that group is significantly underrepresented in the data, it may not be able to generate a story about a family with just one parent.
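
The underrepresentation problem can be made concrete with even a crude corpus audit. The sketch below is purely illustrative and assumes nothing about Google's actual tooling; the corpus, keyword lists and group labels are all invented. It simply counts how often each family structure is mentioned, which is the kind of imbalance the former employee says a model can inherit.

```python
from collections import Counter

# Purely hypothetical audit: count how often each family structure is even
# mentioned in a toy training corpus. What rarely appears here is hard for a
# model trained on this data to generate later.
corpus = [
    "Mom and Dad took the kids camping.",
    "The mother and father planned the birthday party.",
    "Both parents attended the school recital.",
    "A single father packed the lunches before work.",
]

keywords = {
    "two_parent": ["mom and dad", "mother and father", "both parents"],
    "single_parent": ["single mother", "single father", "single parent"],
}

counts = Counter()
for doc in corpus:
    text = doc.lower()
    for group, phrases in keywords.items():
        if any(phrase in text for phrase in phrases):
            counts[group] += 1

total = sum(counts.values())
for group in keywords:
    share = counts[group] / total if total else 0
    print(f"{group}: {counts[group]} of {total} matching documents ({share:.0%})")
```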

"Generative models are not factual databases. What they are is mathematical guessing engines. And that's where you get hallucinations because they are trained to take your prompt and predict what words should come next based on what you gave it. They'll make s--- up because they don't know that things aren't right," the former employee said.

IS GOOGLE TOO BROKEN TO BE FIXED? INVESTORS 'DEEPLY FRUSTRATED AND ANGRY,' FORMER INSIDER WARNS

Language models create embeddings, which map words with similar semantic contexts or themes. For example, the theme of leadership might include the words King, Queen, castle, horse, and moat.

The AI takes that text and converts it into numerical representations called tokens. It then creates a map of what the language means, known as a vector database.

"Say, generate an image of a man, OK? It doesn't actually understand what you're saying. What it's doing is it's taking your prompt. It's searching across that embedding database to understand meaning. And then it's mathematically predicting every word that it should return to you," the source added.

The former employee said AI bias can be fixed in three places within the model: pre-processing fairness, in-processing fairness and post-processing fairness.

Pre-processing fairness is checking the data and ensuring there is no private information as the company prepares to train the model.

In-processing fairness is checking to see what the AI learns and making sure it is not learning the wrong things and adjusting accordingly.

Companies like Google can create bias in post-processing fairness through a "policy fix." This means that a tech company tells the algorithm that an image it has deemed harmful cannot appear before a certain point in the results, such as within the first 10,000 images.
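
A minimal sketch of such a "policy fix" might look like the following. The flag list, cutoff and result IDs are hypothetical; the point is that the model's scores are left untouched and only the order shown to the user is edited after the fact.

```python
# Hypothetical "policy fix": the model's ranking is left untouched, but items
# a policy team has flagged are pushed behind a cutoff before the results are
# shown. The flag list, cutoff and IDs are invented for illustration.
FLAGGED = {"img_1042", "img_2077"}   # result IDs deemed harmful by policy
CUTOFF = 10_000                      # flagged items may not appear before this rank

def apply_policy(ranked_ids):
    """Re-order model output so flagged items never appear before the cutoff."""
    allowed = [i for i in ranked_ids if i not in FLAGGED]
    blocked = [i for i in ranked_ids if i in FLAGGED]
    # Scores are never recomputed; only the order shown to the user changes.
    return allowed[:CUTOFF] + blocked + allowed[CUTOFF:]

# The model ranked a flagged image third; the policy layer demotes it to last.
print(apply_policy(["img_0001", "img_0002", "img_1042", "img_0003"]))
```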

GOOGLE GEMINI USING ‘INVISIBLE’ COMMANDS TO DEFINE ‘TOXICITY’ AND SHAPE THE ONLINE WORLD: DIGITAL EXPERT

In academic literature, post-processing fairness is called "fairness gerrymandering." This is when a company cannot or will not change the model based on the data that went into it or the associations it learned, so researchers instead edit the output it returns. This is typically what critics are describing when they talk about how companies can alter the answers given by AI.

That is what happens when an AI model like Gemini, ChatGPT, Copilot, Grok, etc., says it cannot generate a response based on something like race. The model itself does not know that. The former employee claimed that is a layer put in the system between the AI and the people using it to make sure companies do not get in trouble.

"If someone gives a prompt with particular queries, you know, have it default to a general response, that doesn't mean that the model can't produce the response, but that they've edited the way that it appears to the user to make it look more fair," the former employee said.

"It looks fair, but it's not fair," they added.
