AI Ethics: An Overview


Artificial intelligence has progressed to the point where machines can perform tasks that people once thought only humans could do. This growing power makes AI ethics all the more important: we must use the technology in responsible ways.

For example, modern artificial intelligence is capable of understanding and creating art, carrying on intelligent conversations, identifying objects by sight, learning from past experience, and making autonomous decisions.

Organizations have deployed AI to accomplish a wide range of tasks. AI creates personalized recommendations for online shoppers, determines the content social media users see, makes health care decisions, determines which applicants to hire, drives vehicles, recognizes faces, and much more.

Given the countless business opportunities that this new technology brings, the global market for AI technologies has exploded over the past decade and is continuing to grow. Gartner estimates that customers worldwide will spend $65.2 billion on AI software in 2022, an increase of 21.3 percent from the previous year.

While AI technology is new and exciting and has the potential to benefit both businesses and humanity as a whole, it also gives rise to many unique ethical challenges.

Also see: Top AI Software 

Examples of Unethical AI

There is no shortage of news stories about unethical AI.

In one of the more well-known of these cases, Amazon used an AI hiring tool that discriminated against women. The AI software was designed to look through resumes of potential candidates and choose those that were most qualified for the position. However, since the AI had learned from a biased data set that included primarily male resumes, it was much less likely to select female candidates. Eventually, Amazon stopped using the program.

In another example, a widely used algorithm for determining need in healthcare was systematically assessing Black patients’ need for care as lower than white patients’ needs. That was problematic because hospitals and insurance companies were using this risk assessment to determine which patients would get access to a special high-risk care management program. In this case, the problem occurred because the AI model used health care costs as a proxy for health care need, without accounting for disparities in how white and Black populations access health care.
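
The core problem is that the proxy variable (health care cost) is correlated with race because of unequal access to care, so optimizing for the proxy quietly recreates the disparity. The following is a minimal sketch, using synthetic data and an assumed 40 percent access gap purely for illustration, of how ranking patients by a cost proxy under-selects a group even when true need is identical across groups:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
    need = rng.gamma(2.0, 1.0, n)            # true health care need, same distribution for both groups
    access = np.where(group == 1, 0.6, 1.0)  # group B receives less care per unit of need (assumed)
    cost = need * access + rng.normal(0, 0.1, n)

    k = n // 10                              # program capacity: top 10 percent of patients
    by_need = np.argsort(-need)[:k]          # who would be selected on true need
    by_cost = np.argsort(-cost)[:k]          # who is selected on the cost proxy

    print("share of group B selected by true need:", round(float(group[by_need].mean()), 2))
    print("share of group B selected by cost proxy:", round(float(group[by_cost].mean()), 2))

Selecting on the cost proxy drops group B’s share of program slots well below its share under true need, even though the ranking never looks at the group variable directly.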

But discrimination isn’t the only potential problem with AI systems. In one of the earliest examples of problematic AI, Microsoft released a Twitter chatbot called Tay that began sending racist tweets in less than 24 hours.

And a host of other, less widely publicized stories have raised concerns about AI projects that seemed transphobic, that violated individuals’ privacy, or, in the case of autonomous vehicles and weapons research, that put human lives at risk.

Challenges of AI Ethics

Despite the many news stories highlighting concerns related to AI ethics, most organizations haven’t yet gotten the message that they need to be considering these issues. The NewVantage Partners 2022 Data and AI Leadership Executive Survey found that while 91 percent of organizations are investing in AI initiatives, less than half (44 percent) said they had well-established ethics policies and practices in place. In addition, only 22 percent said that industry has done enough to address data and AI ethics.

So what are the key challenges that organizations should be addressing?

Bias

As we have already seen, perhaps the biggest challenge to building ethical AI is bias. In addition to the cases already mentioned, the AI criminal justice tool known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is one egregious example. The tool was designed to predict a defendant’s risk of committing another crime in the future. Courts, along with probation and parole officials, then used that information to help determine sentences and decide who received probation or parole.

However, COMPAS tended to discriminate against Black people. According to ProPublica, “Even when controlling for prior crimes, future recidivism, age, and gender, Black defendants were 45% more likely to be assigned higher risk scores than white defendants.” In actuality, Black and white defendants reoffend at about the same rate — 59 percent. But Black defendants were receiving much longer sentences and were less likely to receive probation or parole because of AI bias.

Because humans created AI and AI relies on data provided by humans, it may be inevitable that some human bias will make its way into AI systems. However, there are some obvious steps that should be taken to mitigate AI bias.
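
One such step is simply to measure how a model’s decisions are distributed across groups before the system is used. Here is a minimal sketch, with made-up audit data and assumed column names, of computing per-group selection rates and a disparate impact ratio; the 0.8 cutoff is the common “four-fifths rule” heuristic, not a legal or universal standard:

    import pandas as pd

    # Tiny illustrative audit set: each row is one person, with the group they
    # belong to and the model's decision for them (1 = selected / favorable).
    audit = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   1,   0,   1,   0,   1,   0,   0],
    })

    rates = audit.groupby("group")["selected"].mean()
    di_ratio = rates.min() / rates.max()

    print(rates)
    print(f"disparate impact ratio: {di_ratio:.2f}")
    if di_ratio < 0.8:
        print("Warning: selection rates differ substantially across groups; investigate before deployment.")

Checks like this do not prove a system is fair, but they catch the most glaring disparities before a tool reaches production.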

And while situations like the COMPAS discrimination are horrifying, some argue that, on the whole, AI is less prone to bias than humans are. Difficult questions remain about how thoroughly bias must be eliminated before an AI can be trusted to make decisions. Is it sufficient to create an AI system that is less biased than humans, or should we require that it have virtually no bias at all?

Data Privacy

Another huge issue in AI ethics is data privacy and surveillance. With the rise of the internet and digital technologies, people now leave behind a trail of data that corporations and governments can access.

In many cases, advertising and social media companies have collected and sold data without consumers’ consent. Even when it is done legally, this collection and use of personal data is ethically dubious. Often, people are unaware of the extent to which this is going on and would likely be troubled by it if they were better informed.

AI exacerbates all these issues because it makes it easier to collect personal data and use it to manipulate people. In some instances, that manipulation is fairly benign, such as steering viewers to movies and TV programs that they might like because they have watched something similar. But the lines get blurrier when the AI is using personal data to manipulate customers into buying products. And in other cases, algorithms might be using personal data to sway people’s political beliefs or even convince them to believe things that aren’t true.

Additionally, facial recognition software makes it possible to gather extensive information about people simply by analyzing photos of them. Governments are wrestling with the question of when people have the right to expect privacy in public. A few countries have decided that widespread facial recognition is acceptable, some others outlaw it entirely, and most draw the line somewhere in between.

Privacy and surveillance concerns present obvious ethical challenges for which there is no easy solution. At a minimum, organizations need to make sure that they are complying with all relevant legislation and upholding industry standards. But leaders also need to honestly consider whether their AI tools might be violating people’s privacy.

Transparency

As already mentioned, AI systems often help make important choices that greatly affect people’s lives, including hiring, medical, and criminal justice decisions. Because the stakes are so high, people should be able to understand why a particular AI system came to the conclusion that it did. However, the rationale for determinations made by AI is often hidden from the people who are affected.

There are several reasons for this. First, the algorithms that AI systems use to make decisions are often protected company secrets that organizations don’t want rival companies to discover.

Second, the AI algorithms are sometimes too complicated for non-experts to easily understand.

Finally, perhaps the most challenging problem is that an AI system’s decision is often not transparent to anyone, not even to people who designed it. Deep learning, in particular, can result in models that only machines can understand.

Organizational leaders need to ask themselves whether they are comfortable with “black box” systems having such a large role in important decisions. Increasingly, the public is growing uncomfortable with opaque AI systems and demanding more transparency. And as a result, many organizations are looking for ways to bring more traceability and governance to their artificial intelligence tools.
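
One common starting point is a model-agnostic explanation technique such as permutation importance, which reports how much a model’s accuracy depends on each input. It does not open the black box itself, but it gives reviewers something concrete to question. A minimal sketch on synthetic data, assuming scikit-learn is available:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real decision problem.
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the held-out score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
        print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")

If a feature that should be irrelevant to the decision, or one that acts as a proxy for a protected attribute, turns out to dominate the importances, that is a signal to investigate before the model is trusted with high-stakes decisions.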

Liability and Accountability

Organizations also need to worry about liability and accountability.

The fact that AI systems are capable of acting autonomously raises important questions about who should be held responsible when something goes wrong. This issue arises, for example, when autonomous vehicles cause accidents or even deaths.

In most cases, when a defect causes an accident, the manufacturer is held responsible for the accident and required to pay the appropriate legal penalty. However, in the case of autonomous systems like self-driving cars that make their own decisions, legal systems have significant gaps. It is unclear when the manufacturer is to be held responsible in such cases.

Similar difficulties arise when AI is used to make health care recommendations. If the AI makes the wrong recommendation, should its manufacturer be held responsible? Or does the practitioner bear some responsibility for double-checking that the AI is correct?

Legislatures and courts are still working out the answers to many questions like these.

Self-Awareness

Finally, some experts say that AI could someday achieve self-awareness. This could potentially imply that an AI system would have rights and moral standing similar to humans.

This may seem like a farfetched scenario that is only possible in science fiction, but at the pace that AI technology is progressing, it is a real possibility. AI has already become able to do things that were once thought impossible.

If this were to happen, humans could have significant ethical obligations regarding the way they treat AI. Would it be wrong to force an AI to accomplish the tasks that it was designed to do? Would we be obligated to give an AI a choice about whether or how it was going to execute a command? And could we ever potentially be in danger from an AI?

Also see: How AI is Altering Software Development with AI-Augmentation 

Key Steps for Improving Your Organization’s AI Ethics

The ethical challenges surrounding AI are tremendously difficult and complex and will not be solved overnight. However, organizations can take several practical steps toward improving their AI ethics:

    • Build awareness of AI ethics within your organization. Most people have either no familiarity or only a passing familiarity with these issues. A good first step is to start talking about ethical challenges and sharing articles that bring up important considerations.
    • Set specific goals and standards for improving AI ethics. Many of these problems will never completely go away, but it is useful to have a standard that AI systems must meet. For example, organizations must decide how much less biased than human decision-makers an AI system must be before it is used to make important decisions. And they need clear policies and procedures for verifying that AI tools meet those standards before entering production (one possible pre-production check is sketched after this list).
    • Create incentives for implementing ethical AI. Employees need to be commended for bringing up ethical considerations rather than rushing AI into production without checking for bias, privacy, or transparency concerns. Similarly, they need to know that they will be held accountable for any unethical use of AI.
    • Create an AI ethics task force. The field of AI is progressing at a rapid pace. Your organization needs to have a dedicated team that is keeping up with the changing landscape. And this team needs to be cross-functional with representatives from data science, legal, management, and the functional areas where AI is in use. The team can help evaluate the use of AI and make recommendations on policies and procedures.
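
As an example of the kind of standard-setting mentioned above, a release process can refuse to promote a model that has not passed agreed-upon checks. The sketch below is illustrative only: the threshold values and required artifacts are placeholders that each organization would have to choose for itself.

    from dataclasses import dataclass

    @dataclass
    class EthicsStandards:
        min_disparate_impact: float = 0.8  # minimum acceptable ratio of selection rates across groups (placeholder)
        require_explanations: bool = True  # model must ship with an explanation report

    def check_release(disparate_impact: float, has_explanation_report: bool,
                      standards: EthicsStandards = EthicsStandards()) -> None:
        """Raise if the model fails the organization's agreed ethics standards."""
        problems = []
        if disparate_impact < standards.min_disparate_impact:
            problems.append(f"disparate impact {disparate_impact:.2f} is below "
                            f"{standards.min_disparate_impact}")
        if standards.require_explanations and not has_explanation_report:
            problems.append("no explanation report attached")
        if problems:
            raise RuntimeError("model blocked from production: " + "; ".join(problems))

    # Example: this model would be blocked until the disparity is addressed.
    try:
        check_release(disparate_impact=0.72, has_explanation_report=True)
    except RuntimeError as err:
        print(err)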

AI offers tremendous potential benefits for both organizations and their customers. But implementing AI technology also carries the responsibility to make sure that the AI in use meets ethical standards.

Also see: Best Machine Learning Platforms 
