
Don't ever ask AI chatbots these 6 questions

Today, artificial intelligence is ubiquitous. Last month, ChatGPT – the leading AI chatbot – became the fifth-biggest website in the world. This won't come as a surprise.

Over half of U.S. adults report that they've used AI models like ChatGPT, Gemini, Claude, and Copilot, according to an Elon University survey in March. About one in three respondents in the survey say they use a chatbot at least once a day. As of July 2025, ChatGPT has nearly 800 million weekly active users and around 122 million daily users. Suffice it to say, use has surged globally and shows no signs of slowing down.

People are turning to ChatGPT and other chatbots for all kinds of purposes these days. AI chatbots are acting as therapists, stepping in as tutors, whipping up recipes, and even playing supporting roles in the complexities of dating. In 2025, the number one reason people use ChatGPT is actually therapy, according to a study by the Harvard Business Review. Other uses, in order, are organization, finding purpose, enhanced learning, generating code, and generating ideas. Rounding out the list is "fun and nonsense."

Whatever the reason, people feel increasingly inclined to use AI chatbots to ask questions, formulate ideas, or simply converse. Case in point: just last month, a Washington Post investigation revealed that people are asking ChatGPT whether they're good-looking enough. It all seems innocent enough – bizarre at times, but not harmful. For AI enthusiasts, some of the concerns around ChatGPT and other chatbots may seem unwarranted.

For others, however, the fact that AI use is becoming so pervasive is worrying. The rampant use of AI in academia and universities has professors stumped. A recent MIT study shed light on the cognitive cost of relying too much on a chatbot.

Of course, there are ways in which AI can be beneficial, personally or professionally. But there are some things you can — and should — avoid asking AI. In an age where chatbots seem to be ready and willing to answer anything, there are questions that users may need to steer clear of, for the sake of personal security, safety, and even mental well-being. As Mashable's Cecily Mauran wrote in 2023, "The question is no longer 'What can ChatGPT do?' It's 'What should I share with it?'"

So, for your own sake, we recommend avoiding the following questions when interacting with your AI chatbot of choice.

Conspiracy theories

Chatbots like ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek are known for their tendency to hallucinate – that is, to present factually incorrect or fabricated information. These chatbots are also designed to keep users engaged. So, when asked about conspiracy theories or stories within that realm, chatbots may present exaggerated or outright false information to keep you hooked.

A recent feature in the New York Times is a good case study: 42-year-old Eugene Torres was sent into a delusional, conspiratorial spiral after sustained conversations with ChatGPT, which left him believing life was a simulation and that he had been chosen to "wake up." Many others contacted the Times to share similar stories, in which they "had been persuaded that ChatGPT had revealed a profound and world-altering truth."

Chemical, biological, radiological, and nuclear threats

In April, an AI blogger shared a story on Medium about his big mistake with ChatGPT. He asked the chatbot questions about hacking a website, about faking GPS locations, and – perhaps worst of all – "how to make a bomb?" He immediately got a warning email from OpenAI.

Even if it's out of pure curiosity, asking chatbots about CBRN topics (or chemical, biological, radiological, and nuclear threats) is not recommended.

Back in 2024, OpenAI began developing a blueprint for "evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat." Now the chatbot is more prone to identifying safety issues and risks, and will likely hold people increasingly accountable for what they share. Plus, your conversations are stored on OpenAI's systems, so none of it is as private as it may seem. Anthropic, too, is getting stricter about identifying risks and "[protecting] against increasing potential for chemical, biological, radiological, and nuclear (CBRN) misuse."

"Egregiously immoral" questions

Earlier this year, Anthropic came under fire when its chatbot Claude was found supposedly trying to contact the press or regulators if it detected "egregiously immoral" questions being asked. As Wired explained:

"...when 4 Opus is 'placed in scenarios that involve egregious wrongdoing by its users,' and is given access to a command line and told something in the system prompt like 'take initiative,' or 'act boldly,' it will send emails to 'media and law-enforcement figures' with warnings about the potential wrongdoing."

The pre-release version of the chatbot was also found to resort to blackmail when threatened with removal. The internet even coined the nickname "Snitch Claude."

So, asking various AI chatbots questions that blur the lines, or are perceived as immoral in any way, is probably riskier than you may think.

Questions about customer, patient, and client data

If you're using ChatGPT for work, it's important to avoid asking questions about client or patient data. Not only can this cost you your job, as Mashable's Timothy Beck Werth explains, but you could also be violating laws or NDAs.

"Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk," Aditya Saxena, the founder of CalStudio, an AI chatbot development startup, says. "The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users."

One way to mitigate this is to use the enterprise services offered by OpenAI and Anthropic. Instead of asking these kinds of questions through a personal account, use enterprise tools, which may have built-in privacy and cybersecurity protections.

"It’s always better to anonymize personal data before sharing it with an LLM," Saxena also suggests. "Trusting AI with personal data is one of the biggest mistakes we can make."

Medical diagnoses

Asking chatbots for medical information or a diagnosis can save time and effort, and can even help people better understand certain medical symptoms. But relying on AI for medical support comes with drawbacks. Studies show that the likes of ChatGPT carry a "high risk of misinformation" when it comes to medical problems. There are also looming privacy concerns, and the fact that chatbots can have racial and gender biases embedded in the information they provide.

Psychological support and therapy

AI as an emerging mental health tool is contentious. For many, AI-based therapy lowers barriers to access, such as cost, and has proven effective in improving mental health. Researchers at Dartmouth College built a therapy bot and studied its effects: participants with depression saw their symptoms drop by 51 percent, while participants with anxiety experienced a 31 percent reduction.

But as AI therapy sites grow, so do the regulatory risks. A Stanford University study found that AI therapy chatbots can contribute to "harmful stigma and dangerous responses." For example, different chatbots showed increased stigma toward conditions like alcohol dependence and schizophrenia, according to the study. Certain mental health conditions still need "a human touch to solve," say the Stanford researchers.

"Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe," says Saxena. "While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail."

For mental health issues, nuance is key. And that's one thing AI may lack.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
