
Seven tips for talking to children and young people about generative AI


For most of us, generative AI (GenAI) has moved from novelty to everyday infrastructure astonishingly fast. Many adults now use tools like chatbots at work or casually, and many children are already encountering them through homework “help”, entertainment, or social sharing.

Unsupervised use of generative AI can expose children and young people to confidently presented misinformation, manipulative “keep chatting” dynamics, and inappropriate or emotionally risky content. The tone and conversational dynamics of many chatbots can encourage secrecy and over-reliance, or mimic authority without real understanding or duty of care. In school contexts, GenAI can quietly undermine learning, turning homework and writing into shortcuts rather than skill-building.

I’ve helped create new school resources on GenAI, including guidance for parents. But the most effective safety measures still depend on adults setting boundaries, modelling critical thinking, and staying close enough to a child’s digital life to notice what’s changing in it. What follows are some practical ways to talk about, assess, and limit younger people’s GenAI use.

1. Begin with curiosity – not crackdowns

If you start by telling a child that they shouldn't use GenAI, you may push their current and future use underground. A better opener is to ask them to show you the AI tools they already use. Ask what they like about them, what they help with, and what they'd never use them for. The initial aim is to normalise talking about AI, not to normalise unrestricted use.

From here it's easier to acknowledge that these are powerful and intriguing tools, but not a person, not an authority, and not free of risk.

2. Don’t treat stated age limits as optional

An awkward reality many parents have missed is that most popular AI services set 13 as a minimum age, with parental permission required under 18. OpenAI states that ChatGPT "is not meant for children under 13" and requires parental consent for users aged 13 to 18. The chatbot ecosystem is inconsistent, however: Anthropic requires Claude users to be 18 or over, explicitly citing heightened risks for younger users, while Google allows supervised access to Gemini for under-13s via parent-enabled controls.

Your practical rule should be to treat age limits as a clear safety signal rather than a box-ticking exercise. If a service says “13+” or “18+”, that’s telling you something about risk, content exposure and the likelihood of harm from unsupervised use by young people.

3. Encourage fact-checking

Children (and indeed plenty of adults) can mistake confidence for correctness. When talking about GenAI with children, emphasise that AI chatbots can and regularly do "hallucinate": they invent plausible-sounding details and mix fabrication with fact. The key lesson is that their fast, fluent, confident responses can conceal inaccuracies both large and small.

Encourage young people to check what GenAI tells them. Pheelings media/Shutterstock

Encourage them to verify anything important – news, health claims, legal questions, school facts, or anything they might repeat as "true" – against a reliable source.

4. Help them know when to stop

Large language models (LLMs) are designed to keep conversation flowing. They compliment, encourage, reassure and suggest what to do next. This may be helpful for brainstorming but it’s potentially dangerous for emotionally loaded topics where a young person is vulnerable, impressionable, or isolated.

Recent litigation around “companion” chatbots has alleged that vulnerable young users were pulled into harmful spirals, including self-harm risk and secrecy from parents. These are complex and unfolding cases, but they are serious enough to treat as a major warning sign about unsupervised, open-ended AI conversations for minors.

Parents and teachers should name a firm boundary: no chatbot is a counsellor, therapist, or trusted confidant. If a conversation becomes sexual, self-harm related, frightening, or intensely personal, the rule should be to stop and speak to a trusted adult.

5. Don’t feed the machine personal data

Young people often understand privacy better when it’s framed as something tangible. Some rules: don’t share a full name, address, school, phone number, or identifiable photos. Don’t upload private documents or screenshots. Don’t paste in other people’s personal information. If you wouldn’t post it on a public noticeboard, don’t paste it into a chatbot.

6. AI should support the work, not do the work

GenAI poses an educational risk that deserves far more attention: cognitive off-loading. This happens when the tool performs the thinking step – the learner may finish faster, but will learn less. Research is increasingly linking heavier AI reliance with reduced critical thinking and lower cognitive effort, with off-loading and automation bias proposed as mechanisms. A practical way to explain this to young people is that “AI can help you learn, but it can also help you avoid learning”.


Read more: How generative AI is really changing education – by outsourcing the production of knowledge to big tech


If you're helping with homework, allow GenAI for explaining a concept in simpler terms or giving feedback on a draft. Don't allow it to write the essay, answer the homework questions directly, or produce a solution the student can't explain.

7. Make AI use visible and social

Where AI use is permitted, aim to reduce secrecy. Use AI in shared spaces at home. Set agreed times, not late-night private use. Coordinate with other adults: parents should share their concerns and approaches with other parents and with school staff.

We should treat generative AI as we wish we'd treated social media much earlier: not as just another app, but as a behavioural technology that shapes attention, learning, confidence and relationships. Being AI aware is not about panic. It's about adults building enough knowledge and confidence to guide children toward safe, age-appropriate, genuinely educational use while regulation and curriculum development catch up.

Dónal Mulligan has received research funding from the EU's Erasmus+ program. He is affiliated with Webwise, the Irish Internet Safety Awareness Centre, and with Media Literacy Ireland.
