The stigma around AI in journalism may be easing, but trust is still fragile

I tend to write about AI from the perspective of the bleeding edge, looking at how journalists and media companies are using the technology to change the way they work, reach new audiences, and transform their organizations. But the reality is that there’s a stigma around using artificial intelligence in the journalism community. In conversations I have with working reporters and editors, there’s clearly still a lot of reluctance, if not outright disdain, for using AI in almost any part of their work.

Looking at recent coverage of journalists using AI, however, you might think some of that disdain is going away. The Wall Street Journal recently profiled how Fortune business editor Nick Lichtenberg uses AI to turbocharge his output, sometimes writing as many as seven stories in a single day. The same day, Wired highlighted how several prominent reporters—including independents like Alex Heath and Taylor Lorenz as well as The New York Times’ Kevin Roose—use AI in various editorial tasks, sometimes in the writing itself. 

With all this, it feels as if a kind of dam has burst, and I don’t think it’s a coincidence that it’s happening at the same time Claude Cowork—which brings incredibly powerful agentic AI to everyone—has transformed the AI landscape. (An interesting aside buried in all this coverage of journalists’ use of AI is that it appears Claude is rapidly becoming what the Mac became among media pros: the platform of choice for creatives who “know better.”)


A cautionary tale in copy and paste

However, if the relationship between journalists and AI has been warming up, it was doused with a bucket of cold water last week when The New York Times severed ties with a freelance writer who had submitted a book review that was at least partially AI-written. The review by Alex Preston, published in early January, included passages nearly identical to Christobel Kent’s review of the same book, published in The Guardian months earlier.

Preston admitted he used AI to assist in writing his book review, saying that he had “made a serious mistake.” While the incident is certainly a wake-up call for the Times (and not necessarily the first one) about how it communicates its AI policy to freelancers, it’s also a flapping red flag for any newsroom tempted to allow more AI use in its operations. Suddenly, there’s an error that seems to justify all the rules against it.

That’s why it’s important to confront this directly. The incident steers us back into the dark cave of AI scandals in media—from CNET’s bot-authored service journalism to the made-up book titles in the Chicago Sun-Times’ “summer reading list” last year. It threatens to undermine all the gains many journalists and newsrooms are achieving in productivity, content optimization, and more, and potentially encourages those just taking their first steps with AI to fall back on the easy, blanket rule of “just don’t use it.”

Confronting it means looking closely at how the AI was used, so we can better delineate good AI use from bad. It’s easy to say there wasn’t enough “human in the loop” (an increasingly unhelpful term)—but where in the loop? In the prompting, the fact-checking, somewhere else? The whole point of AI is to outsource some human decision-making to sophisticated machines, so rather than pointing out the obvious—that humans need to shape and monitor the process—it’s better to zero in on the specific decisions the AI was asked to make, and whether the human gave it the right parameters and restrictions.

Examine this case closely and the answer appears to be no. According to The Guardian’s story, the two reviews share eerily similar language—so close that it’s hard to call it anything but outright plagiarism. Look at these two passages:

  • Original review, published August 21, 2025: “most significantly a song of love to a country of contradictions, battered, war-torn, divided, misguided and miraculous: an Italy where life is costume and the performance of art, and where circuses spring up on wasteland.”
  • Times review, published January 6, 2026: “populate what is ultimately a love song to a country of contradictions: battered, divided, misguided and miraculous. This is an Italy where life is performance, where circuses rise on wasteland.”

Looking at the dates and the unquestionable similarities, we can draw some conclusions. It’s clear Preston directly or indirectly asked the AI to create text he intended to include in the piece, rather than working only from his notes. Given that the two reviews were published more than four months apart (and considering the typically lengthy editing process at the Times, he likely submitted his much earlier), that’s almost certainly not enough time for the AI’s training data to have been updated. Which means the AI tool he used was pulling in web search results (a form of retrieval-augmented generation, or RAG) to come up with the copy.

This was a mistake. Giving Preston the benefit of the doubt, he may not have deliberately told the AI to synthesize other reviews of the book; perhaps it grabbed The Guardian review on its own. But he certainly didn’t tell it not to, and an explicit instruction like that would seem to be an essential part of any prompt if you want to avoid exactly the kind of plagiarized text he ended up including.
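What might that instruction look like in practice? Here’s a minimal sketch, assuming a workflow built on Anthropic’s Python SDK; the model name, the notes file, and the exact wording of the rules are all illustrative assumptions on my part, not a reconstruction of anyone’s actual setup:

```python
# A minimal sketch of prompt-level guardrails for AI-assisted review writing.
# The rules below are illustrative, not a vetted editorial policy.
import anthropic

GUARDRAILS = """You are helping a critic draft a book review from their notes.
ALWAYS work only from the notes provided in this conversation.
NEVER search the web, and NEVER draw on, paraphrase, or reproduce
other published reviews of this or any other book.
If the notes are insufficient to support a claim, say so rather than
filling the gap yourself."""

# The critic's own notes (hypothetical file name).
my_notes = open("review_notes.txt").read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model choice
    max_tokens=1500,
    system=GUARDRAILS,  # the "always"/"never" rules live here
    messages=[{
        "role": "user",
        "content": "Draft a 200-word opening from these notes:\n" + my_notes,
    }],
)
print(response.content[0].text)
```

Note that the stronger safeguard in this sketch is structural, not verbal: no web-search tool is passed to the call at all, so the model has nothing to retrieve The Guardian’s review from. The written rules are a second layer, not the only one.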

From taboo to tool

It bears repeating: In many—if not most—cases, how you use AI matters more than whether you use it at all. That requires a thorough understanding of these tools’ abilities and pitfalls, meticulous attention to the parameters of your prompts, and a willingness to adapt your process continually. It’s an ongoing process, and it needs guardrails—“always” and “never” instructions that head off specific problems, plus human fact-checking. Otherwise, you’re playing with a gun that could easily go off.

There are also systemic safeguards beyond prompt-level techniques. Whether you’re an independent writer or a full newsroom, it pays to have an AI policy. As a media AI trainer I would say this, of course, but investing in training is still objectively a good idea. Most importantly, the trial-and-error that comes with figuring out the boundaries of “good AI” should be kept out of public view whenever possible. In the case of AI-assisted writing, developing your prompting and guardrails in a private sandbox is essential.

That may seem obvious, but part of the “magic” of AI is that it produces outputs that look identical to human work that has gone through a rigorous process. To the untrained eye, the appearance of competence feels good enough. Unlocking AI’s potential as a partner in writing and journalism means not simply trusting the underlying process, but accepting your role in building it, testing it, and adjusting it as needed. The more journalists do that, the more the stigma will fade.
