
How AI Is Creeping Into ‘The New York Times’

On Sunday, a writer named Becky Tuch posted an excerpt on X from a months-old New York Times “Modern Love” column that had given her pause. “I don’t want to falsely accuse writers” of using AI, she wrote. “But this reads EXACTLY like AI slop.” The excerpt—from an essay by a mother who had lost custody of her son—described the son’s feelings, at one point, toward his mother: “Not hate. Not anger. Just the flat finality of a heart too tired to keep trying.”

Among the 100-plus replies to Tuch’s post was one by an AI researcher, Tuhin Chakrabarty. He’d run the snippet from “Modern Love” through an AI-detection tool from the start-up Pangram Labs, which flagged it as likely having been AI-generated.

I learned about the incident from Chakrabarty, a computer-science professor at Stony Brook University. I’d previously written about his efforts to quantify the proliferation of AI in novels self-published on Amazon. After commenting on Tuch’s post, he plugged the whole column into the Pangram AI detector. The program estimated that more than 60 percent of it was AI-generated. I ran the column through four other AI-detection tools: Two of them flagged 30 percent of the work as likely AI-generated, one found no AI, and one suspected AI but offered no percentage.

Kate Gilgan, the author of the column, told me that she hadn’t copied and pasted language from an AI model into her work. “However, I did utilize AI as a tool,” she added, seeking “inspiration and guidance and correction.” She said she’d prompted various products (including ChatGPT, Claude, Copilot, Gemini, and Perplexity) to help her stay on topic in a paragraph, for example, or stick to a theme. “I used AI as a collaborative editor and not as a content generator,” she said. In response to questions about the column, a New York Times spokesperson noted that the paper’s contracts require freelancers to abide by its ethical-journalism handbook, which mandates that AI use “adhere to established journalistic standards and editing processes” and that “substantial use of generative A.I.” be clearly disclosed to readers. Asked for comment on whether Gilgan’s AI use rose to the level requiring disclosure, the spokesperson said in an email: “Journalism at The Times is inherently a human endeavor. That will not change. As technology evolves, we are consistently assessing best practices for our newsroom.”

Whatever the extent of Gilgan’s dependence on AI—detection tools are imperfect—her acknowledgment is the latest evidence of a phenomenon that people have been whispering about online for a long time: Artificial intelligence has already infiltrated prestigious media outlets and publishing houses. Last week, Hachette made national headlines when it decided to cancel the publication of a novel, Shy Girl, that appeared to include AI-generated text, which readers had identified ahead of its American release. (The novel had previously been published in the United Kingdom and is now being discontinued there. The author told the Times that she had not used AI to write Shy Girl, but that an acquaintance who’d edited an earlier version of the novel had done so.) Last spring, the Chicago Sun-Times and The Philadelphia Inquirer were caught publishing a syndicated summer-reading guide featuring nonexistent novels; a freelancer had made it using ChatGPT. Besides those high-profile incidents, people have been posting for months about suspicions of AI turning up, undisclosed, in major news publications—far beyond personal essays or puffy summer features.

[Read: At least two newspapers syndicated AI garbage]

A note of caution: One challenge with AI detection is that the tools involved, much like the models they analyze, are still evolving. Sometimes they flag false positives or fail to catch AI-generated material. Pangram’s CEO, Max Spero, acknowledged that both happen. He also warned that the percentage of AI material in a text is difficult to estimate with certainty; an article riddled with AI tells could be flagged as fully AI-generated even if it also includes some human-written text. Different detection tools give varying results.

Jenna Russell, a doctoral candidate in computer science at the University of Maryland, has been following various social-media firestorms. Often, someone will paste a screenshot from a work that they suspect contains AI material, a commenter will run it through an AI detector and post the results, others will pile on to express outrage, and then everyone will just move on. Wondering how common AI use really was, Russell and six other researchers set Pangram on thousands of articles and found that it flagged likely AI use across the U.S. press—including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post—suggesting that writers are turning to AI more than their readers might believe. (Although the researchers focused on opinion articles in the big publications, they also studied a small number of their news stories; among those, far fewer were flagged for AI-like language.) In October, Russell and her colleagues published a preprint of their research, which is not yet peer-reviewed; several Pangram researchers, including Spero, are co-authors.

All three of those national newspapers have posted information about their AI policies, noting that they permit some use but prioritize being transparent about it. A spokesperson for the Journal’s parent company, Dow Jones, declined to comment for this article. (I’m a former Journal reporter and have also written and edited for the Times on a freelance basis.) In response to questions about its stories, a spokesperson for the Post said, “Our editing process includes working to establish the authenticity of everything we publish.” (The Post also creates AI-generated podcasts, so it isn’t entirely clear what its definition of authenticity is.)

The Post tested three articles I asked about and told me that it had found lower AI likelihood through Pangram than the researchers did; one ranked as “fully human written.” Other detection tools suspected even less AI use in most cases. Spero told me that the current iteration of Pangram, which the Post used, was designed to be more conservative than the previous version (used in the researchers’ study) in flagging material as AI-generated, partly for fear of spreading false accusations. But he also said that when he and Russell reran their data set of opinion articles through the current version, the underlying assessments were similar to those of the earlier iteration, including with regard to the Post. (Chakrabarty checked the “Modern Love” column with the current version of Pangram.)

Regardless of the exact numbers, the fact remains: Some of the most trusted publications in the United States have been publishing opinions—under real people’s names—that appear to include text generated with AI models. As AI slop has become a fixture of all kinds of online spaces—our internet searches, our social-media feeds, our online bookstores—major newspapers have been seen by many as a protected space, in which AI-generated content would rarely (or never) appear undisclosed. The newspapers that have survived the onslaught of the internet have benefited from the shared assumption that they can be trusted. The stakes of a broken social contract could not be higher, and they go far beyond the risk of a smooth-brained writing style.

[Read: How to guess if your job will exist in five years]

When opinion articles or personal essays are published in major papers—sometimes with big names attached to them—they can influence societal beliefs and, in turn, the policies of governments or corporations. It has seemed fair to assume, historically, that those opinions reflect the voices and beliefs of the individuals whose names are attached to them. But AI language is something else entirely. Research has found that AI output is much more homogeneous than human language. Major AI companies have also acknowledged that their models can be skewed—for example, toward certain cultural and political beliefs. Analyses of the Grok chatbot have found that its language often mimics that of the man behind its development, Elon Musk.

Multiple studies, including those from AI companies themselves, also demonstrate that AI output is unusually persuasive, to the point of getting people to change their minds about political issues or candidates. A world where some self-published romance novels include synthetic turns of phrase and plot points is upsetting. One where AI models’ language and perspectives creep, undisclosed, into the pages of major newspapers—and therefore into public life—is terrifying.

The good news is that we can do something about this. Publications can design clear policies about AI use and disclosure and require that staffers and freelancers abide by them, including by explicitly listing the requirements in contracts. This isn’t a stretch: Many contracts require, for example, that contributors promise not to plagiarize. (The Atlantic requires contributors to attest to being “the sole author” of their article, and forbids AI-generated writing or imagery without approval and disclosure.)

In addition, editors could receive training in identifying AI tells by sight; they could also use detection products. Then they could follow up with writers whose work raises questions (while avoiding jumping to conclusions based only on an editor’s suspicions or a software scan). Those who violate a publication’s policies could face legal or other penalties; as with plagiarizing, using AI without disclosing it would incur significant social and professional costs. Governments, too, could enact policies to rein in failures of disclosure: Legislators could legally require it in certain contexts, for example, though enforcement would surely raise free-speech challenges.

Another remedy could be for major AI companies to take some responsibility for the problem by “watermarking” their products’ output, making it easier to spot. The Journal reported in 2024 that OpenAI had built a tool that could detect AI text with up to 99.9 percent certainty, but hadn’t released it; one apparent factor, according to the Journal, was a survey in which some users “said they would use ChatGPT less if it deployed watermarks and a rival didn’t.” Asked for comment, an OpenAI spokesperson shared a blog post pointing out other obstacles: “bad actors” could circumvent watermarking, for example. When I asked Chakrabarty about watermarking, he noted the technical difficulties but also raised a more existential question: “Why would Anthropic or OpenAI do it, when the whole business model is based on convincing people AI language is humanlike?”
