A recent incident at The New York Times reveals the growing presence of artificial intelligence in major media outlets, sparking a debate about the ethical use and transparency of AI in journalism.
On Sunday, writer Becky Tuch posted an excerpt from a Modern Love column in The New York Times that she found suspicious. The piece, written by Kate Gilgan, described a mother's loss of custody of her son with a passage that read: 'Not hate. Not anger. Just the flat finality of a heart too tired to keep trying.'
Tuhin Chakrabarty, a computer-science professor at Stony Brook University, ran the text through an AI-detection tool from Pangram Labs, which flagged more than 60% of the column as likely AI-generated. Four other detection tools produced mixed results: some flagged up to 30% of the text as AI-generated, while others found no AI involvement at all.
Kate Gilgan, the author of the column, confirmed that she used AI as a tool for inspiration, guidance, and correction. She employed various AI products, including ChatGPT, Claude, Copilot, Gemini, and Perplexity, to help stay on topic and maintain a consistent theme. 'I used AI as a collaborative editor and not as a content generator,' Gilgan said.
In response to questions about the column, a spokesperson for The New York Times stated that the paper's contracts require freelancers to adhere to its ethical-journalism handbook. The handbook mandates that AI use must 'adhere to established journalistic standards and editing processes' and that 'substantial use of generative A.I.' be clearly disclosed to readers. The spokesperson added, 'Journalism at The Times is inherently a human endeavor. That will not change. As technology evolves, we are consistently assessing best practices for our newsroom.'
This incident is part of a broader pattern of AI surfacing in prestigious media outlets and publishing houses. Last week, Hachette canceled the publication of a novel, Shy Girl, after readers identified AI-generated text in it. The author denied using AI but acknowledged that an acquaintance who edited an earlier version of the manuscript had done so.
Similarly, the Chicago Sun-Times and The Philadelphia Inquirer published a syndicated summer-reading guide featuring nonexistent novels, created by a freelancer using ChatGPT. These incidents highlight the challenges and ethical considerations surrounding the use of AI in journalism and publishing.