Almost a decade ago, the Associated Press changed the face of journalism by using artificial intelligence to create earnings reports. In the years since, AI has become more mainstream, both in popular culture and in newsrooms.
According to a 2014 TechCrunch article that cited a now-disabled AP blog post, the media organization planned to increase its coverage nearly 15-fold using AI. With the new technology, the AP scraped websites, assembled strings of sentences and produced simple computer-generated stories, freeing up staff time for more human analysis elsewhere.
The AP teamed up with Automated Insights, an AI platform, then broadened its AI-generated content to sports. In 2016, a Digital Trends article reported the AP was using AI to cover minor-league baseball games: raw game statistics were fed to software that could describe what happened.
Since 2016, AI technology has been introduced in many newsrooms, most recently at BuzzFeed. The company said it will use technology from OpenAI, maker of the recently popular ChatGPT, to personalize some content while keeping the emphasis on human-generated journalism. The announcement came shortly after BuzzFeed laid off 12% of its workforce in December 2022, and according to an Insider article, employees aren't happy.
As companies opt for cheaper, possibly more efficient, yet less human-centered operations in newsrooms, what will journalism look like? Will it change for the better or the worse? We asked an expert: Jonathan Stray, a Senior Scientist at the Berkeley Center for Human-Compatible AI.
Q: What has AI tech looked like in newsrooms in the past?
Currently, a sort of first generation of systems is being deployed, which has sometimes been called AI. I don't really think we should call them that; they're more like Mad Libs. Almost all of what's being done now, and has been done so far, is like, 'here's an earnings report template and we just plug in the numbers.'
So that's existed and has been used in newsrooms for 10 years. And the newsrooms using it say they're not firing people; instead, they're producing coverage of stuff they wouldn't have covered. Nobody covers all of the Little League games anymore, and there's no reason to have reporters writing those stories.
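For a sense of what that 'Mad Libs' style of automation looks like, here is a minimal sketch of template-driven earnings copy in Python. The template wording and field names are illustrative assumptions, not the AP's or Automated Insights' actual system.

```python
# Minimal sketch of template-driven ("Mad Libs") earnings copy.
# Template wording and field names are illustrative only.

TEMPLATE = (
    "{company} on {day} reported {quarter} earnings of "
    "${eps:.2f} per share, {direction} analyst expectations "
    "of ${expected_eps:.2f}. Revenue came in at ${revenue_m:,.0f} million."
)

def earnings_blurb(report: dict) -> str:
    """Plug structured numbers into a fixed sentence template."""
    direction = "beating" if report["eps"] > report["expected_eps"] else "missing"
    return TEMPLATE.format(direction=direction, **report)

print(earnings_blurb({
    "company": "Acme Corp",       # hypothetical company
    "day": "Tuesday",
    "quarter": "second-quarter",
    "eps": 1.42,
    "expected_eps": 1.30,
    "revenue_m": 512,
}))
```

There is no model here at all, which is Stray's point: the system just fills slots in a fixed sentence, so it is fast and reliable but can only say what the template anticipates.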
Q: What makes this new wave different?
Now there's a new generation, which is Large Language Model-based. And the problem is they're not really controllable. They're going to make mistakes; they're just not going to be accurate. Even so, some organizations are pushing forward with them. So I think BuzzFeed announced they're going to use them.
BuzzFeed has said they're going to use it on the entertainment side for writing quizzes. It could also be the sort of thing where the machine writes and then a copy editor edits it, which might still prove faster than having someone write it from scratch.
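As a rough illustration of that machine-drafts, human-edits workflow, a newsroom tool might call a large language model API for a first pass. This is a hypothetical sketch using OpenAI's Python client; the model name, prompt, and helper function are placeholder assumptions, not BuzzFeed's actual setup.

```python
# Hypothetical sketch of a draft-then-edit workflow: the model writes
# a first pass, and a human copy editor reviews it before publication.
# Model name and prompt are placeholders, not BuzzFeed's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_quiz(topic: str) -> str:
    """Ask the model for a first draft; a human edits the result."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write light entertainment quizzes."},
            {"role": "user",
             "content": f"Draft a five-question personality quiz about {topic}."},
        ],
    )
    return response.choices[0].message.content

draft = draft_quiz("coffee habits")
print(draft)  # the draft goes to a copy editor, not straight to publication
```

The design choice Stray describes is keeping a human in the loop: the model's output is treated as raw material for an editor rather than finished copy.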
Q: What problems do you expect could come from inaccurate AI-generated news and the trust in media?
I see it as an escalation in the arms race. There is another type of problem too, which is that people may, in response to this, just stop believing what they read and see online, either because they've encountered too much garbage or because it's too much work to find out whether something is true anymore. And this is a particular problem around misinformation that we're already seeing.
Q: Do you recommend newsrooms start to incorporate AI into their news gathering and production?
Not really, actually. I think there are specific cases for which it makes a lot of sense. Certainly, the cases for which we already see automation. I mean, a Large Language Model can probably write a more satisfying story about a baseball game or a more interesting stock market report.
I think we should also see them as writing tools. And in particular, they can be good research tools. So I think actually, the biggest use in journalism is going to be for research rather than writing.
I think most of the uses of AI for journalism are going to come in the form of general-purpose tools that everybody uses. There’s going to be general-purpose research tools and general-purpose composition tools.
Q: Will AI threaten journalism jobs?
I don’t expect Large Language Models to displace journalism jobs in the near term. I think they might in the future. So it’s going to be some combination of doing more, and having humans do less, but it’s really not clear to me how that’s going to net out at least in the near future.
I think it may have a larger effect on the five- to 10-year horizon, but not immediately, because the text it generates isn't high enough quality yet. And honestly, writing is only one part of it, right? The research and reporting are still things it can't do.
Jonathan Stray is a Senior Scientist at the Berkeley Center for Human-Compatible AI working on recommender systems, the algorithms that select and rank content across social media, news apps, streaming services and online shopping. Previously, he taught in the dual master's program in computer science and journalism at the Columbia Journalism School and was an editor at the Associated Press.
Check out his 2019 research article, "Making Artificial Intelligence Work for Investigative Journalism."
How well can ChatGPT write a business journalism article? Let’s find out.
Here’s what ChatGPT has to say when asked to “write an article for a business newspaper on journalism and ChatGPT.”
In recent years, the field of journalism has undergone a significant transformation due to the rise of artificial intelligence (AI) and natural language processing (NLP) technologies. These technologies have allowed for the development of large language models like ChatGPT, which can generate human-like text and assist journalists in various tasks. In this article, we will explore the relationship between journalism and ChatGPT, and how this technology is changing the way we approach news reporting.
ChatGPT is a powerful tool that can assist journalists in several ways. Firstly, it can help journalists to automate tasks such as fact-checking, which can be time-consuming and tedious. ChatGPT can quickly search through large amounts of data and provide accurate information on a particular topic, which can save journalists a considerable amount of time and effort. Additionally, ChatGPT can generate summaries of news stories, which can be particularly useful for breaking news where time is of the essence.
Another area where ChatGPT can be particularly helpful is in generating headlines. Writing effective headlines is an essential part of journalism, as they can determine whether or not a story is read by a particular audience. ChatGPT can generate several headline options that can be tested and evaluated for their effectiveness, which can ultimately lead to more readership and engagement.
ChatGPT can also assist journalists in generating content for their articles. For example, it can generate quotes, background information, and statistics that can be included in a story. This can be particularly helpful for journalists who are covering a topic they may not be familiar with, as ChatGPT can quickly provide relevant information and help them to write more comprehensive articles.
Despite the benefits of using ChatGPT in journalism, there are also concerns about the technology’s impact on the profession. Some argue that the use of ChatGPT could lead to a reduction in the quality of journalism, as it may be used to generate automated news stories without the critical thinking and analysis that human journalists provide. Additionally, there are concerns about the potential for bias in the data used to train these models, which could lead to biased reporting.
In conclusion, ChatGPT is a powerful tool that can help journalists to automate tasks, generate headlines, and provide content for articles. However, it is important to use these technologies responsibly and not rely on them exclusively, as they can never replace the critical thinking and analysis that human journalists provide. It is also essential to address concerns about potential bias and ensure that these technologies are used ethically and transparently. Ultimately, the key is to find a balance between the use of technology and the essential role of human journalists in reporting the news.