With the rapid development of artificial intelligence tools, the challenge for journalists — and the imperative — is to stay ahead of the curve, according to a panel of industry experts hosted in late June by the Society for Advancing Business Editing and Writing (SABEW).
The panel shared examples of how to ethically use AI within a journalist’s day-to-day workflow to supercharge backgrounding and research efforts. The panelists included Kylie Robison, senior correspondent at Wired, and Ben Welsh, a news applications editor at Reuters. The session was moderated by Greg Saitz, an investigations editor at FT Specialist US.
Welsh described using AI to research a person he was going to interview for a story, someone he did not know much about. He used the “deep research” function offered by several popular large language models to find out everything about this person on the web, including everything they had published.
“It just turned up stuff that would have taken me a lot longer to dig out of Google, and it gave me a sort of bibliography to work from and to get up to speed quickly on that individual person,” said Welsh.
Deep research features are available in both Google Gemini and ChatGPT and can produce a lengthy report on any topic prompted. Retrieving and reviewing that information takes longer than it would for a shorter, less detailed report, which is worth keeping in mind on a tight deadline, but the feature can be effective if guided correctly.
AI isn’t flawless
First of all, it’s important to understand that the technology is still developing and can be flawed. Robison explained that she uploaded hundreds of pages of documents about a lawsuit alleging copyright infringement by Meta’s large language models (LLMs) into ChatGPT and asked the tool various questions about the content. She discovered it was pulling quotes that did not exist.
“You just have to realize that the technology you are using is completely flawed in this way, and it’s a probabilistic system, and it wants to answer your question more than it wants to be right,” said Robison. She encourages journalists to try something similar to see for themselves how flawed ChatGPT can be.
Robison urges journalists to always double-check the sources and information the tool provides, both to stay ethical and to ensure that, as a journalist, you are presenting facts with supporting evidence.
When computers become a part of the team
Both panelists outlined the importance of remembering that the tool is a computer and not necessarily a human, despite its conversational tone.
“We are all tough-minded journalists, but we are also human and have our susceptibilities to its conversational attitude, and there’s a kind of seductive nature to having that conversation,” said Welsh. “You have to remember you’re not talking to a person or a trustworthy source, you’re talking to a computer.”
Welsh also urges journalists to pay attention to the questions, or prompts, they enter into the tool. Developing the perfect prompt is challenging, but through trial and error it becomes easier to identify an effective formula for getting what you are looking for.
Welsh explained that he got the best results from a “jam session,” in which the tool offers various solutions or pieces of information in a numbered list and he replies with which ones he did and did not like. The tool then adjusts future answers based on his preferences, as its built-in memory uses those past exchanges to build a profile that caters to the user. However, as Robison pointed out, the tool’s eagerness to cater to the user can also lead it astray.
“These chatbots are known to agree with you. The goal is to have an engaged user, and if the user is being told that their point is stupid, or that it doesn’t agree with you, well, then the user’s going to go to another chatbot,” said Robison.
This reality is something for journalists to consider before using these tools. It’s also important to consider to what extent AI tools should be used at all with certain topics or stories.
Privacy concerns and newsroom ethics
The panelists advised against using AI tools with sensitive or classified information about a source or story.
“You have to be careful: these things aren’t truly private, so if you’re putting important source information into these chatbots, there are real ramifications for it getting leaked,” said Robison. “And as far as I know, one of the ways to test the model’s behavior and how good its responses are is people within the company getting anonymized chat data.”
Even when a chat is anonymized, it can still resurface and potentially harm the reputation of the company or the journalist.
At Reuters, Welsh says, there are clear rules and expectations for the use of AI. Those policies include disclosing any use of AI tools and not using AI tools with photography.
From a reporter’s perspective, Robison emphasizes the importance of communicating about the use of AI, not only with readers but also within the newsroom.
“Just be honest with your editor. Hopefully, your editor is part of your team and can have your back, wanting you to be the best reporter possible,” said Robison. “That’s important. It’s just telling them when you’re using it, when you want to use it, and having that conversation with them.”
While these tools can help journalists lighten their workload, gain background knowledge, and generate new ideas or concepts, it is still the reporter’s job to ensure the best possible reporting. Building trust should be a top priority.
“With trust being in such decline with the media, why give them another thing to distrust us for? Why not be honest?” said Robison.






