As a technology journalist and communications consultant focused on technology integration, I am always eager to discuss artificial intelligence and media ethics. Right now, many media professionals are worried about how AI could affect their careers.
A TikTok search combining the hashtags #ChatGPT, #layoffs, and #writers turns up video after video from copywriters and marketing specialists who say their positions were cut to make way for AI tools. Other writers argue that AI won't eliminate jobs so much as force a shift toward collaborating with it. Which raises the question: Is ChatGPT ethical in media? And what about AI in general?
My view has always been that AI should augment human capabilities, not replace them.
Machines cannot learn
To grasp why AI cannot (or should not) replace humans, it helps to understand the basics of machine learning. The reality is that machines do not genuinely learn.
David J. Gunkel, Ph.D., serves as a media studies professor in the Communication Studies department at Northern Illinois University and authored An Introduction to Communication and Artificial Intelligence.
“Machines don’t learn the way we typically think of learning; it’s a term that was coined by computer scientists who were attempting to find a way to articulate basically applied statistics, if you want to get very technical,” Gunkel explains. “What large language models and various machine learning systems do is establish a neural network, which is fashioned after a basic mathematical interpretation of how the brain and neurons operate.”
Essentially, machines analyze vast datasets and make predictions based on detected patterns. Sometimes those predictions miss the mark. For instance, while drafting a policy and procedure manual for a client, I asked about their corrective action policy. They consulted an AI, which recommended that management carry out a “root cause analysis to identify the root factors that led to the issue. This study can help uncover the specific modifications required to ensure the problem does not happen again.”
I ended up drafting the policy independently.
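To make Gunkel's point concrete, here is a deliberately tiny Python sketch of what "prediction from detected patterns" means: a bigram model that counts which word tends to follow which in its training text, then predicts the most frequent successor. This is a toy illustration of the underlying statistics, not how large language models are actually built.

```python
# Toy illustration of "applied statistics" text prediction: a bigram model
# that predicts the next word purely from frequency counts in its training
# text. Real large language models use neural networks at vastly greater
# scale, but the core idea -- predicting from observed patterns -- is the same.
from collections import Counter, defaultdict

training_text = (
    "the policy requires a root cause analysis "
    "the policy requires management review "
    "the manual requires a root cause analysis"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, or '?' if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("requires"))  # -> "a" (seen twice vs. "management" once)
```

Scale that counting up by billions of parameters and documents and you get something that sounds fluent without ever understanding a corrective action policy.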
Usage of AI tools in journalism
OtterAI
Jenna Dooley, the news director at WNIJ, an NPR affiliate in DeKalb, Illinois, says her reporters have used OtterAI, an online tool that automatically records and transcribes audio, for years. It has significantly lightened their workload.
“Before the advent of AI, whenever you returned from an interview, whether it was a short 10-minute discussion or an extensive two-hour session, it was recorded on a tape,” Dooley recounts. “You would have to ‘log the tape,’ which meant sitting down in real-time, listening to snippets, typing them out, and repeating this over and over to manually transcribe the interview.”
“Logging tape was inherently slow, and you could not even commence writing your article until you finished transcribing,” she adds. “Now, it’s far more efficient to access the transcript and select the exact lines or quotes you wish to incorporate.”
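OtterAI's pipeline is proprietary, but the speech-to-text workflow Dooley describes can be sketched with the open-source Whisper model standing in for it. The audio file name here is hypothetical.

```python
# A minimal sketch of automated interview transcription, using OpenAI's
# open-source Whisper model as a stand-in (OtterAI's own pipeline is
# proprietary, and the file name is hypothetical).
# Requires: pip install openai-whisper
import whisper

model = whisper.load_model("base")          # small, CPU-friendly model
result = model.transcribe("interview.mp3")  # hypothetical audio file

# The transcript arrives already segmented with timestamps, so a reporter
# can jump straight to the quote they need instead of "logging tape."
for segment in result["segments"]:
    print(f'[{segment["start"]:.1f}s] {segment["text"]}')
```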
YESEO
WNIJ also employs a tool named YESEO, which was created at the Reynolds Journalism Institute (RJI). YESEO is an AI application within Slack that analyzes articles and provides keyword and headline recommendations.
Ryan Restivo, an RJI fellow who developed the tool, explains that the idea originated while he worked at Newsday and observed that some stories did not rank on Google’s first page. Recognizing that other newsrooms likely had superior search engine optimization (SEO) strategies, he aimed to create a tool that could better connect journalists with their audiences.
“We analyzed [the reasons behind our ranking issues] and constructed a Google sheet that compared the actions of our competitors with what we were doing,” Restivo recalls. “We found that we were missing essential information that could optimize our visibility in search results… that’s where the concept was born.”
YESEO stands out because it was built by a media professional for other media professionals, with media ethics in mind from the start. One critical consideration during the app's development was newsroom data privacy. The tool uses OpenAI's application programming interface (API), which lets businesses build large language models like GPT-3 into their own software. Restivo made it a priority to ensure that newsroom submissions would not be used to train the AI unless YESEO explicitly opted in.
“When I’m contemplating the privacy ramifications regarding unpublished stories that hold immense value and that no one wishes to share, alongside all the other submissions entering the system, I am committed to safeguarding individuals’ data at all costs,” Restivo states.
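YESEO's source code isn't reproduced here, but the kind of API call Restivo describes might look roughly like the following sketch. The model name and prompt are illustrative assumptions, not YESEO's actual implementation.

```python
# A rough sketch of the kind of API call a YESEO-style tool makes: send an
# article to OpenAI's API and ask for SEO keywords and headline ideas.
# The prompt and model name are illustrative, not YESEO's actual code.
# Requires: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_seo(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are an SEO assistant for a newsroom. "
                        "Suggest five keywords and three headlines."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Data sent through the API is not used to train OpenAI's models unless
# the developer opts in -- the privacy property Restivo emphasizes.
```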
Effects of AI on human writers
This month, TikToker Emily Hanley shared a video claiming that ChatGPT took her copywriting job and that she was then offered the chance to train the AI that would eventually replace her.
Grace Alexander, a full-time copywriter, has watched clients leave her for AI. She typically keeps a roster of clients, but in May one unexpectedly ended their contract to experiment with AI-generated content.
“The company I was working for replaced nearly all freelancers and shifted everything in-house, stating, ‘Oh, we can just utilize ChatGPT,’” Alexander recalls.
Gunkel believes these staffing cuts will not be a lasting change.
“I suspect they will be bringing many of these workers back in varied roles,” Gunkel observes. “The forward-thinking approach will involve creating highly effective human-AI partnerships that collaboratively contribute content for publications.”
This outlook might hold true. Although Alexander was out of work for the entirety of June, her former client already appears to want the human element back.
“They had me on hold for a month,” Alexander states. “They’ve already begun inquiring if I am free for July. It seems I will likely be returning to my position.”
Are ChatGPT and AI ethical?
It’s probable that media organizations will adopt some form of AI in the foreseeable future. However, the ethical implications of utilizing AI are largely unexplored territory. Dooley suggests that newsrooms might find it beneficial to establish an ethical framework for AI usage.
“I recently encountered an ethics policy put forth by the [Radio Television Digital News Association],” Dooley notes. “Similar to how we maintain a code of ethics for our news reporting, there was a recommendation to formulate an [AI ethics code] within a newsroom.”
One crucial aspect to consider is transparency. The Houston Times features a section on its website outlining how and when it uses AI tools to generate content.
This contrasts with “pink-slime” outlets, which pretend to be local news platforms to advance particular political candidates or policies. The owner of Local Government Information Services, a pink-slime outlet based in Illinois, disclosed to the Columbia Journalism Review that many of their media platforms utilize software to algorithmically create the majority of their stories based on local data.
“Regrettably, we are likely to witness a surge in this practice, as algorithms simplify the process of producing such content, making it significantly less labor-intensive,” Gunkel remarks. “This will lead to a wealth of aggregated material that is challenging to trace back to the original sources… alongside an increase in the dissemination of disinformation and misinformation.”