A robot commits libel. Who is responsible?

Peter Georgiev

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted.”

This is how Xinhua News’ artificial intelligence presenter announced itself to a global audience at the World Internet Conference in November. Modeled on real-life anchor Zhang Zhao, the virtual newsreader is said to be the first of its kind, according to China’s state news agency. But the signs that automated journalism will soon play a central role in the news industry have been there for a long time.

For news organizations, algorithms generating compelling narratives are an exciting prospect. Many would have raised an eyebrow when the Associated Press started relying on automation to cover minor league baseball and turn corporate earnings reports into publishable stories. Fast forward a couple of years, and now it seems almost impossible to find a major news outlet that is not experimenting with a robot reporter of its own.

From a business perspective, that makes complete sense. News bots are convenient, cheap and don’t complain when asked to produce an article at 3 a.m. on a Saturday. Most of all, they are quick. In 2015, NPR’s Planet Money podcast set up a writing contest between one of its journalists and an algorithm. Spoiler alert: the algorithm won. It wasn’t even close.

Yet, for all their apparent infallibility, bots, like their human predecessors, are also vulnerable to mistakes. In the news business, one of the worst mistakes is committing libel. So, how should courts treat cases in which a robot generates a defamatory statement? Legal and tech experts believe now is the time to decide.

Thanks to a series of landmark rulings by the U.S. Supreme Court in the second half of the previous century, the First Amendment provides strong protection to journalists in defamation lawsuits. Public officials can’t recover damages for libel without first proving that the defendant had acted with “actual malice” — knowing that a statement was false or demonstrating reckless disregard for the truth.

“That just doesn’t work very well with an algorithm,” says Lyrissa Lidsky, dean of University of Missouri’s School of Law and an expert in First Amendment law. “It’s hard to talk about the knowledge that an algorithm has or whether an algorithm acted recklessly.”

Bots don’t make conscious choices when producing content. They behave on the basis of human-written code. Yet, programmers may not always be able to predict every single word of a story or its connotation, especially when machine learning is involved.
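To make that concrete, here is a minimal, hypothetical sketch of a template-driven story bot (not any newsroom’s actual system): it simply merges structured data into prewritten sentences, so an error in the data feed flows straight into published copy.

```python
# Hypothetical template-based news bot: fills a fixed sentence
# pattern with whatever structured data it receives.
EARNINGS_TEMPLATE = (
    "{company} reported quarterly earnings of {eps} per share, "
    "{verb} analyst expectations of {expected_eps}."
)

def write_earnings_brief(company: str, eps: float, expected_eps: float) -> str:
    # The bot applies a fixed rule; it has no notion of truth or malice.
    verb = "beating" if eps >= expected_eps else "missing"
    return EARNINGS_TEMPLATE.format(
        company=company,
        eps=f"${eps:.2f}",
        verb=verb,
        expected_eps=f"${expected_eps:.2f}",
    )

# A mislabeled field in the upstream data feed yields a false,
# confidently worded sentence with no human in the loop to catch it.
print(write_earnings_brief("Acme Corp", 0.12, 0.45))
```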

“As these cases start to arise and be litigated, there’s going to be a lot of education of the public about how algorithms work and what choices are made in designing algorithms,” Lidsky says.

While a bot cannot act with actual malice, its designer can. Robot reporting may appear to be impartial and objective, but humans often build their own biases into automated systems. This poses potential risks to publishers.

“News organizations are going to have to be really careful about who it is that they are hiring to engage in these kinds of tech development areas,” says Amy Kristin Sanders, an associate professor at the University of Texas at Austin.

“In some instances, you’ve seen news organizations say, ‘We don’t really understand the technology, but we think it’s useful, we think it’s cool.’ That’s not a defense.”

Sanders is one of three researchers who co-authored a recent study highlighting how complicated it is to determine fault when an algorithm is accused of committing libel. Still, she believes that in many ways these cases are no different from product liability cases.

“There’s not one person who is responsible for designing a can opener, let’s say. And so, the law has found ways, if a can opener malfunctions and harms someone, to account for that.”

Like a can opener, an algorithm is usually the work of many hands. And unlike an article written by a human reporter, an AI-generated story carries no byline naming them, which makes personal blame harder to assign.

Pointing fingers may not help much anyway: the news organization itself would likely be the one held accountable for spreading a falsehood. So publishers should be thinking primarily about how to avoid such situations in the first place.

That goes for American content creators, too. Despite enjoying First Amendment protection at home, they may find it far harder to defend themselves abroad.

The European Union has put pressure on U.S. tech giants to remove content from their platforms and take better care of user data. Democracy may be a core value for both Americans and Europeans, but the two strike different balances between safeguarding reputation and protecting freedom of speech.

“You would think based on the way the balance is struck in other countries that they would be more likely to hold news organizations responsible for bot-driven libel cases,” Lidsky says.

To shield themselves, publishers may need to reaffirm their belief in human judgment. To some extent, algorithms can replace the reporter. They shouldn’t replace the editor.

“We’ve seen major news organizations like the New York Times slim and trim their copy desks, and get rid of that layer of copy editing. That can’t be happening. That is a news organization’s first line of defense against a lawsuit,” Sanders says.

Radical transparency is another necessary step for media outlets, according to James Gordon, senior editor at the Reynolds Journalism Institute (RJI) at the University of Missouri.

“Publishing your code, making it publicly available, stating the intention of the code – that’s very important.”

Gordon says the news industry should revolutionize journalism cautiously rather than follow the example of Silicon Valley giants.

“‘Move fast and break things’ is great unless you’re on the margins of society or someone who is impacted negatively by these technologies, products and services,” he says.
