
Artificial Intelligence Will Affect The News We Consume. Whether That's A Good Thing Is Up To Humans.

Journalism is going to be shaped by artificial intelligence, but as with AI in all fields, the consequences will be determined by human choices about how it is developed and used.

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: thanks to artificial intelligence, high-quality reporting has been enhanced. AI scripts not only handle the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.
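To make that first scenario concrete, here is a minimal sketch of how such day-to-day automation typically works: structured data poured into a fixed narrative template. The company, figures and wording are all invented for illustration; real newsroom systems are more sophisticated, but the principle is the same.

```python
# A minimal sketch of template-driven earnings coverage, the kind of
# simple day-to-day article an AI script can already write.
# All names, figures and wording here are invented.

def earnings_story(company, quarter, revenue_m, prior_revenue_m, eps):
    """Fill a fixed narrative template from structured filing data."""
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of {revenue_m:,.0f} million dollars, "
        f"which {direction} {abs(change):.1f} percent from the prior quarter. "
        f"Earnings came in at {eps:.2f} dollars per share."
    )

print(earnings_story("Acme Corp", "third-quarter", 412.0, 389.0, 1.27))
```

A script like this never tires of filing the same story every quarter, which is exactly why it frees human reporters for the work templates cannot do.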

Beyond business journalism, comprehensive sports-stats AIs keep the key figures at sports journalists’ fingertips, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first in their financial and sports sections, then with more advanced scripts capable of reshaping wire copy to suit the outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who hope to replace them, and stories that can’t be tackled by AI are generally missed.

These two scenarios represent two relatively extreme outcomes of how AI will reshape journalism, its ethics and the way the world learns about itself. But what they should really illuminate is that AI works like any other technology. It need not in itself lead to better and more ethical journalism, or worse ― that will instead be determined by the human choices made in how it is developed and used in newsrooms across the world.

The more basic versions of these algorithms and AIs are already here, and these decisions will confront the people leading newsrooms today, not 50 years in the future. Last year, Financial Times journalist Sarah O’Connor went toe-to-toe with an AI journalist named “Emma” to report a story on wage growth in the U.K.

“Wage growth — the missing piece in the U.K.’s job market recovery — remained sluggish,” wrote one. “Total average earnings growth fell from 2.1 percent to 1.8 percent, although this partly reflected volatile bonuses.”

The other opted for: “Broadly speaking, the U.K. economy continues to be on an upward trend and while it has yet to return to pre-recession, goldilocks years it is undoubtedly in better shape.”

The former was written by O’Connor, the latter by AI reporter Emma. At greater length, O’Connor’s ability to put the information in broader political and social context shines through, but AI is improving at a rapid pace, and for simple articles it is already good enough to roughly match quality reporters.



The consequence of those choices lies with the people running newsrooms: if AI is simply used to replace reporters, that not only leaves the current industry weaker but also means robots would take over the entry-level jobs where reporters learn the basics of their trade before chasing more complex investigations. It is easy to blame new technology for the negative consequences it appears to cause, but as we have learned time and again, in reality the blame lies in how and where it is used.

Even deeper thought will be required from the companies and engineers developing such algorithms, and the people funding such research. There is a clear commercial market for algorithms that can quickly analyze, for example, stock prices and help financiers make bigger profits on their trades.

A more socially useful algorithm, though, might try to find the next Enron: automatically analyzing thousands or millions of company filings and spotting outliers, which human journalists or financial investigators could dig into. Such a tool would never be perfect, but it could have immense social value. Yet while billions in venture capital and Wall Street money go into the former, who is funding the latter?
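As a rough illustration of that idea (not any real system), here is a toy sketch of the outlier-spotting step: score each company’s reported figure against its peer group and flag the ones that deviate sharply, for a human journalist to dig into. The metric and numbers are invented, and a real screen would use far richer features and much larger samples.

```python
# A toy sketch of the "find the next Enron" idea: flag companies whose
# reported figures deviate sharply from their peers, so a human
# journalist can investigate. The metric and numbers are invented.

from statistics import mean, stdev

def flag_outliers(filings, threshold=1.5):
    """Return (company, z-score) pairs whose metric sits more than
    `threshold` standard deviations from the peer-group mean.
    The low default threshold suits this tiny invented sample."""
    values = [value for _, value in filings]
    mu, sigma = mean(values), stdev(values)
    return [
        (name, (value - mu) / sigma)
        for name, value in filings
        if sigma and abs(value - mu) / sigma > threshold
    ]

# Hypothetical (company, revenue-to-cash-flow ratio) pairs.
filings = [("A", 1.1), ("B", 0.9), ("C", 1.0), ("D", 1.2), ("E", 9.5)]
for name, z in flag_outliers(filings):
    print(f"{name} is {z:.1f} standard deviations from its peers")
```

The crucial design choice is the last step: the tool does not publish accusations, it hands a shortlist to a human who can investigate and exercise judgment.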


When we talk about artificial intelligence in operation today, we are almost always really talking about complex algorithms: nothing resembling true intelligence, with its ability to make ethical choices or take flights of inspiration.

An algorithm is essentially a tool designed to spot patterns, sometimes “learning” about associations as it operates. The result is that algorithms’ actions are shaped not just by the intents of their creators (to make money, to make information more searchable, to decide whom to lend to) but also by the prejudices in the minds of those making them, and in the underlying data they learn from.
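To show that mechanism at its barest, here is a deliberately tiny, invented example: a “model” that does nothing but learn approval rates from past lending decisions, and in doing so faithfully reproduces the bias baked into them. No real lending system is this simple, but the failure mode is the same.

```python
# A toy illustration of how a pattern-spotting algorithm inherits the
# prejudice in its training data. The "historical decisions" below are
# invented: group B applicants were approved less often at the same income.

from collections import defaultdict

# (group, income_band, approved): hypothetical past lending decisions.
history = [
    ("A", "mid", True), ("A", "mid", True), ("A", "mid", True),
    ("B", "mid", False), ("B", "mid", False), ("B", "mid", True),
]

# "Learning" here is just tallying approval rates per (group, income).
outcomes = defaultdict(list)
for group, income, approved in history:
    outcomes[(group, income)].append(approved)

def predict(group, income):
    """Approve if most similar past applicants were approved."""
    past = outcomes[(group, income)]
    return sum(past) / len(past) >= 0.5

# Identical incomes, different groups: the learned rule diverges, not
# because of anything about the applicant, but because of the biased history.
print(predict("A", "mid"))  # True
print(predict("B", "mid"))  # False
```

Nothing in the code mentions prejudice; the algorithm simply runs down the tracks its data laid for it, which is precisely the point.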

In the U.S., a risk-assessment algorithm known as COMPAS is used in many states to estimate an offender’s risk of committing further crimes, and it serves as a tool in sentencing decisions, including how much time that offender should spend in prison. Yet multiple journalistic investigations have found evidence of racial bias in its determinations for otherwise very similar offenders of different races, and because the algorithm is proprietary, and therefore secret, no one has decisive evidence as to why.

Similar patterns have been found in algorithms governing who gets loans, insurance and more: supposedly neutral, unchallengeable and infallible “AI” systems are embedding decades or centuries of real-world prejudices because they can only run down the tracks they have been given.

These situations should help us see that governing the legality and morality of AI and algorithms is at once immensely complex and very simple. On the complex side lies the challenge of handling algorithms that have become so complicated even their creators often cannot explain how they work, and that are often in the hands of multinational companies over which no single government can assert jurisdiction. Working out the regulatory, legal and ethical codes for such diverse algorithms with such diverse purposes is, by this reasoning, an effort of staggering complexity.

On the flip side, it is the very neutrality of algorithms, and of AI as we know it today, that makes the task simple. At present, anything resembling real intelligence is far beyond the scope of modern AI, meaning such tools are simply the modern equivalent of a train or a factory machine: if either causes harm through intent or negligence, we blame its operator or owner.

What worked for trains can, for now, work for algorithms. Where we need to take greater leaps of imagination, if we want AI to lead to a better world, is in asking who is building algorithms and for what purpose, and in how we fund the development of algorithms for social good rather than just private profit.

This is not a question AI can answer for us. Faced with this question, “Rose” ― one of the most advanced AI chatbots in the world today ― could answer only, “I wish I could explain it to you but I think it is just an instinct.” This is one humanity will have to solve for itself.

This article first appeared on the Ethical Journalism Network’s website.

For more content and to be part of the “This New World” community, join our Facebook Group.

HuffPost’s “This New World” series is funded by Partners for a New Economy and the Kendeda Fund. All content is editorially independent, with no influence or input from the foundations. If you’d like to contribute a post to the editorial series, send an email to thisnewworld@huffpost.com
