The issue of real and fake news will become more important as Silicon Valley platforms like Google, YouTube, Facebook, and Twitter become central to the news cycle, placing generated fake news alongside fact-checked, legitimate content. It is also getting harder to differentiate between real and artificially generated posts. Video coverage, particularly live video, was at one point viewed as a way to cut through this, appreciated for its unfettered, instant, visceral, and immersive qualities. As we have seen, it has proven particularly effective in citizen journalism.
But even live video and images will soon have the potential to be doctored. In the past two years, researchers have made massive advances in manipulating images, video, and sound with machine-learning programs that can continually refine their own output. Early successes suggest that within the next decade, highly sophisticated fake moving and still imagery may be possible, making it difficult to believe our eyes as well as what we read.
The issue of fake news is already a pressure point for Silicon Valley’s tech giants, as brands and big media companies withdraw advertising after discovering their ads placed next to extremist or fake content. But this has only been discussed in the most overt and controversial cases. What about the grey space in between? Branded features and partisan “news” content. Inaccuracies in AI-generated written news reports. Or, even more gravely, online propaganda designed to sway public opinion in elections, as intelligence reports revealed Russia did in the run-up to both the American and French elections. The British government has also asked Mark Zuckerberg for evidence of whether Russia-backed accounts affected the EU referendum and its general election.
Fake news is becoming a key political issue. French advertising giant Havas, along with the UK government, the Guardian, the BBC, and Transport for London, withdrew ads from Google in 2017 after Google failed to offer guarantees about where ads would be placed. Havas attributed the withdrawal to Google’s inability to “provide specific reassurances, policy, and guarantees that their video or display content is classified either quickly enough or with the correct filters.”
France has taken things a step further. In January 2018, President Emmanuel Macron announced a new law to combat fake news, under which social media platforms would face tougher rules during elections over the content they allow online. Macron argued that deliberate attempts to blur the lines between truth and lies were undermining people’s faith in democracy. His new rules include tougher requirements to show the sources of apparent “news” content, and limits on how much can be spent on sponsored news material.
In his announcement, Macron talked about the lowered cost of such activity; he said it was now possible to propagate fake news on social media for just a few thousand euros. (Which rather puts the scale of Russia’s $1.25 million per month during the lead-up to the U.S. election, and its potential impact, in perspective.) “Thousands of propaganda accounts on social networks are spreading all over the world, in all languages, lies invented to tarnish political officials, personalities, public figures, journalists,” he said.
But there’s a question of how to resolve these tensions. In the wake of 2016 revelations that Facebook staff had curated its news feed toward more liberal stories, what most concerned Microsoft’s Danah Boyd was not that the feed was curated, but the widely held consumer misconception that an algorithm sorting and listing information would be less biased than humans doing the same thing; that code is somehow more neutral than a human in curating a feed of stories. In a May 2016 piece titled “Facebook Must Be Accountable to the Public,” published on Data & Society’s platform and by the Huffington Post, she wrote: “What is of concern right now is not that human beings are playing a role in shaping the news—they always have—it is the veneer of objectivity provided by Facebook’s interface, the claims of neutrality enabled by the integration of algorithmic processes, and the assumption that what is prioritized reflects only the interests and actions of the users (the ‘public sphere’) and not those of Facebook, advertisers, or other powerful entities.”
She added: “There was never neutrality, and never will be... I have tremendous respect for Mark Zuckerberg, but I think his stance that Facebook will be neutral as long as he’s in charge is a dangerous statement. This is what it means to be a benevolent dictator, and there are plenty of people around the world who disagree with his values, commitments, and logic. As a progressive American, I have a lot more in common with Mark than not, but I am painfully aware of the neoliberal American value systems that are baked into the very architecture of Facebook and our society as a whole.”
Facebook’s handling of the napalm photo incident, though well intended, revealed the cultural blindness in Silicon Valley’s perspective.
All of which comes down to a paradox, as Silicon Valley captures more and more control of the news cycle and online speech. If we expect these companies to take more control of content online and to protect us from fake news and abuse, what guidelines do we set? Is it better if an algorithm does the policing, or a Facebook-selected group with its own personal biases? Then there’s the question of what, really, Facebook is. A media company? A social network? The answer has implications for the responsibility it holds.
(Excerpted with permission from Silicon States, Lucie Greene, published in India in 2019 by Harper Business.)