WASHINGTON ― Before the scandal broke over their improper use of Facebook user data, British consulting firm Cambridge Analytica and its parent, SCL Group, plied their trade of political subterfuge for clients in developing countries.
Call it techno-colonialism — the idea that the poorer parts of the world are where you go to conduct your beta tests. Like so much else about Cambridge Analytica and SCL, they were merely putting a sinister flourish on something the tech industry does as a matter of course. The most obvious example comes from the company that most wants to distance itself from Cambridge Analytica: Facebook.
Back in 2015, the social media giant faced a wave of criticism for deploying a new program in India, called Free Basics, that gave poor users free mobile internet access — but only to Facebook and a handful of approved services. One of the company’s defenders, venture capitalist Marc Andreessen, blurted out on Twitter, “Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?” (He would later apologize.)
But the kerfuffle didn’t stop Facebook from continuing to treat smaller and developing countries as its corporate testing ground.
In the fall of 2017, Facebook decided to run a test in six countries ― Bolivia, Cambodia, Guatemala, Serbia, Slovakia and Sri Lanka ― that split users’ feeds into two separate streams of posts. One was devoted to updates from family and friends; the other to official pages of organizations like news sites and political figures. No one from Facebook told anyone in those countries the company would be running this experiment. News publishers were left in the dark. So were non-governmental organizations, political parties and activists.
The result was a nightmare. News publishers, particularly smaller, independent media, immediately noticed a collapse in traffic to their sites. NGOs that relied on Facebook to communicate with the people they were trying to serve saw their online interactions evaporate. Activists similarly found that their messages had disappeared from the main feed.
Facebook’s goal was to see whether filtering news and politics into a separate feed would increase “meaningful conversations,” a vague metric that has nonetheless guided how the company is changing its algorithm to combat the spread of propaganda on its site. Had the experiment gone well, the product change might have been brought to larger, wealthier nations. But it didn’t go well, and Facebook ended it on March 1.
People in the original six countries were left to wonder why they were the guinea pigs. Fabiola Chambi, web editor for the Bolivian newspaper Los Tiempos, told The New York Times that she wanted to know, “Why Bolivia?”
“We don’t have any way to hold [Facebook] accountable either, aside from calling it out publicly,” Stefan Dojcinovic, editor-in-chief of the Serbian site KRIK, wrote in an op-ed. “Maybe that’s why it has chosen to experiment with this new feature in small countries far removed from the concerns of most Americans.”
Filip Struharik, an editor at Slovakia’s Denník N, wrote, “Facebook would not dare undertake such crazy experiments in big countries like Germany. There it would not only face exponentially larger resistance, but would also risk politicians responding with a push for tough regulation.”
Cambridge Analytica and SCL went gallivanting about the developing world, too ― although unlike Facebook, they were explicit about wanting to affect events on the ground.
The company bragged about running a voter suppression campaign for a political party in Nigeria’s 2007 election, which election observers said suffered from extreme levels of fraud, violence and intimidation. The Guardian reported that Cambridge Analytica was even offered the hacked emails of Muhammadu Buhari, the opposition politician running against their client, President Goodluck Jonathan, in Nigeria’s 2015 election.
The firm’s leaders have admitted to running a campaign in Latvia to heighten divisions between ethnic Latvians and ethnic Russians for political purposes.
An undercover sting by reporters at U.K. Channel 4 revealed the power Cambridge Analytica had in Kenya, where it worked on successive elections that were marred by a barrage of divisive digital propaganda and disinformation favoring President Uhuru Kenyatta’s ruling Jubilee Party.
“We have rebranded the entire party twice, written their manifesto, done two rounds of 50,000 surveys,” Mark Turnbull, managing director of Cambridge Analytica’s political division, told undercover reporters in speaking about his firm’s work for the Jubilee Party. “Then we’d write all the speeches and we’d stage the whole thing. So just about every element of his campaign.”
The elections on which Cambridge Analytica worked in African nations shared fear-based themes that would now be recognizable to anyone in the U.S. Kenyans were warned in ads that the election of opposition leader Raila Odinga would lead to an invasion of Ethiopian refugees, attacks from Somalian terrorists and an epidemic of disease. In Nigeria, Cambridge Analytica cooked up a video that claimed electing Buhari would mean “Sharia for all.”
Facebook can try to deflect criticism for aiding the actions of Cambridge Analytica by suggesting the firm’s perversion of Facebook data was somehow unique. But the history of the two companies and their like-button colonialism suggests otherwise. Now that politicians and regulators in the U.S. and European Union are paying attention to the abusive practices of both Cambridge Analytica and Facebook, perhaps they could examine how these companies are manipulating people abroad before bringing their tactics home.