
Is The Answer To Facebook, Google, Amazon More Facebook, Google, Amazon?

In The Big Nine, futurist Amy Webb lays out a provocative case for why we need to further empower, rather than regulate, the world’s biggest and most powerful corporations.
A file image of Amy Webb.
Elena Seibert

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity is a provocative new book by New York University professor and professional futurist Amy Webb. At a time when the role of big tech in every aspect of our lives has prompted fears that mass automated surveillance, biased algorithms and the unregulated vacuuming up of personal data will make us poorer, more unequal, and less free, Webb suggests that striking an uneasy alliance with corporations is probably better than relying on national governments to protect us.

Further empowering Google, Microsoft, Amazon, Facebook, IBM, and Apple (or the G-MAFIA, as Webb puts it) is, she argues, humanity’s best bet against regressing into a Chinese-style surveillance state powered by artificial intelligence developed by their Chinese rivals Baidu, Alibaba and Tencent.

For a book that seeks to understand the future, it is interesting how rooted Webb remains in a nation state-based, China-versus-USA framework of analysis. The limits of this framework are apparent when contrasted with how so-called “American” technology companies have acquiesced to working with the Chinese government. Last year, for instance, Apple agreed to store the iCloud data of Chinese customers with a state-owned corporation.


Non-American, non-Chinese readers will be chagrined to find that the only course of action available to them in Webb’s schema is to pick a side.

This interview has been lightly edited for clarity.

Your book lays out three scenarios: An optimistic scenario in which US tech companies triumph, a pragmatic scenario in which China wins in the end, and a catastrophic scenario in which China wins in the beginning. Which scenario do you think is the most likely?

I’m a quantitative futurist and my job is to use data to model out plausible futures. It is mathematically impossible to figure out how the future will turn out. You can do that for a discrete outcome like an election, even when those elections are complicated, but when it comes to how technology and everyday life will intersect, there are too many variables.

So that’s why we use data-driven scenarios, and there are three types of scenarios — optimistic ones, neutral ones — neutral or pragmatic — and catastrophic ones. I just want to point out that an optimistic scenario does not mean that life in the future is amazing or wonderful. It just means we made the best possible decisions we could today.

Likewise the catastrophic scenario means we saw all of the evidence and data but we chose to go in a different direction and made the worst possible decisions we could have. And the neutral framing is that we saw all of the data and evidence and we decided to preserve the status quo.

In terms of which one is most likely, I worked on the book over a two-year period. There have been some changes — the OECD has proposed some rules around artificial intelligence, the European Union has done the same, and we are starting to see hints of regulation in the United States. Stanford University and Georgetown University have launched enormous new programmes that are dedicated to thinking about ethics and AI [Artificial Intelligence].

So the good news is that there is a lot of movement. The bad news is that there is no coordination. So outside of China everybody is going it alone. Canada is proposing its own AI ethics, India has proposed its own. So there is not a lot of coordination. At the same time however, China is highly coordinated, and is in the process of rolling out its own version and development track of AI all over the world.

So because of that, we are probably not going to see the optimistic framing happen — it is possible, but the only way the optimistic scenario happens is if we can all figure out a way to coordinate and collaborate and accept some short-term losses for long-term gains.

So instead what I think is a lot more likely is the pragmatic or catastrophic framings, and unless we figure out a better way to work alongside China, I think the catastrophic framing is looking more and more plausible. I’m especially concerned because the US is antagonising China and we are not making any effort at all to develop a better relationship with it.

So I’m very concerned.

The current concerns about China’s quest to win the AI race seem reminiscent of cold-war concerns around the Soviet Union’s progress in the space race. If capitalism is supposed to be a more creative and efficient system than a command economy, why do we believe that China’s state-driven push on AI will win out?

I think one problem is that for a very long time, most of the world has looked at China as a place where ideas are stolen, intellectual property is stolen, and China is simply copying and pasting.

That was certainly true for a very long time.

What has changed while we were not paying attention is that China has learnt how to innovate, and it is innovating not just in electronics but in economics and diplomacy — which goes to the Belt and Road Initiative, a sweeping policy that is intended to shift the global economy.

China has talked about it as a way to open global trade routes and to increase trade and development. China is building bridges and roads and accumulating foreign debt, but it is also laying fibre, rolling out 5G, and building digital infrastructure.

So we need to keep all of that in mind while we also pay attention to how AI will usher in automation in new ways.

This is why we need to think not just about what the future of our workforce looks like, but also about what an economy looks like. What does a 21st-century economy look like when so much of that economy is tethered to technologies that a lot of people don’t understand?

We’re not just making and selling widgets at scale, we are talking about automating decision making, automating processes. A lot of big professional services firms — their entire business models are going to change in the wake of automation.

There are seismic economic shifts that are already starting to rattle our markets, our workforce, our economies and they are complex and they are multinational.

Staying with China, I was struck by how much the nation state figures in your thinking. For much of the 1990s and 2000s — the heyday of globalisation — the promise was that trans-national corporations and technology would slowly make the nation-state, as a category, less and less relevant.

It seems like the promise of new communications technology is that it will shrink the distance between individuals, strengthen our communities, help us collaborate and cooperate, and reduce political tensions — we’ve heard all of these promises multiple times, and that was certainly the promise of the internet.

It’s 30 years since Tim Berners-Lee proposed what became the World Wide Web. There is a disconnect, however, between our best laid plans on paper and how we function in everyday life.

You could look throughout history at democracy — American-style democracy — and see that this is the anomaly. We are the anomaly. Dictatorships and authoritarian rule — if you go really far back, that seems to be what humankind keeps gravitating back to. Democratic rule is much more difficult and market capitalism is a lot more difficult.

Those forms of governing, and that sort of economy, assume we are going to act in the best possible way, in a collaborative way, and as it turns out we don’t always come by that naturally.

“From a practical standpoint, how on earth is a company like Google supposed to continue its business practices if they have a dozen different guidelines and guard rails and regulatory frameworks that they are supposed to work under?”

If you look at the situation with misinformation, and rumours and gossip, and how we’ve abused social media, you can see that in play. In a way that gets exacerbated by AI, because AI systems are built by people, and those people’s ideas and worldviews are encoded in the systems, which are then used for various purposes.

In the US they are used to make money in a different way, and elsewhere in the world they are used for suppression and to track people.

So I don’t know that AI or any technology really could ever get rid of nation states.

At a time when the motives of big tech are under suspicion, I was struck by how much faith you place in big tech companies, and their founders, to do what’s right for humanity.

It may seem a bit Pollyanna-ish. I’ve spent a lot of time with people who work in these companies and obviously researching them and reviewing their work. My observation is that the people working at these companies and certainly the people leading these companies do not only have profit as their singular motive.

In fact they are working extraordinarily hard at advancing technologies so they can do huge big important things on behalf of humanity. Getting us to full automation with cars is cool, but it also means we can reduce the death rate on the road, so we’re able to reduce that as a public health thing.

We’re able to increase efficiencies, maybe we can reduce reliance on pollutants. I don’t think anybody is putting all of this work into autonomous vehicles simply because it seems like a good business idea.

Likewise Apple, Google and Amazon all have divisions that are working on the frontiers of healthcare. As Americans we have pretty decent healthcare in our country, but it is not democratised. There’s plenty of people who do not have access, and healthcare equity is a serious problem here. And I think they are all trying to figure out not just how to cure cancer but to use data and to use technology to get better health services to more people.

Autonomous cars and health equity are not the core businesses of any of the big tech companies, but it is those core businesses that are supporting these bigger ideas.

Our various governments around the world aren’t working on these things, and certainly not in the United States. Our government has stripped away funding for basic research in science and technology and we have left it to the private sector to figure it out for us. I really do not believe that the people running these tech companies are intentionally trying to harm anybody.

I think what is happening is that they have grand ambitions — and I’m grateful that they do because somebody has to work on our future. In order to achieve those grand ambitions they have to continue to appease their shareholders, and shareholders have put money into these companies and they want to make sure the companies are growing.

It is more a systemic problem than any one person making a poor choice.

That’s what I would say, at least, about the G-MAFIA [Google, Microsoft, Amazon, Facebook, IBM, Apple].

I don’t know Jack Ma personally, but again it is my observation that the Chinese companies also want great things, and they have grand ambitions and they have great ideas on how to make our futures better.

The difference in China is that they are beholden to the government, and you can’t function in China, and you can’t be a Chinese company without living within the norms and standards and cultural practices and going along with what Beijing says.

We have to figure out a way to bring everyone to the table and work together. Regulation, I think, is not the way to do it.

My biggest concern is that we are seeing calls for regulation and new types of regulation popping up all over the place. So from a practical standpoint, how on earth is a company like Google supposed to continue its business practices if they have a dozen different guidelines and guard rails and regulatory frameworks that they are supposed to work under?

It makes no sense.

But China is possibly the most regulated country right now, and they are supposed to win this AI race.

That’s regulation in a different way though. The regulation in China has more to do with playing into edicts that have been determined by the Chinese government for its longer term plans. The regulation in China is about pushing everyone forward in a particular direction. The regulations in the United States are punitive, I think.

One of the criticisms around letting tech companies solve problems is that they don’t always choose what is good for everyone. Take autonomous cars. We already have autonomous transportation — driverless subway trains. So rather than invest resources and R&D in autonomous cars for individuals, we should probably invest in mass transit. It is the same for healthcare — as a society we should look at publicly funded free healthcare.

I don’t know. Here’s my thought process.

If it’s the case that these companies are facing the kinds of regulation over the next two years that would cause them to slow down or change their business practices or fundamentally alter the work they are doing, I think they would want to avoid that.

The traditional way of avoiding that is to have lobbyists and other people either try to help write the regulations or get people to think otherwise.

But I think there is now enough momentum mounting from around the world that that may not be possible. That’s why I think that if we can come up with a framework of economic incentives, that might be a way to get these companies to come together and work together in a way that still preserves their ability to earn money, but also incentivises long-term planning and transparency in how our data is being mined and refined — because the alternative at this point is fighting that regulation, and these big tech companies don’t want that.

The basic thing is that I know what I’m suggesting sounds completely improbable; however, the kinds of regulation and frameworks and guardrails that all these companies are going to be facing over the next couple of years could be incredibly damaging.

They have shown us that they are not going to self-police. That is why I think we should offer them an alternative where everybody wins — maybe not as much as they would otherwise, but they also don’t lose as much as they would if regulation comes into play. That’s why I think it’s possible.

On a personal note, is it depressing being a futurist in a time like this?

I would not say it’s depressing. I would say my job is much harder now than it used to be. That’s because there is quite a bit in flux. If my job is harder, that means that people who make decisions in politics or government or companies must work a lot more diligently on confronting deep uncertainties like AI.
