Who Will Shape the Brave New World?
Should we build artificial intelligence just because we can? The technological race has become a strategic battlefield in the geopolitical war over global hegemony, says Zuzanna Lewandowska, social entrepreneur, NGO executive and co-founder of initiatives around education and ethical leadership in Poland.
In 2004, The Facebook emerged from a Harvard dorm room. Before dropping “The” from its name and going global, Facebook was truly (whether Mark Zuckerberg likes to admit it or not) an iteration of Facemash – an app where campus students could rank their female peers by attractiveness. Could any of us have predicted back then that, just a decade or so later, social media would become a key influence in shaping national elections – not to mention Russia’s interference with the major platforms to manipulate the US election results and the Brexit referendum?
Technological progress has transformed the world as we know it at lightning speed, from the world wide web a while ago to the metaverse today. Time will show what comes next. Superhumans? Maybe. This rapid revolution definitely raises questions as to who should be responsible for determining the path of our global progress – and whether we have really had the chance to think about it. And progress is a game of geopolitics. Where you live determines whether you have access to the Internet (40% of the world’s population is still offline – and it is not the Anglo-Saxon hemisphere) and to cutting-edge technologies (the same, for now).
Will the growth of technology further widen the divide between the rich and the poor, or is there rather a chance that the ‘singularity’ will bring us global prosperity and peace, as Carl Sagan would have loved?
Should fundamental decisions about the future of our planet be in the hands of democratically elected governments, or are they increasingly dictated by the business decisions of large corporations? And if so, is that a good thing? Some public intellectuals, like Jeremy Rifkin (The Third Industrial Revolution) or Steven Pinker (Enlightenment Now), are known for having added a lot of optimism to the ‘bucket’ of possible scenarios for the future, envisioning technology as a means to build a Brave New World. But that was all a while ago – when hardly anyone took seriously a scenario in which Western supremacy might be questioned. Today, we have China rising to power and Russia undermining the post-Yalta global security order. In Survival of the Richest, Douglas Rushkoff writes about how tech gurus are building luxurious bunkers to hide in come a doomsday scenario – to a large extent the effect of technologies they helped develop – and are willing to pay big fees to be consulted on how best to do it.[i]
The relationship between governance, business, and technology is a topic of our time. And that is a good sign. Leading think tanks and conferences on government and innovation, such as Davos and DLD, hold panel discussions on the future of technology. The Aspen Institute is no exception: its Socrates Seminars give invited CEOs and cross-industry leaders an opportunity to discuss fundamental questions about the future. A few weeks ago, I was invited by Aspen Institute CE to take part in one such Socrates seminar held near Prague, and this article briefly presents my main observations and reflections – those of someone far from an industry expert – from that laborious and fascinating two-day discussion.
So, whether we like it or not, arguably, the future of humanity is today in the hands of a few private individuals.
They are outstanding scientists and engineers behind advancements in artificial intelligence, robotics, quantum physics, biotechnology, and other fields. At Harvard Medical School, Dr. David Sinclair, a biologist and professor of genetics, is pioneering aging research and reports that aging can already be reversed in mice and, to some extent, in humans. He himself admits to using some of the compounds he has developed; he is 54 and looks 34. Sinclair’s revolutionary work might help us fight Alzheimer’s and other diseases, but it also opens an enticing doorway to engineering humans towards immortality.
Robots Instead of Humans?
Advances in space exploration are increasingly in the hands of private visionaries rather than states. A private company helping us get to Mars, courtesy of Elon Musk, is no longer a sci-fi scenario (like the story in the Apple TV series For All Mankind).
Slow but steady advancement is also being made in robotics. “We’re far from making human robots,” I hear from my friend Limor Schweitzer, CEO of the robotics company RoboSavvy. It would take about a decade to recreate the complexity of a human hand in a robot, and much longer to recreate the gross and fine motor skills of the entire human body. So robots are not going to replace humans anytime soon. But basic robots are already replacing people in tasks such as simple warehouse logistics at mega-companies like Amazon and Zalando.
As most experts agree, sooner rather than later, autonomous cars will help eliminate deaths caused by driver error. In the US alone, eliminating such errors over just two years would save as many lives as the country lost in the entire Vietnam War.[ii] But this technology will also eliminate the jobs of the millions of people who support their families as drivers. Certain changes seem inevitable, you may say. But are we giving enough attention at the state level to dealing with the ramifications of technological advances, for instance by providing job transition programs to those most affected? Will governments keep pace with these changes, or are big tech corporations in the driver’s seat – shall I say – for good? And to what extent should corporations assist nation-states in managing the social consequences of their industrial revolutions?
The Technological Race as a Strategic Battlefield
The big question, finally, is who will win the geopolitical AI race. “Whoever runs artificial intelligence in 2030 will rule the world until 2100,” notes Indermit Gill of the Brookings Institution in an article in which he refers to Vladimir Putin’s statement that “the one who becomes a leader in this sphere will be the ruler of the world.”[iii]
The renaissance of AI, visible in recent years in the technology’s increasingly broad usability, will dramatically change the world as we know it within the next few years. In a recent interview at the TransformX conference, Eric Schmidt (ex-Google CEO) and Alexandr Wang (CEO and founder of Scale AI) both stressed the sense of urgency in the AI industry. “It is important for people, companies but also governments to realize the speed in which this technology is moving and fundamentally embrace the fact that they have to reinvent themselves in a new paradigm or someone is going to make that new invention” (Wang).
As Schmidt observes, advancements in AI today are made either by big tech companies or by well-funded startups. This means that many large companies will miss the bus – for many reasons, chief among them that they have not been able to incorporate these new modalities, platforms, and data models into their existing data flows. In a few years, such companies are likely to become obsolete.
And in many ways, so may some countries. The technological race has become a strategic battlefield in the geopolitical war over global hegemony. We see NATO’s growing dependence on tech companies to maintain an advantage over the technologies owned by authoritarian regimes. In Ukraine, we see how access to advanced technology can increase a smaller nation’s chances of standing up to a super-state. Also unprecedented was how a private technology company (SpaceX) helped change the rules of the game during the war by donating 20,000 Starlink satellite terminals to Ukraine.
The US ban on exports of advanced computer microchips to China has escalated the trade war between the world’s two most powerful economies, in an attempt to cripple China’s ability to power its AI technology. The whole world is holding its breath as possible scenarios of an escalating global rivalry in the South Pacific unfold. And if it comes to that, who will win?
“May you live in interesting times,” the popular saying goes (purportedly – and quite appropriately – an old Chinese curse). We are living through a moment of global cognitive dissonance, moving from decades of relative global peace and prosperity into years, if not decades, of uncertainty and abrupt change.
And because super-advanced technologies, whose short- and long-term effects are simply unknown to us, are at the center of this global shift, they may be equally a blessing or a curse: they may help put out the fire, or, on the contrary, they may end up threatening the future of humanity. If today we (the Western world) are somewhat ‘ok’ with the fact that technology developed by the West defines the way our societies grow, then ask yourself: would you be equally ok if highly advanced AI were in the hands of China? By this logic, we should build AI not only because we can, but because we have to. And we have to do it fast. The anthropocentric vision of one planet for all – one humanity – may be fading away for good as a utopian wish.
Indeed, the key to all of the above questions is whether it is possible to create global leadership in navigating technological transformation. In their fascinating book The Future of Our Freedom[iv], the French economists Jean-Hervé Lorenzi and Mickaël Berrebi ask directly: who should be responsible for deciding which direction humanity will take? Because technology is neither good nor bad. It is ambivalent.
The answer to the timeless question of history – whether science and technology mean human liberation or enslavement – depends on us.
And these are some of the big questions that we discussed at the Aspen Seminar. If it were entirely up to us, how would we propose to lead the development of artificial intelligence and other super-technologies? Should they be regulated only by markets, by public-private partnerships, or by a specially appointed international organization? How would we mitigate some of the possible social costs of this transformation? Finally, should AI be the property of all mankind – and if not, where would this lead us?
A Human-Centric Approach Is Not Just Black and White
Moral issues lie at the heart of any historical change. The caveat is that there is no unifying morality that binds all of humanity. If our choice is to develop human-centered technology (as proposed by some of my fellow Aspen Seminar panelists, with whom I personally deeply agree), then, as part of Western culture, we have a general understanding of what values would become fundamental to this effort. But “human-centric” can mean many different things in different cultures.
When I was a child in the late years of communism in Poland, there was a popular joke: “In the Soviet Union, everything is done with man in mind. And that man has a mustache and lives in the Kremlin.” Decades after Stalin, we wouldn’t be especially surprised to hear the same joke about Putin. Today, Russia, despite Putin’s ambitions, still lags behind in the global AI race. But China is the closest runner-up to the US, and its president, Xi Jinping, freshly confirmed for a third term, leaves no doubt in the eyes of the international community that China is gearing up to dethrone the US as a global superpower. We can only speculate whether the theatrical manner in which his predecessor, the 79-year-old Hu Jintao, was escorted out during the closing ceremony of the Communist Party Congress was part of that grand PR scenario.
The way in which China approaches individual liberties, data protection, personal privacy, and other subjects that carry a completely different weight in China than in the West shows us how this culture might navigate AI and how it might use it.
The potential benefits of advancing super-technologies like AI are unparalleled. They can push humanity into novel territories: increasing productivity, helping balance the demographic challenges of the future, eliminating human error through automation, assisting in complex data-driven decision-making, and much more. But it is our responsibility to ask questions about all the avenues this revolution could take us down, including its harmful flip sides.
And if it is not us, individual members of society, who will make the ultimate decisions about the final result of this fast-paced change, then at least we should be proactive about remaining part of the conversation. In spite of its intense development over recent years, technology is still only a means of providing us with answers. For now, at least, it does not replace us in asking ethical questions. That is our task. One quote attributed to Socrates comes to mind: “The secret of change is to focus all of your energy not on fighting the old, but on building the new.” One way to do that, as Socrates teaches, is by asking the right questions.
This article was published thanks to the author’s participation in the Socrates Seminars held by the Aspen Institute CE.
[i] Douglas Rushkoff, “The super-rich ‘preppers’ planning to save themselves from the apocalypse”. The Guardian, September 4, 2022. https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff
[ii] Peter Hancock, “Are Autonomous Cars Really Safer Than Human Drivers?”. Scientific American. https://www.scientificamerican.com/article/are-autonomous-cars-really-safer-than-human-drivers/
[iii] Indermit Gill, “Whoever leads in artificial intelligence in 2030 will rule the world until 2100”. Brookings, January 17, 2020. https://www.brookings.edu/blog/future-development/2020/01/17/whoever-leads-in-artificial-intelligence-in-2030-will-rule-the-world-until-2100/
[iv] Jean-Hervé Lorenzi and Mickaël Berrebi, L’Avenir de notre liberté (2017). Polish translation: Przyszłość naszej wolności. PIW, 2019.