- Supercomputers are ultrafast machines used to manage and interpret vast quantities of data.
- Artificial Intelligence (AI) relies on and benefits from the global race for faster supercomputers.
- Ultra-fast data processing brings with it far-reaching and fundamental ethical questions.
Since the beginning of this year, there has been a lot of hype, skepticism, cynicism, and confusion surrounding the concept of the metaverse.
For some, it has added to the confusion of an already elusive world of augmented reality and mixed reality. But for the well-initiated, the metaverse is a landmark moment in the extended reality world; a world approaching the ‘second life’ that many have long predicted.
News that some of the world’s top tech firms are rapidly developing AI supercomputers has further fueled that anticipation.
But what will the entry of supercomputers mean for the metaverse and virtual reality — and how can we manage it responsibly?
What is a supercomputer?
Simply put, a supercomputer is a computer with a very high level of performance. That performance, which far outclasses any consumer laptop or desktop PC available on the shelves, can, among other things, be used to process vast quantities of data and draw key insights from it. A supercomputer is built as a massively parallel arrangement of computers, or processing units, which together can perform the most complex computing operations.
Whenever you hear about supercomputers, you’re likely to hear the term FLOPS — “floating point operations per second.” FLOPS is a key measure of performance for these top-end processors.
Floating-point numbers, in essence, are numbers with decimal points, including very long ones. These decimal numbers are key when processing large quantities of data or carrying out complex operations on a computer, and this is where FLOPS comes in as a measurement. It tells us how a computer will perform when managing these complicated calculations.
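To make the unit concrete, here is a minimal sketch (in Python, purely illustrative) that estimates a machine's single-thread FLOPS by timing a simple multiply-add loop. Interpreted-language overhead dominates, so the figure will be many orders of magnitude below a supercomputer's peak; the point is the definition itself: floating-point operations divided by elapsed seconds.

```python
import time

def estimate_flops(n: int = 1_000_000) -> float:
    """Rough single-thread FLOPS estimate: run n iterations of a
    multiply-add (2 floating-point operations each) and divide the
    operation count by the elapsed wall-clock time."""
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 0.5  # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # total floating-point ops / seconds

if __name__ == "__main__":
    print(f"~{estimate_flops():,.0f} FLOPS")
```

By this yardstick, a petaflop machine performs 10^15 such operations every second.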
The supercomputer market
The supercomputer market is expected to grow at a compound annual growth rate of about 9.5% from 2021 to 2026. Increasing adoption of cloud computing and cloud technologies will fuel this growth, as will the need for systems that can ingest larger datasets to train and operate AI.
The industry has been booming in recent years, with landmark achievements helping to build public interest, and companies all over the world are now striving to outpace one another on their own supercomputer projects.
In 2008, IBM’s Roadrunner was the first to break the one-petaflop barrier — meaning it could process one quadrillion floating-point operations per second. According to one study, the Fugaku supercomputer, based in the RIKEN Centre for Computational Science in Kobe, Japan, is the world’s fastest machine. It is capable of 442 petaflops.
Meta’s AI supercomputer
In late January, Meta announced on social media that it would be developing an AI supercomputer. If Meta’s prediction holds, it will one day be the world’s fastest supercomputer.
Its sole purpose? Running the next generation of AI algorithms.
The first phase of its creation is already complete, and the second phase is expected to be finished by the end of 2022. At that point, Meta’s supercomputer will contain some 16,000 GPUs, and the company has promised that it will be able to train AI systems with more than a trillion parameters on data sets as large as an exabyte — or one thousand petabytes.
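A quick back-of-envelope calculation puts those figures in perspective. The sketch below assumes 32-bit floating-point parameters and decimal SI units (both assumptions, not details Meta has confirmed):

```python
# Back-of-envelope scale of the numbers Meta quotes.
# Assumptions: 4 bytes per parameter (32-bit float), SI units.
BYTES_PER_FP32 = 4
PARAMS = 1_000_000_000_000   # one trillion parameters
PETABYTE = 10**15            # bytes
EXABYTE = 10**18             # bytes

weights_tb = PARAMS * BYTES_PER_FP32 / 10**12
print(weights_tb, "TB just to store the model's weights")   # 4.0
print(EXABYTE / PETABYTE, "petabytes in one exabyte")       # 1000.0
```

In other words, even a trillion-parameter model occupies only a few terabytes; it is the exabyte-scale training data, and the compute needed to pass over it, that demands a supercomputer.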
While these numbers are impressive, what does this mean for the future of AI?
Metaverse and supercomputers: the world ahead
Meta has promised a host of revolutionary uses of its supercomputer, from ultrafast gaming to instant and seamless translation of mind-bendingly large quantities of text, images and videos at once — think about a group of people simultaneously speaking different languages, and being able to communicate seamlessly. It could also be used to scan huge quantities of images or videos for harmful content, or identify one face within a huge crowd of people.
The computer will also be key in developing next-generation AI models; it will power the metaverse and serve as a foundation upon which future metaverse technologies can rely.
But the implications of all this power mean that there are serious ethical considerations for the use of Meta’s supercomputer, and for supercomputers more generally.
How is the World Economic Forum ensuring the ethical development of artificial intelligence?
The World Economic Forum's Platform for Shaping the Future of Artificial Intelligence and Machine Learning brings together global stakeholders to accelerate the adoption of transparent and inclusive AI, so the technology can be deployed in a safe, ethical and responsible way.
- The Forum created a toolkit for human resources to promote positive and ethical human-centred use of AI for organizations, workers and society.
- From robotic toys and social media to the classroom and home, AI is part of life. By developing AI standards for children, the Forum is creating actionable guidelines to educate, empower and protect children and youth.
- The Forum is bringing together over 100 companies, governments, civil society organizations and academic institutions in the Global AI Action Alliance to accelerate the adoption of responsible AI in the global public interest.
- The Forum’s Empowering AI Leadership: AI C-Suite Toolkit provides practical tools to help companies better understand the ethical and business impact of their AI investment. The Model AI Governance Framework features responsible practices of leading companies from different sectors that organizations can adopt in a similar manner.
- In partnership with the UK government, the Forum created a set of procurement recommendations designed to unlock public-sector adoption of responsible AI.
- The Centre for the Fourth Industrial Revolution Rwanda is promoting the adoption of new technologies in the country, driving innovation on data policy and AI – particularly in healthcare.
Contact us for more information on how to get involved.
Ethics and AI
New technologies have always demanded societal conversations about how they should be used — and how they should not. Supercomputers are no different in this regard.
While AI has been brilliant at solving some of the world’s large and complex problems, flaws remain. These flaws are not caused by the AI algorithms themselves; rather, they are a direct result of the data fed into the AI systems.
If the data fed into systems has a bias, then the result of an AI calculation is bound to carry that bias — and, if the metaverse and virtual reality do become a ‘second life,’ then are we bound to carry with us the flaws, prejudices and biases of the first life?
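As a toy illustration of how skewed training data can be surfaced before it ever reaches a model, the sketch below (using hypothetical group labels, not any real dataset) computes each group's share of a dataset. Real bias audits go much further, checking label rates and per-group error rates, but an imbalance this simple is often the first warning sign.

```python
from collections import Counter

def group_balance(labels):
    """Return each group's share of the dataset. A heavily skewed
    split is a simple signal that a model trained on this data may
    inherit the imbalance. (Illustrative only.)"""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical dataset: 90 examples from one group, 10 from another.
sample = ["group_a"] * 90 + ["group_b"] * 10
print(group_balance(sample))  # {'group_a': 0.9, 'group_b': 0.1}
```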
The age of AI also brings with it key questions about human privacy and the privacy of our thoughts.
To address these concerns, we must seriously examine our interaction with AI. When we look at the ethical structures of AI, we must ensure its usage is transparent, explainable, bias-free, and accountable.
We must be able to explain why a certain calculation or process was initiated in the first place and what exactly happened when the AI ran it, ensure there was no initial human bias against any group or idea, and be clear about who should be held accountable for the results of a calculation.
It remains to be seen whether these supercomputers and the companies producing them will ensure that these four key areas are consistently and transparently addressed. But it will become all the more pressing as they continue to wield more power and influence over our lives — both online and in the real world.
The surge in supercomputing will accelerate parallel computing and enable use cases at the speed of thought. We see a future where a combination of supercomputers and intelligent software will run on a hybrid cloud, feeding partial workflows of computation to a quantum computer, a form of computing that experts believe has the capacity to exceed even that of the fastest supercomputers.
What remains to be seen is how this era will fuel the next generation of metaverse experiences.