If you had a symptom of cancer, what kind of doctor would you look for?

Would you look for a doctor with a high IQ who can diagnose your condition with great accuracy but has an arrogant and demeaning attitude, or a high-EQ (emotionally intelligent) doctor who treats you with care and compassion but leaves you less confident in the diagnosis?

Many people would probably choose the doctor with the high IQ regardless of bedside manner. But what if all doctors had an artificial intelligence (AI)-driven diagnostic tool that could deliver a highly accurate diagnosis? Many would then likely choose doctors with a high EQ: doctors who would empathize with your situation, communicate compassionately with you and your family, and treat you with warmth and care.

Yet you would still want a wise doctor who does not blindly follow an AI-based diagnosis. You would hope that the doctor balances AI’s diagnostic capabilities with good critical reasoning and a deep understanding of the strengths and limitations of AI. The doctor should be able to place the diagnosis in the context of circumstances that AI does not capture in its algorithms, such as your family situation and religious beliefs, demonstrating empathy not only in diagnosis and treatment but also in how that care is delivered to you.

As such, individuals need to embrace a new form of human intelligence beyond IQ and EQ to succeed in the AI age – digital intelligence (DQ) – which enables individuals to use technology effectively for the benefit of themselves, others and society as a whole. If a person with a high IQ is described as smart and a person with a high EQ as empathetic, then a person with a high DQ might be described as wise.

Intelligence has been humanity's existential reason for being on Earth, as, so far, we have been the only intelligent masters of the planet. With the fast evolution of AI, which will soon possess “intelligence” superior to our own, we must ask ourselves this fundamental question from a new perspective: what measures will we take to keep humans as masters in the AI age?

With the tangible threat of AI-based weapons, the immediate response has been to embed AI ethics – ethical principles that ensure zero harm to humans – in all AI machines. Human rights advocates argue for an ethical framework to ensure that AI does not harm people and society, but this is not enough. AI is everywhere, from the smartphones in our pockets to the virtual assistants in our living rooms and the AI in our work email. Our data is being captured and fed back to AI systems every second, everywhere. The most pressing matter, therefore, is that every individual becomes an ethical digital citizen. In fact, ethical and moral principles are at the very core of what makes a human, human.

Thus, digital DNA – the core building blocks of digital intelligence – is centred on the golden rule of “treat others as you want to be treated”. It has eight ethical components covering all dimensions of our digital life, each grounded in respect: for self, time and environment, life, property, families and others, reputation and relationships, knowledge, and human dignity.

Ironically, this more than 2,000-year-old wisdom applies to the AI age – not so much in religious and moral contexts as in the practical competencies needed for daily life and work. It translates into the learnable, practical competencies of DQ, from online safety and AI literacy to the job readiness individuals need for life and work in the AI age.

Let’s dissect the human decision-making process into the following five steps:

1) gathering the data we have (information gathering and synthesis);
2) developing information that we do not have (prediction);
3) judging based on the prediction (judgement);
4) making decisions based on those judgement calls (decision);
5) acting on the chosen decision (action).

In the AI age, machines effectively cover the first two steps: information gathering and synthesis, and prediction. A human’s digital intelligence should cover the remaining three steps – judgement, decision and action – a process rooted in digital DNA. We make decisions by weighing current circumstances against potential consequences, building on the machine’s predictions and guided by ethical principles.
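
To make this division of labour concrete, here is a minimal, purely illustrative Python sketch of how the five steps might be split between an AI system (steps 1–2) and a human decision-maker (steps 3–5). Every name, field and threshold below is hypothetical and is not drawn from any real diagnostic system.

```python
# Purely illustrative sketch: a hypothetical split of the five decision-making
# steps between an AI system (steps 1-2) and a human decision-maker (steps 3-5).
# All names, fields and thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Prediction:
    condition: str     # the condition the model considers most likely
    confidence: float  # the model's estimated probability (0.0 to 1.0)


def machine_steps(patient_record: dict) -> Prediction:
    """Steps 1-2: information gathering/synthesis and prediction (machine)."""
    # A real system would run a trained diagnostic model; this is a stub that
    # simply filters out missing values and returns a fixed placeholder result.
    _synthesized = {k: v for k, v in patient_record.items() if v is not None}
    return Prediction(condition="hypothetical_condition", confidence=0.92)


def human_steps(prediction: Prediction, context: dict) -> str:
    """Steps 3-5: judgement, decision and action (human)."""
    # Step 3: judgement - weigh the prediction against what the model cannot
    # see, such as an atypical history, family situation or patient beliefs.
    trust_prediction = (
        prediction.confidence > 0.8 and not context.get("atypical_history", False)
    )
    # Step 4: decision - trade off circumstances, consequences and ethics.
    if trust_prediction and context.get("patient_consents", False):
        decision = f"discuss and begin treatment for {prediction.condition}"
    else:
        decision = "order further tests and review the options with the patient"
    # Step 5: action - the human communicates and carries out the decision.
    return decision


if __name__ == "__main__":
    record = {"age": 54, "biopsy_result": "inconclusive", "smoker": False}
    prediction = machine_steps(record)
    print(human_steps(prediction, {"atypical_history": True, "patient_consents": True}))
```

The point of the sketch is the boundary, not the logic: the machine supplies synthesized information and a prediction, while judgement, decision and action remain with the human, informed by context the model never sees.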

Going back to the doctor’s example, here are eight DQ competencies rooted in digital DNA that are required for cultivating the best doctors in the AI age:

  • Digital identity (respect for oneself): Have self-efficacy as a digital doctor who can utilize AI in the best interest of patients;
  • Digital literacy (respect for knowledge): Understand AI technology and know how to best utilize its knowledge-generating and predictive functions as part of decision-making processes;
  • Digital security (respect for property): Know how to handle cyber-security issues related to digital medical systems and patient data;
  • Digital use (respect for time and environment): Use AI as a complementary tool in a balanced way by understanding the strengths and limitations of AI;
  • Digital safety (respect for life): Know the potential risks associated with technology and how to mitigate them;
  • Digital emotional intelligence (respect for families and others): Choose a treatment method that takes into account a patient's situation, financial status, emotions and condition, with empathy;
  • Digital communication (respect for reputation and relationships): Be aware that anything communicated about a patient, online or offline, can become part of a digital footprint – “data” fed back into digital systems – that can damage the privacy and reputation of both doctor and patient;
  • Digital rights (respect for human dignity): Understand a patient’s rights to personal data and privacy.

Ethical principles are no longer just a moral compass for individuals; when it comes to our digital DNA, they will become our way of doing business and living. This DNA will empower humans to regain the driver’s seat in the age of AI and be the master of technology, rather than its slave.