In 1996, as a graduate student in the Logic Group at Stanford, I wrote my PhD thesis on artificial intelligence (AI), titled Integrating Specialized Procedures into Proof Systems. I have remained a student of AI for most of my adult life, deeply influenced by the all-time great thinkers: McCarthy, Minsky, Kay, Sutherland and many others. For those of us who have studied this field deeply for many years, the applications of AI we see today barely begin to scratch the surface of the possible. Even as those applications grow more sophisticated, from fraud detection to autonomous vehicles, knowledge management to natural language processing, we are still very far from Marvin Minsky's "society of mind".
Infosys' recent research shows that most large companies are deploying, or planning to deploy, AI technologies. Businesses expect these technologies to bring disruption and, with it, growth and opportunity for themselves, their employees, their customers and other stakeholders. Yet for many, the potential for disruptive change also brings fear and uncertainty. Compounding this is the uncertainty introduced by tumultuous geopolitical events: Brexit, the US presidential election, demonetization in India, cybersecurity threats, the refugee crisis, global terrorism and more.
In this uncertain environment, it comes as no surprise that employees worry about the future of their jobs and about their privacy. These concerns remain despite attempts by employers to address them (80% of companies we surveyed plan to retrain and redeploy affected employees, and many say they carefully consider ethics and privacy/data protection as part of their AI efforts).
As with each technological disruption before it, humans must evolve our faculties and tools and move alongside the change; this time, however, the disruption caused by AI will move at a speed and engage on a scale previously unknown in human history. We must achieve a kind of symbiosis between minds and machines, with machines amplifying and actualizing thoughts and ideas from the human brain, and freeing it from mundane and repetitive cognitive tasks. The brain thus unleashed can then do the kinds of things no AI will ever do, such as seeing what is not there and imagining what it could be.
People have a right to be concerned about the irresponsible use of technology, and leaders have the opportunity, and indeed the imperative, to assuage those fears through empathy, action, education and communication. During the World Economic Forum's Global Future Council on AI & Robotics held recently in the United Arab Emirates, my peers in the AI community and I identified four fundamental areas where leaders must act now to shape an inclusive and safe future:
- Education: We must reorient our education system to scalable programmes of personalized learning. Every individual should have basic digital literacy to dispel fear around technologies such as AI and enable everyone to build and use these systems. This learning must continue throughout our lives — well beyond traditional classrooms — enabled by workplaces and institutions.
- Employment: Education will enable a more adaptive workforce and hone the skills necessary for future professions involving creativity, flexibility, agility, entrepreneurship and more. We must think holistically and focus on creating these new jobs, as well as democratizing the skills needed to perform them, as part of every AI project.
- Healthcare: AI will allow for a dramatic transformation in healthcare, by amplifying the capabilities of both mind and body. AI will also allow us to consider problems on a global scale and give us the tools to make advances in some of the most vexing problems in health and wellness.
- Ethics: Often, ethics and values fall by the wayside during the rush to capitalize on new capabilities. We must pay attention to how disruptive changes will severely affect real lives and real people, and work to preserve human dignity and integrity throughout the changes. Some of this will come naturally, but other aspects — such as a moral code for the engineers who will create tomorrow’s AI systems — must be purposefully considered and articulated.
The road to 2020 will be significant not only for the development of AI technologies, but also for the strategies that will govern our interaction with them for years to come. As leaders, we have a responsibility to reimagine education, employment and social frameworks, and to work diligently to bring everyone along with us into the new reality. The opportunity to transcend the boundaries of our imagination with technologies such as AI is almost limitless, and fear around this is natural. But it would be a tragedy to allow forces of fear and negativity to overwhelm the great potential before us for the purposeful and humane advancement of the human race.