
Generative AI holds great potential for those with disabilities - but it needs policy to shape it

Image: A man being fitted with a bionic arm. Managed well, Generative AI holds great potential for those with disabilities. (Photo by ThisisEngineering RAEng on Unsplash)

Yonah Welker
Explorer and Board Member - EU Commission projects, Yonah.org

This article is part of: AI Governance Summit
  • Generative AI can support people with disabilities by fueling existing assistive technology and robotics, learning, accommodation and accessibility solutions.
  • Generative AI also poses risks for those with disabilities.
  • Such risks are associated with transparency, understanding system outcomes, cognitive silos, potential misinformation and manipulation, privacy and ownership.

Generative AI-based systems can support people with disabilities by fueling existing assistive technology ecosystems and robotics, learning, accommodation and accessibility solutions. Ultimately, Generative AI can empower broader health and assistive solutions. However, it also poses unique risks associated with transparency, understanding system outcomes, cognitive silos, potential misinformation and manipulation, privacy and ownership. These rights and categories are underlined by AI-specific laws, such as the European AI Act, and digital frameworks, such as the EU Digital Markets and Digital Services Acts, the European Accessibility Act and the UN Convention on the Rights of Persons with Disabilities.

Since the introduction of the EU AI Act, mirrored frameworks, policies and acts worldwide have aimed to categorize AI systems by risk and attach corresponding compliance obligations. Generative AI systems have become the most debated category: they do not fall within the high-risk category, yet they still pose increased risks and breaches, especially for groups with disabilities and impairments.

Since then, governments and multilateral agencies, such as the Organisation for Economic Co-operation and Development (OECD), the WHO and UNESCO (with its guidance on Generative AI in education and research), have worked on dedicated reports, recommendations and guidelines reflecting the use of Generative AI in specific areas, such as education, healthcare and the workplace, as well as the related professional and user literacy and capacities needed to improve adoption.

Here, I consider how these risks can be further mitigated to protect groups with disabilities and cognitive, sensory and physical impairments, and what policy considerations should be made.

How Generative AI may support people with disabilities

AI algorithms and systems play a significant role in supporting and accommodating disabilities, from augmenting assistive technologies and robotics to creating personalized learning and healthcare solutions. Generative AI and language-based models further expand this impact and the R&D behind it. In particular, such systems may fuel existing assistive ecosystems and health, work, learning and accommodation solutions that require communication and interaction with the patient or student, social and emotional intelligence, and feedback. Such solutions are frequently used in areas involving cognitive impairments, mental health, autism, dyslexia, attention deficit disorder and emotion recognition impairment, which rely largely on language models and interaction.

With the growing importance of web and workplace accessibility (including the dedicated European Accessibility Act), Generative AI-based approaches can be used to create digital accessibility solutions, such as speech-to-text or image-to-speech conversion. They may also fuel accessible design and interfaces involving adaptive texts, fonts and colours that benefit people with reading, visual or cognitive impairments. Similar algorithms can be used to create libraries, knowledge and education platforms that serve the purposes of assistive accommodation, social protection and micro-learning, equality training and policing.
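
As an illustration of the speech-to-text use case, the sketch below wraps a pretrained speech-recognition model in a simple captioning helper. It is a minimal sketch rather than a production accessibility tool: it assumes the Hugging Face transformers library, uses the publicly available openai/whisper-small model as one possible choice, and the audio file name is hypothetical.

```python
# Minimal speech-to-text captioning sketch.
# Assumes: pip install transformers torch
from transformers import pipeline

# The model choice ("openai/whisper-small") is illustrative; any
# automatic-speech-recognition model could be substituted.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def caption_audio(audio_path: str) -> str:
    """Transcribe an audio file into text, e.g. to caption a spoken announcement."""
    return asr(audio_path)["text"]

if __name__ == "__main__":
    # "announcement.wav" is a hypothetical input file.
    print(caption_audio("announcement.wav"))
```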

Finally, approaches explored while building such accessible and assistive ecosystems may help to fuel the 'assistive pretext': technologies created for groups with disabilities are later adapted for the broader population. This includes 'neurofuturism', which fuels new forms of interaction, learning and creativity involving biofeedback, languages and different forms of media.

Generative AI and ethics

Despite the positive sides of AI systems serving accessibility and assistive technologies, disability is represented by a spectrum of parameters, conditions, stakeholders, interfaces and technologies, making it a complex task to serve properly.

In particular, a person with a disability may lack limbs or have a different body shape and posture. A blind person may not perceive visual cues or signals. Individuals with hearing impairments may not hear, and thus cannot comply with, audible commands or warnings. Individuals with cognitive disabilities may communicate differently, lack emotional recognition and have different speech or psychomotor patterns. AI algorithms are known to discriminate against individuals with facial differences or asymmetry, different gestures and gesticulation, speech impairments, different communication styles or assistive devices. These concerns have been raised by the United Nations Special Rapporteur on the rights of persons with disabilities, the European Disability Forum and a variety of other organizations and institutions.

Compared to existing AI systems, however, language-based platforms require even more attention and ethical guidance. In particular, they can imitate human behaviour and interaction, involve more autonomy and pose challenges in delegating decision-making. They also rely on significant volumes of data, a combination of machine-learning techniques and the social and technical literacy behind them.

How Generative AI may pose risks to people with disabilities

There are different ways in which Generative AI-associated systems may pose risks to individuals with disabilities. In particular:

• They may fuel bias in existing systems, such as automated screening and interviews and public services involving different types of physical and digital recognition, as well as contextual and sentiment bias.

• They may lead to manipulative scenarios, cognitive silos and echo chambers. For instance, algorithms were used to spread misinformation among patients during the COVID-19 pandemic.

• Language-based systems may attach negative connotations to disability-related keywords and phrases, or produce wrong outcomes because a public data set contains statistical distortions or wrong entries (a simple probe for this kind of bias is sketched after this list).

• Privacy - in some countries, governmental agencies have been accused of using data from social media without consent to confirm patients’ disability status for pension programmes.
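
One way to surface the connotation problem described above is counterfactual probing: running sentences that differ only in a disability-related phrase through a model and comparing the scores. The sketch below is a minimal illustration, assuming the Hugging Face transformers library; the default sentiment model and the example sentences are illustrative, not a validated audit.

```python
# Counterfactual probe: does mentioning a disability shift sentiment scores?
# Assumes: pip install transformers torch
from transformers import pipeline

# The default sentiment model is illustrative; a real audit should pin a model.
sentiment = pipeline("sentiment-analysis")

sentences = [
    "My colleague submitted the report on time.",
    "My colleague, who uses a wheelchair, submitted the report on time.",
    "My colleague, who is blind, submitted the report on time.",
    "My colleague, who is autistic, submitted the report on time.",
]

for text in sentences:
    result = sentiment(text)[0]
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")

# If scores diverge materially from the neutral baseline, the model attaches
# a connotation to the disability phrase rather than to the sentence content.
```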

Why it happens: social, technical and policy challenges

It’s important to understand that these challenges are driven by social and technical factors related to data, models, systems and social practices.

In particular, large data sets may contain historical biases and distortions and may lack particular parameters and conditions. Different types of machine-learning techniques can introduce different distortions, as illustrated in the sketch below. In supervised learning, the people labelling data may introduce their own subjectivity or errors and overlook particular patterns, cases or contexts. Unsupervised learning is typically connected to a statistical lack of input and representation. Reinforcement learning can be limited by its environment or by problems of initial experience.
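
To make the representation problem concrete, the following sketch trains a single classifier on synthetic data dominated by one group and then scores each group separately. It is a minimal illustration, assuming NumPy and scikit-learn; the groups, sizes and decision rules are invented for demonstration only.

```python
# How underrepresentation in training data degrades outcomes for a subgroup.
# Assumes: pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n: int, shift: float):
    """Synthetic group whose true decision boundary depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: the majority group vastly outnumbers the minority group.
X_maj, y_maj = make_group(1000, shift=0.0)
X_min, y_min = make_group(20, shift=3.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Per-group evaluation: the model fits the majority's boundary and
# performs near chance on the underrepresented group.
X_tm, y_tm = make_group(500, shift=0.0)
X_tn, y_tn = make_group(500, shift=3.0)
print("majority accuracy:", accuracy_score(y_tm, model.predict(X_tm)))
print("minority accuracy:", accuracy_score(y_tn, model.predict(X_tn)))
```

An aggregate accuracy figure would hide this gap, which is the same mechanism behind the recognition and diagnostic failures listed below.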

The complexity of the research and development in this field is associated with:

• Spectrums and parameters - people with disabilities may have additional conditions and impairments (comorbidities) that do not exist in data sets. For instance, some systems used ear shape or the presence of an ear canal to determine whether an image included a human face. Such a system fails for patients with craniofacial syndromes (facial impairments).

• Gender and intersectionality - infrastructure and urban datasets used for city planning are known to be gender-blind. For cognitive disabilities, girls are frequently misdiagnosed because the diagnostic criteria manifest differently, and they have historically been underrepresented in research statistics, which affects data labels.

• Lack of data access - particular social groups (e.g. Caucasian families in the US) are more likely to report concerns related to a child’s autism due to better medical access. At the same time, in some countries, immigrants tend to avoid medical examinations and tests for fear of being deported or facing unaffordable medical costs.

• Lack of research and/or sufficient evidence - some medical solutions addressing breathing or heart issues were developed after only short-term observations. After a longer period of testing, the system was found to be less efficient and to bring negative side effects, but it was already in production.

• Lack of participation - legal and judicial systems are known to be trained on publicly available data sets from beyond their specific jurisdiction, frequently overlooking the participation of the particular groups and populations affected.

• Proxies and generalization - medical assessment systems are known to be built around 'normalized' demographic and health groups. This may exclude some severe cognitive and mental impairments among younger populations by attributing them only to older groups. Medical and social services are also known to use profiling - grouping people by their interests instead of their personal traits - which may lead to discriminatory outcomes.

• Subjectiveness - assessment platforms are known to be trained on standards of 'normality', which has been found to introduce subjectivity when assessing characteristics such as employability, professionalism and intellectual capability in groups with disabilities.

The way forward

Upcoming national AI strategies (e.g. in Germany, the UK, the US and China) underline the complexity of Generative AI systems. They focus on national safety and access to data to avoid silos, ensure fair technology competition and practices, improve literacy and capacity, and introduce privacy and ethics standards. In addition, multilateral agencies such as the WHO, UNESCO and the OECD are working on area-specific guidelines addressing healthcare, education, and literacy- and capacity-oriented recommendations (e.g. UNESCO's AI competence framework for students and teachers, 2023).

However, the research and development of disability-centred Generative AI systems remains a complex task from a technology and policy perspective. The complexity includes its intersectional nature, spectrums, comorbidities, gender- and age-specific parameters, modular and multistakeholder scenarios - in which families, caregivers and several devices and interfaces can be involved simultaneously - and the necessity of condition-specific adoption and literacy guidelines across segments, sectors and cases.

Along with the general criteria, it also requires: specific system categories, risks and compliance rules for particular groups, ages and spectrums; improved transparency of machine-learning techniques and approaches to outcome assessment; access to, and diversity of, public data sets and historical statistics; boundaries and rules for data collection, privacy and consent; the identification of accountable actions and non-actions; and, finally, an understanding of scenarios of misuse. All of this increases the role of non-AI-specific laws, such as digital frameworks - the EU Digital Markets and Digital Services Acts, accessibility acts and the Convention on the Rights of Persons with Disabilities.
