For the disability community, the future of AI is bleak

In December, the US Census proposed changes to the way disabilities are categorized. Had they been implemented, the changes would have reduced the number of Americans counted as disabled, at a time when experts say people with disabilities are already undercounted.

The Census opened its proposal to public comment; anyone can comment on a federal agency’s proposed regulations. But in this particular case, the people most affected by the proposal faced more barriers to providing their input.

“It was really important to me to try to figure out how to best enable these people to write and submit comments,” said Matthew Cortland, a senior fellow at Data for Progress. With that in mind, they created a GPT-4 bot assistant for people who wanted to submit their own comments. Cortland has run comment campaigns in the past that focused on disability-related regulations, but this was the first with the help of AI.

“Thank you, this allowed me to produce the kind of commentary I always wanted to produce,” one user told them. “There’s too much brain fog for me to do this otherwise.”

Depending on who is counting, between 12.6 percent and 25 percent of the population has a disability. Disability itself is defined in numerous ways, but broadly includes physical, intellectual, and cognitive limitations, in addition to chronic diseases; a person with a physical disability may need to use a wheelchair, while a serious, energy-limiting illness such as long Covid can make it challenging to carry out the tasks of daily living.

AI – whether natural language processing, computer vision, or generative AI like GPT-4 – can have positive impacts on the disability community, but overall the future of AI and disability looks pretty bleak.

“The way AI is often treated and used is essentially phrenology with math,” says Joshua Earle, an assistant professor at the University of Virginia who links the history of eugenics to technology. People unfamiliar with disability hold negative views shaped by the media, pop culture, regulatory frameworks, and the people around them, viewing disability as a deficit rather than a cultural identity. A field that devalues disabled lives in its design choices will keep repeating those mistakes in its tech products.


This attitude was starkly illustrated in the debates over healthcare rationing at the height of the Covid-19 pandemic. It also shows up in the form of quality-adjusted life years (QALYs), an AI-enabled “cost-effectiveness tool” used in healthcare to determine “quality of life” through external measures, rather than the intrinsic value of a person’s life. For example, the inability to leave the house can be counted as a point against someone, as can a degenerative disease that limits physical activity or employability. A low score may lead to a certain medical intervention being rejected in cost-benefit analyses; why bother with expensive treatments for someone who will likely live a shortened life marred by disability?

The promise of AI is that automation will make work easier, but what exactly is being made easier? In 2023, a ProPublica investigation revealed that insurance giant Cigna used an internal algorithm that automatically flagged coverage claims, allowing doctors to sign mass denials, which disproportionately affected people with disabilities who have complex medical needs. The healthcare system is not the only arena in which algorithmic tools and AI can work against people with disabilities. It is an increasingly common phenomenon in employment, where applicant screening tools can introduce bias, much like the logic puzzles and games used by some recruiters, or the eye and expression tracking that accompanies some job interviews. More broadly, says Ashley Shew, an associate professor at Virginia Tech who specializes in disability and technology, “it fuels additional scrutiny of people with disabilities” through technologies that single them out.

Technologies like these often rest on two assumptions: that many people fake or exaggerate their disabilities, making fraud prevention critical, and that a life with a disability is not worth living. Therefore, decisions about resource allocation and social inclusion – whether home care, access to the workplace, or the ability to reach people on social media – need not treat people with disabilities as equal to non-disabled people. That attitude is reflected in the artificial intelligence tools that society is building.

It doesn’t have to be this way.

Cortland’s creative use of GPT-4 to help people with disabilities participate in the political process illustrates how AI, in the right hands, can become a valuable accessibility tool. If you look in the right places, there are countless examples of this. For instance, in early 2023, Midjourney released a feature that generates alt text for images, increasing accessibility for blind and partially sighted users.

Amy Gaeta, an academic and poet who specializes in interactions between humans and technology, also sees potential for AI to “take on very tedious tasks” for “[disabled people] who are already overworked or extremely tired,” by, for example, filling out forms or offering practice conversations for job interviews and social settings. The same technologies could be used for activities such as fighting insurance companies over wrongful denials.

“The people who are going to use it are likely to be the ones best suited to understand when it’s doing something wrong,” Earle notes in the context of technologies developed around or for, but not with, disabled people. For a truly bright future in AI, the technology community must embrace people with disabilities from the start as innovators, programmers, designers, makers, and, yes, users in their own right who can materially shape the technologies that mediate the world around them.
