Can AI help boost accessibility? These researchers tested it for themselves

source : www.washington.edu
November 2, 2023
Seven researchers from the University of Washington tested the usefulness of AI tools for accessibility. While researchers found cases where the tools were helpful, they also found significant problems. AI-generated images like these helped a researcher with aphantasia (an inability to visualize) interpret images from books and visualize concept sketches of crafts, while other images perpetuated ability biases. University of Washington/Midjourney
Generative artificial intelligence tools such as ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially help people with various disabilities. These tools can summarize content, compose messages, or describe images. Yet the extent of this potential is an open question because, in addition to regularly producing inaccuracies and failing at basic reasoning, these tools can also perpetuate biases.
This year, seven researchers from the University of Washington conducted a three-month autoethnographic study – based on their own experiences as people with and without disabilities – to test the usability of AI tools for accessibility. While the researchers found cases where the tools were useful, in most cases they also found significant problems, whether the AI was generating images, writing Slack messages, summarizing texts, or trying to improve the accessibility of documents.
The team presented its findings on October 22 at the ASSETS 2023 conference in New York.
“When technology changes rapidly, there is always a risk that people with disabilities will be left behind,” said senior author Jennifer Mankoff, UW professor in the Paul G. Allen School of Computer Science & Engineering. “I believe very strongly in the value of first-person accounts in helping us understand things. Because our group had a large number of people who could experience AI as people with disabilities and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about it.”
The group presented its research in seven vignettes, often combining experiences into composite accounts to maintain anonymity. For example, in the first account, “Mia,” who occasionally experiences brain fog, turned to ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing the argument of a paper to the point where it sounded like researchers should be talking to health care providers instead of chronically ill people. “Mia” was able to spot this because the researcher knew the paper well, but Mankoff said such subtle errors are among the “most insidious” problems with using AI because they can easily go unnoticed.
But in the same vignette, “Mia” used chatbots to create and format references for an article they were working on while suffering from brain fog. The AI models still made mistakes, but the technology proved useful in this case.
Mankoff, who has spoken publicly about having Lyme disease, contributed to this vignette. “Using AI for this task still required work, but it reduced cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I faced,” said Mankoff.
The other tests the researchers selected produced similarly mixed results:
- One author, who is autistic, found that AI helped write Slack messages at work without spending too much time thinking about the wording. Peers found the messages “robotic,” but the tool still made the author feel more confident in these interactions.
- Three authors tried to use AI tools to increase the accessibility of content, such as tables for a research paper or a slideshow for a class. The AI programs could state accessibility rules, but could not apply them consistently when creating content.
- Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret images from books. But when they used the AI tool to create an illustration of “people with different disabilities looking happy, but not at a party,” the program could only conjure up charged images of people at a party that also showed ability incongruences, such as a disembodied hand resting on a disembodied prosthetic leg.
“I was amazed at how dramatically the results and outcomes varied depending on the task,” said lead author Kate Glazko, a UW doctoral student at the Allen School. “In some cases, such as generating an image of people with disabilities looking happy, even with specific prompts – can you make it this way? – the results did not achieve what the authors wanted.”
The researchers note that more work is needed to develop solutions to the problems identified by the study. A particularly complex problem concerns developing new ways for people with disabilities to validate the output of AI tools, because in many cases where AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This was the case with the ableist summary ChatPDF gave “Mia,” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “made no sense at all.” The frequency of AI-induced errors, Mankoff said, “makes research into accessible validation particularly important.”
Mankoff also plans to explore ways to document the types of ableism and inaccessibility present in AI-generated content, as well as to explore issues in other areas, such as AI-written code.
“Every time software engineering practices change, there is a risk that apps and websites will become less accessible if good defaults aren’t in place,” Glazko said. “For example, if AI-generated code were accessible by default, it could help developers learn about and improve the accessibility of their apps and websites.”
Co-authors of this paper are Momona Yamagami, who completed this research as a UW postdoctoral fellow at the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students at the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and now works at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant, and the National Science Foundation.
For more information, contact Glazko at [email protected] and Mankoff at [email protected].
Tag(s): Center for Research and Education on Accessible Technology and Experiences • Jennifer Mankoff • Kate Glazko • Paul G. Allen School of Computer Science & Engineering