The SPRITEs tool maps regions of the keyboard to regions of a web page, making it easier for blind and low-vision people to navigate and interact with a site. Image courtesy of University of Washington
April 18 (UPI) -- Low vision and blind people successfully navigated web pages using a traditional screen reader, keyboard and new technology, according to researchers.
Engineers at the University of Washington and Carnegie Mellon University have developed a way for screen reader users to access tables, maps and nested lists.
The engineers plan to present their work on April 25 at the Association for Computing Machinery's CHI 2018 conference in Montreal.
"We're not trying to replace screen readers, or the things that they do really well," Dr. Jennifer Mankoff, a professor in UW's School of Computer Science, said in a press release. "But tables are one place that it's possible to do better. This study demonstrates that we can use the keyboard to bring tangible, structured information back, and the benefits are enormous."
The new tool, Spatial Recognition Interaction Techniques, or SPRITEs, maps the keyboard to different areas or functions on the screen.
In the system, users press keys to prompt the screen reader to move to certain parts of the website.
Number keys along the top of the keyboard map to menu buttons. Pressing a number twice opens that menu item's submenu, and the user then selects items in the submenu using the top row of letter keys. To access tables and maps, keys along the outside edge of the keyboard act as coordinates that let the user navigate to different areas.
For example, tapping a number key might open one of the menu options, and tapping the letter "u" could then read out a particular entry with more specific information.
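To make the idea concrete, the sketch below shows how a keyboard-to-page mapping of this kind could be wired up in a browser. It is only an illustration of the approach described here, not the researchers' implementation: the menu selectors, the speak() helper, the live-region element and the choice of the top letter row are assumptions made for the example.

```typescript
// Hypothetical sketch of a SPRITEs-style keyboard mapping (not the authors' code).
// Number keys 1-9 jump to top-level menu items; the top row of letters
// selects entries inside the currently selected submenu.

const TOP_ROW = ["q", "w", "e", "r", "t", "y", "u", "i", "o", "p"];

// Assumed helper: hand text to the screen reader via an ARIA live region.
function speak(text: string): void {
  const live = document.getElementById("sprites-live-region");
  if (live) live.textContent = text;
}

let openMenu: HTMLElement | null = null;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  // Assumed page structure: top-level menu items are <li> children of a nav list.
  const menus = Array.from(document.querySelectorAll<HTMLElement>("nav > ul > li"));

  // Number key: first press focuses the menu item, a second press opens its submenu.
  if (/^[1-9]$/.test(event.key)) {
    const item = menus[Number(event.key) - 1];
    if (!item) return;
    if (openMenu === item) {
      speak("Submenu opened: " + item.innerText);
    } else {
      openMenu = item;
      item.setAttribute("tabindex", "-1"); // make the item focusable
      item.focus();
      speak("Menu: " + item.innerText);
    }
    return;
  }

  // Top-row letter: read the matching entry of the open submenu.
  const column = TOP_ROW.indexOf(event.key);
  if (column >= 0 && openMenu) {
    const entry = openMenu.querySelectorAll<HTMLElement>("ul > li")[column];
    if (entry) speak(entry.innerText);
  }
});
```

In this sketch the keyboard's physical layout stands in for the page's structure, which is the core idea the article describes: each key corresponds to a fixed region of the interface rather than a position in a linear reading order.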
"Rather than having to browse linearly through all the options, our tool lets people learn the structure of the site and then go right there," Mankoff said. "You can learn which part of the keyboard you need to jump right down and check, say, whether dogs are allowed."
The researchers recruited 10 people for the study, eight of whom were blind and two of whom had low vision. Participants were asked to complete tasks first using their preferred screen reader technology and then using that technology in combination with SPRITEs.
Three times as many participants were able to complete spatial web-browsing tasks with SPRITEs, the researchers report. For simple text-based tasks, such as finding a given section header, counting the headings on a page or finding a specific word, participants completed the tests successfully with either tool.
Significantly, most of the test participants who couldn't find an item in a submenu or specific information in a table using their preferred screen reader were successful using SPRITEs.
"A lot more people were able to understand the structure of the web page if we gave them a tactile feedback," said Rushil Khurana, a doctoral student at Carnegie Mellon University who conducted the tests in Pittsburgh. "We're not trying to replace the screen reader, we're trying to work in conjunction with it."
The researchers are working to improve the system and are integrating it with WebAnywhere, a free online screen reader developed at UW. SPRITEs would let users navigate with the keyboard, while the WebAnywhere plugin reads out the information displayed on a web page.
The team is also adapting the technology for mobile devices.
"We hope to deploy something that will make a difference in people's lives," Mankoff said.