ENABLED: Enhanced Network Accessibility for the Blind and Visually Impaired


KEY SECTIONS:

AWC
Accessible Web Contents

MAI
Multimodal Adaptive Interfaces

WIN
Wireless Networking

MOC
Mobile Computing

 

OTHER SECTIONS:

 

CURRENT SECTION:

MAI - Multimodal Adaptive Interfaces


To address accessibility issues, the interfaces between the information source and the end user have to be accessible as well. Usually, the computer interfaces that blind people use consist of keyboards and screen readers. Some blind people who have learned Braille can use a refreshable Braille display. However, neither of these devices is suitable for presenting graphical information on the Web.

Moreover, to use screen reader software, blind people have to go through a training period in order to become familiar with the command keystrokes and get used to the synthesized voice. Voice control and speech input have been explored as ways to improve the interaction between the user and the computer.

However, these types of system depend on the length of the training and the accuracy of the speech recognition software. Therefore, to allow blind people both to access information and to input commands, interfaces that provide multimodal interaction would be ideal. Different input and output methods based on speech, non-speech sound and haptic representations will be developed.

Blind people are individuals with different levels of visual impairment. Some are born blind, some lose their sight later in life due to illness or old age, and some retain residual vision. Therefore a range of different ways of presenting information to them is needed. Interfaces that can recognize the needs of the user and generate an appropriate representation of the information will be developed.

Appropriate intelligence will be required to provide this kind of flexibility based on the user's profile, preferences and location. Moreover, since users may use different types of input and output devices to access information, it is important to ensure the interoperability of data so that users receive the same information regardless of the sensory modality or device that is used.

The specific objectives in this area include:

  • Developing interfaces that convey information through multiple sensory modalities.

  • Building a context-aware system that presents information to users in the appropriate form according to the user's profile, location, available resources, etc.

  • Developing a scalable and interoperable architecture and mechanism that allow information to be presented on different devices and platforms.
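As a minimal illustration of the second objective, the sketch below chooses an ordered list of output modalities from a user profile. The profile fields, modality names and selection rules are hypothetical assumptions for illustration only, not part of the project's specification.

```python
# Hypothetical sketch of context-aware modality selection.
# Profile fields ("residual_vision", "reads_braille", "device") and
# modality names are illustrative assumptions, not project APIs.

def choose_modalities(profile):
    """Return output modalities in order of preference for a user profile."""
    modalities = []
    if profile.get("residual_vision"):
        modalities.append("magnified-visual")   # enlarged text and graphics
    if profile.get("reads_braille"):
        modalities.append("braille")            # refreshable Braille display
    modalities.append("speech")                 # synthesized speech fallback
    if profile.get("device") == "mobile":
        modalities.append("non-speech-audio")   # short audio cues on the move
    return modalities

# Example: a Braille reader on a mobile device.
print(choose_modalities({"residual_vision": False,
                         "reads_braille": True,
                         "device": "mobile"}))
# → ['braille', 'speech', 'non-speech-audio']
```

A real system would combine such rules with location and available device resources, as the objectives above describe; this sketch only shows the profile-driven part of the decision.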


Copyright © 2004-2007 ENABLED | EU Disclaimer | Website contact | Information