Modality (human–computer interaction)

In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Channels may differ in sensory nature (e.g., visual vs. auditory)[1] or in other significant aspects of processing (e.g., text vs. image).[2] A system is designated unimodal if it implements only one modality, and multimodal if it implements more than one.[1] When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; when two or more modalities can each accomplish the same task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively.[3] Modalities fall into two general categories: computer–human (output) and human–computer (input) modalities.
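
These distinctions can be sketched in code. The following minimal model is purely illustrative: the `System` class, the task names, and the modality labels are hypothetical, not part of any standard API.

```python
from dataclasses import dataclass, field

@dataclass
class System:
    # Maps each task to the set of modalities that can accomplish it.
    task_modalities: dict = field(default_factory=dict)

    def all_modalities(self):
        # Union of every channel the system implements.
        if not self.task_modalities:
            return set()
        return set().union(*self.task_modalities.values())

    def is_multimodal(self):
        # Multimodal: more than one modality implemented.
        return len(self.all_modalities()) > 1

    def redundant_tasks(self):
        # Tasks for which more than one modality is available.
        return [t for t, mods in self.task_modalities.items() if len(mods) > 1]

# Hypothetical kiosk: confirmation can be shown or spoken; input is touch only.
kiosk = System({"confirm": {"visual", "auditory"}, "input": {"touch"}})
print(kiosk.is_multimodal())    # True
print(kiosk.redundant_tasks())  # ['confirm']
```

Here "confirm" illustrates redundant modalities (the same task served by two channels), while the system as a whole is multimodal because three distinct channels are implemented.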

Computer–human modalities

Computers use a wide range of technologies to communicate and send information to humans.

Any human sense can serve as a computer–human modality. However, sight and hearing are the most commonly employed, since they can transmit information faster than the other senses: 250 to 300 words per minute for reading[4] and 150 to 160 words per minute for listening to speech,[5] respectively. Though not commonly implemented as a computer–human modality, touch (tactition) can achieve an average of 125 words per minute[6] through the use of a refreshable Braille display. More common forms of tactile output include smartphone and game-controller vibrations.
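
As a rough worked comparison, the cited per-minute rates translate into noticeably different times to convey the same message. The message length is arbitrary, and midpoints are assumed where the source gives a range.

```python
# Time to convey a 500-word message through each output modality,
# using the per-minute rates cited above (midpoints of the quoted ranges).
rates_wpm = {
    "visual (reading)": 275,           # midpoint of 250-300 wpm
    "auditory (speech)": 155,          # midpoint of 150-160 wpm
    "tactile (Braille display)": 125,  # average cited for refreshable Braille
}

words = 500
for modality, wpm in rates_wpm.items():
    minutes = words / wpm
    print(f"{modality}: {minutes:.1f} min")
```

The same 500 words take roughly 1.8 minutes to read, 3.2 minutes to hear, and 4.0 minutes to feel on a Braille display, which is why visual and auditory channels dominate in practice.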

Human–computer modalities

Computers can be equipped with various types of input devices and sensors that allow them to receive information from humans. Common input devices are often interchangeable, provided they communicate with the computer through a standardized method and afford practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.[7]

With the increasing popularity of smartphones, the general public is becoming more comfortable with more complex modalities. Motion and orientation sensing are commonly used in smartphone mapping applications, speech recognition in virtual-assistant applications, and computer vision in camera applications that scan documents and QR codes.

Using multiple modalities

Having multiple modalities in a system affords users more ways to interact and can contribute to a more robust system. Additional modalities also improve accessibility for users who work more effectively with certain ones. Multiple modalities can serve as a backup when a given form of communication is unavailable; this is especially true of redundant modalities, in which two or more modalities communicate the same information. Certain combinations of modalities can also enrich a computer–human or human–computer interaction, because each modality may be more effective than the others at expressing a particular form or aspect of information.

There are six types of cooperation between modalities; they describe how a combination, or fusion, of modalities works together to convey information more effectively.[8]

  • Equivalence: information is presented in multiple ways and can be interpreted as the same information
  • Specialization: when a specific kind of information is always processed through the same modality
  • Redundancy: multiple modalities process the same information
  • Complementarity: multiple modalities take separate information and merge it
  • Transfer: a modality produces information that another modality consumes
  • Concurrency: multiple modalities take in separate information that is not merged
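
The six types can be sketched as an enumeration. The `classify` helper below and its two yes/no questions are an illustrative simplification for distinguishing the fusion-related types, not part of Grifoni's taxonomy.

```python
from enum import Enum

class Cooperation(Enum):
    """The six cooperation types between modalities (after Grifoni, 2009)."""
    EQUIVALENCE = "same information, presentable in multiple interchangeable ways"
    SPECIALIZATION = "one kind of information always bound to one modality"
    REDUNDANCY = "multiple modalities carry the same information"
    COMPLEMENTARITY = "separate pieces of information merged into one"
    TRANSFER = "one modality produces information another consumes"
    CONCURRENCY = "separate information taken in, never merged"

def classify(same_info: bool, merged: bool) -> Cooperation:
    # Toy decision rule over two distinguishing questions:
    # do the modalities carry the same information, and is it merged?
    if same_info:
        return Cooperation.REDUNDANCY
    return Cooperation.COMPLEMENTARITY if merged else Cooperation.CONCURRENCY

# e.g. gyroscope and accelerometer streams merged into one motion estimate
print(classify(same_info=False, merged=True).name)  # COMPLEMENTARITY
```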

Complementary-redundant systems combine multiple sensors to form a single understanding or dataset; the more effectively the information can be combined without duplicating data, the more effectively the modalities cooperate. Multiple communication modalities are common, particularly in smartphones, and their implementations often work together toward the same goal, for example a gyroscope and an accelerometer jointly tracking movement.[8]
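
The gyroscope-and-accelerometer example can be sketched as a simple complementary filter, one common way to merge the two sensor streams into a single orientation estimate. The sample data, blend weight, and function name here are illustrative assumptions, not a production algorithm.

```python
def fuse(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyroscope rate (deg/s) with accelerometer tilt (deg).

    The gyroscope is smooth but drifts over time; the accelerometer is noisy
    but drift-free. The filter trusts the gyro short-term (weight alpha) and
    the accelerometer long-term (weight 1 - alpha).
    """
    angle = accel_angles[0]  # initialize from the drift-free sensor
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
    return angle

# Device held still at a 10-degree tilt: gyro reads ~0 deg/s, accel reads ~10 deg.
est = fuse(gyro_rates=[0.0] * 200, accel_angles=[10.0] * 200)
print(round(est, 2))  # 10.0
```

Neither sensor alone gives this result as cleanly: the two streams carry separate information that is merged into one estimate, the complementarity pattern described above.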

References

  1. Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (March 2008). "Human-Computer Interaction: Overview on State of the Art" (PDF). International Journal on Smart Sensing and Intelligent Systems. 1 (1). Archived from the original (PDF) on April 30, 2015. Retrieved April 21, 2015.
  2. arXiv:2301.13823.
  3. Palanque, Philippe; Paterno, Fabio (2001). Interactive Systems. Design, Specification, and Verification. Springer Science & Business Media. p. 43. ISBN 9783540416630.
  4. Ziefle, M (December 1998). "Effects of display resolution on visual performance". Human Factors. 40 (4): 554–68. doi:10.1518/001872098779649355. PMID 9974229.
  5. Williams, J. R. (1998). "Guidelines for the use of multimedia in instruction". Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting. pp. 1447–1451.
  6. "Braille". ACB. American Council of the Blind. Retrieved 21 April 2015.
  7. Bainbridge, William (2004). Berkshire Encyclopedia of Human-computer Interaction. Berkshire Publishing Group LLC. p. 483. ISBN 9780974309125.
  8. Grifoni, Patrizia (2009). Multimodal Human Computer Interaction and Pervasive Services. IGI Global. p. 37. ISBN 9781605663876.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.