Keyword spotting

Keyword spotting (or more simply, word spotting) is a problem that was historically first defined in the context of speech processing.[1][2] In speech processing, keyword spotting deals with the identification of keywords in utterances.

Keyword spotting is also defined as a separate, but related, problem in the context of document image processing.[1] In document image processing, keyword spotting is the problem of finding all instances of a query word in a scanned document image, without fully recognizing the document's text.

In speech processing

The first works in keyword spotting appeared in the late 1980s.[2]

A special case of keyword spotting is wake word (also called hot word) detection, used by personal digital assistants such as Alexa or Siri to activate a dormant device, in other words to "wake up" when its name is spoken.
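In its simplest form, wake word detection amounts to monitoring a per-frame probability that the keyword is being spoken and triggering once a smoothed score crosses a threshold. The Python sketch below illustrates only that thresholding step; the posterior trace, window length, and threshold are illustrative assumptions, and the acoustic model that would produce the per-frame scores is omitted. It is not any vendor's actual implementation.

    import numpy as np

    def detect_wake_word(frame_posteriors, threshold=0.8, window=30):
        """Return the first frame index at which the smoothed keyword
        posterior crosses the threshold, or None if it never does.

        frame_posteriors: 1-D array of per-frame probabilities that the
        wake word is present, as emitted by some acoustic model.
        """
        smoothed = np.convolve(frame_posteriors, np.ones(window) / window, mode="same")
        hits = np.flatnonzero(smoothed >= threshold)
        return int(hits[0]) if hits.size else None

    # Synthetic example: low scores, a burst of high scores, low scores again.
    scores = np.concatenate([np.full(100, 0.05), np.full(40, 0.95), np.full(60, 0.05)])
    print(detect_wake_word(scores))  # prints an index near the start of the burst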

In the United States, the National Security Agency has made use of keyword spotting since at least 2006.[3] This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of suspicious keywords. Recordings can be indexed and analysts can run queries over the database to find conversations of interest. IARPA funded research into keyword spotting in the Babel program.

Algorithms used for this task include convolutional neural networks[4] and end-to-end transformer-based models;[5] a minimal sketch of the convolutional approach is given below.
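As a rough sketch of the convolutional approach, the following PyTorch model maps a fixed-size log-mel spectrogram patch to scores over a small keyword vocabulary plus a filler (non-keyword) class. The layer sizes, input dimensions, and class layout are illustrative assumptions, not the architecture of the cited paper.

    import torch
    import torch.nn as nn

    class SmallKWSNet(nn.Module):
        """Tiny CNN that maps a log-mel spectrogram patch to keyword scores."""
        def __init__(self, n_keywords, n_mels=40, n_frames=100):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # +1 output for a "filler" class covering non-keyword speech.
            self.classifier = nn.Linear(32 * (n_mels // 4) * (n_frames // 4),
                                        n_keywords + 1)

        def forward(self, x):          # x: (batch, 1, n_mels, n_frames)
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Example: score a random spectrogram patch for 10 keywords plus filler.
    model = SmallKWSNet(n_keywords=10)
    logits = model(torch.randn(1, 1, 40, 100))
    print(logits.shape)  # torch.Size([1, 11])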

In document image processing

Keyword spotting in document image processing can be seen as an instance of the more generic problem of content-based image retrieval (CBIR). Given a query, the goal is to retrieve the most relevant instances of words in a collection of scanned documents.[1] The query may be a text string (query-by-string keyword spotting) or a word image (query-by-example keyword spotting).
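A query-by-example pipeline can be sketched as follows: describe each segmented word image with a fixed-length feature vector and rank candidates by similarity to the query image's vector. The profile-based descriptor and cosine similarity below are simplifying assumptions; practical systems use richer representations, such as profile sequences matched with dynamic time warping or learned embeddings.

    import numpy as np

    def profile_descriptor(word_image, width=64):
        """Crude word-image descriptor: upper/lower ink profiles and ink
        density per column, resampled to a fixed number of columns.
        word_image: 2-D array where ink pixels are > 0."""
        ink = (word_image > 0).astype(float)
        h = ink.shape[0]
        upper = ink.argmax(axis=0) / h                   # first ink row per column
        lower = (h - 1 - ink[::-1].argmax(axis=0)) / h   # last ink row per column
        density = ink.mean(axis=0)
        cols = np.linspace(0, ink.shape[1] - 1, width).astype(int)
        return np.concatenate([upper[cols], lower[cols], density[cols]])

    def rank_by_query(query_image, word_images):
        """Return indices of word_images sorted from best to worst match."""
        q = profile_descriptor(query_image)
        sims = []
        for img in word_images:
            d = profile_descriptor(img)
            sims.append(float(q @ d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-9))
        return np.argsort(sims)[::-1]

A query-by-string system would additionally need some way to map the text query into the same representation space as the word images before ranking.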

References

  1. Giotis, A. P.; Sfikas, G.; Gatos, B.; Nikou, C. (2017). "A survey of document image word spotting techniques". Pattern Recognition. 68: 310–332. Bibcode:2017PatRe..68..310G. doi:10.1016/j.patcog.2017.02.023.
  2. Rohlicek, J.; Russell, W.; Roukos, S.; Gish, H. (1989). "Continuous hidden Markov modeling for speaker-independent word spotting". Proceedings of the 14th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). 1: 627–630.
  3. Froomkin, Dan (5 May 2015). "THE COMPUTERS ARE LISTENING". The Intercept. Retrieved 20 June 2015.
  4. Sainath, Tara N.; Parada, Carolina (2015). "Convolutional neural networks for small-footprint keyword spotting". Sixteenth Annual Conference of the International Speech Communication Association. arXiv:1711.00333.
  5. Wei, Bo; Yang, Meirong; Zhang, Tao; Tang, Xiao; Huang, Xing; Kim, Kyuhong; Lee, Jaeyun; Cho, Kiho; Park, Sung-Un (30 August 2021). "End-to-End Transformer-Based Open-Vocabulary Keyword Spotting with Location-Guided Local Attention" (PDF). Interspeech 2021.