Neil Cohn

Neil Cohn (/koʊn/; born 1980) is an American cognitive scientist and comics theorist. His research offers the first serious scientific study of the cognition of comics comprehension, using an interdisciplinary approach that combines aspects of theoretical and corpus linguistics with cognitive psychology and cognitive neuroscience.[1][2]

Neil Cohn
  • Born: January 11, 1980
  • Alma mater: Tufts University, University of Chicago, UC Berkeley
  • Known for: Visual language theory; contributions to comics theory and emoji theory
  • Fields: Cognitive science, linguistics, comics studies
  • Institutions: Tilburg University
  • Doctoral advisors: Ray Jackendoff, Gina Kuperberg, Phillip Holcomb
  • Other academic advisors: Marta Kutas, Jeff Elman
  • Doctoral students: Bien Klomberg, Irmak Hacımusaoğlu

Cohn’s work argues that common cognitive capacities underlie the processing of various expressive domains, especially verbal and signed languages and what he calls “visual language”: the structure and cognition of drawings and visual narratives, particularly those found in comics. His 2020 book Who Understands Comics?[3] explored the proficiency required to understand visual narratives and was nominated for a 2021 Eisner Award for Best Academic/Scholarly Work. His theories of visual language provided the foundation for automatically generated news comics at the BBC.[4]

Cohn's research has also examined the comprehension and linguistic status of emoji.[5][6][7] He has helped propose and design several emoji.[8]

Biography

Cohn began developing his theories as an undergraduate at UC Berkeley, from which he graduated in 2002. He then spent several years as an independent scholar before studying under linguist Ray Jackendoff and psychologists Gina Kuperberg and Phillip Holcomb at Tufts University, where he received his PhD in psychology in 2012. He then completed a postdoctoral fellowship at UC San Diego, working with Marta Kutas and Jeff Elman. In 2016, he joined the faculty of the Tilburg center for Cognition and Communication at Tilburg University. He is the son of Leigh Cohn and Lindsey Hall.

Visual language theory

Cohn’s work challenges many existing conceptions of both language and drawing. He argues that language involves an interaction between an expressive modality, meaning, and a grammar. Just as sign languages differ from gestures in that they use a vocabulary and a grammar, “visual languages” differ from individual drawings because they have a vocabulary of patterned graphic representations and a grammar constraining the coherence of sequential images. Full visual languages appear primarily alongside written languages in the comics of the world, though they also appear outside of comics, such as in the sand drawings of Aboriginal Australian communities.[9] Just as spoken languages differ from one another, so do visual languages: Japanese manga are written in “Japanese Visual Language,” while American comics are written in “American Visual Language.” In addition, Cohn has argued that the development of visual languages may follow constraints similar to those of learning spoken and signed languages, and that most people do not learn to draw proficiently because they do not acquire visual vocabularies within a critical period.[10]

Cohn's primary research program within visual language theory emphasizes that a narrative structure operates as a “grammar” for sequential images, analogous to syntactic structure in sentences. While narrative grammar draws on a discourse level of information, its function and structure are similar to syntax in that it organizes categorical roles into hierarchical constituents in order to express meaning. Cohn’s work in cognitive neuroscience has suggested that manipulating this narrative grammar elicits brain responses similar to those elicited by manipulations of syntax in language (e.g., N400, P600, and left anterior negativity effects).[11][12][13]

In 2020, Cohn was awarded a Starting Grant from the European Research Council to study cross-cultural diversity in the structures of the visual languages used in comics around the world, by building a multicultural corpus of annotated comics and examining how those structures relate to structures in spoken languages.[14]

Comic authorship

Cohn began working in the comics industry at age 14, helping to run convention booths for Image Comics and Todd McFarlane Productions throughout his teenage years.[15] Beyond illustrating his own academic books, Cohn’s creative work includes the graphic novel We the People: A Call to Take Back America (2004) with Thom Hartmann, illustrations for academic works such as Ray Jackendoff’s A User’s Guide to Thought and Meaning (2012), and the comic strip “Chinese Room” with philosopher Daniel Dennett.

Selected works

  • Cohn, Neil (2003). Early Writings on Visual Language. Carlsbad, CA: Emaki Productions. p. 120. ISBN 978-0-615-19346-5.
  • Hartmann, Thom & Cohn, Neil (2004). We the People: A Call to Take Back America. Portland, OR: CoreWay Media. p. 220. ISBN 978-1882109388.
  • Cohn, Neil (2013). The Visual Language of Comics: Introduction to the Structure and Cognition of Sequential Images. London, UK: Bloomsbury. p. 240. ISBN 9781441181459.
  • Cohn, Neil, ed. (2016). The Visual Narrative Reader. London, UK: Bloomsbury. p. 320. ISBN 9781472585592.
  • Cohn, Neil (2020). Who Understands Comics?: Questioning the Universality of Visual Language Comprehension. London, UK: Bloomsbury. p. 256. ISBN 978-1350156043.

References

  1. Zimmer, Carl. 2012. The Charlie Brown Effect: A comic book artist turned neuroscientist says the images in Peanuts tap the same brain processes as sentences. Discover Magazine. pp. 68-70
  2. Robson, David. 2013. How the visual language of comics could have its roots in the ice age. The Guardian. November 23, 2013
  3. Cohn, Neil. 2020. Who Understands Comics?: Questioning the Universality of Visual Language Comprehension. London: Bloomsbury.
  4. Graphical Storytelling: Reaching new audiences with short comics about important health stories. BBC News website.
  5. Cohn, Neil. 2015. Will emoji become a new language? BBC Futures. October 12, 2015
  6. Gilmore, Garrett. 2015. Help! I can't stop thinking in emoji! VICE. April 21, 2015
  7. Barrett, Brian. 2016. Facebook Messenger finally bridges the great emoji divide. Wired Magazine. June 16, 2016
  8. Kambhampaty, Anna P. 2021. The Melting Face Emoji Has Already Won Us Over. The New York Times. September 29, 2021
  9. Cohn, Neil. 2013. The Visual Language of Comics: Introduction to the Structure and Cognition of Sequential Images. London: Bloomsbury.
  10. Cohn, Neil. 2012. Explaining “I can’t draw”: Parallels between the structure and development of language and drawing. Human Development. 55(4): 167-192
  11. Cohn, Neil. 2013. Visual narrative structure. Cognitive Science. 37(3): 413-452
  12. Cohn, Neil, Martin Paczynski, Ray Jackendoff, Phillip Holcomb, and Gina Kuperberg. 2012. (Pea)nuts and bolts of visual narratives: Structure and meaning in sequential image comprehension. Cognitive Psychology. 65(1): 1-38
  13. Cohn, Neil, Ray Jackendoff, Phillip Holcomb, and Gina Kuperberg. 2014. The grammar of visual narratives: Neural evidence for constituent structure in visual narrative comprehension. Neuropsychologia. 64: 63-70.
  14. Tilburg University press release.
  15. Cohen, Georgiana. Drawing Conclusions. Tufts University website. Jan. 26 - Feb. 2, 2009.