Tracking nonliteral language processing using audiovisual scenarios

Can J Exp Psychol. 2021 Jun;75(2):211-220. doi: 10.1037/cep0000223. Epub 2021 Apr 1.

Abstract

Recognizing sarcasm and jocularity during face-to-face communication requires the integration of verbal, paralinguistic, and nonverbal cues, yet most previous research on nonliteral language processing has used written or static stimuli. In the current study, we examined the processing of dynamic literal and nonliteral intentions using eye tracking. Participants (N = 37) viewed short, ecologically valid video vignettes and were asked to identify the speakers' intentions. Participants had greater difficulty identifying jocular statements as insincere than sarcastic statements, and they spent significantly more time looking at faces during nonliteral versus literal social interactions. Finally, participants took longer to shift their attention from one talker to the other during interactions that conveyed literal positive intentions than during those that conveyed jocular or literal negative intentions. These findings support both the Standard Pragmatic Model and the Parallel-Constraint-Satisfaction Model of nonliteral language processing.

MeSH terms

  • Cues*
  • Humans
  • Intention
  • Language*