When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention

Gwendolyn Rehrig 1 & Taylor R. Hayes 2 & John M. Henderson 1,2 & Fernanda Ferreira 1

© The Psychonomic Society, Inc. 2020

Abstract

The complexity of the visual world requires that we constrain visual attention and prioritize some regions of the scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in attentional guidance is modulated by verbal encoding, given that we often use language to process information. In two experiments, 60 subjects studied scenes (N1 = 30 and N2 = 60) for 12 s each in preparation for a scene-recognition task. Half of the time, subjects engaged in a secondary articulatory suppression task concurrent with scene viewing. Meaning and saliency maps were quantified for each of the experimental scenes. In both experiments, we found that meaning explained more of the variance in visual attention than image salience did, particularly when we controlled for the overlap between meaning and salience, with and without the suppression task. Based on these results, verbal encoding processes do not appear to modulate the relationship between scene meaning and visual attention. Our findings suggest that semantic information in the scene steers the attentional ship, consistent with cognitive guidance theory.

Keywords: Scene processing · Visual attention · Meaning · Salience · Language
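The variance-explained analysis described in the abstract (how much of the attention map each predictor map accounts for, on its own and after controlling for the overlap between meaning and salience) can be sketched with squared linear correlations and squared semipartial correlations. The sketch below is an illustrative reconstruction, not the authors' pipeline: the map sizes, synthetic data, and function names are assumptions.

```python
import numpy as np

def r_squared(x, y):
    """Squared linear correlation between two flattened maps."""
    r = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    return r ** 2

def semipartial_r_squared(x, y, control):
    """Variance in y uniquely explained by x after removing from x
    the variance it shares with the control map."""
    x, y, control = x.ravel(), y.ravel(), control.ravel()
    # Residualize x on control via a simple linear fit.
    beta = np.polyfit(control, x, 1)
    x_resid = x - np.polyval(beta, control)
    r = np.corrcoef(x_resid, y)[0, 1]
    return r ** 2

# Synthetic maps (illustrative only): salience partially overlaps meaning,
# and attention is driven mostly by meaning, mirroring the reported pattern.
rng = np.random.default_rng(0)
meaning = rng.random((32, 32))
salience = 0.5 * meaning + 0.5 * rng.random((32, 32))
attention = 0.8 * meaning + 0.2 * rng.random((32, 32))

print("meaning R^2:", r_squared(meaning, attention))
print("salience R^2:", r_squared(salience, attention))
print("meaning unique R^2:", semipartial_r_squared(meaning, attention, salience))
```

With maps constructed this way, meaning explains more variance in attention than salience does, and the semipartial correlation isolates the share of that variance that is unique to meaning, which is the comparison the abstract reports.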

Introduction

Because the visual world is information-rich, observers prioritize certain scene regions for attention over others to process scenes efficiently. While bottom-up information from the stimulus is clearly relevant, visual attention does not operate in a vacuum, but rather functions in concert with other cognitive processes to solve the problem at hand. What influence, if any, do extra-visual cognitive processes exert on visual attention? Two opposing theoretical accounts of visual attention are relevant to the current study: saliency-based theories and cognitive guidance theory. According to saliency-based theories (Itti & Koch, 2001; Wolfe & Horowitz, 2017), salient scene

* Gwendolyn Rehrig, [email protected]

1 Department of Psychology, University of California, One Shields Ave., Davis, CA 95616-5270, USA

2 Center for Mind and Brain, University of California, Davis, CA, USA

regions – those that contrast with their surroundings based on low-level image features (e.g., luminance, color, orientation) – pull visual attention across a scene, from the most salient location to the least salient location in descending order (Itti & Koch, 2000; Parkhurst, Law, & Niebur, 2002). Saliency-based accounts cannot explain findings that physical salience does not determine which scene regions are fixated (Tatler, Baddeley, & Gilchrist, 2005), or that top-down task demands influence attention more than physical salience does (Einhä