Influences of luminance contrast and ambient lighting on visual context learning and retrieval
Xuelian Zang 1,2 · Lingyun Huang 3 · Xiuna Zhu 4 · Hermann J. Müller 4 · Zhuanghua Shi 4
© The Author(s) 2020
Abstract

Invariant spatial context can guide attention and facilitate visual search, an effect referred to as "contextual cueing." Most previous studies on contextual cueing were conducted under conditions of photopic vision and high search-item-to-background luminance contrast, leaving open the question of whether the learning and/or retrieval of context cues depends on luminance contrast and ambient lighting. Given this, we conducted three experiments (each comprising two subexperiments) to compare contextual cueing under different combinations of luminance contrast (high/low) and ambient lighting (photopic/mesopic). With high-contrast displays, we found robust contextual cueing in both photopic and mesopic environments, but the acquired contextual cueing could not be transferred when the display contrast changed from high to low in the photopic environment. By contrast, with low-contrast displays, contextual facilitation manifested only in mesopic vision, and the acquired cues remained effective following a switch to high-contrast displays. This pattern suggests that, with low display contrast, contextual cueing benefited from a more global search mode, aided by the activation of the peripheral rod system in mesopic vision, but was impeded by a more local, fovea-centered search mode in photopic vision.

Keywords: Contextual cueing · Photopic vision · Mesopic vision · Contextual learning and retrieval
The visual system constantly encounters an overwhelming amount of information. To deal with this load, the system structures the information available in the environment and extracts statistical regularities to guide the allocation of focal attention. For example, people are quite proficient at using statistical regularities in a scene to detect and localize “target” objects, such as pedestrians appearing on the sidewalk and cars on the road, requiring particular re-/actions (Wolfe, Võ, Evans, & Greene, 2011). Visual search studies have also
* Correspondence: Zhuanghua Shi, [email protected]

1 Institutes of Psychological Sciences, College of Education, Hangzhou Normal University, Hangzhou 311121, People's Republic of China
2 Center for Cognition and Brain Disorders, Affiliated Hospital of Hangzhou Normal University, Hangzhou, Zhejiang Province 310015, People's Republic of China
3 Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
4 General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, 80802 Munich, Germany
provided robust evidence that invariant spatial target–distractor relations can be extracted even from relatively abstract displays (rather than meaningful, real-life scenes) and be encoded in long-term memory (Chun & Nakayama, 2000; Goujon, Didierjean, & Thorpe, 2015), expediting search when the learnt layout is re-encountered.