Guidance of visual search through canonical materials while controlling for low-level features

Fan Zhang, Dietmar Heinke

Research output: Contribution to journal › Meeting Abstract › peer-review

Abstract

Wolfe and Myers (2010) reported that materials do not efficiently guide visual search. However, the surface appearance of their stimuli varied considerably within each material category; for example, the appearance of metal can range from shiny to matte.
In a recent study, Zhang and Heinke (2021) aimed to control for this confounding factor by using canonical materials as stimuli (Zhang et al., 2020). They found efficient search for “specular” targets among “matte” distractors and inefficient search for “matte” targets among “specular” distractors. However, their findings could have been driven by differences in low-level features (e.g., lightness) between the two materials. To control for low-level features, we created a new set of “matte” stimuli by superimposing rotated highlights obtained from the corresponding “specular” images. In this way, the new “matte” stimuli contain the same bright pixels as the “specular” stimuli, but, due to the rotation, these bright pixels are not aligned with the matte surface’s shading and hence may not be perceived as specular highlights. The resulting search task turned out to be very hard: average accuracies were only just above chance level. After removing participants who performed below chance, we found inefficient search for both materials, with search slopes above 35 ms/item in the target-present condition and above 50 ms/item in the target-absent condition. Even though these results do not support Zhang and Heinke’s previous findings, the current study presents a rigorous approach to testing humans’ ability to search for materials by systematically varying the appearance of canonical material modes.
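For illustration, the stimulus manipulation described above can be sketched as follows in Python. This is a minimal sketch, assuming the highlight layer can be isolated as the per-pixel difference between a “specular” rendering and its “matte” counterpart of the same object; the file names and the 90-degree rotation angle are hypothetical and not taken from the study.

import numpy as np
from PIL import Image

# Hypothetical input images: matched "specular" and "matte" renderings of one object.
specular = np.asarray(Image.open("specular.png").convert("L"), dtype=np.float32)
matte = np.asarray(Image.open("matte.png").convert("L"), dtype=np.float32)

# Isolate the bright pixels contributed by the specular reflection.
highlights = np.clip(specular - matte, 0.0, None)

# Rotate the highlight layer so the bright pixels no longer align with the
# matte surface's shading (90 degrees is an illustrative choice).
rotated = np.asarray(Image.fromarray(highlights, mode="F").rotate(90), dtype=np.float32)

# Superimpose the rotated highlights onto the matte image, so the new "matte"
# stimulus contains the same bright pixels as the "specular" one.
new_matte = np.clip(matte + rotated, 0, 255).astype(np.uint8)
Image.fromarray(new_matte).save("matte_with_rotated_highlights.png")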
Original language: English
Pages (from-to): 197-197
Journal: Perception
Volume: 51
Publication status: Published - 1 Dec 2022
Externally published: Yes
