Attentional focusing and filtering in multisensory categorization

Jianhua Li, Sophia W. Deng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speeded categorization task, participants were asked to attend to and categorize visual or auditory targets while ignoring other, irrelevant stimuli. A response-time-extended multinomial processing tree (RT-MPT) model was implemented to estimate the contributions of attentional focusing on task-relevant information and of attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences in attentional focusing and filtering between the visual and auditory modalities: visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out and caused greater interference with task-relevant performance. These findings suggest that selective attention plays a critical and differential role across modalities, offering a novel and promising approach to understanding multisensory processing and the focusing and filtering mechanisms of categorical decision-making.
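
To illustrate the general multinomial-processing-tree (MPT) idea referenced in the abstract, the sketch below shows a hypothetical, simplified decomposition of categorization accuracy into a "focusing" parameter a and a "filtering" parameter f. This is not the RT-MPT model used in the paper (which additionally models response-time distributions); the tree structure, the guessing parameter G, and the function names (p_correct, neg_log_lik) are assumptions made purely for illustration.

```python
# Hypothetical MPT-style sketch: decompose accuracy on congruent vs. incongruent
# trials into attentional focusing (a) and filtering (f) parameters.
# NOT the paper's RT-MPT model; a toy illustration of the MPT approach only.
import numpy as np
from scipy.optimize import minimize

G = 0.5  # assumed guessing probability when neither target nor distractor drives the response


def p_correct(a, f, congruent):
    """Tree: focusing succeeds (a) -> correct; focusing fails but distractor is
    filtered ((1-a)*f) -> guess; focusing fails and distractor intrudes
    ((1-a)*(1-f)) -> correct only when target and distractor are congruent."""
    base = a + (1 - a) * f * G
    return base + (1 - a) * (1 - f) * (1.0 if congruent else 0.0)


def neg_log_lik(params, data):
    """Binomial negative log-likelihood over (congruent, n_correct, n_total) cells."""
    a, f = params
    ll = 0.0
    for congruent, n_correct, n_total in data:
        p = np.clip(p_correct(a, f, congruent), 1e-9, 1 - 1e-9)
        ll += n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p)
    return -ll


# Simulate one participant with known parameters, then recover them by MLE.
rng = np.random.default_rng(0)
true_a, true_f, n_trials = 0.75, 0.60, 400
data = []
for congruent in (True, False):
    p = p_correct(true_a, true_f, congruent)
    data.append((congruent, rng.binomial(n_trials, p), n_trials))

fit = minimize(neg_log_lik, x0=[0.5, 0.5], args=(data,),
               bounds=[(0.01, 0.99), (0.01, 0.99)])
print("estimated focusing a = %.2f, filtering f = %.2f" % tuple(fit.x))
```

In this toy setup, lower filtering estimates for auditory distractors than for visual distractors would correspond to the kind of modality-specific interference the abstract describes, although the paper's actual parameter estimates come from the full RT-MPT model, not from this sketch.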

Original language: English
Pages (from-to): 708-720
Number of pages: 13
Journal: Psychonomic Bulletin and Review
Volume: 31
Issue number: 2
DOIs
Publication status: Published - Apr 2024
Externally published: Yes

Keywords

  • Categorization
  • Cross-modal processing
  • Filtering
  • Focusing
  • Multisensory
  • Selective attention
