Mapping visual field with Bayesian-adaptive eye-tracking qVFM method

Abstract

Purpose: Recently, we developed a Bayesian adaptive qVFM method to measure visual field maps (VFMs) of visual functions such as light sensitivity and contrast sensitivity. In the initial implementations of the qVFM method, the observer's response was collected through button or key presses or verbal report. To simplify the response process and improve the usability of the method, we implemented it with an eye-movement response in an eight-alternative forced-choice (8AFC) light detection task.

Methods: Six eyes (3 OS, 3 OD) of three observers were tested with the eye-tracking qVFM method. Four measurements (200 trials/measurement) of the VFM (36 test locations, evenly sampled across a 36°×36° visual field) were obtained from each eye. The target stimulus was a light disc (Goldmann size III) whose luminance was adaptively adjusted in each trial. The observer was instructed to make a saccade to the target location, and the saccade direction was used to score the response (correct if within ±22.5° of the target direction).
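The ±22.5° criterion corresponds to dividing the saccade directions into eight 45°-wide sectors, one per response alternative in the 8AFC task. The scoring rule described above can be sketched as follows; the function name and signature are illustrative assumptions, not the authors' actual implementation:

```python
def score_saccade(saccade_angle_deg, target_angle_deg, tolerance_deg=22.5):
    """Score a saccade response: correct if the saccade direction falls
    within ±tolerance_deg of the direction of the cued target location.

    Angles are in degrees. The angular difference is wrapped to
    (-180, 180], so directions near the 0°/360° boundary (e.g. 350°
    vs. 10°) are treated as only 20° apart.
    """
    diff = (saccade_angle_deg - target_angle_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg
```

For example, a saccade at 30° toward a target at 45° (15° off) would be scored correct, while one at 90° (45° off) would fall outside the sector and be scored incorrect.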

Results: The eye-tracking qVFM method provided an accurate, precise, and efficient assessment of the light sensitivity VFM. The average within-run variability (68.2% half-width of the credible interval, HWCI) of the eye-tracking qVFM estimate decreased from 3.59 dB on the first trial to 0.62 dB after 100 trials, and to 0.50 dB after 200 trials. The repeated-run variability of the estimate was comparable to the HWCIs (0.53 dB after 100 trials and 0.38 dB after 200 trials).
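The 68.2% HWCI quantifies within-run precision as half the width of the central credible interval of the Bayesian posterior on sensitivity (in dB), analogous to one standard deviation of a Gaussian. A minimal quantile-based sketch of this metric, assuming posterior samples are available (the qVFM method itself maintains a discretized posterior, so this is an illustration, not the authors' code):

```python
import numpy as np

def hwci(posterior_samples, coverage=0.682):
    """Half-width of the central credible interval at the given coverage.

    For a 68.2% interval, this is half the distance between the
    (1 - coverage)/2 and (1 + coverage)/2 quantiles of the posterior.
    """
    lo_q = (1.0 - coverage) / 2.0
    hi_q = 1.0 - lo_q
    lo, hi = np.quantile(posterior_samples, [lo_q, hi_q])
    return (hi - lo) / 2.0
```

For a Gaussian posterior this half-width approximates the posterior standard deviation, which is why the 68.2% level is the conventional choice.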

Conclusions: The eye-tracking qVFM method provides a more user-friendly mapping of light sensitivity.

Translational Relevance: The eye-tracking qVFM method could find potential clinical applications in monitoring vision loss, evaluating therapeutic interventions, and developing effective rehabilitation for people with visual impairment.
Original language: English
Journal: Journal of Vision
Publication status: In preparation - 2023
