Yupei Chen
Smith-Kettlewell Eye Research Institute
Verified email at ski.org
Title · Cited by · Year
Predicting goal-directed human attention using inverse reinforcement learning
Z Yang, L Huang, Y Chen, Z Wei, S Ahn, G Zelinsky, D Samaras, M Hoai
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 82 · 2020
Benchmarking gaze prediction for categorical visual search
G Zelinsky, Z Yang, L Huang, Y Chen, S Ahn, Z Wei, H Adeli, D Samaras, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Cited by 41 · 2019
COCO-Search18 fixation dataset for predicting goal-directed attention control
Y Chen, Z Yang, S Ahn, D Samaras, M Hoai, G Zelinsky
Scientific Reports 11 (1), 8776, 2021
Cited by 31 · 2021
Predicting goal-directed attention control using inverse-reinforcement learning
GJ Zelinsky, Y Chen, S Ahn, H Adeli, Z Yang, L Huang, D Samaras, ...
Neurons, behavior, data analysis and theory 2021, 2021
Cited by 18 · 2021
Is there a shape to the attention spotlight? Computing saliency over proto-objects predicts fixations during scene viewing
Y Chen, GJ Zelinsky
Journal of Experimental Psychology: Human Perception and Performance 45 (1), 139, 2019
Cited by 17 · 2019
Changing perspectives on goal-directed attention control: The past, present, and future of modeling fixations during visual search
GJ Zelinsky, Y Chen, S Ahn, H Adeli
Psychology of learning and motivation 73, 231-286, 2020
Cited by 16 · 2020
Characterizing target-absent human attention
Y Chen, Z Yang, S Chakraborty, S Mondal, S Ahn, D Samaras, M Hoai, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 6 · 2022
COCO-Search18: A dataset for predicting goal-directed attention control
Y Chen, Z Yang, S Ahn, D Samaras, M Hoai, G Zelinsky
bioRxiv, 2020.07.27.221499, 2020
Cited by 2 · 2020
Computing saliency over proto-objects predicts fixations during scene viewing
Y Chen, G Zelinsky
Journal of Vision 17 (10), 209-209, 2017
Cited by 2 · 2017
Adding shape to saliency: A proto-object saliency map for predicting fixations during scene viewing
Y Chen, CP Yu, G Zelinsky
Journal of Vision 16 (12), 1309-1309, 2016
Cited by 2 · 2016
A study of human gaze behavior during visual crowd counting
R Annadi, Y Chen, V Ranjan, D Samaras, G Zelinsky, M Hoai
arXiv preprint arXiv:2009.06502, 2020
Cited by 1 · 2020
A CNN model of objectness predicts fixations during free viewing
Y Chen, G Zelinsky
Journal of Vision 18 (10), 314-314, 2018
Cited by 1 · 2018
COCO-CursorSearch: A large-scale cursor movement dataset approximating eye movement in visual search
Y Chen, G Zelinsky
Journal of Vision 22 (14), 3748-3748, 2022
2022
Eye Movement During Object Search and Its Comparison to Free Viewing
Y Chen
State University of New York at Stony Brook, 2021
2021
Predicting Goal-directed Attention Control Using Inverse Reinforcement Learning and COCO-Search18
Y Chen, G Zelinsky
Journal of Vision 20 (11), 1632-1632, 2020
2020
Multiple-object Control Predicts Movements of Attention During Free Viewing
Y Chen, G Zelinsky
Journal of Vision 19 (10), 269d-269d, 2019
2019
Supplementary Material: Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning
Z Yang, L Huang, Y Chen, Z Wei, S Ahn, G Zelinsky, D Samaras, M Hoai