Can relevance of images be inferred from eye movements?
Kitsuchart Pasupa1, Arto Klami2, Craig J. Saunders1, Teófilo de Campos3 and Samuel Kaski2

1University of Southampton, School of Electronics & Computer Science, Southampton, UK
2Department of Information and Computer Science, Helsinki University of Technology, Helsinki, Finland
3Xerox Research Centre Europe, Meylan, France

Searching for images in a large collection is a difficult task for automated algorithms. Many current techniques rely on items that have been manually 'tagged' with descriptors. This situation is not ideal: it is difficult both to formulate the initial query and to navigate the large number of hits returned. In order to present relevant images to the user, many systems rely on an explicit feedback mechanism, in which a machine learning algorithm uses the feedback to present a new, more relevant set of images, thus increasing hit rates. In this work we use eye movements to assist the user in such a task, and ask this basic question: "Is it possible to replace or com