Columbia Gaze Data Set
Advancing gaze tracking research with more images than any other gaze data set.

Sample images from the Columbia Gaze Data Set.
Gaze tracking and gaze locking have the potential to fundamentally change how we interact with computers and devices. To be accurate, however, gaze detectors must be trained on many different gaze directions and head poses and on a wide variety of users. The Columbia Gaze Data Set is a new benchmark for training gaze detectors and evaluating their performance. It comprises 5,880 images of 56 people over 5 head poses and 21 gaze directions, making it the largest publicly available gaze data set at the time of its release.
The Project

The Columbia Gaze Data Set includes five head poses (top row) and 21 gaze directions (bottom grid; shown here for the 0° head pose).
Detailed Statistics
The data set contains a total of 5,880 high-resolution images of 56 subjects (32 male, 24 female), and each image has a resolution of 5,184 x 3,456 pixels. Of the subjects, 21 are Asian, 19 are White, 8 are South Asian, 7 are Black, and 1 is Hispanic or Latino. The subjects range from 18 to 36 years of age, and 21 of them wear prescription glasses.
The data set includes an image of each subject for every combination of five horizontal head poses (0°, ±15°, ±30°), seven horizontal gaze directions (0°, ±5°, ±10°, ±15°), and three vertical gaze directions (0°, ±10°), for 105 images per subject and 5,880 images in total. In particular, each subject has five gaze locking images (0° vertical and horizontal gaze direction): one for each head pose.
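To make the combinatorics concrete, here is a minimal Python sketch that enumerates the capture grid described above and checks the resulting image counts. The angle lists and tuple ordering are written out purely for illustration and do not reflect the data set's actual file layout or naming convention.

```python
from itertools import product

NUM_SUBJECTS = 56
HEAD_POSES_DEG = [-30, -15, 0, 15, 30]               # horizontal head poses
HORIZONTAL_GAZES_DEG = [-15, -10, -5, 0, 5, 10, 15]  # horizontal gaze directions
VERTICAL_GAZES_DEG = [-10, 0, 10]                    # vertical gaze directions

# One image per (head pose, vertical gaze, horizontal gaze) combination per subject.
per_subject = list(product(HEAD_POSES_DEG, VERTICAL_GAZES_DEG, HORIZONTAL_GAZES_DEG))
assert len(per_subject) == 5 * 3 * 7 == 105

total_images = NUM_SUBJECTS * len(per_subject)
assert total_images == 5880

# The gaze locking images are those with 0° gaze in both directions: one per head pose.
gaze_locking = [combo for combo in per_subject if combo[1] == 0 and combo[2] == 0]
assert len(gaze_locking) == 5
```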

Our image capture setup for the Columbia Gaze Data Set.
Collection Procedure
We seated subjects in a fixed location in front of a black background. We captured the five head poses by moving the camera between five different positions, which were each 2 m from the subject.
We attached several grids of dots to a wall in front of the subject (each camera position had a corresponding 7 x 3 grid of dots) and asked subjects to direct their gaze toward each dot in turn. The subjects used a height-adjustable chin rest to stabilize their faces and to position their eyes 70 cm above the floor. The camera was at eye height, as was the center row of dots.
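As a rough sketch of this geometry (not part of the released data), the Python snippet below computes where each dot in a 7 x 3 grid would need to sit on the wall so that it subtends the intended gaze angle from the subject's eyes. The eye-to-wall distance is an assumed placeholder, since the data set description does not state it.

```python
import math

# Assumed eye-to-wall distance in meters; this value is NOT given in the data
# set description and is used here only to make the geometry concrete.
WALL_DISTANCE_M = 2.0
EYE_HEIGHT_M = 0.70  # eyes (and the center dot row) were 70 cm above the floor

HORIZONTAL_GAZES_DEG = [-15, -10, -5, 0, 5, 10, 15]
VERTICAL_GAZES_DEG = [10, 0, -10]  # top row to bottom row

def dot_position(h_deg: float, v_deg: float) -> tuple:
    """Return (x, z) in meters: x is the horizontal offset from the grid
    center, z the height above the floor, for a dot at the given gaze angles."""
    x = WALL_DISTANCE_M * math.tan(math.radians(h_deg))
    z = EYE_HEIGHT_M + WALL_DISTANCE_M * math.tan(math.radians(v_deg))
    return round(x, 3), round(z, 3)

# The 7 x 3 grid corresponding to one camera position (e.g., the 0° head pose).
for v in VERTICAL_GAZES_DEG:
    print([dot_position(h, v) for h in HORIZONTAL_GAZES_DEG])
```

At the assumed 2 m wall distance, for example, the ±15° columns would sit about 0.54 m to either side of the grid center (2 x tan 15° ≈ 0.536 m).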
Publication

Gaze Locking: Passive Eye Contact Detection for Human–Object Interaction