Abstract. Efficient and reliable classification of visual stimuli requires that their representations be low-dimensional and, therefore, computationally manageable. We have investigated the ability of human subjects to form such representations when confronted with several classes of solid shapes, arranged in a complex pattern in a common underlying high-dimensional parameter space.

Methods. The task in the four experiments was delayed match-to-sample, involving images of computer-rendered 3D animal-like objects, jointly parameterized by 70 variables controlling various details of the object shape. In each trial, the subject had to decide whether two images, shown briefly and consecutively, belonged to the same animal shape (four views per shape were used, corresponding to the vertices of a regular tetrahedron inscribed in the viewing sphere). The subjects (between 5 and 7 per experiment) received no prior training.

Results. In each experiment, response time (RT) and error rate (ER) data were combined into a measure of view similarity, and the resulting proximity matrix was submitted to nonmetric multidimensional scaling (MDS). Examining the configuration of points corresponding to the various views in a 2D MDS solution revealed that (1) different views of the same shape were correctly clustered together, and (2) in each experiment, the relative geometrical arrangement of the view clusters of the different objects reflected the structure of the parameter-space pattern (respectively, a star, a triangle, a square, and a line) that defined the relationships between the stimulus classes. In subsequent computational experiments, the latter phenomenon was accounted for to a significant extent by a model based on representation by similarity to prototypical objects.

Conclusions. The successful recovery of the true low-dimensional structure of the stimulus space by subjects exposed to a series of complex shapes is compatible with the similarity-space theories of shape representation, and can be used to guide the development of computational models of shape vision based on such theories.
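The analysis pipeline described in the Results can be sketched in a few lines: pool RT and ER into a pairwise similarity measure, convert it to a dissimilarity (proximity) matrix, and embed it in 2D with nonmetric MDS. This is a minimal illustration using scikit-learn; the toy data and the z-score combination rule are assumptions for demonstration, not the authors' actual procedure.

```python
# Sketch: from per-pair RT/ER data to a 2D nonmetric MDS configuration.
# The random "data" and the zscore-based combination rule are illustrative
# assumptions; the abstract does not specify the exact combination formula.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

n_views = 8  # e.g. 2 shapes x 4 views each
# Toy per-pair response times (seconds) and error rates on "different" trials:
rt = rng.uniform(0.4, 1.2, size=(n_views, n_views))
er = rng.uniform(0.0, 0.5, size=(n_views, n_views))

def to_dissimilarity(rt, er):
    # Longer RTs and higher error rates on "different" trials indicate that
    # the two views are perceived as MORE similar, hence LESS dissimilar.
    sim = (rt - rt.mean()) / rt.std() + (er - er.mean()) / er.std()
    d = sim.max() - sim          # invert: high similarity -> small distance
    d = (d + d.T) / 2.0          # symmetrize the proximity matrix
    np.fill_diagonal(d, 0.0)     # a view is at distance 0 from itself
    return d

D = to_dissimilarity(rt, er)

# Nonmetric MDS uses only the rank order of the dissimilarities, which suits
# behavioral proximity data whose scale is arbitrary.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)  # one 2D point per view
```

Clusters in `coords` (points belonging to the same shape) and the geometrical arrangement of those clusters would then be inspected, as in the paper, against the star/triangle/square/line patterns imposed in parameter space.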
F. Cutzu and S. Edelman, Explorations of shape space, PNAS, 1996.
S. Edelman, Representation of similarity in 3D object discrimination, Neural Computation 7:407-422, 1995.
Blackmore et al.
Abstract. How we construct a stable visual world, despite the constant occurrence of saccades, is discussed. A computer-graphics method was used to explore transsaccadic memory for complex images. Images of real-life scenes were presented under four conditions: they either stayed still or moved in an unpredictable direction (forcing an eye movement), while simultaneously either changing or staying the same. The changes were the appearance, disappearance, or rotation of an object in the scene. Subjects detected the changes easily when the image did not move, but when it moved their performance fell to chance. A grey-out period was introduced to mimic the one that occurs during a saccade; this also reduced performance, but not to chance levels. These results reveal the poverty of transsaccadic memory for real-life complex scenes. They are discussed with respect to Dennett's view that much less information is available in vision than our subjective impression leads us to believe. Our stable visual world may be constructed from a brief retinal image and a very sketchy higher-level representation, along with a pop-out mechanism that redirects attention. The richness of our visual world is, to this extent, an illusion.
S. J. Blackmore, G. Brelstaff, K. Nelson and T. Troscianko, Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes, Perception 24:1075-1081, 1995.