Spatial Updating and Mental Rotation in Shape Recognition
This experiment concerned the ability to recognise geometric nonsense shapes from novel viewpoints and, more generally, the mental representation of objects.
It has been shown previously that the visual recognition of shape is susceptible to mismatch between the retinal input and its representation in long-term memory, especially when this mismatch arises from rotations in depth. One possibility is that the visual recognition system resolves such mismatch by transforming either the input or the stored representation, thereby bringing the two into alignment for comparison. In either case, knowing what transformation has taken place should facilitate recognition. In natural circumstances, objects do not disappear and reappear in different orientations inexplicably, and an observer usually knows what to expect from the context. This context includes the environment and the history of the observer's movements, which together specify the transient relationship between the object, the environment and the observer. We used interactive computer graphics to study the effects of providing observers with either implicit or explicit indications of their view transformations on the recognition of a class of shape found previously to be highly view-dependent. The results show that these cues aid recognition to varying degrees, but mostly for oblique views and primarily in terms of accuracy rather than response times. These results provide evidence for egocentric encoding of shape and suggest that knowing one's transformation in view (spatial updating) helps to reduce the problem space involved in matching a shape percept with a mental representation.
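The alignment account above can be illustrated computationally. The sketch below is not part of the study's methods; it is a toy model under assumed details: a stored shape is a point cloud, recognition is a search over candidate rotations in depth, and a spatial-updating cue is modelled as narrowing that search to a window around the true rotation. All names (`recognise`, `match_cost`) and the shape itself are illustrative.

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (y) axis, i.e. a rotation in depth."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def match_cost(percept, template, theta):
    """Mean squared distance after rotating the stored template by theta."""
    return np.mean((percept - template @ rotation_y(theta).T) ** 2)

def recognise(percept, template, candidate_angles):
    """Search candidate rotations; return the best-fitting angle and its cost."""
    costs = [match_cost(percept, template, t) for t in candidate_angles]
    best = int(np.argmin(costs))
    return candidate_angles[best], costs[best]

# Stored representation: a hypothetical 3-D nonsense shape (random points).
rng = np.random.default_rng(0)
template = rng.normal(size=(8, 3))

# The percept: the same shape seen after a 40-degree rotation in depth.
true_angle = np.deg2rad(40)
percept = template @ rotation_y(true_angle).T

# Without a cue, the full range of rotations must be searched ...
full_search = np.deg2rad(np.arange(0, 360, 5))
# ... whereas a spatial-updating cue restricts the search to a small window.
cued_search = np.deg2rad(np.arange(30, 50, 5))

angle_full, _ = recognise(percept, template, full_search)
angle_cued, _ = recognise(percept, template, cued_search)
```

Both searches recover the correct rotation, but the cued search examines far fewer candidates, which is one way of cashing out the claim that spatial updating "reduces the problem space" of the match.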