Original Article

Reaching for virtual objects: binocular disparity, retinal motion and the control of prehension


Mark F. Bradshaw; Paul B. Hibbard

DOI: 10.1590/S0004-27492003000600007

ABSTRACT

To reach for and grasp an object, its distance, shape and size must be known. In principle, the combination of disparity and motion information could be used to provide this information, whereas the perception of object shape from disparity alone is biased and the perception of object size from motion alone is indeterminate. Here we investigate whether the visual system can take advantage of the simultaneous presence of both cues in the control of reaching and grasping. For both real and virtual objects, peak grip aperture scaled with object size and peak wrist velocity scaled with object distance. Kinematic indices, which reflect the distance reached and the perceived size, showed clear and systematic biases. These biases may be interpreted as arising from biases in the use of binocular disparity, and from the indeterminacy of the information provided by motion. Combining disparity and motion information improved estimates of the width, but not the depth or distance, of objects. Overall, these results suggest that accurate metric depth information for the control of prehension is not available from binocular or motion cues, either in isolation or in combination.

Keywords: Prehension; Binocular disparity; Distance perception; Size perception




INTRODUCTION

Binocular disparity and relative motion are primary sources of visual information about the 3-D world. It has long been established that both cues can independently evoke a strong impression of depth structure(1,2) and determine relative depth with great precision(3). The degree to which either cue supports veridical judgments of an object's size and depth, however, has been questioned in a recent, well-focused sequence of experiments on depth constancy and shape perception(3-11). The general conclusion of these experiments is that depth constancy is considerably less than perfect, perceived shape is distorted and absolute distance is typically misestimated. This occurs despite the fact that, in principle, sufficient information is present to support a veridical percept. To recover veridical depth structure, initial measurements of horizontal binocular disparity and retinal image motion must be scaled using additional (e.g. extra-retinal) information or, if both cues are available simultaneously, this scaling problem can be avoided, as there is sufficient information in the retinal image to determine structure and viewing distance uniquely through the combination of the information that each cue provides(3,10,12,13). Whether perceptual performance improves when both cues are available simultaneously remains uncertain(3,14-17; see also Landy & Brenner(18) for a recent review).
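
To make the scaling problem concrete, consider the standard small-angle approximations (a textbook illustration, not taken from the studies cited): for an object of depth \(\Delta z\) and angular width \(\theta\) at viewing distance \(D\), viewed with interocular separation \(I\),

\[ \Delta z \;\approx\; \frac{\delta\, D^{2}}{I}, \qquad W \;\approx\; \theta\, D, \]

where \(\delta\) is the relative horizontal disparity between the object's front and back and \(W\) its physical width. Both equations require an estimate of \(D\): an error in that estimate distorts recovered depth by the square of the ratio of estimated to true distance, but recovered width only linearly. In principle, combining the disparity and motion fields over-constrains this system and allows \(D\) itself to be recovered(12).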

These apparent shortcomings in the perception of 3-dimensional properties may have limited behavioural significance, however, because there are many perceptual tasks that do not require the recovery of metric structure and so can proceed successfully without it. For example, if the magnitude of the disparities (or relative motion) is known, then bas-relief structure can be recovered, which is sufficient for object recognition and depth-matching tasks(11,19-22). It is therefore conceivable that the psychophysical results reviewed above, although intuitively surprising, may simply reflect the fact that the visual system does not expend computational effort on perceptual tasks where it is not strictly necessary. If this 'task-dependent' view of visual processing is correct, then perceptual tasks alone (including many paddle- or pointer-based responses) cannot provide a comprehensive account of the ability of the visual system to recover metric 3-dimensional structure on the basis of single or multiple cues.

A task for which veridical 3-D information about size, depth and distance is required is the planning of natural prehensile movements(23-25). In order to prepare to reach for and grasp an object in the world, information is required to plan the activity of the particular muscle groups which lift and transport the hand to a specific 3-D location and pre-shape the grip by the appropriate amount. For this task, the computational expense involved in the precise measurement and calibration of binocular disparity and retinal motion information may be justified, as visual information about the correct distance and 3-D shape is essential for the selection of the most efficient motor programs when the cost of any error may be high. Consistent with this idea is evidence suggesting that the visual system may transform information differently when faced with a visuo-motor task as opposed to a perceptual task(26-31). Indeed, in distinguishing between putative perception and action systems, Milner and Goodale(31) speculated that the action system may be distinguished from its perceptual counterpart by its recovery of metric information about the world.

The goal of the current study, therefore, was to assess whether the recovery of depth, size and distance information from binocular disparity and relative motion is subject to similar distortions to those noted in psychophysical experiments when the observer is required to perform an action-based (natural prehension) task. To do this, we determined the ability of subjects to recover depth, size and distance on the basis of (i) binocular disparity, (ii) relative motion and (iii) binocular disparity and relative motion combined, in the control of natural prehensile movements. Given the inherent need for veridical information, such a task may form a more appropriate test of the disparity-motion cue-combination model proposed by Richards(12). Both twoframe and multiframe motion were used. The twoframe motion condition is a critical test of the cue-combination hypothesis as, in this situation, neither disparity nor motion information is sufficient in isolation to determine the correct shape of the object but, if combined, veridical estimates of the parameters of interest can be determined(14). We have recently reported that metric structure is not recovered accurately from binocular disparity alone in order to support prehension(32).

The dependent measure of prehension lends itself naturally to assessing the subject's estimate of an object's size and distance, in addition to assessing their estimate of depth, which is the main dependent measure of most, but not all, psychophysical studies(16). In the present experiment, subjects were asked to pick up the object either from front-to-back, to provide an estimate of perceived depth, or from side-to-side, to provide an estimate of perceived size/width. The fact that visual depth cues require an estimate of distance to determine the correct depth, and that the use of retinal size requires an estimate of distance to determine physical size, allows us to address the important question of whether a common estimate of distance is used in each case(9) and, in turn, whether this is the same estimate as that used to program the transport component of the reach.

Virtual, disparity- and motion-defined objects were used in the present experiment. In any study of natural prehensile movements, the participant usually reaches for, and grasps, the target object successfully, and so a kinematic analysis is required to reveal the effects of the experimental manipulations. We therefore determined peak wrist velocity and peak grip aperture, which are indirect indices of perceived distance and perceived size respectively(30,33-34). To relate these indices to the physical dimensions of distance and size (and so allow explicit comparisons with the perceptual tasks featured in the literature), we also included real objects, which were specified by the full range of visual cues. These objects were grasped under near-identical experimental conditions to the virtual objects, and the results from this condition are used as a reference in the interpretation of the kinematic indices.

It is important at this stage to emphasise that, although the current study was clearly inspired by perceptual studies showing biases in the perception of three-dimensional information from disparity and motion cues, the goal is not to make a direct comparison between performance in the two domains. Rather, the aim is to establish the extent to which binocular and retinal-motion cues provide accurate metric depth information when the task is the control of prehensile movements.


METHOD

Participants

Nine right-handed adult volunteers participated in the experiment. All participants had normal or corrected-to-normal vision and stereo-acuity scores of < 40 arc sec (Randot stereo test; Stereo Optical Co., Chicago). Participation was voluntary.

Apparatus and stimuli

Participants sat at a matt-black table with their head position maintained using a chin rest. Eye level was fixed at 17 cm above the tabletop. The start point for each trial was a 2 cm diameter start button mounted on the tabletop along the body midline. The task was to reach out and pick up objects that were placed (and projected to appear) along the tabletop.

Virtual objects were defined by presenting random-dot stereograms on a 19" flat-screen computer monitor that was positioned at a distance of 46.7 cm from the observer, orthogonal to the body midline, and viewed through a semi-silvered mirror set at an angle of 45° to the median plane. The resolution of the monitor was 800 x 600 pixels, and the refresh rate was 120 Hz. The left and right eye's images were presented separately using CrystalEyes LCD shutter-glasses. Three objects were used, all of which were 9 cm tall elliptical cylinders, with diameters of 3.2 x 5.0 cm, 5.0 x 5.0 cm, or 7.4 x 5.0 cm. Objects were placed with their shorter diameter either along or orthogonal to the midline, giving three object widths and three object depths. The virtual cylinders were defined by Gaussian blobs, with a standard deviation of 1 mm, placed on their surfaces with a density of 1 dot/cm². The position of each dot in the left and right eye's image was determined using a standard ray-tracing technique. In all conditions, the virtual images were viewed binocularly. For the twoframe and multiframe motion conditions, the vergence angle required to fuse the left and right eye's images was appropriate to the distance of the cylinder, but there were no differences between the disparities of the points on the surface (i.e. binocular information was consistent with a planar surface presented at the distance of the cylinder). Depth information was instead provided by a rotation of the cylinder about a vertical axis. Objects were placed at 30 or 50 cm from the start-switch. In the twoframe motion condition, the cylinder rotated through an angle of 5 deg, with an interframe interval of 200 ms; in the multiframe motion condition, the cylinder rotated through an angle of 16.4 deg, at a speed of 20.5 deg/sec.
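
As an illustration of the geometry, the following is a minimal sketch in Python of the ray-tracing step (our reconstruction, not the authors' stimulus code; the interocular distance and the eye-to-object distance are assumed values):

import numpy as np

IOD = 6.5        # assumed interocular distance (cm)
SCREEN = 46.7    # optical distance of the monitor via the mirror (cm)

def sample_surface(width, depth, height=9.0, density=1.0):
    """Random surface positions (angle, height) on an elliptical cylinder,
    at roughly `density` dots per cm^2 of surface area."""
    area = np.pi * (width + depth) / 2.0 * height   # approximate surface area
    n = int(density * area)
    phi = np.random.uniform(0.0, 2.0 * np.pi, n)
    y = np.random.uniform(-height / 2.0, height / 2.0, n)
    return phi, y

def place(phi, y, width, depth, dist, rot=0.0):
    """3-D dot positions for the cylinder rotated `rot` radians about its
    vertical axis and centred at viewing distance `dist` (cm)."""
    x0 = (width / 2.0) * np.cos(phi)                # cross-section before rotation
    z0 = (depth / 2.0) * np.sin(phi)
    x = x0 * np.cos(rot) - z0 * np.sin(rot)         # rigid rotation about y
    z = dist + x0 * np.sin(rot) + z0 * np.cos(rot)
    return np.column_stack([x, y, z])

def project(points, eye_x):
    """Ray-trace each dot to the screen plane for an eye at (eye_x, 0, 0)."""
    x, y, z = points.T
    t = SCREEN / z                                  # ray parameter at the screen
    return np.column_stack([eye_x + (x - eye_x) * t, y * t])

phi, y = sample_surface(3.2, 5.0)                   # one of the three cylinders
frame1 = place(phi, y, 3.2, 5.0, dist=60.0)
left, right = project(frame1, -IOD / 2), project(frame1, +IOD / 2)
# Twoframe motion: the same dots rotated through 5 deg, shown 200 ms later.
# (In the motion-only conditions the two eyes' images were instead rendered
# to be consistent with a plane at the cylinder's distance.)
frame2 = place(phi, y, 3.2, 5.0, dist=60.0, rot=np.deg2rad(5.0))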

In the real-object viewing condition, objects were illuminated by a desk lamp in an otherwise dark room, and viewed through the semi-silvered mirror. The objects were painted black, and covered in randomly positioned white blobs with a diameter of ≈3 mm and a density of 1 dot/cm².

Design and Procedure

There were five blocks of trials in which participants viewed virtual objects that were defined by (i) binocular disparity, (ii) twoframe motion, (iii) multiframe motion, (iv) binocular disparity and twoframe motion, or (v) binocular disparity and multiframe motion. An additional block (vi) presented real objects in the same experimental rig but in a fully lit laboratory environment. Each block consisted of 36 trials (2 distances x 3 widths x 3 repetitions, plus 2 distances x 3 depths x 3 repetitions), as enumerated in the sketch below. Participants were instructed to pick up the objects with the thumb and forefinger of their right hand, grasping either the left and right sides of the object (i.e. its width) or the front and back of the object (i.e. its depth), as instructed on each trial. Objects were viewed for 2 s, after which time a short beep was heard. Participants reached out and picked up the objects as soon as they heard the beep. When the participant's hand moved off the start switch, the desk lamp was switched off, so that all reaches were performed open loop, with no visual feedback. The binocularly defined virtual object condition was performed in a dark room, and an occluder was positioned behind the semi-silvered mirror so that the real objects could not be seen.
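
A minimal sketch of one block's trial list (our illustration; the variable names and the shuffling of presentation order are assumptions, as the randomization procedure is not stated):

from itertools import product
import random

distances = [30, 50]            # cm from the start switch
sizes = [3.2, 5.0, 7.4]         # object diameter along the grasped axis (cm)

# 2 distances x 3 sizes x 3 repetitions for each grasp orientation = 36 trials.
trials = [
    {"distance": d, "size": s, "grasp": g}
    for g in ("width", "depth")
    for d, s, _ in product(distances, sizes, range(3))
]
assert len(trials) == 36
random.shuffle(trials)          # presentation order assumed randomized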

Data Analysis

Three spherical infrared-reflective markers were attached to the thumbnail, the nail of the forefinger, and the wrist (over the head of the radius) of the right hand. The positions of the wrist, forefinger and thumb markers were recorded by a three-camera Macreflex motion-analysis system operating at 120 Hz. These data were filtered using a zero-phase filtering algorithm with a cut-off frequency of 12 Hz(35), and the peak velocity of motion of the wrist and the peak grip aperture (the greatest separation between the thumb and index finger) were derived. These kinematic indices were chosen as they have been shown in a number of previous studies to scale with the distance and size of objects, respectively(24,36). The accuracy of the Macreflex system was assessed using a procedure based on that of Haggard and Wing(37). The standard deviation of measurements was found to be < 0.3 mm.
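
As an illustration of this processing step, a minimal sketch in Python/SciPy (a dual-pass Butterworth filter is one common zero-phase implementation; the filter family and order are our assumptions, only the 12 Hz cut-off and 120 Hz sampling rate are given above):

import numpy as np
from scipy.signal import butter, filtfilt

FS = 120.0  # Macreflex sampling rate (Hz)

def smooth(marker, cutoff=12.0, order=2):
    """Zero-phase low-pass filtering of a (n_samples, 3) marker trajectory.
    Forward-backward (dual-pass) filtering cancels the filter's phase lag."""
    b, a = butter(order, cutoff / (FS / 2.0))
    return filtfilt(b, a, marker, axis=0)

def kinematic_indices(wrist, thumb, finger):
    """Peak wrist velocity (mm/s) and peak grip aperture (mm) for one reach."""
    wrist, thumb, finger = smooth(wrist), smooth(thumb), smooth(finger)
    speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1) * FS  # tangential speed
    aperture = np.linalg.norm(thumb - finger, axis=1)            # thumb-finger gap
    return speed.max(), aperture.max()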


RESULTS

Peak wrist velocity

Individual mean values were calculated for each object-by-distance combination and entered into a 6 x 2 x 3 x 2 [stimulus type x object distance x object size x grasp orientation (width or depth)] analysis of variance. As is typical of reaching experiments, the peak velocity of wrist movements increased with increasing object distance (F(1,7) = 66.7; p < 0.001). This is illustrated in figure 2, which depicts the average peak velocity attained at each distance in each experimental condition.

[Figure 2: mean peak wrist velocity at each object distance for each viewing condition]

A main effect of viewing condition was also found (F(5,35) = 3.87; p < 0.01). Planned comparisons revealed that peak wrist velocities were slower than those exhibited for real objects in all conditions containing object motion. Reaches were also slower when reaching to grasp across the width rather than the depth of objects (F(1,7) = 17.695; p < 0.005). An interaction between condition and distance was also found (F(5,35) = 5.637; p < 0.001). As shown in figure 2, less scaling of peak velocity with distance was found in all cases for virtual objects than for real objects, resulting from slower reaches to the far distance in the virtual conditions.
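
The repeated-measures analyses reported here can be reproduced with standard tools; the following is a sketch in Python/statsmodels (the original analysis software is not stated, and the file and column names are hypothetical):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table of individual cell means: one row per subject x stimulus
# condition x distance x size x grasp orientation (hypothetical file/columns).
df = pd.read_csv("peak_velocity_cell_means.csv")

result = AnovaRM(
    df, depvar="peak_velocity", subject="subject",
    within=["condition", "distance", "size", "orientation"],
).fit()
print(result)   # F and p values for main effects and interactions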

Peak grip aperture

Individual mean values were calculated for each object-by-distance combination and entered into a 6 x 2 x 3 x 2 (stimulus type x object distance x object size x grasp orientation) analysis of variance. Figure 3 plots peak grip aperture as a function of object size for each experimental condition. Peak grip aperture increased with increasing object size (F(2,16) = 52.1; p < 0.001), and was greater at the further distance than at the closer distance (F(1,8) = 13.8; p < 0.01). Peak grip aperture was also affected by viewing condition (F(5,40) = 150.9; p < 0.001). As shown in figure 3a, this results from increased grip apertures when object size was defined by motion cues only. Conditions in which shape was also defined by binocular disparity (either alone or in conjunction with motion) did not differ overall from grips exhibited for real objects (figure 3b).

[Figure 3: peak grip aperture as a function of object size for each viewing condition (panels a and b)]

Differences between grip apertures in the real and virtual conditions are summarized in figure 4, which plots grip apertures relative to those exhibited for real objects for each virtual condition. Results are plotted separately for reaches across the width and depth of objects, and for the two distances, in each case averaged across object sizes.

[Figure 4: grip apertures in each virtual condition relative to those for real objects, by grasp axis and distance]

The scaling of grip aperture with object distance differed between conditions, as indicated by a condition x distance interaction (F(5,40) = 3.9; p < 0.01). Planned comparisons revealed that grip scaling differed significantly from that for real objects for the binocular static and binocular twoframe motion stimuli. Interestingly, no differences were observed for motion-defined stimuli, or for stimuli defined by disparity and multiframe motion. This was, however, restricted to reaches across the widths of objects, as indicated by a three-way (condition x distance x grip orientation) interaction (F(5,40) = 177.5; p < 0.05).

A condition by size interaction was also found. Again, this was analysed further using planned comparisons to determine which conditions differed significantly from the real-object condition. As previously reported(32), less scaling of grip aperture with object size was found for disparity-defined objects than for real objects; a similar effect was found for objects defined by multiframe motion, but not for the other conditions. Interestingly, grips across the widths of objects defined by disparity and multiframe motion in conjunction did not differ significantly from those for real objects, either in the overall magnitude of grip apertures or in the effects of object size or object distance. This was not true when grasping across the depth of the objects.

To illustrate the extent of depth and size constancy in the grip apertures, we follow convention and calculate 'estimated scaling distances', as described previously by Glennerster et al.(11). This also allows an explicit comparison with the results from perceptual apparently-circular-cylinder (ACC) tasks reported elsewhere(37). The estimated scaling distance is defined as the distance at which a real object would have to be placed to elicit the same grip aperture as that manifest for the size or depth task in each virtual-object condition. This conversion was achieved by performing a regression of peak grip aperture against object size for the real objects, and using the resulting equation to convert the peak grips in the virtual conditions into 'notional object sizes' (mm). These could then be transformed into estimated scaling distances. The converted settings are plotted in figure 5. (Note that, due to the relatively large grip apertures exhibited in the twoframe and multiframe motion conditions, this transformation could not be completed for these conditions.)
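
The regression step is fully specified above; the following sketch of the conversion (our reconstruction, not the authors' code) is shown for the width task, under the assumption that perceived width scales linearly with the distance estimate:

import numpy as np

def real_object_fit(sizes_mm, peak_grips_mm):
    # Regression of peak grip aperture on object size for the real objects.
    slope, intercept = np.polyfit(sizes_mm, peak_grips_mm, 1)
    return slope, intercept

def estimated_scaling_distance(grip_mm, slope, intercept,
                               true_size_mm, true_distance_mm):
    # Invert the real-object regression to obtain a 'notional object size',
    # then convert it to the distance at which a real object of the true
    # size would have produced that grip. The linear size/distance relation
    # assumed here applies to width; depth from disparity scales with
    # distance squared, so the corresponding transform would differ.
    notional_size = (grip_mm - intercept) / slope
    return (notional_size / true_size_mm) * true_distance_mm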

[Figure 5: estimated scaling distances for the size and depth tasks in each virtual condition]

The slopes of these size and depth scaling graphs are informative. The dashed line (slope of 1) depicts perfect depth or size constancy, whereas a slope of zero would indicate that object distance was not taken into account when determining perceived size and shape. Clearly, the scaling of depth on the basis of binocular disparity information is not affected by the presence of relative motion information; this is true in both the multiframe and twoframe conditions. For width, however, the story is somewhat different, as the slopes are much greater (and close to 1) in both combined-cue conditions.

The slopes of these scaling graphs are summarised in figure 6, which also shows a typical scaling slope obtained using the ACC task with stimuli defined by binocular disparity(37).

[Figure 6: summary of scaling slopes, together with a typical slope from the perceptual ACC task(37)]

DISCUSSION

The use of binocular and motion cues to scale prehension

The present experiment was designed to determine the degree to which depth and size constancy were maintained, on the basis of disparity and retinal motion information presented separately and in combination, for the control of natural prehensile movements. Such a visuo-motor task was chosen as it requires the recovery of veridical metric information and so poses a critical test of the visual system's ability to deliver such information and of its ability to combine different visual cues for this purpose.

For both real and virtual objects, peak wrist velocity scaled with object distance, and peak grip aperture scaled with object size (both width and depth), showing that information about both location and size was readily available for these objects. This was true whether the objects were defined by disparity, motion or a combination of the two cues. These results demonstrate clearly that each of these sources of depth information is employed in the control of prehension. Information about object distance was also provided by convergence angle and the height of the object in the visual scene, both of which have been shown to be important in the control of prehension(38-40). The fact that grip apertures were significantly larger for motion-defined stimuli, when presented in isolation, than for real objects or disparity-defined objects may either result from an overestimation of size and depth, or from participants adopting a 'conservative strategy' as a consequence of greater uncertainty about the target object's dimensions in these conditions. Such a conservative strategy is usually characterised by wider grip apertures and by reaching less far, in order to build in a greater margin for error at the end point of the reach(36,41). Consistent with this interpretation, peak velocity was significantly slower in the motion-only conditions. A conservative strategy would be warranted when estimating absolute width and depth from motion information alone, since this requires information about the three-dimensional motion of the object (in this case, its speed of rotation). Such information is available only by considering either spatial or temporal variations in velocity; perceptual studies have shown that we cannot readily utilize this information(42), and a similar picture may hold here also.

In the estimation of size and depth (i.e. grasping side-to-side or front-to-back) on the basis of disparity information alone, the results were very similar to those previously reported using standard psychophysical procedures such as the ACC task(4,37) and size and depth settings(3). When disparity and motion cues were available together, in either the twoframe or multiframe condition, there was no discernible advantage for the recovery of depth information. The estimates of width, however, did improve significantly. This suggests that a more reliable estimate of viewing distance is recovered in the combined-cue conditions, but that it is used effectively only in the estimation of object width: given that angular size is available directly from the image, an improved estimate of viewing distance is all that is required to improve the estimate of width.

Consistency of distance and size estimates

A further question that may be addressed is the extent to which the various estimates of object properties are constrained to be mutually consistent. That is, are the estimates of width, depth and distance for a given object and location consistent, given the visual information that is available? In the current context, this may best be answered by considering how each of these estimates is affected by viewing distance. As demonstrated in figure 4, object width and depth for disparity-defined objects were overestimated at the near distance, but accurate at the far distance, when compared with reaches made for real objects. This is consistent with the scaling of image information by an overestimate of object distance at the near distance, and an accurate estimate at the far distance. In contrast, peak wrist velocities are consistent with an accurate estimate of distance at the near distance, and an underestimate of distance at the far distance. Thus, whereas object width and depth appear to be derived using the same estimate of object distance, this does not appear at first sight to be consistent with the estimate of distance that is used to control the transport component of the reach. Before reaching such a conclusion, however, it is necessary to consider the possibility that an overall bias in grip aperture or wrist velocity, combined with consistent distance scaling, might explain these discrepant results. Either an overall increase in grip aperture, or an overall slowing of the reaching movement, for virtual versus real objects, could explain the apparent discrepancy between the effects of distance on reaching and grasping. While no firm conclusions can be drawn either way on the basis of these results, this latter possibility is consistent with the adoption of a "conservative strategy", producing both an increase in grip aperture and a slowing of the reach movement. This is consistent with the suggestion made by Rogers and Bradshaw(9) that perceptual estimates of size and distance might be mutually consistent if it is accepted that an incorrect "standard" is adopted when making judgments. Equally, however, it has been suggested that size and shape are not constrained to be mutually consistent(17). This would also be consistent with the suggestion that reaching and grasping are controlled relatively independently(24,33,43), and might rely on independent representations of objects.

Cue combination

In the "motion only" conditions, virtual objects were in fact viewed binocularly, and binocular disparity was consistent with a flat, planar object. In the "combined cue" stimuli, disparity was consistent with the appropriate object shape. Simple weighted averaging of information would thus lead us to predict greater grip apertures in the latter case, when in fact we observed smaller grip apertures. The results are consistent with either a vetoing of motion information or modified weak fusion. When shape was defined by both disparity and multi-frame motion, grasps across the width (but not the depth) of object did not differ from those in the real-object condition. The combination of disparity and motion was thus not wholly effective in providing veridical shape and size information. Equally, peak wrist velocity was not affected by the combination of binocular and motion cues, even where this led to improved control of grasping. Again, this is consistent with a relatively independent scaling of size and distance information(18).

Summary

The current results provide clear evidence that both binocular and motion cues provide depth information for the control of prehension. In each case, grip apertures scaled appropriately with increasing object depth. However, the results reveal clear limitations in the information provided by each of these cues. Firstly, grip apertures were considerably increased for objects whose shape was defined by motion alone, consistent with low confidence in the information provided by this cue. Secondly, clear biases were evident in the other cases, in which shape was defined by binocular disparity, consistent with the scaling of this information by an estimate of distance. These biases were not removed by the combination of disparity and motion information.

One of the aims of the current study was to investigate the combination of disparity and motion information in a task that is presumed to require accurate, unbiased metric depth information. Even in this case, biases were evident: the combination of disparity and motion improved the estimation of the width, but not the depth, of objects. We might therefore question the need for metric depth information (which is, in principle, available relatively straightforwardly from the retinal information) even in this case. In natural viewing conditions, it is possible that such a level of accuracy is not necessary. The information provided by disparity or motion is sufficient to allow objects to be ordered in terms of their size (illustrated by the scaling of grip aperture that was observed in all cases), and small biases of the type observed may easily be corrected by the use of online feedback(44). It is also important to note that these biases are not "errors" as such; rather than leading to a failure to perform the task, they will simply lead to performance that is not as efficient as possible. They may then be of no consequence when the cost of such inefficiency is weighed against the decreased risk of erroneous performance that the observed biases provide.


ACKNOWLEDGEMENTS

This work was supported by the Wellcome Trust.


REFERENCES

1. Julesz B. Binocular depth perception of computer-generated patterns. Bell Syst Tech J 1960;39:1125-62.

2. Wallach H, O'Connell DN. The kinetic depth effect. J Exp Psychol 1953;45:205-17.

3. Bradshaw MF, Parton AD, Glennerster A. The task-dependent use of binocular disparity and motion parallax information. Vision Res 2000;40:3725-34.

4. Johnston EB. Systematic distortions of shape from stereopsis. Vision Res 1991;31:1351-60.

5. Collett TS, Schwarz U, Sobell E. The interaction of oculomotor cues and stimulus size in stereoscopic depth constancy. Perception 1991;20:733-54.

6. Todd JT, Tittle JS, Norman JF. Distortions of 3-dimensional space in the perceptual analysis of motion and stereo. Perception 1995;24:75-86.

7. Rogers BJ, Bradshaw MF. Vertical disparities, differential perspective and binocular stereopsis. Nature 1993;361:253-5.

8. Durgin FH, Proffitt DR, Olson JT, Reinke KS. Comparing depth from binocular disparity to depth from motion. J Exp Psychol Hum Percept Perform 1995;21:679-99.

9. Rogers BJ, Bradshaw MF. Disparity scaling and the perception of fronto-parallel surfaces. Perception 1995;24:155-80.

10. Bradshaw MF, Glennerster A, Rogers BJ. The effect of display size on disparity scaling from differential perspective and vergence cues. Vision Res 1996;36:1255-65.

11. Glennerster A, Rogers BJ, Bradshaw MF. Stereoscopic depth constancy depends on the subject's task. Vision Res 1996;36:3441-56.

12. Richards W. Structure from motion and stereo. J Opt Soc Am A 1985;2:343-9.

13. Cumming BG, Johnston EB, Parker AJ. Vertical disparities and perception of 3-dimensional shape. Nature 1991;349:411-3.

14. Johnston EB, Cumming BG, Landy MS. Integration of stereopsis and motion shape cues. Vision Res 1994;34:2259-75.

15. Tittle JS, Todd JT, Perotti VJ, Norman JF. Systematic distortion of perceived three-dimensional structure from motion and binocular stereopsis. J Exp Psychol Hum Percept Perform 1995;21:663-78.

16. Bradshaw MF, Parton AD, Eagle RA. The interaction of binocular disparity and motion parallax in determining perceived depth and perceived size. Perception 1998;27:1317-31.

17. Brenner E, van Damme WJM. Perceived distance, shape and size. Vision Res 1999;39:975-86.

18. Landy MS, Brenner E. Motion-disparity interaction and the scaling of stereoscopic disparity. In: Harris LR, Jenkin MRM, editors. Vision and attention. New York: Springer Verlag; 2001. p.129-51.

19. Rogers BJ, Cagenello RB. Disparity curvature and the perception of three-dimensional surfaces. Nature 1989;339:135-7.

20. Rogers BJ, Bradshaw MF. Are different estimates of 'd' used in disparity scaling? Invest Ophthalmol Vis Sci 1995;36:230.

21. Gårding J, Porrill J, Mayhew JEW, Frisby JP. Stereopsis, vertical disparity and relief transformations. Vision Res 1995;35:703-22.

22. Bulthoff HH, Edelman SY, Tarr MJ. How are 3-dimensional objects represented in the brain? Cereb Cortex 1995;5:247-60.

23. Paillard J. The multichannelling of visual cues and the organisation of a visually guided response. In: Stelmach GE, Requin J, editors. Tutorials in motor behaviour. North-Holland Publishing Company; 1980.

24. Jeannerod M. The neural and behavioural organization of goal-directed movements. Oxford: Clarendon Press; 1988.

25. Servos P, Goodale MA, Jakobson LS. The role of binocular vision in prehension: a kinematic analysis. Vision Res 1992;32:119-27.

26. Bridgeman B, Kirch M, Sperling A. Segregation of cognitive and motor aspects of visual function using induced motion. Percept Psychophys 1981;29:336-42.

27. Goodale MA, Milner AD, Jakobson LS, Carey DP. A neurological dissociation between perceiving objects and grasping them. Nature 1991;349:154-6.

28. Aglioti S, Goodale MA, DeSouza JFX. Size-contrast illusions deceive the eye but not the hand. Curr Biol 1995;5:679-85.

29. Gentilucci M, Chieffi S, Daprati E, Saetti MC, Toni I. Visual illusion and action. Neuropsychologia 1996;34:369-76.

30. Bradshaw MF, Watt SJ. Pre-movement delay allows a dissociation between visuo-motor and visuo-perceptual judgements in normal subjects. Neuropsychologia 2002;40:1766-78.

31. Milner AD, Goodale MA. The visual brain in action. Oxford: Oxford University Press; 1995.

32. Hibbard PB, Bradshaw MF. Reaching for virtual objects: binocular disparity and the control of prehension. Exp Brain Res 2002 (accepted).

33. Jeannerod M. The timing of natural prehension movements. J Mot Behav 1984;16:235-54.

34. Jeannerod M, Decety J. The accuracy of visuomotor transformation. An investigation into the mechanisms of visual recognition of objects. In: Goodale MA, editor. Vision and action: the control of grasping. Norwood, NJ: Ablex; 1990. p.33-48.

35. Oppenheim AV, Schafer RW. Discrete-time signal processing. Englewood Cliffs: Prentice Hall; 1989.

36. Watt SJ, Bradshaw MF. Binocular cues are important in controlling the grasp but not the reach in natural prehension movements. Neuropsychologia 2000;38:1473-81.

37. Hibbard PB, Bradshaw MF. Isotropic integration of binocular disparity and relative motion in the perception of three-dimensional shape. Spat Vis 2002;15:205-19.

38. Mon-Williams M, Dijkerman HC. The use of vergence information in the programming of prehension. Exp Brain Res 1999;128:578-82.

39. Watt SJ, Bradshaw MF. The role of binocular disparity and motion parallax in reaching to grasp objects. J Exp Psychol Hum Percept Perform 2002 (in press).

40. Marotta JJ, Goodale MA. The role of learned pictorial cues in the programming and control of grasping. Exp Brain Res 1998;121:465-70.

41. Wing AM, Turton A, Fraser C. Grasp size and accuracy of approach in reaching. J Mot Behav 1986;18:245-60.

42. Hogervorst MA, Eagle RA. The role of perspective effects and accelerations in perceived three-dimensional structure-from-motion. J Exp Psychol Hum Percept Perform 2000;26:934-55.

43. Sakata H, Taira M. Parietal control of hand action. Curr Opin Neurobiol 1994;4:847-56.

44. Bradshaw MF, Elliot KM. The role of binocular information in the 'on-line' control of prehension. Spat Vis, Special Issue (submitted).


Correspondence to
Mark F. Bradshaw
Department of Psychology, University of Surrey
Guildford, Surrey, GU2 7XH, UK
email: [email protected]

