
Experimental validation of a 3-D vision-based measurement system applied to robot calibration

J. M. S. T. Motta (I); R. S. McMaster (II)

(I) Universidade de Brasília, Departamento de Engenharia Mecânica, Grupo de Automação e Controle - GRACO, 70910-900 Brasília, DF, Brazil. E-mail: jmmotta@unb.br
(II) Cranfield University, School of Industrial and Manufacturing Science, Building 62, Cranfield, MK43 0AL, England, UK. E-mail: R.S.McMaster@cranfield.ac.uk

ABSTRACT

One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially. Yet, there is a dearth of systems that are portable, accurate and low cost. In this work a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models to measure the end-effector pose relative to a world coordinate system, considering radial distortions. Scale factors and the image center are obtained with innovative techniques, making use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper and mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2mm to 0.4mm at distances from the target of 600mm to 1000mm respectively, with different camera orientations.

Keywords: Robot calibration, 3-D machine vision, absolute accuracy, camera calibration, robotic measurement systems

Introduction

Robot calibration plays an increasingly important role in robot production as well as in robot operation and integration within computer integrated manufacturing or assembly systems (Hidalgo & Brunn, 1998). The production, implementation and operation of robots are issues where robot calibration results can lead to significant accuracy improvement and/or cost-savings.

Theoretically, with robot calibration systems integrated into off-line programming systems, it would be possible to implement off-line programming in industry (currently such systems are not ready for use) where multiple robots are used, by programming only one robot and copying the programs to the others. One of the most evident advantages of this type of programming is the reduction in maintenance expenditure, since the robot can return to service faster and easy reprogramming means the software becomes operational with little effort.

Robot calibration is an integrated process of modeling, measurement, numeric identification of actual physical characteristics of a robot, and implementation of a new model.

The second step of the calibration process is necessary for the determination of the "best fit" set of parameters to the actual robot. The types of measurement required comprise tool positions corresponding to joint positions. Since the fitting cannot be of better quality than that of the original data collection, it is very important to obtain a good set of data.

Cameras and vision systems have become standard automation components. A typical robotic cell may feature an integrated multiple-camera system for product inspection, part orientation, part identification, detection of specific defects, and real-time monitoring of assembly accuracy (Asfahl, 1992).

Vision-based measurement systems using a single camera appear to be one of the trends in the development of robot calibration systems today, but the knowledge involved in this area is still restricted to research groups sponsored by large-budget companies, with little interest in publishing technical details (Albada et al., 1995). There is still a long way to go before this technology becomes widely available and extensively used in industry.

This work presents the stages of the development of a feasible low cost vision-based measurement system using a single camera placed on a robot tool flange. Experimental results to assess the measurement system accuracy are presented as a step of the investigation of theoretical and practical aspects of the integration between the measurement system and the prototype of a robot calibration system.

Photogrammetric Model

There are basically two typical set-ups for vision-based robot calibration. The first is to fix cameras in the robot surroundings so that the camera can frame a calibration target mounted on the robot end-effector. The other set-up is named hand-mounted camera robot calibration. This latter set-up can use a single camera or a pair of cameras. A single moving camera presents the advantages of a large field-of-view with a potentially large depth-of-field, and considerably reduced hardware and software complexity. On the other hand, a single-camera set-up needs full camera re-calibration at each pose.

The goal of camera calibration is to develop a mathematical model of the transformation between world points and observed image points resulting from the image formation process. The parameters which affect this mapping can be divided into three categories (Prescott & McLean, 1997; Zhuang & Roth, 1996): a) extrinsic (or external) parameters describe the relationship between the camera frame and the world frame, including position (3 parameters) and orientation (3 parameters); b) intrinsic (or internal) parameters describe the characteristics of the camera, and include the lens focal length, pixel scale factors, and location of the image center; c) distortion parameters describe the geometric nonlinearities of the camera. Some authors include distortion parameters in the group of intrinsic parameters (Tsai, 1987; Weng et al., 1992). Distortion parameters may or may not be present in a model.
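As an illustration of this grouping, the sketch below collects the three categories of parameters into plain data structures (the code sketches in this text use Python; the class and field names here are illustrative and not part of any cited formulation):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Extrinsic:
    R: np.ndarray      # 3x3 rotation aligning the world frame with the camera frame (3 orientation parameters)
    T: np.ndarray      # translation (Tx, Ty, Tz) of the world origin expressed in the camera frame

@dataclass
class Intrinsic:
    f: float           # effective focal length
    su: float          # horizontal scale factor
    sv: float          # vertical scale factor
    Cx: float          # image center, horizontal coordinate (pixels)
    Cy: float          # image center, vertical coordinate (pixels)

@dataclass
class Distortion:
    k: float = 0.0     # radial distortion coefficient (may be omitted from a model)

@dataclass
class CameraModel:
    extrinsic: Extrinsic
    intrinsic: Intrinsic
    distortion: Distortion
```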

The algorithm developed here to obtain the camera parameters (intrinsic, extrinsic and distortion) is a two-step method based on the Radial Alignment Constraint (RAC) model (Tsai, 1987). It involves a closed-form solution for the external parameters and the effective focal length of the camera. A second stage is then used to estimate three parameters: the depth component of the translation vector, the effective focal length, and the radial distortion coefficient. The RAC model is recognized as a good compromise between accuracy and simplicity, which means fast processing (Zhuang & Roth, 1996). A few modifications were introduced to the original RAC algorithm, as explained throughout the text.

RAC-Based Camera Model

In Fig. 1 the world coordinate system is {xw, yw, zw}; the camera coordinate system is {x, y, z}. The origin of the camera coordinate system is centered at Oc, and the z-axis coincides with the optical axis. (X, Y) is the image coordinate system at Oi (intersection of the optical axis with the front image plane) and is measured in pixels. (u, v) are the analog coordinates of the object point in the image plane (usually measured in meters or millimeters). (X, Y) lies on a plane parallel to the x and y axes. ƒ is the distance between the front image plane and the optical center.


The rigid body transformation from the object coordinate system (xw, yw, zw) to the camera coordinate system (x, y, z) is

[x  y  z]^T = R [xw  yw  zw]^T + T                (1)

where R is the orthogonal rotation matrix, which aligns the camera coordinate system with the object coordinate system, and it can be represented as

        | r1  r2  r3 |
R  =    | r4  r5  r6 |                (2)
        | r7  r8  r9 |

and T is the translation vector represented as

T = [Tx  Ty  Tz]^T                (3)

which is the distance from the origin of the camera coordinate system to the origin of the object coordinate system represented in the camera coordinate system. (xi, yi, zi) is the representation of any point (xwi, ywi, zwi) in the camera coordinate system.

The distortion-free camera model, or "pinhole" model, assumes that every real object point is connected to its correspondent image on the image plane in a straight line passing through the focal point of the camera lens, Oc. In Fig. 1 the undistorted image point of the object point, Pu, is shown.

The transformation from 3-D coordinates (x, y, z) to the coordinates of the object point in the image plane follows the perspective equations below (Wolf, 1983):

u = ƒ x / z ,    v = ƒ y / z                (4)

where ƒ is the focal length (in physical units, e.g. millimeters).

The image coordinates (X, Y) are related to (u, v) by the following equations (Tsai, 1987, Zhuang & Roth, 1996):

X = su u ,    Y = sv v                (5)

where su and sv are scale factors accounting for TV scanning and timing effects and converting camera coordinates in millimeters or meters to image coordinates (X, Y) in pixels. Lenz and Tsai (1987) attribute the hardware timing mismatch between the image acquisition hardware and the camera scanning hardware, or the imprecision of TV scanning timing, only to the horizontal scale factor su. In Eq. (5), sv can be considered a conversion factor between different coordinate units.

The scale factors, su and sv, and the focal length, ƒ, are considered the intrinsic model parameters of the distortion-free camera model, and reveal the internal information about the camera components and about the interface of the camera to the vision system. The extrinsic parameters are the elements of R and T, which pass on the information about the camera position and orientation with respect to the world coordinate system.

Combining Eqs. (4) and (5) yields

X = su ƒ x / z = ƒx x / z                (6)

Y = sv ƒ y / z = ƒy y / z                (7)

The equations above combined with Eq. (1) produce the distortion-free camera model

X = ƒx (r1 xw + r2 yw + r3 zw + Tx) / (r7 xw + r8 yw + r9 zw + Tz)                (8)

Y = ƒy (r4 xw + r5 yw + r6 zw + Ty) / (r7 xw + r8 yw + r9 zw + Tz)                (9)

which relates the world coordinate system (xw, yw, zw) to the image coordinate system (X, Y). ƒx and ƒy are non-dimensional constants defined in Eqs. (6) and (7).
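As an illustration, a minimal sketch of this distortion-free model is given below, assuming the quantities defined in Eqs. (1) and (6)-(9); the function name and argument layout are not from the paper:

```python
import numpy as np

def project_distortion_free(pw, R, T, fx, fy):
    """Pinhole projection of a world point pw = (xw, yw, zw) to image
    coordinates (X, Y) in pixels, relative to the image center:
    Eq. (1) for the rigid-body transformation, then Eqs. (8)-(9)."""
    x, y, z = np.asarray(R) @ np.asarray(pw, dtype=float) + np.asarray(T)
    return fx * x / z, fy * y / z
```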

Radial distortion can be included in the model as (Tsai, 1987, Weng et al., 1992):

X (1 + k r²) = ƒx (r1 xw + r2 yw + r3 zw + Tx) / (r7 xw + r8 yw + r9 zw + Tz)                (10)

Y (1 + k r²) = ƒy (r4 xw + r5 yw + r6 zw + Ty) / (r7 xw + r8 yw + r9 zw + Tz)                (11)

where r² = µ²X² + Y² and µ is the ratio of scale to be defined further in the text. Whenever all distortion effects other than radial lens distortion are zero, a radial alignment constraint (RAC) equation is maintained. In Fig. 1, the distorted image point of the object point, Pd, is shown.

A mathematical "trick" to linearize the RAC model and find a linear least-squares solution to the camera calibration problem when radial distortion is considered was used by Zhuang & Roth (1993). Then, the RAC model with radial distortion expressed in Eqs. (10) and (11) is changed to

This transformation can be done under the assumption that k.r² << 1. The first stage to solve Eqs. (12) and (13) determines the rotation matrix R (Eq. 2), Ty and Tx. The algorithm for this is found in Tsai (1987) and Zhuang & Roth (1996). The second stage is proposed here as

where xi = r1.xwi + r2.ywi + Tx, yi = r4.xwi + r5.ywi + Ty, wi = r7.xwi + r8.ywi, and zwi is made null (all calibration points are coplanar). The overdetermined linear system in Eq. (14) can be solved by a linear least-squares routine. The calibration points used to solve the system above were on the border of a square grid of 9x9 circular points, each 1mm in diameter, totaling 32 points. The grid measures approximately 200x200mm.
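Since Eq. (14) itself is not reproduced above, the sketch below estimates the same three unknowns (ƒx, Tz and k) by nonlinear least squares on the radially distorted projection equations, using the stage-1 quantities xi, yi and wi just defined. The residual form, the coupling of ƒy to ƒx through the ratio of scale µ, and the starting guesses are assumptions of the sketch, not the paper's exact linear formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def stage_two(xi, yi, wi, X, Y, mu, x0=(1500.0, 800.0, 0.0)):
    """Second calibration stage: estimate (fx, Tz, k) given the stage-1
    quantities xi = r1*xw + r2*yw + Tx, yi = r4*xw + r5*yw + Ty and
    wi = r7*xw + r8*yw, plus the measured image coordinates (X, Y) of the
    coplanar calibration points (all arguments are equal-length arrays).
    Residuals follow X*(1 + k*r^2) = fx*xi/(wi + Tz) and the analogous
    equation in Y, with fy tied to fx through the ratio of scale mu
    (an assumption of this sketch; the paper solves a linear system)."""
    def residuals(p):
        fx, Tz, k = p
        r2 = (mu * X) ** 2 + Y ** 2          # assumed form of the squared radial distance
        z = wi + Tz                           # depth of each point in the camera frame
        return np.concatenate([X * (1 + k * r2) - fx * xi / z,
                               Y * (1 + k * r2) - (fx / mu) * yi / z])
    return least_squares(residuals, np.asarray(x0)).x   # returns (fx, Tz, k)
```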

Calibration of the Scale Factors

A ratio of scale is defined as

and dividing Eq. (12) by Eq.(13) yields

Two-step camera calibration techniques such as the RAC model can be used to determine the ratio of scale accurately if more than one plane of calibration points is used. This is accomplished by moving a single-plane calibration set-up vertically with a z-stage (Zhuang & Roth, 1993). However, if only one coplanar set of calibration points is to be used, pre-calibration of the horizontal scale factor is essential.

Based on the fact that µ can be determined from two-step calibration methods using non-coplanar, parallel planes (Tsai, 1987), several images were obtained from different orientations and positions of the camera. This was accomplished with the camera mounted on the robot hand during robot calibration measurements. Using the RAC model residuals as calculated from Eq. (16) (Lenz & Tsai, 1987):

where i is the index representing each calibration point in the image, and based on the fact that µ and the image center are quite independent of each other in the RAC model (a poor guess for one does not affect the determination of the optimal value for the other) (Zhuang et al., 1993), the residuals found for each image were averaged and then µ was searched to minimize the average of the residuals. The solution found was µ = 0.9825.
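A minimal sketch of this one-dimensional search is given below. Since Eq. (16) is not reproduced above, the residual used here (the cross product between the scale-corrected image vector and the projected camera-frame vector) is an assumed stand-in for the paper's RAC residual:

```python
import numpy as np

def rac_residual(mu, X, Y, xi, yi):
    """Average radial-alignment-constraint residual for one image, given the
    measured image coordinates (X, Y) relative to the image center and the
    camera-frame quantities xi, yi obtained from the stage-1 solution."""
    return np.mean(np.abs(mu * X * yi - Y * xi))

def search_ratio_of_scale(images, mu_range=(0.95, 1.05), steps=2001):
    """Search for the ratio of scale that minimizes the RAC residual averaged
    over a sequence of images; `images` is a list of (X, Y, xi, yi) tuples,
    one per camera pose captured during the robot measurements."""
    candidates = np.linspace(mu_range[0], mu_range[1], steps)
    averages = [np.mean([rac_residual(mu, *img) for img in images])
                for mu in candidates]
    return candidates[int(np.argmin(averages))]
```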

Calibration of the Image Center

The image center is defined as the frame buffer coordinates (Cx, Cy) of the intersection of the optical axis with the image plane. It is usually used as the origin of the imaging process and appears in the perspective equation. In high accuracy applications, it also serves as the center of radially modeled lens distortion (Zhuang & Roth, 1996). The image center is defined from

X = Xf - Cx                (17)

Y = Yf - Cy                (18)

where (Xf, Yf) are the computed image coordinates for an arbitrary point and (Cx, Cy) are the computed image coordinates for the center Oi in the image plane.

The Radial Alignment Constraint model holds true and is independent of radial lens distortion when the image center is chosen correctly. Otherwise, a residual exists and, unfortunately, the RAC is highly nonlinear in terms of the image center coordinates.

The method devised to find the image center was to search for the "best" image center as an average over all images in a sequence of robot measurements, using the RAC residuals. This method could actually find the optimal image center for the model, which could easily be checked by calculating the overall robot errors after the calibration. The tests with a coordinate milling machine (explained further in the text) also showed that the determination of the image center by this method led to the best measurement accuracy in each sequence of constant camera orientation.
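The image-center search can be sketched in the same spirit: a direct, derivative-free search over (Cx, Cy) that minimizes the RAC residual averaged over all images of a sequence. In a full implementation the stage-1 solution would be recomputed for each candidate center, since the centered coordinates feed the RAC model; the simplified cost below, like the starting guess, is an assumption of the sketch:

```python
import numpy as np
from scipy.optimize import minimize

def search_image_center(frames, c0=(384.0, 288.0)):
    """Search for the image center (Cx, Cy) minimizing the averaged RAC
    residual; `frames` is a list of (Xf, Yf, xi, yi) tuples, where (Xf, Yf)
    are the frame-buffer coordinates of the calibration points and xi, yi
    come from the stage-1 solution.  The RAC is highly nonlinear in
    (Cx, Cy), so a simplex (Nelder-Mead) search is used."""
    def cost(c):
        Cx, Cy = c
        return np.mean([np.mean(np.abs((Xf - Cx) * yi - (Yf - Cy) * xi))
                        for Xf, Yf, xi, yi in frames])
    return minimize(cost, np.asarray(c0), method="Nelder-Mead").x
```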

Measurement Accuracy Assessment

The evaluation of the accuracy obtained by the vision measurement system was carried out using a Coordinate Milling Machine (CMM). The camera was fixed on the CMM's table and moved along pre-defined paths as shown in Fig. 2. The calibration board was fixed in front of the camera, externally to the CMM, at a position that allowed angles between the camera optical axis and the normal of the calibration plane higher than 30 degrees (avoiding ill-conditioned solutions in the RAC model).


The CMM's table motion produced variations in z, x and y axes of the target plate relative to the camera coordinate system. Figure 3 shows the coordinate systems of the camera and calibration board.


There were three different measurement sequences of the 25 camera positions illustrated in Fig. 2 by a cross mark. Each sequence was performed with the calibration board placed at different distances and orientations from the camera. The distances were calculated using the photogrammetric model.

Once all external parameters of the camera (R, Tx, Ty and Tz) are determined for a particular camera pose in the camera coordinate system, it is necessary to establish an inverse transformation to determine the translation vector (XW, YW, ZW) from any point (on the calibration board) represented in the world coordinate system to the camera optic center. This translation vector has to be represented in the x, y and z directions of the world coordinate system. As defined in Eq. (1), R is the rotation matrix that aligns the world coordinate system with the camera coordinate system. From this, it is clear that

-R [XW  YW  ZW]^T = R [xw  yw  zw]^T + T                (19)

where T is the translation vector from the camera optic center to the origin of the world coordinate system (represented in the camera coordinate system) and (xw, yw, zw) is any point of the world coordinate system. This definition agrees with Eq. (1). Multiplying both sides of Eq. (19) by -R^-1 yields

[XW  YW  ZW]^T = -R^-1 [x  y  z]^T                (20)

where (x, y, z) is the representation of any point (xw, yw, zw) in the camera coordinate system, from Eq. (1). So, the vector (XW, YW, ZW) is the distance from any reference point on the calibration board to the optical center, represented in the world coordinate system.
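A minimal numerical sketch of this inverse transformation is given below (the function name is illustrative; R and T are the extrinsic parameters of Eq. (1)):

```python
import numpy as np

def board_point_to_camera_center(R, T, pw):
    """Vector (XW, YW, ZW) from a reference point pw = (xw, yw, zw) on the
    calibration board to the camera optical center, expressed in the world
    coordinate system (Eq. 20)."""
    R, T, pw = np.asarray(R), np.asarray(T), np.asarray(pw, dtype=float)
    pc = R @ pw + T          # the point expressed in the camera frame, Eq. (1)
    return -R.T @ pc         # Eq. (20); R is orthogonal, so R^-1 = R^T

# Special case pw = (0, 0, 0): the camera optical center expressed in world
# coordinates is -R^T @ T.
```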

The orientation of the line from the reference point to the optical center (target line), represented in the world coordinate system, can be obtained from the rotation matrix

The calibration of the remaining intrinsic parameters of the camera, ƒx and k, was performed using the first sequence of 25 camera positions, which was chosen to be the closest to the target plate. For each image, values of ƒx and k calculated by the algorithm were recorded. Due to noise and geometric inaccuracies each image yielded different values for ƒx and k. The average values of the constants ƒx and k calculated from the 25 images were 1523 and 7.9 x 10^-8 respectively. The standard deviation values for ƒx and k were 3.38 and 2.7 x 10^-9 respectively.

The average values for ƒx and k should be kept constant from then on, and considered the "best" ones. To check this assumption, the distance traveled by the camera, calculated from the Euclidean norm of (XW, YW, ZW) for each camera position, was compared with the distance read from the CMM display. Errors were calculated as the difference between the distances traveled by the camera according to the two methods. It was then observed that the average value of ƒx did not minimize the errors.

Subsequently, an optimal value for ƒx was searched to minimize the errors explained above, by slightly changing the previous value. The value of k did not show an important influence on the errors when changed slightly. The optimal values of ƒx and k were checked in the same way for the other two measurement sequences at different distances, and were shown to produce the best accuracy. The optimal values for ƒx and k were found to be 1566 and 7.9 x 10^-8, which were kept constant from then on.
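One plausible implementation of this fine search is sketched below: ƒx is swept around its averaged value and the candidate minimizing the mismatch between the camera-derived travelled distances and the CMM readings is kept. The callback that recomputes the camera positions for a candidate ƒx, the use of consecutive-position distances, and the search span are assumptions of the sketch:

```python
import numpy as np

def refine_fx(fx0, positions_for_fx, cmm_distances, span=0.05, steps=501):
    """Search for the focal constant fx around the averaged value fx0 that
    minimizes the error between the distance travelled by the camera
    (Euclidean norm between consecutive computed positions) and the
    distance read from the CMM display.  `positions_for_fx(fx)` is a
    hypothetical callback returning the (25, 3) array of camera positions
    (XW, YW, ZW) recomputed with the candidate fx."""
    best_fx, best_err = fx0, np.inf
    for fx in np.linspace(fx0 * (1 - span), fx0 * (1 + span), steps):
        P = positions_for_fx(fx)
        travelled = np.linalg.norm(np.diff(P, axis=0), axis=1)
        err = np.mean(np.abs(travelled - np.asarray(cmm_distances)))
        if err < best_err:
            best_fx, best_err = fx, err
    return best_fx
```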

Assessment of 3-D measurement accuracy using a single camera is not straightforward. The method designed here to assess the measurement accuracy was to compare each component XW, YW and ZW corresponding to each camera position to the same vector calculated assuming an average value for the rotation matrix. This method is justified considering that, since there are no rotations between the camera and the calibration board during the tests, the rotation matrix should ideally be the same for each camera position. Since the vector (XW, YW, ZW) depends on R in Eq. (2) and T in Eq. (3), and as T is calculated from R, all measurement errors would be accounted for in the rotation matrix, apart from the errors due to the ground accuracy of the calibration board. Of course, this assumption is valid only if another method, comparing the camera measurements to an external measurement system (in physical units), is used to validate it. That is, the accuracy calculated from the traveled distance validates the accuracy calculated from the rotation matrix. The first method does not consider each error component independently (x, y, z), but serves as a basis to optimize the camera parameters. The second method takes into account the 3-D measurement error, assuming that the "correct" rotation matrix is the average of the ones calculated for each of the 25 positions. Table 1 shows the average, median and standard deviation of the 3-D position error for each component in x, y and z, calculated from the rotation matrix error.
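A sketch of this second method is given below. The paper does not state how the average rotation matrix was computed or how T interacts with it, so the sketch averages the 25 rotation matrices element-wise, projects the result back onto a proper rotation with an SVD, and reuses each pose's own translation vector; these are assumptions, not the paper's stated procedure:

```python
import numpy as np

def average_rotation(rotations):
    """Element-wise mean of a list of 3x3 rotation matrices, re-orthogonalized
    by SVD so that the result is again a proper rotation (det = +1)."""
    U, _, Vt = np.linalg.svd(np.mean(np.stack(rotations), axis=0))
    Ravg = U @ Vt
    if np.linalg.det(Ravg) < 0:
        U[:, -1] *= -1
        Ravg = U @ Vt
    return Ravg

def position_errors(rotations, translations, pw=np.zeros(3)):
    """Per-pose 3-D position error: difference between the vector (XW, YW, ZW)
    of Eq. (20) computed with each pose's own rotation and the same vector
    computed with the sequence-average rotation."""
    Ravg = average_rotation(rotations)
    errors = []
    for R, T in zip(rotations, translations):
        errors.append(-R.T @ (R @ pw + T) - (-Ravg.T @ (Ravg @ pw + T)))
    return np.asarray(errors)      # rows give (error_x, error_y, error_z)
```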

The maximum position errors for each measurement sequence were 0.43, 0.97, and 0.72mm respectively. Figure 4 shows graphically the average, median and standard deviation values of the measurement system position accuracy calculated as explained before, as a function of the average distance from the camera (optical center) to the central point of the calibration board.


Conclusions

Results showed that the RAC model used (with slight modifications) is not very robust, since even for images filling the entire screen and captured at approximately the same distances from the target, the focal length was not constant and showed an average value shifted by approximately 3% from the exact one. This amount of error can produce 3-D measurement errors much larger than acceptable. Practically speaking, the solution to this problem developed here, for a given set of camera and lens, was to use an external measurement system to calibrate the camera, at least once. The measurement accuracy obtained is comparable to the best found in the academic literature for this type of system, with median accuracy values of approximately 1:3,000 relative to the distance from the target. However, this accuracy was obtained at considerably larger distances and with different camera orientations than usual camera applications require, making the system suitable for robotic metrology.

  • Albada, G. D.; Visser, A.; Lagerberg, J. M. and Hertzberger, L. O., 1995, "A Portable Measuring System for Robot Calibration", Proceedings of the 28th International Symposium on Automotive Technology and Automation - Dedicated Conference on Mechatronics and Efficient Computer Support for Engineering, Germany, pp. 441-448.
  • Asfahl, C. R., 1992, "Robots and Manufacturing Automation", John Wiley & Sons, Singapore, 2nd edition.
  • Hidalgo, F. and Brunn, P., 1998, "Robot Metrology and Calibration Systems - A Market Review", Industrial Robot, Vol. 25, No. 1, pp. 42-47.
  • Lenz, R. K. and Tsai, R. Y., 1987, "Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3D Machine Vision Metrology", IN: Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, pp. 68-75.
  • Prescott, B. & McLean, G. F., 1997, "Line-Based Correction of Radial Lens Distortion", Graphical Models and Image Processing, Vol. 59, No. 1, pp. 39-47.
  • Tsai, R. Y., 1987, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344.
  • Weng, J.; Cohen, P. and Herniou, M., 1992, "Camera Calibration with Distortion Models and Accuracy Evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 10, pp. 965-980.
  • Wolf, P. R., 1983, "Elements of Photogrammetry", 2nd Edition, McGraw-Hill, Singapore.
  • Zhuang, H. and Roth, Z. S., 1993, "A Linear Solution to the Kinematic Parameter Identification of Robot Manipulators", IEEE Transactions on Robotics and Automation, Vol. 9, No. 2, pp. 174-185.
  • Zhuang, H. and Roth, Z. S., 1996, "Camera-Aided Robot Calibration", CRC Press, USA.
  • Zhuang, H.; Roth, Z. S; Xu, X. and Wang, K., 1993, "Camera Calibration Issues in Robot Calibration with Eye-on-Hand Configuration", Robotics & Computer Integrated Manufacturing, Vol. 10, No. 6, pp. 401-412.

Publication Dates

  • Publication in this collection
    27 Aug 2003
  • Date of issue
    July 2002