Camera calibration - overview



MIL's Camera Calibration module (Mcal...()) allows you to map pixel coordinates to real-world coordinates, relating pixels to real-world locations and distances. For example, you can establish a relationship between a pixel and its size and location in the real world.

This mapping can be used to get results from other MIL modules in real-world units or to input information to some modules in real-world units. The mapping can also be used to physically correct an image's distortions.

By getting results in real-world units, you automatically compensate for any distortions in an image. Therefore, you can get accurate results despite an image's distortions.

Defining the pixel-to-world mapping is known as camera calibration. A camera calibration context is used to hold the defined mapping, as well as certain control settings.
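As a concrete illustration of such a pixel-to-world mapping, the sketch below (plain Python, not MIL code) converts a pixel coordinate to world units using a scale, a rotation, and an offset. All parameter values are hypothetical:

```python
import math

def pixel_to_world(px, py, size_x, size_y, angle, offset_x, offset_y):
    """Map a pixel coordinate to world units via scale, rotation, and offset.

    All parameter values used here are hypothetical, not MIL defaults.
    """
    x = px * size_x                     # scale each axis into world units
    y = py * size_y                     # (the two scale factors may differ)
    xr = x * math.cos(angle) - y * math.sin(angle)   # align axes with the world
    yr = x * math.sin(angle) + y * math.cos(angle)
    return xr + offset_x, yr + offset_y  # shift to the world origin

# A pixel at (100, 50), 0.1 mm square pixels, no rotation, origin at (5, 5) mm:
print(pixel_to_world(100, 50, 0.1, 0.1, 0.0, 5.0, 5.0))  # -> (15.0, 10.0)
```

A real calibration context also captures perspective and other nonlinear distortions; this sketch covers only the linear case.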

Once you have created your camera calibration context, you can use it to:

  • Transform pixel coordinates or results to their real-world equivalents.

  • Physically correct an image.

  • Automatically get results from applicable MIL processing and analysis modules in real-world units.

  • Input values to applicable MIL analysis modules in real-world units.

Camera calibration maps pixels of images to one plane in the real world. Using the Camera Calibration module alone is therefore not enough to obtain data about the depth of different points in your images. To retrieve depth information from your two-dimensional images, you must perform a 3D Analysis.

Types of distortions

You can use calibration if you have one or more of the following types of distortion:

  • Non-unity aspect ratio distortion. Present when the X- and Y-axes have different scale factors. This is evident, for example, if you know that an object in your image should be round but it appears as an ellipse. This type of distortion is often a side effect of the sampling rate used by some older frame grabbers.

  • Rotation distortion. Present when the camera is perpendicular to the object grabbed in the image, but not aligned with the object's axes.

  • Perspective distortion. Present when the camera is not perpendicular to the object grabbed in the image. Objects that are further away from the camera appear proportionally smaller than same-size objects closer to the camera.

  • Other spatial distortions. Complex distortions, such as pincushion and barrel-type distortions, fall in this category. These distortions can be compensated for by using a large number of small sections in the mapping function. If the number of sections is large enough and the area covered by each is small enough, the mapping in each area can be approximated with a linear interpolation function.
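The piecewise approximation described in the last bullet can be sketched as follows (plain Python with NumPy, not MIL's internal representation): the image is divided into grid cells whose corner positions are known in world units, and any pixel inside a cell is mapped by bilinear interpolation between the corners. The grid, cell size, and warp below are invented for illustration:

```python
import numpy as np

# World coordinates (say, in mm) measured at the corners of a regular grid of
# pixel cells: world_x[i, j] is the world X at pixel (j*cell, i*cell).
cell = 50  # size of one grid cell, in pixels
gx, gy = np.meshgrid(np.arange(4) * cell, np.arange(4) * cell)
world_x = gx * 0.1 + 0.0001 * (gx - 75) ** 2  # hypothetical nonlinear warp
world_y = gy * 0.1

def map_pixel(px, py):
    """Bilinearly interpolate the world position of pixel (px, py) in its cell."""
    j, i = int(px // cell), int(py // cell)                # which grid cell
    u, v = (px - j * cell) / cell, (py - i * cell) / cell  # position inside it
    def lerp(w):
        top = (1 - u) * w[i, j] + u * w[i, j + 1]
        bot = (1 - u) * w[i + 1, j] + u * w[i + 1, j + 1]
        return (1 - v) * top + v * bot
    return lerp(world_x), lerp(world_y)

# At a grid corner the interpolation reproduces the measured world position:
# map_pixel(50, 50) gives approximately (5.0625, 5.0)
```

With many small cells, even a strongly nonlinear warp is well approximated, because each cell spans only a nearly linear portion of the distortion.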

The following animation gives examples of the different types of distortion:

Camera calibration mechanism

To perform the calibration, the module uses calibration points and a camera calibration mode. The exception is uniform camera calibration mode, which does not require calibration points because it assumes only combinations of linear distortions. Calibration points are points in the image with known positions in the real world. The module uses these points, together with the specified calibration mode, to determine the world coordinates of all other points in the image. The calibration points can be explicitly specified, or can be automatically calculated from an image of a grid and the world description of that grid. You can use either two-dimensional (2D) or three-dimensional (3D) camera calibration modes, depending on the type of imaging distortion and your camera setup.
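To make the role of calibration points concrete, the sketch below (plain Python with NumPy, not MIL code) fits a simple affine pixel-to-world mapping to four hypothetical calibration points by least squares. MIL's calibration modes use more sophisticated models, but the principle of fitting a mapping to known point pairs is the same:

```python
import numpy as np

# Hypothetical calibration points: pixel positions paired with known world
# positions (as might come from an image of a grid).
pixel = np.array([[10.0, 10.0], [110.0, 12.0], [12.0, 112.0], [112.0, 114.0]])
world = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Fit an affine mapping  world ~= [px, py, 1] @ A  by least squares.
P = np.hstack([pixel, np.ones((len(pixel), 1))])
A, residuals, rank, sv = np.linalg.lstsq(P, world, rcond=None)

def to_world(px, py):
    """Apply the fitted affine mapping to one pixel coordinate."""
    return np.array([px, py, 1.0]) @ A

# to_world(10, 10) recovers approximately (0, 0)
```

Once the mapping is fitted from the calibration points, it determines the world coordinates of every other point in the image.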

Two 3D-based camera calibration modes are available in MIL. The first is based on the technique developed by Roger Y. Tsai; the second is intended for robotic camera setups in which the camera is mounted on a robotic arm. The robot's controller returns the position and orientation of the arm, which are used to calibrate the camera's position on the arm. Both 3D-based camera calibration modes support full 3D movement of your camera. However, all results are returned in a 2D coordinate system; that is, all points in an image are assumed to lie on the same plane, even if that plane is slanted.
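Because all results are reported on a single world plane, the pixel-to-plane relationship can be modeled as a 3x3 homography with a perspective division. The sketch below (plain Python with NumPy) illustrates this; the matrix values are invented, not produced by MIL:

```python
import numpy as np

# Hypothetical 3x3 homography relating pixels to one world plane; a real
# mapping would come from the camera calibration, not hand-picked values.
H = np.array([[0.1, 0.0,   -1.0],
              [0.0, 0.1,   -1.0],
              [0.0, 0.001,  1.0]])

def pixel_to_plane(px, py):
    """Map a pixel to the world plane: apply H, then divide by w."""
    x, y, w = H @ np.array([px, py, 1.0])
    return x / w, y / w
```

Equal pixel steps map to shrinking world steps as the divisor w grows, which is exactly the perspective foreshortening that a slanted plane produces.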