A truism of automated assembly is that a robot is only as good as the fixture holding the parts. A robot can return to the same positions time and again, but if the location and orientation of the parts deviate from what the robot has been taught, it may not perform reliably.

There are two ways around this problem. One is to increase the precision of the fixture, and accept a corresponding increase in cost and decrease in flexibility. The other is to give the robot a guiding hand with machine vision.

Vision guidance can be used with any type of robot, from the smallest SCARA to the largest six-axis machine. It is necessary if the robot must pick parts that are imprecisely fixtured or not fixtured at all, if it must perform a particularly precise operation, or if it must track parts on a moving conveyor, says Manish Shelat, sales and business development manager at Teledyne DALSA.

No special camera is required. Most standard-resolution, area-array cameras will do the job. "You don't need anything fancy, like a smart camera," says Shelat. "However, it is important that the camera have an asynchronous reset feature. This allows the robot controller to tell the camera when to take a picture."

In some cases, a camera may not even be useful. For example, if the robot is in an environment with spraying water, a camera is not going to do much good. In that case, an ultrasonic distance sensor might better enable the robot to draw a bead on the target.

A vision camera can be mounted to the robot's arm or to a stationary support above the workcell. For all but the smallest robots, a vision system, including camera, optics and even lighting, should have minimal impact on the robot's payload.

"If cycle time is an issue and the field of view will remain constant, consider mounting the camera to a stationary support above the workcell," says John C. Clark, national sales manager at EPSON Robots. "That way, after the robot has picked up a part and moved on, the vision system can be looking for the next part. The two systems can work in parallel. If the camera is attached to the robot, it can only start looking for parts when the robot gets back."
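The parallelism Clark describes can be sketched in software as a simple producer-consumer pattern: a vision thread keeps locating parts while the robot thread executes picks. The sketch below is a hypothetical illustration, not any vendor's API; `find_next_part` and `pick` are stand-ins for real vision and motion calls, and the sleep times are made-up placeholders.

```python
import queue
import threading
import time

def find_next_part():
    """Stand-in for image acquisition and part location (returns x, y, angle)."""
    time.sleep(0.05)  # simulated vision processing time
    return (10.0, 20.0, 0.0)

def pick(pose):
    """Stand-in for the robot's pick move."""
    time.sleep(0.1)  # simulated robot motion time

part_queue = queue.Queue(maxsize=4)
picked = []

def vision_loop(n_parts):
    # The stationary camera searches for the next part while the robot is away.
    for _ in range(n_parts):
        part_queue.put(find_next_part())

def robot_loop(n_parts):
    # The robot waits only if no part has been located yet.
    for _ in range(n_parts):
        pose = part_queue.get()
        picked.append(pose)
        pick(pose)

v = threading.Thread(target=vision_loop, args=(5,))
r = threading.Thread(target=robot_loop, args=(5,))
v.start(); r.start()
v.join(); r.join()
print(len(picked))  # 5
```

With an arm-mounted camera, by contrast, `find_next_part` could only run after `pick` returned, serializing the two loops.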

The number of cameras needed to guide the robot depends on the application. In a common multicamera setup, one camera is mounted above the cell looking down, and another is near the work surface looking up. The top camera finds the part, while the bottom one looks for a distinguishing characteristic needed to orient the part on an assembly. A third camera could be added to view the part from the side, checking for flaws, such as bent pins.

As with any vision application, lighting is critical. Diffuse on-axis lighting is necessary to locate parts with reflective surfaces. Off-axis lighting, which creates shadows, is required to find stacked parts or parts with uneven surfaces. Ambient lighting is rarely sufficient for robot guidance and may even be detrimental.

The control system is also important. Robot suppliers advocate systems with tightly integrated controls for both vision and motion. If the motion system and vision system are different entities, there will always be some communication overhead. When the vision system gets the X-Y-Z location of a point, the robot has to convert that data to its own frame of reference. When the vision system and the motion controller are integrated, there's a single frame of reference for both.
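The frame-of-reference conversion mentioned above is, at its simplest, a 2D rigid transform. The sketch below illustrates the idea only; the rotation angle and offset would come from a hand-eye calibration routine, and the values here are invented for the example.

```python
import math

# Assumed calibration results (illustrative values only):
THETA = math.radians(30.0)   # rotation of the vision frame relative to the robot frame
TX, TY = 150.0, 80.0         # position (mm) of the vision origin in the robot frame

def vision_to_robot(x, y):
    """Map a point (x, y) from camera coordinates into robot coordinates."""
    xr = x * math.cos(THETA) - y * math.sin(THETA) + TX
    yr = x * math.sin(THETA) + y * math.cos(THETA) + TY
    return (xr, yr)

# A part found at the vision origin lands at the calibrated offset:
print(vision_to_robot(0.0, 0.0))  # (150.0, 80.0)
```

An integrated controller performs this mapping internally, with one calibrated frame shared by vision and motion; with separate systems, every located point must cross a communication link and be transformed like this on the robot side.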

One of the trade-offs in using machine vision for robotic guidance is a slight increase in cycle time. The vision system needs time to acquire an image, process it, and translate the data into a motion command. Compared with a robot that simply moves to a series of predetermined locations, a vision-guided robot may be slightly slower. How much slower depends on the application. "If you're looking for a small part in a large field of view, it's going to take some time to find it," Clark points out. "It's milliseconds, but over days or months, those milliseconds add up."
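Clark's point that milliseconds add up can be made concrete with back-of-the-envelope arithmetic. The 40-ms vision overhead and 3-second cycle time below are assumed example figures, not numbers from the article.

```python
# Assumed example figures for illustration only:
vision_overhead_s = 0.040    # extra time per cycle to acquire, process, translate
cycle_time_s = 3.0           # baseline cycle time without vision
seconds_per_day = 24 * 3600

cycles_per_day = seconds_per_day / (cycle_time_s + vision_overhead_s)
lost_per_day_s = cycles_per_day * vision_overhead_s

print(round(lost_per_day_s / 60))  # roughly 19 minutes of overhead per day
```

Whether those minutes matter depends on the application; in many cells they are a small price for eliminating precision fixtures.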