
Feature

A new integrated robot vision system from FANUC Robotics


Christine Connolly
Stalactite Technologies Ltd, Wakefield, UK
Abstract
Purpose: To report on developments in robotic vision by a particular robot manufacturer.
Design/methodology/approach: Examines FANUC Robotics' philosophy and history of integrated vision, describes its latest offering, and looks at the specification of the new robot controller.
Findings: The new robot controller incorporates image processing hardware and software, including calibration procedures. The intelligent robot responds to changes in its surroundings, eliminating the need for jigs and part-alignment devices and broadening its capabilities.
Originality/value: Presents the intelligent robot as a practical tool in factory automation.
Keywords Robotics, Order picking, Pattern recognition
Paper type Technical paper

Introduction
In April 2006, FANUC Robotics introduced a new robot controller called the R-J3iC (Figure 1). It is an advanced controller that supports all FANUC intelligent robots. It has a flexible, open architecture that allows users to plug in a variety of additional components, such as force sensors and vision systems, and it incorporates two ethernet ports with integrated software to support the networking of multiple robots into production management systems. With this new controller, FANUC Robotics aims to reduce the complexity of robot vision. The iRVision hardware and software are a standard part of the R-J3iC controller, enabling the user to plug a camera directly into the main CPU board, with no additional computer needed for camera control or image processing. The hardware provides easy integration, supporting up to four cameras, and the on-board software tools enable the user to set the system to work quickly: calibrating the camera against the robot, teaching it to recognize the parts to be handled, processing the images and adjusting the robot motion in response to variable part positioning.

FANUC Robotics approach to robot vision


FANUC Robotics' intelligent robot concept started with the building of a prototype in the early 1990s, which it steadily tested and improved, leading to productionisation in 1999. The concept involves the incorporation of tactile and vision sensing,
The current issue and full text archive of this journal is available at www.emeraldinsight.com/0143-991X.htm

Industrial Robot: An International Journal 34/2 (2007) 103-106 © Emerald Group Publishing Limited [ISSN 0143-991X] [DOI 10.1108/01439910710727423]

along with the processing power and software tools to make sense of and respond to the incoming sensory data. This allows the robot to respond to changes in its surroundings, eliminating the need for peripheral systems such as jigs and part-handling devices. It is no longer necessary to present components one by one in a pre-determined alignment, so instead of using a bowl feeder to present the parts in the correct orientation and position, the user can present the parts randomly.

FANUC Robotics has been a supplier of robot vision systems since 1984 and, in contrast to other robot manufacturers, has maintained its commitment to the value of vision in robotics, installing thousands of vision products worldwide. FANUC Robotics' philosophy is that tactile and vision sensing are integral to the robot's operation and should be close-coupled in the robot controller to ensure successful hand-eye coordination. This is particularly important in the developing field of visual servoing. The company's primary focus has been two- and three-dimensional robot guidance, but it has also installed vision systems for inspection and error-proofing applications.

FANUC Robotics' vision software has developed over many years, and it has always used geometry-based feature finding rather than the normalised correlation approach widely used in general-purpose image processing libraries. Robots handle large geometric parts, and it is more logical to search for geometric shapes represented by large numbers of pixels than to study the correlation of individual pixels against a trained template. In 1999, FANUC Robotics introduced its V-500iA vision sensor, a low-cost, high-power PC-based system in which the geometric algorithms were optimised for the Intel processor instruction set.
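For readers unfamiliar with the approach the article contrasts against, normalised correlation scores every placement of a pixel template over the image and picks the best. A minimal pure-NumPy sketch (an illustrative toy, not FANUC's algorithm):

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalised cross-correlation template matching:
    score every placement of `template` over `image` and return the
    top-left corner of the best match plus its score in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Embed a small patch in a larger image and find it again:
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[10:18, 22:30].copy()
pos, score = ncc_match(img, tmpl)
```

Because each window is normalised, the score is invariant to brightness and contrast shifts, but every pixel is compared individually, which is the per-pixel work that geometry-based shape search avoids.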
The software had the ability to recognize a part when rotated, magnified or reduced in size, or partly concealed, and it provided the user with a tool that was easy to teach and robust to the practicalities of robotic part handling. The sensor also coped with variations in lighting intensity affecting the whole or part of the field of view. It provided a 2D robot guidance tool for part location and presence verification, and its applications included depalletisation and picking of randomly presented parts on a conveyor (Figure 2). The V-500iA vision system produced images of 640 × 480 pixel resolution, with 256 greyscale values per pixel. It supported up to four cameras, with a separate frame grabber board for each, and used a dedicated Pentium-based PC to process the images and a special teach pendant to control the vision processing.

In addition to its 2D vision capability, the V-500iA included a laser vision system using structured light. Two laser line projectors cast crossing lines on to a part, and the shape of the lines as perceived by the imaging sensor enables the processor to calculate the 3D shape of the part. The system can operate in hybrid mode, using conventional 2D vision to find the general location of the part and then switching to 3D measurement as the robot approaches it, carrying the 3D laser vision sensor on the robot. This is particularly useful in bin-picking of cast parts, automotive sheet materials, printed circuit boards, plastic covers and electrical parts (Figure 3).

Figure 1 FANUC Robotics' R-J3iC controller and intelligent teach pendant support its range of intelligent robots

Figure 2 Intelligent robots use vision information to correct their positioning when picking up individual components from a conveyor

Figure 3 Lasers project crossing lines to detect the 3D shape of a part in this bin-picking application

Capabilities of iRVision

FANUC Robotics' new iRVision system is its most recent robot vision product, and it takes advantage of advances in machine vision technology and of the processing capability of the new R-J3iC controller, which controls the movements of the robot and processes the sensory data. It offers similar functionality to the V-500iA, making available the vision tools and application processes that became standard in that earlier system. However, the software has been completely re-written to run efficiently on the R-J3iC controller platform. To take advantage of the built-in iRVision system on the R-J3iC controller, the robot user plugs in a standard Sony



XC56 camera using a special cable provided by FANUC Robotics. The system provides two-dimensional imaging to indicate the presence or absence of a part and guide the robot appropriately. This can be upgraded to a 3D vision system. To set up and train the iRVision system, the user connects a PC via the ethernet connection and uses Microsoft's Internet Explorer browser, calibrating the camera to the robot and training the vision system to recognise the appropriate parts via a web page interface. The user specifies an area within the part, or the whole of the part itself, to act as a geometric pattern for identification. The image-processing algorithm searches the image for the pattern, and determines its exact position and orientation so that it can compute an offset for the robot to handle the part (Figure 4).

FANUC Robotics' geometric pattern matching algorithm incorporates some capabilities unique to robotics. For example, the user can select emphasis areas within the image and train particular search parameters for these features to refine the pattern matching procedure and detect the orientation of the object more precisely, so that the robot can interact with it successfully. The system supports both arm-mounted and fixed cameras, and it can handle multiple cameras giving different viewing angles of the same part.

An essential component of the vision system is its calibration capability. Several forms of calibration are available, depending on the application and the particular customer requirements. The simplest and quickest is a two-point calibration technique. The user identifies two points within the field of view, at the same distance from the camera, and types in the actual distance between them. This enables the robot to convert image distances into real-world distances and thus calibrate the vision-viewing frame to the robot coordinate system. Instead of measuring the distance between the points, the user can employ the robot as a pointing device and allow the robot controller to calculate the real-world distance.

A more standard technique uses a calibration grid, consisting of two circles with known spacing and an L-shaped marking to define the x and y axes. This calibrates the viewing frame of the vision camera to the robot system whilst providing corrections for perspective errors, including the multiple z-planes that are inherent in optical lensing systems. A third procedure, Automatic Calibration, is also available, in which the robot manoeuvres the camera or part into different positions so that a set of positions can be gathered and used for the calibration of the camera and robot. This automatic calibration is very useful for applications in which the camera may be moved by vibrations or contact, and it allows the user to run the procedure quickly and easily from a teach pendant program.
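As a rough numerical illustration of the two-point idea (a hypothetical sketch with illustrative names, not FANUC's interface), two image points a known real-world distance apart yield a single millimetres-per-pixel scale factor:

```python
import math

def two_point_scale(p1_px, p2_px, real_dist_mm):
    """Millimetres-per-pixel scale from two image points whose true
    separation, at the same distance from the camera, is known."""
    return real_dist_mm / math.dist(p1_px, p2_px)

def px_to_mm(dist_px, scale):
    """Convert an image-plane distance into a real-world distance."""
    return dist_px * scale

# Two calibration points 100 mm apart appear 400 pixels apart:
scale = two_point_scale((100.0, 200.0), (500.0, 200.0), 100.0)
gap_mm = px_to_mm(200.0, scale)
```

A single scale factor is only valid at one working distance; this is what the grid-based and Automatic Calibration procedures improve on by correcting perspective and multiple z-planes.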

The R-J3iC controller


Robots controlled by the R-J3iC range from the 6-axis M-6iB, designed for rapid handling of small parts weighing up to 6 kg, to the 6-axis articulated M-900iA/600 heavy-payload robot, capable of manipulating building construction frames of up to 700 kg. The R-J3iC controller manages up to 40 axes, enough to control four robot arms or four groups of auxiliary axes such as arc welding positioners, servo hands or grippers. It has a new vibration control function that enables robots to accelerate and decelerate very quickly, reducing cycle times by about 5 per cent. In spot welding, it optimises the servo-gun and

Figure 4 This screen-shot shows the geometric pattern matching tool finding the location of pre-trained parts randomly positioned
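The guidance step behind Figure 4 amounts to transforming a taught grasp point by the part pose the pattern search reports. A minimal 2D sketch (hypothetical names and frames, not FANUC's actual interface):

```python
import math

def pick_in_world(found_pose, pick_in_part):
    """Map a grasp point taught in the part's own frame into world
    coordinates, given the part pose (x, y, theta) found by vision."""
    x, y, theta = found_pose
    px, py = pick_in_part
    c, s = math.cos(theta), math.sin(theta)
    # Rotate the taught point by the part's orientation, then translate
    # it to the part's found position.
    return (x + c * px - s * py, y + s * px + c * py)

# Part found at (250 mm, 120 mm), rotated 90 degrees; the grasp point
# was taught 40 mm along the part's own x axis:
wx, wy = pick_in_world((250.0, 120.0, math.pi / 2), (40.0, 0.0))
```

Because the offset is recomputed from each image, the same taught program handles parts presented at any position and orientation within the camera's calibrated field of view.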



robot motion and reduces the cycle time of the R-2000iB robot by 15 per cent. The controller automatically calculates the deflection of the robot arm due to gravity and acceleration, improving the absolute accuracy of control.

FANUC Robotics has incorporated a great deal of connectivity into the R-J3iC controller to facilitate the plug-in option philosophy and give customers the flexibility to configure the robot system as required. The 100Base-TX ethernet integrated into the controller provides TCP/IP connectivity and addressing, and three serial ports enable the user to connect printers, keyboards and sensors. FANUC I/O links allow the connection of CNCs and PLCs, and of arc welding and sealing equipment. There is the option of a Fieldbus connector, such as ProfiBUS, InterBUS or DeviceNET. There is a USB interface on the front panel of the R-J3iC controller, and a PCMCIA interface inside the cabinet, for convenient backing-up and restoring of programs and data.

The intelligent teach pendant, the i-Pendant, comes as standard and incorporates a 6.4 in. colour LCD display that can show the robot instructions or an image captured by the vision system. The pendant also allows the user to access computers and servers on the factory network, for example to display the manual for the robot or parts drawings of the components to be handled. The user can also view the images from the camera from any computer connected to the network.

Figure 5 An intelligent robot aligns food items for packaging

Conclusions
Consistent development has produced a powerful vision system, and the new facility to plug a camera into the robot controller takes FANUC Robotics to its long-term goal of simple and effective robotic hand-eye coordination. This enables robots to tackle tasks such as picking randomly presented products on a conveyor and packaging them (Figure 5). It is also causing the automotive industry to reconsider robotic guidance as a replacement for some of its labour-intensive operations.

Contacts
www.fanucrobotics.com: FANUC Robotics America, Inc., Rochester Hills, Michigan, USA.


