A customer wanted to reduce labour costs and increase throughput by automating its manufacturing processes. To find a solution, Fanuc Robotics America needed a way to unload randomly placed transmission gears from a plastic bin.
In addition to unloading the gears onto an exit conveyor, the robot needed to transfer the plastic board that divides each layer of gears. To meet the throughput requirements of the work cell, the vision-guided robot had to achieve an average cycle time of 4.5 seconds per part, including the removal of the plastic divider sheets.
To address these challenges, Fanuc Robotics designed a work cell incorporating its M-16iB robot and an integrated vision system for robot guidance. The vision system takes a snapshot of the gears and coordinates the subsequent manufacturing steps. The vision hardware comprises a video camera, a red LED ring light, high-flex cables, a power supply and a Matrox 4sightII computer. The Matrox computer includes an Orion PC104+ frame grabber, which captures the visual input. A generic flat-panel monitor, keyboard and mouse complete the user interface to the vision system. Other major components include a custom-designed and -built end-of-arm tool (EOAT), a pneumatics package, a robot riser, two-part rack locators, a divider sheet storage rack, an interface panel and a safety fence.
Robots are trained using Fanuc Robotics' Windows-based vision program. Parts are trained to provide a reference against which images taken during automatic operation are compared. The trained parts are located by the vision system using the company's geometric pattern matching algorithm. Each part is trained at the same height at which the robot searches for parts; this gives the best opportunity to find parts, since size and contrast will be most accurate.
The vision system provides the robot with the X and Y offset of the centre of the gear relative to the trained nominal position. Both parts are symmetrical, so radial orientation is not required. The vision system interfaces directly to the robot via Ethernet through the Fanuc Robotics robot server.
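In essence, the guidance data reduces to a subtraction of the trained nominal position from the found position. The sketch below is purely illustrative; the function name and coordinate values are assumptions, not part of Fanuc's actual interface:

```python
def gear_offset(found_xy, nominal_xy):
    """Return the (dx, dy) offset of a found gear centre
    relative to the trained nominal position."""
    return (found_xy[0] - nominal_xy[0], found_xy[1] - nominal_xy[1])

# Hypothetical values in millimetres: the gear was found slightly
# right of and above the trained nominal position.
dx, dy = gear_offset(found_xy=(412.7, 305.2), nominal_xy=(400.0, 300.0))
```

Because both part types are symmetrical, only this translational offset is sent to the robot; no rotational correction is needed.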
The key parameters are size and contrast, which are combined into an overall score used to determine whether a part has been found. Utilising the size characteristic, the vision system can determine whether a tested part is on the current layer or one layer lower (a condition that occurs along the bin walls). Contrast is used to distinguish an actual part from the oil ring left behind by a part that has already been removed. During testing, the vision system identified phantom parts caused by oil rings on the plastic board; the problem was solved utilising the contrast locator parameter.
The vision system also located gears that were one level lower than the layer the robot was unloading. This was resolved using the 'size' locator parameter to ignore parts that were not at least 95% of the trained part size.
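The size and contrast checks described above can be sketched as a simple acceptance test. Only the 95% size threshold comes from the application itself; the contrast threshold, function name and weights are illustrative assumptions, not Fanuc's actual values:

```python
SIZE_MIN = 0.95      # reject gears one layer lower (smaller apparent size)
CONTRAST_MIN = 0.60  # reject low-contrast oil-ring "phantoms" (assumed value)

def accept(size_ratio, contrast):
    """Return True only if a candidate passes both the size and
    contrast criteria of the trained part."""
    return size_ratio >= SIZE_MIN and contrast >= CONTRAST_MIN

accept(0.97, 0.85)  # gear on the current layer -> accepted
accept(0.88, 0.85)  # gear one layer lower -> rejected by size
accept(0.97, 0.30)  # oil ring phantom -> rejected by contrast
```

In the real system these criteria feed an overall match score rather than two hard gates, but the filtering effect is the same: lower-layer gears fail the size test and oil rings fail the contrast test.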
After the system was installed, the customer requested that part orientation detection be added: any gear that enters the machining centre upside down causes tool damage and downtime. Using a clockwise-pointing part feature, the vision program was able to identify upside-down parts and divert them to a reject chute.
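One plausible way a clockwise-pointing feature reveals a flipped part is through its winding direction, which reverses when the part is mirrored. The sketch below uses a 2-D cross product over three feature points to test the winding; it is a hypothetical illustration, not Fanuc's actual method, and the coordinates are made up:

```python
def is_upside_down(p0, p1, p2):
    """Return True if three feature points wind counter-clockwise,
    i.e. the clockwise-trained feature appears mirrored (part flipped)."""
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return cross > 0  # positive cross product = counter-clockwise winding

# Right-side-up gear: the feature points wind clockwise.
is_upside_down((0, 0), (1, 1), (2, 0))    # not flipped
# Flipped gear: the same feature, mirrored, winds counter-clockwise.
is_upside_down((0, 0), (-1, 1), (-2, 0))  # flipped -> send to reject chute
```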
"The system has proven to be a success. The combination of robotic reliability and the latest advances in visual recognition technology has met all customer requirements," said Richard Meyer, Vision System Designs. "After four months of operation, the system increased production throughput by five percent and decreased labour costs by 30%, with system uptime above 90%."
© Technews Publishing (Pty) Ltd | All Rights Reserved