It takes knowledge and skill to get a robot to do exactly what you want it to do in a manufacturing environment, and both are hard to come by in this skills-gap era. A new interface designed by Georgia Institute of Technology researchers is simpler and more efficient than most existing interfaces, and doesn't require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.
With a traditional interface, the operator uses a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.
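To make the operator's burden concrete, here is a minimal sketch of the six independent values a traditional ring-and-arrow interface asks the user to set by hand: three translations (arrows) and three rotations (rings). The class and method names are illustrative, not part of any actual interface.

```python
from dataclasses import dataclass

@dataclass
class GripperPose:
    """The six degrees of freedom a traditional interface exposes."""
    x: float = 0.0      # translation along X (arrow)
    y: float = 0.0      # translation along Y (arrow)
    z: float = 0.0      # translation along Z (arrow)
    roll: float = 0.0   # rotation about X (virtual ring)
    pitch: float = 0.0  # rotation about Y (virtual ring)
    yaw: float = 0.0    # rotation about Z (virtual ring)

    def adjust(self, **deltas: float) -> None:
        """Apply one manual tweak, e.g. adjust(z=-0.05, yaw=0.1)."""
        for axis, delta in deltas.items():
            setattr(self, axis, getattr(self, axis) + delta)

# Positioning a grasp this way means many small, independent tweaks:
pose = GripperPose()
pose.adjust(z=-0.05)   # lower the gripper
pose.adjust(yaw=0.1)   # turn one of the virtual rings
pose.adjust(x=0.02)    # nudge sideways, and so on, until it lines up
```

Every one of those tweaks is a chance to misjudge depth or orientation, which is exactly the guesswork the two-click interface removes.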
With the Georgia Tech interface, “instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we’ve shortened the process to just two clicks,” stated Sonia Chernova, assistant professor at the university’s School of Interactive Computing, who advised the research effort.
The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.
The point-and-click format doesn’t include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot’s perception algorithm analyzes the object’s 3-D surface geometry to determine where the gripper should be placed. It’s similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.
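The two-click workflow described above can be sketched as a short pipeline: the first click selects an object region, a perception step proposes ranked grasps for it, and the second click picks one. The function names, object labels and scores below are purely illustrative stand-ins, not the Georgia Tech implementation.

```python
from typing import List, Tuple

Grasp = Tuple[str, float]  # (grasp description, suitability score)

def propose_grasps(clicked_region: str) -> List[Grasp]:
    """Stand-in for the perception step that analyzes the 3-D
    surface geometry around the clicked point and scores grasps."""
    candidates = {
        "bottle": [("side pinch", 0.9), ("top grasp", 0.7)],
        "box": [("edge grasp", 0.8), ("top grasp", 0.6)],
    }
    # Best-scoring grasp first, so the default suggestion is sensible.
    return sorted(candidates.get(clicked_region, []),
                  key=lambda g: g[1], reverse=True)

def pick_and_place(clicked_region: str, choice: int = 0) -> str:
    grasps = propose_grasps(clicked_region)  # click 1: select the object
    return grasps[choice][0]                 # click 2: confirm a grasp

print(pick_and_place("bottle"))  # side pinch
```

The design point is that the algorithm, not the operator, does the geometric reasoning; the user only makes two discrete choices.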
“The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can’t see, such as the back of a bottle,” stated Chernova. “Our brains do this on their own — we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot’s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.”
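The "back of the bottle" prediction Chernova describes can be illustrated with a toy symmetry assumption: if an object is symmetric about a vertical axis, the visible front points can be mirrored across that axis to estimate the hidden side. This is a simplified sketch of the general idea, not the actual perception algorithm.

```python
def complete_back(front_points):
    """Toy shape completion: assume left-right symmetry about a
    vertical axis through the centroid of the visible points, and
    mirror them to predict the unseen back surface."""
    n = len(front_points)
    cx = sum(x for x, _ in front_points) / n  # symmetry axis x = cx
    # Reflect each visible (x, y) point across the axis.
    return [(2 * cx - x, y) for x, y in front_points]

front = [(1.0, 0.0), (3.0, 0.5)]       # visible slice of an object's front
print(complete_back(front))            # [(3.0, 0.0), (1.0, 0.5)]
```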
By analyzing data and recommending where to place the gripper, the burden shifts from the user to the algorithm, which reduces mistakes. During a study, college students performed a task about two minutes faster using the new method vs. the traditional interface. The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.
The point-and-click interface was designed to improve ease of use for operators of home-assistance, space-exploration and search-and-rescue robots. It could also be a beneficial approach for programming robots to perform repetitive manufacturing tasks.
Source: Georgia Institute of Technology