Show simple item record

dc.contributor.advisor: Egeland, Olav
dc.contributor.author: Tysse, Geir Ole
dc.contributor.author: Morland, Martin
dc.date.accessioned: 2015-10-05T15:14:22Z
dc.date.available: 2015-10-05T15:14:22Z
dc.date.created: 2015-06-11
dc.date.issued: 2015
dc.identifier: ntnudaim:13460
dc.identifier.uri: http://hdl.handle.net/11250/2351223
dc.description.abstract: In this thesis we explain how two parts from the automotive industry can be assembled using an RGB-D camera, providing colour (RGB) and depth (D) as a combined RGB-D image, and two KUKA Agilus manipulators with six revolute joints. The RGB-D camera is presented and calibrated, and AR-tags were used to efficiently transform the coordinate frame of the RGB-D camera to a table with the parts. In relation to this, it is shown how geometric algebra can be used to construct an orthonormal coordinate frame on the table. Several methods for obtaining the position of the parts from the RGB-D camera data are presented and evaluated. These include a C++ program for segmentation of the scene based on the orientation of the surface normals in the point cloud, and a program based on the geometry of the parts. A method for determining the orientation of the parts is proposed, using SIFT (Scale-Invariant Feature Transform) and homography based on data from a 2D camera. We present ROS (Robot Operating System) and how it interacts with the RGB-D camera, the 2D camera and the robots. In addition, the forward kinematics of the robots is explained and implemented in ROS. The experiment was performed as follows: An RGB-D camera was positioned at an arbitrary location with the AR-tags in its field of view. An algorithm automatically determined the pose of the RGB-D camera with respect to a known coordinate frame. The RGB-D camera was then used to measure the coordinates of the parts with respect to this frame. A robot with a 2D camera mounted on the end-effector moved to a pose where the 2D camera was vertically above each part in turn. Data from the 2D camera were used in an algorithm to estimate the orientation of the parts. Finally, a robot with a gripper mounted on the end-effector used the position and orientation of the parts to assemble them.
The results obtained were adequate to perform a simplified assembly task for the parts, which requires an accuracy of 4 mm. Two demonstration videos have been made. One demonstrates the estimation of the orientation of the parts; the other demonstrates how the position of the parts is estimated and how the robot performs the simplified assembly task.
dc.language: eng
dc.publisher: NTNU
dc.subject: Undervannsteknologi, Undervannsteknologi - Drift og vedlikehold
dc.title: Robotic Assembly Using a 3D Camera
dc.type: Master thesis
dc.source.pagenumber: 214


Associated file(s)

