Remote Operations using Oculus Rift, Leap Motion and Microsoft Kinect
Master thesis
Permanent link: http://hdl.handle.net/11250/2403574
Publication date: 2016
Abstract
This thesis explores the use of virtual reality technology as a means of observing and interacting with our physical world through a remotely located robot. A proof of concept for a novel approach to performing remote operations is developed during the course of this project, aiming to give the operator an immersive feeling of presence at the robot's location. A prototype system has been developed which enables an operator to remotely control a robot, consisting of a manipulator mounted on a drivable platform, by using the position of his left hand in combination with a joystick. The left-hand position is acquired by a small infrared-camera device mounted in front of a head-mounted display worn by the operator.
Observation from the robot's perspective is achieved by streaming video from on-board cameras to the head-mounted display. The cameras follow the operator's head direction through a pan-tilt unit linked to the display's tracking sensors. In addition to the video streams, 3D models representing various attributes of the system are rendered to the display using OpenGL, providing the operator with vital information and feedback during operation.
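The thesis does not specify how head orientation is mapped to the pan-tilt unit; a minimal sketch of one plausible mapping is shown below, assuming the display's tracking sensors report yaw and pitch in radians and the servos accept clamped angles in degrees. All names and limit values are hypothetical.

```python
import math

def head_to_pan_tilt(yaw_rad, pitch_rad,
                     pan_limits=(-170.0, 170.0), tilt_limits=(-45.0, 30.0)):
    """Map HMD yaw/pitch (radians) to pan/tilt servo angles (degrees),
    clamped to the unit's mechanical range. Limits are illustrative only."""
    pan = max(pan_limits[0], min(pan_limits[1], math.degrees(yaw_rad)))
    tilt = max(tilt_limits[0], min(tilt_limits[1], math.degrees(pitch_rad)))
    return pan, tilt
```

In a real control loop this function would be called once per tracking frame, with the result sent to the pan-tilt unit so the camera view stays aligned with the operator's gaze.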
The system is divided into a client and a server application, which communicate wirelessly over a network connection. The server is installed on the robot, exposing its on-board devices, while the client is installed on a desktop machine used by the operator. A car battery serves as the robot's power source, making it completely wireless. The joystick is designated as the master controller of the system and is intended to be maneuvered by the operator's right hand. Its buttons are used to select between modes of manipulator operation and to toggle various features, while the stick itself may be moved to drive the robot. Two modes of manipulator operation have been implemented, termed Direct Mode and Record Mode, both of which base manipulator movement on the position of the operator's left hand.
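The wire format used between the client and server is not described in the abstract; as an illustration, commands such as drive inputs could be framed as length-prefixed JSON messages over a socket. The command names and fields below are hypothetical, not taken from the thesis.

```python
import json
import struct

def encode_command(cmd_type, payload):
    """Serialize a command as length-prefixed JSON:
    a 4-byte big-endian length header followed by the UTF-8 body."""
    body = json.dumps({"type": cmd_type, "payload": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_command(frame):
    """Parse one length-prefixed frame back into a command dict."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))
```

The length prefix lets the server read exactly one message at a time from a TCP stream, which is a common choice when commands arrive continuously from a joystick.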
Direct Mode enables the operator to control the manipulator in real time. Any left-hand movement performed in front of the head-mounted display results in corresponding movement of the manipulator. Hand positions are continually registered, and by using inverse kinematics the corresponding joint values are derived and used as target angles for the manipulator. In addition, a virtual representation of the manipulator is rendered to the display, providing instant feedback to the operator on how his hand movements translate to manipulator movement.
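To illustrate the inverse-kinematics step, the sketch below solves the textbook two-link planar case analytically: given a target hand position (x, y) and link lengths, it returns joint angles that place the end effector at the target. This is a simplified stand-in, not the solver or manipulator geometry used in the thesis.

```python
import math

def ik_two_link(x, y, l1, l2):
    """Analytic inverse kinematics for a 2-link planar arm (elbow-down
    solution). Returns joint angles (theta1, theta2) in radians, or
    None if the target lies outside the arm's reachable workspace."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None  # target unreachable
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

In the system described above, a solver of this kind would run on every registered hand position, with the resulting angles sent to the manipulator as target values.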
Record Mode enables the operator to first register a sequence of positions using his left hand, before executing the entire sequence on the manipulator. A consumer-grade 3D camera mounted in front of the manipulator provides a point-cloud representation of the manipulator's work area. During Record Mode, the point cloud is rendered into the head-mounted display together with a virtual representation of the manipulator, which follows the movement of the operator's left hand in real time. The virtual manipulator and the point cloud are positioned relative to each other such that the virtual manipulator appears to operate in the physical environment.
While maneuvering the virtual manipulator in the point cloud, its positions may be recorded into a sequence. The sequence can then be executed, making the physical manipulator traverse the same registered positions in the real world and interact with its work area as instructed. This enables the operator to plan and record physical manipulator actions in a virtual representation.
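The record-then-execute workflow can be sketched as a simple waypoint store: positions (here, joint-angle tuples) are appended while the operator moves the virtual manipulator, then replayed in order against the physical one. The class and callback names are illustrative only.

```python
class RecordedSequence:
    """Minimal sketch of Record Mode: accumulate waypoints while the
    operator positions the virtual manipulator, then replay them."""

    def __init__(self):
        self.waypoints = []

    def record(self, joint_angles):
        """Store one snapshot of the virtual manipulator's joint angles."""
        self.waypoints.append(tuple(joint_angles))

    def execute(self, send_target):
        """Replay the sequence by calling send_target(angles) for each
        recorded waypoint, in the order they were registered."""
        for angles in self.waypoints:
            send_target(angles)
```

In practice, send_target would be the function that transmits target angles to the physical manipulator over the client-server link, so execution traverses exactly the positions planned in the virtual scene.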