Show simple item record

dc.contributor.advisor	Onshus, Tor Engebret
dc.contributor.author	Iversen, Bendik Bjørndal
dc.date.accessioned	2018-08-30T14:01:53Z
dc.date.available	2018-08-30T14:01:53Z
dc.date.created	2018-06-04
dc.date.issued	2018
dc.identifier	ntnudaim:18595
dc.identifier.uri	http://hdl.handle.net/11250/2560149
dc.description.abstract	The LEGO-robot project already consists of several autonomous ground robots with onboard IR sensors able to measure the walls of an indoor maze while communicating with a server running a Java application, which creates a 2D map of the maze based on the IR data received from the robots. This master's thesis is part of a project whose final goal is to have a drone assist the robots of the LEGO-robot project in mapping the maze. An application for detecting the wall segments of a maze, by analyzing images captured of the maze from above using computer vision algorithms, has already been developed on a Raspberry Pi Model 3 B. The system was later extended so that the Raspberry Pi could communicate with the server in the same way as the other robots. The next step, and one of the main challenges of the envisioned drone system, is how to estimate the pose of the drone. Since the drone is supposed to operate indoors, GPS cannot be used. Visual odometry is a promising approach to obtaining the pose estimates, and the development of a visual odometry algorithm has therefore been the main goal of this master's thesis. The algorithm uses stereo vision and advanced computer vision algorithms to analyze images captured by a small stereo camera and estimates the pose of the camera relative to the initial pose, where the origin is at the center of the left lens. It quickly became clear that the processing power of a Raspberry Pi was not sufficient for running a visual odometry algorithm and computing the pose at a satisfactory frequency. A significantly more powerful system, the Nvidia Jetson TX1, was therefore purchased and used throughout the rest of the project. The visual odometry algorithm has been implemented on the TX1, and it is able to compute pose estimates at a rate of 5-10 Hz when capturing images with a resolution of 320x240. The communication module developed earlier has also been implemented on the TX1; it is able to connect to the server and continually send the pose estimates produced by the visual odometry algorithm. When sending pose estimates to the server, the TX1 also appears as a robot in the map created by the Java application. The mapping application has not, however, been implemented on the TX1. The pose estimates produced by the current version of the visual odometry algorithm are not accurate enough to be used successfully in a drone system, and several suggestions for improvement are presented and discussed. This thesis does, however, show that pose estimation using stereo visual odometry is possible on hardware small enough to be placed on a drone, and the envisioned system where a drone assists in mapping an indoor maze in the LEGO-robot project is one step closer to realization.
dc.language	eng
dc.publisher	NTNU
dc.subject	Kybernetikk og robotikk, Autonome systemer
dc.title	Stereo Visual Odometry for Indoor Positioning
dc.type	Master thesis
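
The abstract above summarizes a stereo visual odometry pipeline without giving implementation details. The listing below is a minimal illustrative sketch, in Python with OpenCV, of one common way such a pipeline can be structured: triangulate matched left/right keypoints to 3D using the stereo baseline, estimate frame-to-frame motion with PnP and RANSAC, and accumulate the pose relative to the initial left-camera frame. It is not code from the thesis, and all function and variable names are assumptions made for illustration.

    # Illustrative stereo visual odometry sketch (assumed example, not the thesis code).
    # Assumes rectified image pairs and known calibration of the left camera:
    #   K        - 3x3 intrinsic matrix; fx, cx, cy are taken from K
    #   baseline - distance between the left and right lenses, in metres
    import cv2
    import numpy as np

    def triangulate(kp_left, kp_right, fx, cx, cy, baseline):
        # Recover 3D points in the left-camera frame from matched left/right keypoints,
        # assuming square pixels (fy ~ fx) and purely horizontal disparity.
        pts = []
        for (ul, vl), (ur, _) in zip(kp_left, kp_right):
            disparity = max(ul - ur, 1e-6)
            z = fx * baseline / disparity            # depth from disparity
            pts.append([(ul - cx) * z / fx, (vl - cy) * z / fx, z])
        return np.array(pts, dtype=np.float64)

    def frame_to_frame_motion(pts3d_prev, kp_curr_left, K):
        # Estimate camera motion between the previous and current frame with PnP + RANSAC:
        # 3D points triangulated from the previous stereo pair against their 2D positions now.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d_prev, np.asarray(kp_curr_left, dtype=np.float64), K, None)
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, tvec.ravel()
        return T

    # The pose relative to the initial frame (origin at the centre of the left lens)
    # is accumulated frame by frame, e.g.:  pose = pose @ np.linalg.inv(T)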


Associated file(s)


This item appears in the following Collection(s)
