Show simple item record

dc.contributor.advisor: Johansen, Tor Arne
dc.contributor.author: Rodin, Christopher Dahlin
dc.date.accessioned: 2020-03-06T12:04:59Z
dc.date.available: 2020-03-06T12:04:59Z
dc.date.issued: 2019
dc.identifier.isbn: 978-82-326-4253-3
dc.identifier.issn: 1503-8181
dc.identifier.uri: http://hdl.handle.net/11250/2645782
dc.description.abstract: Enabling technologies such as more powerful and miniaturized embedded computers, sensors, and power systems have rendered small Unmanned Aerial Systems (sUAS) cheap and readily available for industry, consumers, and university departments and enterprises of all sizes. Videos and photographs captured by cameras attached to sUAS have become a common sight on video- and photo-sharing websites, in corporate offerings, and in scientific journals. While the purpose of the first group of users is often to take beautiful photos, the latter two user groups often use the camera attached to their sUAS as a sensor to navigate the sUAS, measure certain parameters, or detect objects in the physical world. For these types of missions, either a visible-light camera or a camera in the infrared or other bands can be used, e.g. thermal infrared cameras, which often prove to be good at detecting people and animals, as well as oil spills. This thesis focuses on evaluating and improving upon the methods used when the camera system is employed as a tool to geometrically measure the environment and detect objects. Initially in the thesis, image quality degradation caused by external movements and vibrations is discussed. The sources of the movements are discussed, along with suggested solutions for mitigating the image degradation. These include camera and optics design parameters (pixel size, optics or camera sensors with built-in optical image stabilization), active and passive vibration dampening (electric motors, passive dampeners), software image and video stabilization, and aerodynamic design of the camera stabilization platform. For evaluating and improving the geometrical measurements, an algorithm is developed for producing a highly accurate camera attitude estimate, i.e. the orientation of the camera in the geographic coordinate system.
The reason for this focus is the large error an incorrect camera attitude estimate can introduce into geometrical measurements. The algorithm matches a horizon detected in a camera image with a horizon synthetically generated from a Digital Surface Model. The algorithm runs close to real time, and its accuracy exceeds that of similar methods while performing a global search in the yaw angle, commonly the angle that is most difficult to estimate. Camera images were later acquired in a real-world mission in the Bothnian Bay in Northern Europe. A camera was attached to a moored balloon system with the purpose of tracking and estimating the size of ice floes floating into a marine vessel. The attitude estimation algorithm was used on the camera images to estimate the dimensions of the ice floes, and proved to produce highly accurate shape estimates even at very slant camera angles. The final focus point of the thesis is object detection and classification, which is a common task for camera systems mounted on sUAS, for example in search-and-rescue missions at sea. In order to evaluate the camera performance, a mission was performed in which various objects were placed on the sea surface. The objects were selected as objects commonly present in search-and-rescue missions, e.g. a human, pallets, buoys, and a small boat. Images of the objects were taken with both a visible-light camera and a thermal infrared camera. The detectability of the objects was first evaluated using common image processing algorithms such as edge detection. This showed the large effect that the scene can have on such tasks, with noise from ocean waves causing difficulties in detecting objects smaller than 3 pixels in the image plane.
Finally, a study on classification of the objects is performed. It uses the thermal camera image dataset from the aforementioned mission together with a Gaussian Mixture Model for image segmentation and a Convolutional Neural Network for object classification. The conclusion is that object classification is possible with acceptable accuracy, but that the low pixel count of objects in the image plane is a limiting factor.
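The horizon-matching idea summarized in the abstract can be illustrated with a minimal sketch. This is not the thesis's algorithm, only a toy version of a global yaw search: a camera-FOV-sized window is slid over a full 360° synthetic horizon profile (as would be generated from a Digital Surface Model) and compared against the horizon profile detected in the image. All function names, parameters, and the sum-of-squared-errors cost are assumptions for illustration.

```python
import numpy as np

def estimate_yaw(detected, synthetic_full, fov_deg=60.0, step_deg=1.0):
    """Global yaw search: slide a window the size of the camera FOV over
    the full synthetic horizon and return the yaw (degrees) whose window
    best matches the detected horizon profile (sum of squared errors)."""
    n_full = synthetic_full.size
    n_win = int(fov_deg / step_deg)
    best_yaw, best_cost = 0.0, np.inf
    for i in range(n_full):
        # Synthetic horizon window for yaw = i * step_deg, wrapping 360 deg
        window = synthetic_full[np.arange(i, i + n_win) % n_full]
        # Resample the window to the width of the detected profile
        window = np.interp(np.linspace(0.0, n_win - 1, detected.size),
                           np.arange(n_win), window)
        cost = float(np.sum((detected - window) ** 2))
        if cost < best_cost:
            best_cost, best_yaw = cost, i * step_deg
    return best_yaw
```

Because every candidate yaw is evaluated, the search is global rather than local, which matches the abstract's point that yaw is the angle most prone to bad local estimates.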
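The Gaussian Mixture Model segmentation step mentioned above can likewise be sketched in miniature. The code below is an illustrative stand-in, not the thesis's implementation: it fits a two-component 1-D GMM to pixel intensities with expectation-maximization and labels each pixel as belonging to the cooler (background) or warmer (object) component. The deterministic initialisation and iteration count are assumptions.

```python
import numpy as np

def gmm_segment(pixels, n_iter=60):
    """Fit a two-component 1-D Gaussian Mixture Model with EM.
    Returns (means, labels): component means sorted ascending, and a
    0/1 label per pixel (1 = warmer component)."""
    x = np.asarray(pixels, dtype=float)
    # Deterministic initialisation: one component at each intensity extreme
    mu = np.array([x.min(), x.max()])
    sigma = np.full(2, x.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        d = x[:, None] - mu[None, :]
        dens = w * np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        sigma = np.sqrt((r * d ** 2).sum(axis=0) / nk) + 1e-6
    order = np.argsort(mu)
    labels = np.argmax(r[:, order], axis=1)
    return mu[order], labels
```

On a thermal image this kind of per-pixel intensity model separates warm targets from the sea background, after which connected warm regions could be cropped and passed to a classifier.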
dc.language.iso: eng [nb_NO]
dc.publisher: NTNU [nb_NO]
dc.relation.ispartofseries: Doctoral theses at NTNU;2019:328
dc.title: Applications of High-Precision Optical Imaging Systems for Small Unmanned Aerial Systems in Maritime Environments [nb_NO]
dc.type: Doctoral thesis [nb_NO]
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550::Technical cybernetics: 553 [nb_NO]


Associated file(s)


This item appears in the following collection(s)
