
dc.contributor.advisor: Aamodt, Agnar
dc.contributor.advisor: Misimi, Ekrem
dc.contributor.author: Olofsson, Alexander Martin
dc.date.accessioned: 2018-01-12T15:01:14Z
dc.date.available: 2018-01-12T15:01:14Z
dc.date.created: 2017-07-12
dc.date.issued: 2017
dc.identifier: ntnudaim:16527
dc.identifier.uri: http://hdl.handle.net/11250/2477324
dc.description.abstract: This thesis describes work on generating models to control a robotic arm and tactile-sensitive gripper for successfully grasping compliant food objects, using colour and depth images as input. The compliant food objects used were salads, as they are both easy to work with and require delicate handling. The focus is on determining whether significant results can be achieved using only images as input, and on comparing two different approaches to generating the models. The first model, Support Vector Regression, requires features to be extracted from the images before they are fed to the model. The second model, Convolutional Neural Networks, accepts an image directly as input. Both models were taught how to respond to various salad positions through learning from demonstration, using a previously gathered dataset of successful grasps.
dc.language: eng
dc.publisher: NTNU
dc.subject: Datateknologi, Kunstig intelligens
dc.title: Tactile-sensitive robotic grasping of food compliant objects with deep learning as a learning policy
dc.type: Master thesis

