Tactile-sensitive robotic grasping of compliant food objects with deep learning as a learning policy
Abstract
This thesis outlines work on generating models that control a robotic arm and tactile-sensitive gripper to successfully grasp compliant food objects, using colour and depth images as input. The compliant food objects used were salads, as they are easy to work with yet require delicate handling. The focus is to determine whether significant results can be achieved using images alone as input, and to compare two different approaches.
Two different approaches were utilised when generating the models. The first, Support Vector Regression, requires features to be extracted from the images before they are input to the model. The second, a Convolutional Neural Network, accepts an image directly as input. Both models were taught how to respond to various salad positions through learning from demonstration, using a previously gathered dataset of successful grasps.
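The sketch below illustrates the contrast between the two approaches in minimal form: a Support Vector Regression model fitted on pre-extracted feature vectors, versus a Convolutional Neural Network that consumes a colour+depth image directly. The feature dimensionality, number of grasp outputs, four-channel RGB-D input, network layout, and choice of scikit-learn and PyTorch are all illustrative assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch of the two modelling approaches; all shapes, layer sizes,
# and data below are hypothetical placeholders, not the thesis's pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

# Approach 1: Support Vector Regression on hand-extracted image features
# (here, a hypothetical 10-dimensional descriptor per image, one grasp
# parameter per SVR model).
features = np.random.rand(100, 10)       # placeholder extracted features
grasp_targets = np.random.rand(100)      # placeholder grasp parameter
svr = SVR(kernel="rbf").fit(features, grasp_targets)

# Approach 2: a Convolutional Neural Network that maps a four-channel
# colour+depth image directly to a vector of grasp parameters.
class GraspCNN(nn.Module):
    def __init__(self, n_outputs: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_outputs),
        )

    def forward(self, x):
        return self.net(x)

rgbd = torch.rand(1, 4, 128, 128)        # placeholder colour+depth image
prediction = GraspCNN()(rgbd)            # predicted grasp parameters
```

In a learning-from-demonstration setting, both models would be fitted on the recorded successful grasps, with the demonstrated grasp parameters serving as the regression targets.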