Integrating graphical face software with a social robot and detecting human interest
Abstract
This thesis explains how a digital graphical face was integrated with the software components of the Cyborg using the Robot Operating System (ROS), and how the face can improve the Cyborg's ability to interact socially with humans.
The social abilities of the Cyborg were improved by displaying the face on the Cyborg, letting other Cyborg modules use the face's text-to-speech (TTS) capability to communicate in a more humanlike fashion, and using facial expressions to convey the Cyborg's internal state the way humans do.
The face and the Cyborg were integrated by setting up a TCP server and client that let them pass information and commands to each other, and a solution was found for displaying the face on an iPad mounted on the Cyborg.
A simple interest detector was implemented using the skeleton-tracking feature of a Kinect mounted on the Cyborg, which determines the spatial relationship between nearby people and the Cyborg to estimate whether they intend to interact with it.