Human Robot Interaction

Hand Posture Segmentation, Recognition and Application for Human-Robot Interaction

Human hand gestures provide the most important means of non-verbal interaction among people. They range from simple manipulative gestures, used to point at and move objects around, to more complex communicative ones that express our feelings and allow us to communicate with others. Migrating the natural means that humans employ to communicate with each other, such as gestures, into Human-Computer Interaction (HCI) has been a long-term endeavor, and numerous approaches have been applied to interpret hand gestures for HCI. These approaches rely on two main categories of hand gesture models.

The first category of models is based on the appearance of the hand in visual images. A gesture is modeled by relating its appearance to that of a set of predefined template gestures. Appearance-based approaches are simple and easy to implement in real time, but their application is limited to recognizing a finite number of hand gestures, and they are mostly suited to communicative gestures.

The second category uses 3D hand models, which allow hand gestures to be modeled more elaborately and are well suited to both manipulative and communicative gestures. Several techniques have been developed to capture 3D hand gestures. Among them, glove-based devices directly measure the joint angles and spatial positions of the hand. Unfortunately, such devices remain insufficiently precise, too expensive, and too cumbersome, preventing the user from executing natural movements and interacting with the computer intuitively and efficiently. The awkwardness of gloves and other worn devices can be overcome by vision-based interaction techniques, which use a set of video cameras and computer vision methods to observe the hand directly.
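
To make the appearance-based idea concrete, the following is a minimal sketch of hand posture segmentation and template-based recognition, assuming Python with OpenCV (cv2) and NumPy. The skin-color thresholds, the template file names, and the use of Hu-moment shape matching are illustrative assumptions, not the specific method described above.

```python
# Sketch: skin-color hand segmentation + appearance-based matching against
# predefined gesture templates. Thresholds and file paths are placeholders.
import cv2
import numpy as np

# Hypothetical set of predefined template gestures (binary hand silhouettes).
TEMPLATE_FILES = {
    "point": "templates/point.png",
    "open_palm": "templates/open_palm.png",
    "fist": "templates/fist.png",
}

def segment_hand(frame_bgr):
    """Segment the hand from a frame using a simple skin-color threshold in HSV."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-color range; needs tuning for lighting and skin tone.
    lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up the mask with morphological opening and closing.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

def largest_contour(mask):
    """Return the largest contour in the mask, assumed to be the hand."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def recognize_posture(hand_contour, templates):
    """Compare the hand silhouette to each template using Hu-moment shape matching."""
    best_name, best_score = None, float("inf")
    for name, tmpl_mask in templates.items():
        tmpl_contour = largest_contour(tmpl_mask)
        if tmpl_contour is None:
            continue
        # Lower score means more similar silhouettes.
        score = cv2.matchShapes(hand_contour, tmpl_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

if __name__ == "__main__":
    templates = {
        name: cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        for name, path in TEMPLATE_FILES.items()
    }
    cap = cv2.VideoCapture(0)  # single video camera
    ok, frame = cap.read()
    if ok:
        mask = segment_hand(frame)
        contour = largest_contour(mask)
        if contour is not None:
            name, score = recognize_posture(contour, templates)
            print(f"recognized posture: {name} (distance {score:.3f})")
    cap.release()
```

As the essay notes, such a pipeline only distinguishes among the finite set of template postures it is given; richer manipulative gestures would call for a 3D hand model.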
