User interaction is one of the most basic yet critical components of a three-dimensional virtual environment. Interaction tasks can be classified into four main categories: object manipulation, selection, system control, and scene navigation. If we define gestures as specific behaviors that carry meanings and intentions, almost all the actions and movements that the user performs during these interaction tasks can be considered gestures.
[Figure] Communication: the user visually expresses his/her idea in front of the screen by moving the hand, like drawing on paper. Manipulation: the user interacts with a 3D virtual model using non-instrumented hand gestures.
Most interaction tasks can be supported by 6DoF manipulation and selection input. The system control task, however, can hardly be expressed through such direct spatial interaction because of its linguistic/communicative nature. It has therefore typically been implemented with additional interaction methods that represent and trigger system commands, e.g. voice commands, physical/virtual tools, and graphical menu systems. We focus on hand gesture interaction, which effectively supports both direct manipulation and communicative work. Thanks to this dual property, gestures allow users to express their ideas and intentions while directly manipulating objects. To reduce the users' cognitive load, we chose alphabet letters to represent the respective system commands and symbolized them as dynamic gestures for communicating with the system.
[Figure] Symbolic gestures: for example, the letter S drawn by hand movement changes the sharpness of the object; gesture W renders the object in wireframe mode.
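The letter-to-command mapping described above can be sketched as a simple dispatch table. This is an illustrative sketch, not the paper's implementation: the handler functions and the object representation are assumptions, and only the two commands named in the text (S for sharpness, W for wireframe) are shown.

```python
# Hypothetical sketch: dispatching recognized letter gestures to system
# commands. Handlers and the dict-based object model are illustrative.

def toggle_sharpness(obj):
    # 'S' gesture: toggle the object's sharpness flag.
    obj["sharp"] = not obj.get("sharp", False)

def set_wireframe(obj):
    # 'W' gesture: switch the object to wireframe rendering.
    obj["render_mode"] = "wireframe"

# Each recognized dynamic gesture (a drawn letter) maps to one command.
GESTURE_COMMANDS = {
    "S": toggle_sharpness,
    "W": set_wireframe,
}

def dispatch(letter, target):
    """Run the command bound to a recognized letter gesture, if any."""
    handler = GESTURE_COMMANDS.get(letter)
    if handler is None:
        return False  # unrecognized gesture: ignore
    handler(target)
    return True

obj = {}
dispatch("W", obj)
print(obj["render_mode"])  # wireframe
```

Keeping the mapping in one table makes it easy to add further letter commands without touching the recognition code.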
How hand postures/gestures are modeled is largely determined by the application scenario within the HCI context. Since our chosen application field is shape modeling, the mapping from gestures to interaction modes should mimic the way hands are used for sculpting and manipulating objects in the real world. Our system provides five hand gestures (postures) for interacting with virtual objects and the virtual cursor.
Interaction with the virtual cursor: release the pinching state.
Interaction with virtual objects: pinch the thumb and another finger after pointing to a target object. The selected objects are then manipulated directly, following the hand motions.
[Figure] Gesture interaction and object modeling: pause, point (object), rotate, translate (z-direction), point (object's vertex), translate (object's vertex), and rotate.
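The two interaction modes above can be sketched as a per-frame update rule: when the pinch is released the hand drives the cursor, and when a pinch holds a selected object the object follows the hand motion. This is a minimal sketch under assumed data structures (a state dict with `cursor` and `selected` fields), not the system's actual code.

```python
# Illustrative sketch of the pinch-based mode switch described in the text.
# `pinching` is assumed to come from the gesture recognizer, and
# `hand_delta` from the hand tracker; both names are hypothetical.

def update(state, pinching, hand_delta):
    """Apply one frame of hand motion to the cursor or the selected object."""
    if pinching and state["selected"] is not None:
        # Object mode: the selected object follows the hand motion directly.
        obj = state["selected"]
        obj["pos"] = tuple(p + d for p, d in zip(obj["pos"], hand_delta))
    else:
        # Cursor mode: the pinch is released, so only the cursor moves.
        state["cursor"] = tuple(
            c + d for c, d in zip(state["cursor"], hand_delta))
    return state

state = {"cursor": (0.0, 0.0, 0.0), "selected": None}
update(state, False, (1.0, 2.0, 3.0))       # moves the cursor
state["selected"] = {"pos": (0.0, 0.0, 0.0)}
update(state, True, (0.0, 0.0, 5.0))        # moves the selected object
```

The same rule extends naturally to rotation by replacing the translation delta with a rotation increment.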
Since our main premise for the interface devices is to impose as minimal a requirement on the user as possible, the hand markers are designed like thimbles and placed on just four fingertips (thumb, index, middle, and little finger). These compact interfaces allow the users comfortable and natural motion.
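Given the four fingertip markers, a pinch can be detected by thresholding the distance between the thumb marker and each other fingertip. The sketch below assumes the tracker reports 3D marker coordinates in millimeters; the 25 mm threshold is an assumed value, not one reported here.

```python
import math

# Illustrative sketch: pinch detection from fingertip marker positions.
# Marker names match the four instrumented fingertips in the text; the
# threshold value is an assumption for illustration.

PINCH_THRESHOLD_MM = 25.0

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pinched_finger(markers):
    """Return the finger pinched against the thumb, or None if released."""
    thumb = markers["thumb"]
    for finger in ("index", "middle", "little"):
        if distance(thumb, markers[finger]) < PINCH_THRESHOLD_MM:
            return finger
    return None

markers = {
    "thumb": (0.0, 0.0, 0.0),
    "index": (10.0, 0.0, 0.0),   # within threshold: pinching
    "middle": (100.0, 0.0, 0.0),
    "little": (200.0, 0.0, 0.0),
}
print(pinched_finger(markers))  # index
```

Returning which finger is pinched (rather than a bare boolean) leaves room to bind different fingers to different manipulation modes.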