Decisions Made

Calibration/Tracking

 No major notification of state change
   Subtle
   Blue user on detect
   Skeleton on tracking
 Automatic
 Resetting tracking
 Automatically tracks first user
Retargeting
 Doesn't use confidence values
 Retargeting to Troy's joints always uses the same orientation for the elbow joint.
   Makes moving between retargeted joints natural and fluid
 Hands adjusted using some arbitrary constants when in front of the body
Visualization
 Shows field
   Blue user
   Skeleton tracing
Wiimote
 Rumble on success/unknown

While designing and building the Kinect interface for Qt, we made several design decisions with regard to the usability of the system. This document explains what decisions were made, and why each current choice has remained in place.

Tracking Calibration

Current System

Calibration is done almost entirely automatically. All the user needs to do is enter the frame, and assuming there is an existing calibration file to load, the system will automatically begin tracking the first user that is detected.
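
As a minimal sketch of how this can work, assuming the OpenNI 1.x C++ wrapper, a new-user callback can load the saved calibration and start tracking immediately. The file name and the first-user bookkeeping are illustrative assumptions, not the interface's actual code:

  #include <XnCppWrapper.h>

  static XnUserID g_tracked = 0;  // 0 = no one is being tracked yet

  // Invoked by OpenNI whenever a new user appears in the frame.
  void XN_CALLBACK_TYPE onNewUser(xn::UserGenerator& gen, XnUserID user, void*)
  {
      if (g_tracked != 0)
          return;  // stay locked onto the first detected user

      // Reuse the calibration saved in an earlier session; if it loads,
      // tracking begins with no pose required from the user.
      if (gen.GetSkeletonCap().LoadCalibrationDataFromFile(user, "calibration.bin")
              == XN_STATUS_OK) {
          gen.GetSkeletonCap().StartTracking(user);
          g_tracked = user;
      }
  }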

Direct feedback from the targeting system is provided by the depth image. When a user is detected, the part of the depth image covered by that user is shaded blue to show who (or what) is currently targeted. As soon as tracking is calibrated for the user, a green skeleton showing the joints being captured by the system is drawn on the depth image. As these are the three main states the system can have (no user, following user, tracking user), this both helps the user verify that the system is working and indicates when problems arise (usually lost or frozen tracking).
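
The blue shading can be driven by OpenNI's per-pixel user labels. The sketch below assumes the OpenNI 1.x C++ wrapper and a Qt QImage that already holds the grey-scale depth visualization at the same resolution as the label map; the exact tint used by the interface may differ:

  #include <XnCppWrapper.h>
  #include <QImage>

  // Tint every pixel that the label map assigns to the targeted user.
  void shadeUser(xn::UserGenerator& userGen, XnUserID user, QImage& depth)
  {
      xn::SceneMetaData scene;
      userGen.GetUserPixels(0, scene);  // 0 = label map for all users

      const XnLabel* label = scene.Data();
      for (int y = 0; y < depth.height(); ++y) {
          for (int x = 0; x < depth.width(); ++x) {
              if (*label++ == user) {
                  QRgb px = depth.pixel(x, y);
                  depth.setPixel(x, y, qRgb(qRed(px) / 2, qGreen(px) / 2, 255));
              }
          }
      }
  }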

Occasionally the user needs to move around within the view area before the system detects them. If no calibration file exists, they need to make a 'Y' pose so that the system can generate the calibration data. However, once created, the calibration file persists across sessions.
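
Persisting the calibration then amounts to saving it once the 'Y' pose calibration succeeds. Again a sketch assuming the OpenNI 1.x API; the file name matches the loading sketch above and is illustrative:

  // Invoked by OpenNI when pose-based calibration finishes.
  void XN_CALLBACK_TYPE onCalibrationComplete(xn::SkeletonCapability& skel,
                                              XnUserID user,
                                              XnCalibrationStatus status, void*)
  {
      if (status == XN_CALIBRATION_STATUS_OK) {
          // Save so future sessions can skip the 'Y' pose entirely.
          skel.SaveCalibrationDataToFile(user, "calibration.bin");
          skel.StartTracking(user);
      }
  }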

Another improvement to the targeting system was the ability to reset the calibration. In testing, we found that due to either unusual circumstances or deliberate attempts to break the system, the calibration could break or freeze. Also, when multiple users were present, the system would occasionally calibrate to the wrong user. As the causes of these problems lie within the tracking systems of both the Kinect and OpenNI, the route we chose around them was to simply reinitialize the targeting system. This method was chosen because it was simple to execute and solved the problem at the lowest level available to us.
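
One plausible shape for that reset, assuming the interface owns its xn::Context and xn::UserGenerator (whether this matches the actual reset path is an assumption), is to release the user generator and create a fresh one, which discards all of OpenNI's user-detection and tracking state:

  // Throw away OpenNI's user/tracking state and start detection over.
  void resetTargeting(xn::Context& context, xn::UserGenerator& userGen)
  {
      userGen.Release();        // drop the old node and its internal state
      userGen.Create(context);  // fresh generator: no users, nothing tracked
      // ...re-register the user and calibration callbacks here...
      context.StartGeneratingAll();
  }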

Previous Systems

In previous iterations, this calibration was not automatic, and the 'Y' pose was required on each run. This quickly became tedious: the pose was difficult for the system to detect, and it was difficult for the user to know whether detection was working.

The original thinking behind this non-automatic design was that information could be presented to the user to walk them through the steps required to use the Kinect. Each low-level state was indirectly presented to the user, telling them what they needed to do to initialize tracking.

Retargeting

Current System

The current method used for retargeting is based on directly mapping the angles of the human user's joints onto the angles of Troy's joints (a sketch of this angle mapping follows the list below). This system was chosen rather than an inverse kinematics system for the following reasons:

  • The calculations are simple and can be performed in constant time.
  • The position and shape of Troy's movements closely match the movements made by the user, even if there is a position that an inverse kinematics system would consider more optimal for the elbow and hand
  • No mapping needs to be made from real space to Troy space
    • This means no baked-in estimates that might not hold from one person to another
    • If an arm extends past the edge of the Kinect's vision, the joint angles are unchanged, so the retargeting still holds
  • The Kinect system is not perfect, and the distances between detected points are not always constant
    • This would complicate mapping each position from real space to Troy space
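
The angle mapping itself reduces to a dot product per joint. This sketch computes the elbow flexion angle from three Kinect joint positions and caps it at Troy's mechanical limit; the limit values are placeholders, not Troy's real specifications:

  #include <algorithm>
  #include <cmath>

  struct Vec3 { float x, y, z; };

  static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
  static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
  static float len(const Vec3& v) { return std::sqrt(dot(v, v)); }

  // Elbow flexion is the angle between the upper-arm and forearm vectors.
  // Because only the angle is used, the result is independent of the user's
  // limb lengths and distance from the sensor.
  float elbowAngle(const Vec3& shoulder, const Vec3& elbow, const Vec3& hand)
  {
      Vec3 upper = sub(shoulder, elbow);
      Vec3 fore  = sub(hand, elbow);
      float c = dot(upper, fore) / (len(upper) * len(fore));
      return std::acos(std::min(1.0f, std::max(-1.0f, c)));  // radians
  }

  // Map the angle directly onto Troy's joint, capping at his limits
  // (placeholder range: 0 to 90 degrees).
  float retargetElbow(float humanAngle)
  {
      const float kMin = 0.0f, kMax = 3.14159265f / 2.0f;
      return std::min(kMax, std::max(kMin, humanAngle));
  }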

Disadvantages of this system that inverse kinematics would solve:

  • If the user moves into a position Troy cannot reach, joint angles are simply capped, and no extra effort is made to reach closer to that position
    • Example: the beckon action. Troy can't bring his lower arms up past 90 degrees, so if the user does, Troy simply holds at that limit
    • Some of these positions, though, may not be possible for Troy to emulate at all
  • Differences between Troy's build and a human's mean some actions are not interpreted as intended
    • Example: Moving hands together: As a human's forearms are about as long as their torso is wide, their hands can easily reach each other. However, Troy's torso is roughly twice as wide as his forearms are long, and so the angles to make his hands move together are very different from those of a human.
      • As a slight compensation for this, the current retargeting algorithm attempts to adjust hand positions when they are in front of the torso to more closely reconcile these differences (a hypothetical sketch of such an adjustment follows this list). However, this compensation is based on several assumptions that might not always hold true.
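
A hypothetical version of that compensation might look like the sketch below: when a hand crosses in front of the torso, exaggerate the shoulder's inward rotation to make up for Troy's proportionally wider torso. The scale constant is invented for illustration; the interface's actual constants and logic are not documented here:

  // Hypothetical hand compensation. 'inwardYaw' is the shoulder's rotation
  // toward the body's midline, in radians.
  float compensateInwardYaw(float inwardYaw, bool handInFrontOfTorso)
  {
      if (!handInFrontOfTorso)
          return inwardYaw;
      // Troy's torso is roughly twice as wide as his forearms are long, so
      // his hands need extra inward rotation to meet; scale accordingly.
      const float kReachScale = 1.5f;  // invented constant
      return inwardYaw * kReachScale;
  }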

Wiimote

The Wiimote was chosen as the input device for when the user is standing away from the computer. It gives them fine-tuned control over recording, playback, and other functionality while still allowing free range of motion as they move and act out animations.

It was chosen over gesture- or voice-based input primarily because of the granularity of those two input styles. Detecting gestures and voice in a responsive manner is difficult, and requiring the user to make a specific gesture to start and stop recording would complicate recording any action that resembles the control gesture.

Feedback

When first starting to use the Wiimote, I noticed that it was sometimes difficult to be sure the Wiimote was actually connected and communicating with the interface, and whether a button press was actually doing something. For this reason, when a button that the interface accepts is pressed on the Wiimote, a very slight rumble is sent as feedback that the action succeeded. If a button is pressed that is not currently assigned to an action, a longer rumble is sent to indicate that the action failed.
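
A sketch of the two rumble patterns, assuming the wiiuse C library (the library actually used by the interface is not named in this document); the durations are illustrative, and a real interface would use a timer instead of blocking:

  #include <wiiuse.h>
  #include <unistd.h>

  // Short buzz for an accepted button press, longer buzz for a button
  // with no assigned action.
  void feedbackRumble(struct wiimote_t* wm, bool accepted)
  {
      wiiuse_rumble(wm, 1);                        // motor on
      usleep(accepted ? 100 * 1000 : 400 * 1000);  // 100 ms vs 400 ms
      wiiuse_rumble(wm, 0);                        // motor off
  }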
