One of the initial goals of using the Kinect was to enable a less cumbersome way to retarget a user's movements to fit Troy's model than the existing code from Honda provided. When I (Tyler) started working with the Kinect, I spent a few days experimenting with it and the OpenNI/NITE middleware to see what the best way would be to simplify its use in other components.

## Kinect Integration

In the end, I decided to use a variation on the Entity/Component System paradigm. Essentially, a base Kinect class provides the low-level interface to the raw Kinect data, and any functionality on top of that is provided by classes that extend the KinectComponent abstract class. These classes can access the low-level Kinect data without having to worry about initializing the device, multiple components can be built on the same Kinect and share its data, and higher-level components can also consume data from lower-level ones.

Examples of this design are the KinectImage and KinectSkeleton components. The KinectImage component, when added to the Kinect, receives the color and depth data from the sensor, builds QImages based on that data, and then signals that a new QImage has been loaded. This enables applications to display the image data from the Kinect simply by creating a Kinect object, adding the KinectImage component to it, and connecting to the signals it emits.

    //...
    // Create the Kinect and add the image component to it.
    // (The method name for adding components is assumed here.)
    kinect = new kinect::Kinect();
    image = new kinect::KinectImage();
    kinect->addComponent(image);

    // Display new frames as they arrive
    connect(image, SIGNAL(imageChanged(QImage*)), this, SLOT(updateImage(QImage*)));
    connect(image, SIGNAL(depthMapChanged(QImage*)), this, SLOT(updateDepth(QImage*)));
    //...

The broader example is the KinectSkeleton. It provides the base interface to the NITE middleware and its skeleton tracking functionality. It is structured to allow other components to be built on top of it, for example to search for specific poses, or to track a hand and watch for gestures. Internally, the KinectSkeleton simply receives the depth data from the Kinect, passes it to the NITE middleware, and remaps the NITE callbacks into Qt signals. This way, using the data generated by the Kinect is as intuitive as possible, and the tricks and inconveniences of the Kinect's low-level interface are encapsulated away.

## Troy Retargeting

First was figuring out exactly which angles Troy should allow, and which human positions should map to them. Some of this work had already been done, but making the mapping from human motion to Troy feel intuitive took some additional care (for example, not allowing the elbow to bend backwards). In the end, the basic ranges of Troy's joints were as follows:

- Shoulder Forward: -45° to 180°
- Shoulder Out: -10° to 180°
- Arm Twist: -180° to 180° (full range of motion, as the direction of the shoulder influences the offset of this rotation)
- Elbow In: -180° to 0° (shouldn't be able to bend backwards)

So with the headache of getting data from the Kinect done, I began working out a way to turn the generic skeletal data provided by NITE into the joint angles that Troy requires. Here I went through several methods of converting the angles, but in the end I settled on using vector mathematics between the joint positions returned by NITE.

For each joint, this vector math generally followed one of two basic premises: either the angle was a rotation angle (shoulder forward or arm twist) or an extension angle (shoulder out or elbow in). In the case of the rotation angles, the math involved projecting the vector of the arm segment in consideration on the plane of rotation, and then calculating the angular difference between the 0° rotation vector on that plane and the projected vector. Then the sign was calculated by dividing the plane into positive and negative regions, and checking to see which side the projected vector was on.

In the second case of extension angles, there was no need to project the arm segment onto another plane, as the plane of rotation simply followed the arm wherever it went. So for these angles, it was simply a matter of getting the angular difference between the 0° vector and the arm segment, and then again dividing the plane into positive and negative regions, and deciding which it was in.

In the end, the main difficulty was orienting the lower arm segments properly for all positions of the upper arm. For example, when the upper arm is angled in towards the chest at a negative angle, the orientation vectors for the lower arm end up inverted, which isn't what should happen. These special cases were detected and handled individually.

At present, when the joint angles are limited to the given ranges, motion is relatively smooth and accurate. Soon I hope to have a filter in place that will smooth things even further, and give warnings when joints are moved into trouble areas or changed too quickly.

## The Aftermath

The whole purpose of this project was to create a simple way to receive the joint angles for a robot with Troy's configuration in order to use them to move and program Troy. The following is a simple example and explanation of how to begin receiving information about Troy's joints:

    #include <QObject>
    #include <QDebug>
    // In addition to these header files, you must also link to Kinect.lib, or include the Qt Kinect source code in your project.
    // The KinectConfig.xml file should also be in the working directory, or somewhere the program can find it when running.
    // It contains the settings to initialize the Kinect. I'll try to remove this dependency soon.
    #include "Kinect/kinect.h"
    #include "Kinect/kinectskeleton.h"
    #include "Kinect/skeleton.h"
    #include "Kinect/troymapper.h"

    class TroyListener : public QObject {
        Q_OBJECT
    public:
        // This is required because of a bug in Qt's handling of namespaces in slots and signals,
        // meaning the names must match pretty much perfectly
        typedef kinect::TroyMapper::Joint Joint;
    public slots:
        void receiveJointAngle(Joint joint, float angle);
    };

    void TroyListener::receiveJointAngle(Joint joint, float angle) {
        // Do whatever with the angle received (send it to Troy, send it to a simulator, filter it, etc.)
        qDebug() << angle;
    }

    // Exists for the same reason as the other typedef (I'm pretty sure this one is also required)
    typedef kinect::TroyMapper::Joint Joint;
    typedef kinect::Skeleton Skeleton;

    int main(int argc, char* argv[]) {
        // Create the Kinect interface
        kinect::Kinect* kinect = new kinect::Kinect();

        // Create the skeletal tracker and add it to the Kinect
        // (the method name for adding components is assumed here)
        kinect::KinectSkeleton* skeleton = new kinect::KinectSkeleton();
        kinect->addComponent(skeleton);

        // Create the TroyMapper
        kinect::TroyMapper* troy = new kinect::TroyMapper();

        // Connect the skeleton tracker to the TroyMapper
        QObject::connect(skeleton, SIGNAL(skeletonChanged(Skeleton*)), troy, SLOT(updateJoints(Skeleton*)));

        // Create a TroyListener
        TroyListener* listener = new TroyListener();

        // Connect the TroyMapper to the TroyListener
        QObject::connect(troy, SIGNAL(jointChanged(Joint, float)), listener, SLOT(receiveJointAngle(Joint, float)));

        // Do whatever while we wait......
        while (true) {}

        // Clean up
        kinect->deleteLater();
        // The Kinect owns the KinectSkeleton, and any other components we add, so we don't need to delete them
        troy->deleteLater();
        listener->deleteLater();

        // And yes, I know that since no Qt event loop is running most of this code wouldn't really work if you compiled it as-is.
        // But it illustrates what you would need to do in a simple application once the Qt framework is already initialized.
        return 0;
    }