Current Research

We are designing tools that help therapists use robots in therapy for children with autism. Robot animation is one task therapists would find useful, but traditional animation tools require extensive training. We plan to simplify humanoid character animation by capturing motion with the Kinect sensor and providing a full-body tracking user interface for editing the captured animations.
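
To illustrate the capture-and-retarget idea, here is a minimal Python sketch. It is not our implementation: the single elbow joint, the joint names, and the servo limits are all hypothetical, and a real system would map the full Kinect skeleton onto the robot's kinematics.

<code python>
# Hypothetical sketch: derive one robot joint angle per captured frame
# from Kinect skeleton positions (elbow flexion from shoulder/elbow/wrist).
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in radians, from three 3D joint positions."""
    upper = np.asarray(shoulder) - np.asarray(elbow)
    fore = np.asarray(wrist) - np.asarray(elbow)
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def retarget(frames, joint_limits=(0.0, 2.6)):
    """Map captured elbow angles onto a robot servo, clamped to its limits."""
    lo, hi = joint_limits
    return [min(max(elbow_angle(*f), lo), hi) for f in frames]

# One captured frame: shoulder, elbow, wrist positions in meters.
frames = [((0.0, 1.4, 0.0), (0.0, 1.1, 0.0), (0.2, 1.1, 0.2))]
print(retarget(frames))
</code>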

Prior Research Ideas

We discovered prior work that is very similar, so we are not pursuing this line of research further.

Affordance annotations for complex robotic manipulation.

From previous research we learned that constructing a detailed and accurate 3D model of a robot's surroundings is difficult with present technology. Others have obtained much better 3D models than we have, but doing so takes a tremendous amount of work and can depend heavily on the environment. We would like to use an accurate 3D model to enable more autonomy on the robot.

Instead of algorithmically improving the 3D model, we use annotations: a user places icons in an augmented virtuality user interface to indicate the positions of objects of interest. Essentially, we use humans to provide scene understanding. Once position markers are in place, the user assigns actions to each object; for example, a “put that there” action is a common and straightforward manipulation. With actions assigned to the virtual markers, the robot can carry out the plan. A sketch of this annotation structure appears below.
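
As a rough Python sketch of the annotation model (the class and function names are illustrative, not the actual interface code), a marker pairs a world position with a list of assigned actions:

<code python>
# Hypothetical sketch of the annotation model: position markers plus
# per-object actions, with "put that there" as pick-and-place.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A user-placed marker indicating an object of interest."""
    name: str
    position: tuple          # (x, y, z) in the world frame, meters
    actions: list = field(default_factory=list)

def put_that_there(source: Annotation, target_position: tuple):
    """Assign a pick-and-place action: grasp at source, release at target."""
    source.actions.append(("grasp", source.position))
    source.actions.append(("place", target_position))

block_a = Annotation("block_a", (0.4, 0.0, 0.05))
put_that_there(block_a, (0.4, 0.2, 0.05))
# The robot would then execute block_a.actions in order.
print(block_a.actions)
</code>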

Another component of the interface is haptic feedback. While positioning annotations, haptic feedback warns the user when they attempt to place an object in a way that violates basic constraints, such as below the ground plane or inside an obstacle. Feedback can also help in positioning objects relative to each other, such as stacking one exactly on top of another. A sketch of such constraint checks follows.
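
A minimal sketch of the kinds of checks that could trigger haptic warnings, assuming box-shaped blocks and axis-aligned box obstacles (both simplifications for illustration):

<code python>
# Hypothetical constraint checks: below the ground plane, or overlapping
# an obstacle. Obstacles are (x, y, z, half_extent) axis-aligned boxes.
def violates_constraints(position, half_extent, obstacles, ground_z=0.0):
    """Return True if a box-shaped annotation at `position` is invalid."""
    x, y, z = position
    if z - half_extent < ground_z:          # below the ground plane
        return True
    for (ox, oy, oz, ohalf) in obstacles:   # overlaps an obstacle box
        if (abs(x - ox) < half_extent + ohalf and
                abs(y - oy) < half_extent + ohalf and
                abs(z - oz) < half_extent + ohalf):
            return True
    return False

def snap_on_top(base_position, base_half, half_extent):
    """Snap position for stacking exactly on top of another block."""
    bx, by, bz = base_position
    return (bx, by, bz + base_half + half_extent)

print(violates_constraints((0.0, 0.0, 0.01), 0.03, []))  # True: below ground
print(snap_on_top((0.4, 0.2, 0.03), 0.03, 0.03))         # (0.4, 0.2, 0.09)
</code>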

thumbnail|Example structures that could be used for a block stacking task. The bottom row has extremely difficult structures that likely require multiple manipulators.

We use a challenging block stacking task to evaluate affordance annotations. The user must stack six wooden blocks into several different structures, including towers, pyramids, and arches. The image to the right shows some sample structures, increasing in difficulty from left to right and top to bottom. An additional task is observation: the blocks have letters, numbers, and pictures of simple objects on their sides, and the user must place each block in the correct position and orientation within the structure. Some blocks may initially be oriented so that the required face is hidden from view, requiring the operator to rotate the block to find the appropriate face. Action annotations can be used to rotate blocks and then position them to build the structure; a sketch of such a rotate action follows.
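
A hypothetical sketch of a rotate action: track which labeled face of a block points in each direction and roll the block in 90-degree steps until the desired face is visible. The face bookkeeping here is illustrative; the real interface works on the 3D annotation.

<code python>
# Hypothetical rotate action: roll a labeled block about the x axis until
# a hidden face becomes visible to the camera (the "front" face).
def rotate_about_x(faces):
    """One 90-degree roll: up -> front -> down -> back -> up."""
    f = dict(faces)
    f["front"], f["down"], f["back"], f["up"] = (
        faces["up"], faces["front"], faces["down"], faces["back"])
    return f

def expose(faces, label):
    """Roll the block until `label` faces the camera."""
    for _ in range(4):
        if faces["front"] == label:
            return faces
        faces = rotate_about_x(faces)
    raise ValueError(f"{label} is not reachable by rolling about x")

block = {"up": "A", "down": "B", "front": "C", "back": "D",
         "left": "E", "right": "F"}
print(expose(block, "B")["front"])  # "B"
</code>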

Video notes: This is a rough prototype of the affordance annotations interface as described.


The environment contains two wooden blocks, and the goal is to stack one block on top of the other. First, the user places a block annotation where a block is in the real world, then commands the robot arm to move to that location and grasp the block. Second, an annotation is placed for the second block. Third, another annotation is placed where the grasped block should go (on top of the second block). Fourth, the user commands the robot arm to move the block to that position and release it. The sketch below mirrors these four steps.
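
A hypothetical sketch mirroring the four steps above, reusing the Annotation and snap_on_top helpers from the earlier sketches; ArmStub stands in for the real arm's command interface:

<code python>
# Hypothetical pick-and-place sequence; positions are illustrative.
class ArmStub:
    """Stand-in for the real arm's command interface (hypothetical)."""
    def move_to(self, position):
        print("move_to", position)
    def grasp(self):
        print("grasp")
    def release(self):
        print("release")

def stack_blocks(arm):
    # Step 1: annotate the first block, move there, and grasp it.
    block1 = Annotation("block1", (0.45, -0.10, 0.03))
    arm.move_to(block1.position)
    arm.grasp()
    # Step 2: annotate the second block.
    block2 = Annotation("block2", (0.45, 0.10, 0.03))
    # Step 3: annotate the release point, directly on top of block 2.
    release_point = snap_on_top(block2.position, 0.03, 0.03)
    # Step 4: move the grasped block to the release point and let go.
    arm.move_to(release_point)
    arm.release()

stack_blocks(ArmStub())
</code>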

The top-left view is the user interface, the top-right view shows a live video of the arm and operator, the bottom-left view shows the gripper-camera video (which is not always synchronized with the other views), and the bottom-right view shows the video from the stereo camera.

Previous Research

Ecological Augmented Virtuality interfaces for mobile manipulation robots.

Remote and mobile robotic manipulation is difficult primarily because of the limited data provided by sensors. Operators have difficulty understanding a situation because of imperfect information and difficult-to-interpret data representations. As a result, mental workload is high and performance decreases. We are working on an ecological augmented virtuality interface to lessen workload and increase situation awareness without loss in performance. Experimental results show that the visualization reduces operator mental workload and increases situation awareness, although it also slows performance.

Near-future work includes evaluating head tracking and a haptic controller.

Images: 2009 augmented virtuality manipulation user interface; 2009 augmented virtuality manipulation user interface with video overlay; 2008 augmented virtuality manipulation user interface; robot arm for user studies.

Video (Xvid format, no audio, 2:36, 20 MB)

Research Interests

Affordances: A good model of physical affordances can help robots generalize behaviors to novel objects and situations. Similarly, social affordances can help robots interact with humans in a way that appears more human-like.

Assistive Robotics: Enabling therapists of children with learning disabilities to program robots. Robots are currently difficult to program, and simpler tools, such as Lego NXT, are either platform-specific or too general-purpose for therapists' needs. We can develop a tool that enables therapists to choreograph and design robot behaviors, allowing them to customize therapy for individuals and to evolve the robot's behaviors as each child's needs evolve.

Multiple manipulators: Coordinating multiple robotic manipulators to perform cooperative tasks, especially in unstructured environments.

Publications

J. A. Atherton and M. A. Goodrich. Visual Robot Choreography for Clinicians. In Proceedings of the Conference on Collaborative Technologies and Systems (CTS), Philadelphia, PA, 2011. Presentation slides.

J. A. Atherton and M. A. Goodrich. Perception by Proxy: Humans Helping Robots to See in a Manipulation Task. In Proceedings of the 6th International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 2011.

N. Giullian, D. Ricks, J. A. Atherton, M. Colton, M. Goodrich, and B. Brinton. Detailed Requirements for Robots in Autism Therapy. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics (SMC), Istanbul, Turkey, 2010.

J. A. Atherton and M. A. Goodrich. Supporting remote manipulation with an ecological augmented virtuality interface. In Proceedings of the AISB Symposium on New Frontiers in Human-Robot Interaction, Edinburgh, Scotland, 2009.

J. A. Atherton, B. Hardin, and M. A. Goodrich. Coordinating a multi-agent team using a multiple perspective interface paradigm. In Proceedings of the AAAI 2006 Spring Symposium: To Boldly Go Where No Human-Robot Team Has Gone Before, Stanford University, California, March 27-29, 2006.
