[robotics-worldwide] ICRA 2008 Tutorial: Human-Robot Interaction: Conducting User Evaluations

Julie A. Adams julie.a.adams at vanderbilt.edu
Sun Jan 13 08:57:33 PST 2008


Human-Robot Interaction: Conducting User Evaluations

Full-day Tutorial 
2008 IEEE International Conference on Robotics and Automation (ICRA)

Greg Trafton (Naval Research Laboratory) and Julie A. Adams (Vanderbilt University)
May 20th, 2008 
Pasadena, California

Overview
The past five years have seen substantial growth in human-robot interaction (HRI) research across AI, robotics, and cognitive science. The focus of this emerging field is how people interact with the robot itself. Presenting sound HRI results frequently requires that systems be evaluated with an appropriate experiment. Designing experiments has traditionally been the province of those with training in psychology and/or human factors. As engineers and roboticists become involved in developing systems that require evaluation, it is important that they conduct sound user evaluations that yield valid results. 

The design of an experiment is critical to obtaining the required results. The initial step is determining what research question to answer; the proper definition of the research question drives the design of the experiment. Once the question is defined, a number of experimental factors must be specified: what is manipulated, what is measured, the predictions, the basic design (e.g., within- or between-subjects), the number of participants, the characteristics the participants should possess, the analysis methods, and so on. 
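
For illustration, the brief Python sketch below shows how one such factor, the number of participants, might be estimated with a standard power analysis. The effect size, alpha, and power values are hypothetical placeholders, and the statsmodels library is assumed to be available.

    # Estimate participants per condition for a between-subjects design
    # (hypothetical values; statsmodels assumed available).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Assume a medium expected effect (Cohen's d = 0.5) with the
    # conventional alpha = 0.05 and power = 0.80.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print("Participants needed per condition: %d" % round(n_per_group))  # ~64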

One critical component of designing an experiment within HRI is the decision to use simulated or real robots. This decision can affect the generalizability of the results: the advantages and disadvantages of high-fidelity simulations versus actual physical robots must be taken into account. When conducting user evaluations with real robots, particularly groups of robots, there are many factors to consider, and the experimenter must be prepared to handle them. Our tutorial will address these topics and many more, including: 

· Creating a research question (much trickier than most people think).

· Determining whether to use simulated robots or real robots (with a focus on what types of questions can be answered with simulations).

· Determining what will be manipulated (with a focus on internal and external validity and types of designs).

· Determining when a field experiment vs. a controlled laboratory evaluation is appropriate.

· Determining what will be measured (basics like reaction time and accuracy as well as more complicated measures like eye-tracking and protocol analysis).

· Determining what the predictions are (focused on theory-based predictions).

· Determining who the subjects will be (ease of recruitment vs. ecological validity).

· Determining how to analyze the experiment (not an in-depth tutorial on statistics, but focused on exploratory data analysis and very simple statistics like t-tests; see the sketch after this list).

· Determining the practical evaluation considerations when using real robots (focused on the common control problems that can introduce evaluation confounds).
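
As a concrete illustration of the analysis point above, the short Python sketch below runs an independent-samples t-test on made-up task-completion times from two conditions. The data, condition names, and measure are hypothetical, and the scipy library is assumed to be available.

    # Compare hypothetical task-completion times (seconds) between a
    # simulated-robot condition and a real-robot condition
    # (made-up data; scipy assumed available).
    from scipy import stats

    simulated_robot = [42.1, 39.5, 45.0, 41.2, 38.8, 44.3]
    real_robot = [48.9, 51.2, 47.5, 50.1, 46.8, 52.0]

    t_stat, p_value = stats.ttest_ind(simulated_robot, real_robot)
    print("t = %.2f, p = %.4f" % (t_stat, p_value))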

In addition to these large topics, a number of smaller topics will be covered (e.g. IRB approval and the advantages of multiple experiments). Each primary topic will include a background lecture and an associated hands-on activity.

This will be a working tutorial, and participants are required to bring a research question they want answered through experimental design (e.g., "I want to know whether my system that performs spatial perspective-taking improves collaboration"). We will work with several research questions and design experiments around them. Participants will be grouped by shared interests (e.g., situation awareness, manipulators, autonomy, the physical appearance of the robot). Each group will work together throughout the design process to collaboratively create a plan for an experiment.

Important dates
TBD, 2008: Last day to register for this tutorial 
May 20, 2008: Tutorial 
May 21-23, 2008: ICRA 2008 Conference 

Participation
Who should attend? Anyone who wants to learn the basics of experimental design for conducting HRI evaluations. There are no prerequisites. A shorter version of this tutorial was offered at HRI07 with attendees from diverse backgrounds (computer scientists, mechanical engineers, electrical engineers, HCI researchers). 

The number of attendees is limited to 30. 

Tutorial Organizer Bios and Contact Information
Greg Trafton
Intelligent Systems Section
Naval Research Laboratory 
Washington, DC USA

E-mail: trafton 'at' itd.nrl.navy.mil

Greg Trafton is section head of the Intelligent Systems Section at the Naval Research Laboratory in Washington, DC. He is a cognitive scientist with interests in HRI, interruptions/resumptions, and the cognition of complex visualizations. Greg received his BS in computer science (second major in psychology) from Trinity University and his Ph.D. in cognitive psychology from Princeton University. 

Julie A. Adams
Electrical Engineering and Computer Science Department
Vanderbilt University 
Nashville, TN USA

E-mail: julie.a.adams 'at' vanderbilt.edu

Julie A. Adams is an Assistant Professor of Computer Science and Computer Engineering in the Electrical Engineering and Computer Science department at Vanderbilt University. She conducts research in human-robot interaction and distributed algorithms for multiple robotic systems. She previously worked in human factors for Honeywell, Inc. and the Eastman Kodak Company. Julie received her BS in Computer Science and BBA in Accounting from Siena College and her MSE and Ph.D. in Computer and Information Sciences from the University of Pennsylvania. 

 

Julie A. Adams, Ph.D.
Assistant Professor of Computer Science and Computer Engineering
Electrical Engineering and Computer Science Department
Vanderbilt University
Nashville, TN 37212

Email: julie.a.adams at vanderbilt.edu
Phone: (615) 322 - 8481
FAX: (615) 343 - 5459




