[robotics-worldwide] [meetings][CfP] CfP to RSS2020 Workshop on Advances & Challenges in Imitation Learning for Robotics

Yuchen Cui yuchencui at utexas.edu
Mon Mar 9 12:43:00 PDT 2020


*Call for Papers*

*Advances & Challenges in Imitation Learning for Robotics*
RSS 2020 Full-Day Workshop
July 12, 2020
Oregon State University at Corvallis, Oregon, USA

Website: https://sites.google.com/utexas.edu/rss-2020-imitation-learning


*Overview*

As robots and other intelligent agents increasingly address complex
problems in unstructured settings, programming their behavior is becoming
more laborious and expensive, even for domain experts. It is frequently
easier to demonstrate a desired behavior than to engineer it manually.
Imitation learning seeks to enable the learning of behaviors from fast
and natural inputs such as task demonstrations and interactive corrections.

However, human-generated time-series data is often difficult to interpret,
requiring the ability to segment activities and behaviors, understand
context, and generalize from a small number of examples. Recent advances in
imitation learning algorithms for both behavior cloning and inverse
reinforcement learning, especially methods based on training deep neural
networks, have enabled robots to learn a wide range of tasks from humans
under relaxed assumptions. Nevertheless, real-world robotics tasks still
pose challenges for many of these algorithms, and it is important for
researchers, as a community, to identify the greatest challenges facing
imitation learning for robotics.

*Topics of Interest*

This workshop will bring together area experts and student researchers to
discuss the advances that have been made in the field of imitation learning
for robotics and the major challenges for future research efforts.

The topics to be discussed include:

   - Interactive Imitation Learning
   - Multi-modal Imitation Learning
   - Deep Inverse Reinforcement Learning and Optimal Control
   - Cognitive Models for Learning from Demonstration and Planning
   - One/Few-shot Imitation Learning
   - Learning by Observing Third-Person Demonstrations
   - Learning from Non-Expert Demonstrations

We invite several types of contributions:

   - Full length paper: 8 pages max (excluding citations)
   - Position paper: 8 pages max (excluding citations)
   - Short paper: 4 pages max (excluding citations)

*Important Dates*

   - Submission deadline for papers: April 9, 2020
   - Notification of acceptance: April 16, 2020
   - Camera-ready version: June 15, 2020
   - Workshop Day: July 12, 2020

*Invited Speakers*

   - Anca Dragan, University of California Berkeley
   - Maya Cakmak, University of Washington
   - Yuke Zhu, University of Texas at Austin & NVIDIA Research
   - Byron Boots, University of Washington & NVIDIA Research
   - Abhinav Gupta, Carnegie Mellon University & Facebook AI Research
   - Oliver Kroemer, Carnegie Mellon University

*Organizers*

   - Akanksha Saran (asaran at cs.utexas.edu), University of Texas at Austin
   - Yuchen Cui (yuchencui at utexas.edu), University of Texas at Austin
   - Nick Walker (nswalker at cs.washington.edu), University of Washington
   - Andreea Bobu (abobu at berkeley.edu), University of California, Berkeley
   - Ajay Mandlekar (amandlek at stanford.edu), Stanford University
   - Danfei Xu (danfei at cs.stanford.edu), Stanford University
   - Scott Niekum (sniekum at cs.utexas.edu), University of Texas at Austin
