[robotics-worldwide] [journals] Call for Papers (Submission Deadline Extension): IEEE RA-Letters Special Issue on Benchmarking Protocols in Robotic Manipulation

Calli, Berk bcalli at wpi.edu
Sun Jul 21 14:59:29 PDT 2019


Dear colleagues,

***We have extended the deadline for submissions to August 15th. This is the final deadline; there won’t be any other extensions.***

We would like to invite you to contribute to a new special issue in IEEE Robotics and Automation Letters (RA-L) on “Benchmarking Protocols in Robotic Manipulation”. The submission deadline is August 15th. You can find detailed information below, and at the IEEE RA-Letters website: 


Special Issue Editors:
Berk Calli, Worcester Polytechnic Institute (bcalli at wpi.edu)
Aaron Dollar, Yale University (aaron.dollar at yale.edu)
Maximo Roa, DLR - German Aerospace Center (maximo.roa at dlr.de)
Sidd Srinivasa, University of Washington (siddh at cs.uw.edu)
Yu Sun, University of South Florida (yusun at mail.usf.edu)



Benchmarks are crucial for analyzing the effectiveness of an approach against a common basis, providing a quantitative means for interpreting performance. Carefully designed and widely recognized benchmarks encourage the research community to focus on certain key research challenges, promote competition, foster a climate for novel solutions, and thereby contribute dramatically to the advancement of a field. While some robotics-related fields (such as object recognition and segmentation) actively utilize benchmarks, there are essentially no robotic manipulation benchmarks that are widely adopted by the research community, despite their widely acknowledged necessity.

Discussions within the robot manipulation research community, via a number of workshops and similar meetings, have identified some primary obstacles to the development and adoption of benchmarking procedures in our field, including:

- Lack of communication and agreement among researchers on the standards and characteristics of a benchmark
- Lack of widely utilized data sets that target manipulation research
- Lack of a reputable and central venue to distribute the benchmarks
- Lack of professional rewards to encourage researchers to develop and utilize benchmarks

This special issue seeks to help break some of these barriers by encouraging collaborations among different research groups, encouraging the use of existing data sets, and boosting the visibility and dissemination of benchmarking procedures via a reputable publishing venue.


This special issue is dedicated to papers that propose and demonstrate novel and widely useful benchmarking protocols for robotic manipulation research. Submitted papers should focus on describing well-defined experimental procedures that are ready to be applied by other researchers in similar topic areas for quantifying the performance of research approaches in robotic manipulation or its sub-fields, including but not limited to:

- Manipulation planning (e.g. performance of grasp planners)
- Mechanism design (e.g. performance of robotic hands)
- Machine learning (e.g. learning manipulation abilities)
- Cognitive robotics (e.g. task representations)
- Etc.

One of the primary challenges in developing effective benchmarking procedures is striking a balance between specificity and generality. High-level system performance metrics (such as those used in the Amazon Picking Challenges) can be applied by the widest range of research groups, but tell little about the performance of the specific components of the approaches being used: was the good or bad performance due to the hardware design, the perception system, or the planning approach? At the opposite end of the spectrum, a very narrowly designed evaluation procedure that specifies the hardware platform and many software subsystems might speak very specifically to the effectiveness of a particular grasp planner, for instance, but might not generalize, and would only be used by researchers with that particular combination of subsystems available to them. It is therefore left to the authors of proposed benchmarking procedures to find a suitable middle ground that provides sufficient quantitative evaluation of specific research approaches while enabling as many researchers as possible to implement them.
In addition, authors of submitted papers are highly encouraged to:

- work in collaboration with multiple research groups from similar areas to develop benchmarking procedures (to avoid overfitting to a particular approach and to boost the overall impact)
- make use of existing published data sets when possible (e.g. standard objects and models), unless they are specifically inappropriate
- utilize the provided templates to detail the protocol (procedure and constraints) and benchmark (reporting of results)
- provide multimedia files that illustrate and demonstrate the protocol
- report baseline experimental results obtained when applying the protocol on their own system(s)

Please refer to the templates and their explanations in the following website for the expected aspects to be addressed: 



Call for papers announced: February 1st, 2019
Submission opens: July 15th, 2019
Submission closes: August 15th, 2019
First Decision: November 9th, 2019 (at the latest)
Final Decision: January 13th, 2020 (at the latest)
Special Issue publication: Papers on IEEE Xplore by February 11th, 2020 (at the latest); the Special Issue introduction will follow within one month.
