Benchmarking

Benchmarking Issues

"It was a well-known fact that the current practice of publishing research results in robotics made it extremely difficult not only to compare results of different approaches, but also to asses the quality of the research presented by the authors. Though for pure theoretical articles this may not be the case, typically when researchers claim that their particular algorithm or system is capable of achieving some performance, those claims are intrinsically unverifiable, either because it is their unique system or just because a lack of experimental details, including working hypothesis.

Papers published in robotics journals and generally considered good often would not meet the minimum requirements of domains in which good practice calls for a detailed section describing the materials and experimental methods that support the authors' claims. This is, of course, partly due to the very nature of robotics research: reported results are tested by solving a limited set of specific examples in different types of scenarios, using different underlying software libraries and incompatible problem representations, implemented by different people on different hardware, including computers, sensors, arms, grippers..."

http://www.robot.uji.es/EURON/en/index.htm

Related

http://wiki.robot-standards.org/index.php/Benchmarks
