PRoNTo dataset
Point-Cloud Shape Retrieval of Non-Rigid Toys - SHREC'17
UPDATES:

→ 16-01-2017:
Registration is open.

→ 16-01-2017:
10 sample models are available now.

→ 23-01-2017:
Test dataset is available now.

→ 25-01-2017:
Registration is closed.

→ 09-02-2017:
Results are available now.

Introduction
University of York

[Figure: point clouds colored by coordinates Y * Z.]

As shown in the figure above, 3D point clouds are the immediate result of 3D scans of real objects. Motivated by the need to compare non-rigid 3D shapes directly from a rough 3D scan, we have created a point-cloud dataset for evaluating how methods perform on the non-rigid point-cloud retrieval task; to our knowledge, it is the first dataset of its kind. In this task, a retrieval method must cope with common scanning problems, mainly caused by object self-occlusions. It is not, however, a part-based retrieval task, because the gross structure of each shape is always present (only fine details are missing). We therefore organize this track to promote the development of non-rigid 3D shape retrieval on point clouds, with the objective of evaluating retrieval methods that can be computed directly and efficiently from point clouds. To this end, we present a dataset scanned by our group that contains 100 non-rigid 3D point clouds with approximately 4K points each.

Scanning problems occur when the laser hits one part of the model first, leaving another part unseen (along the same direction as the scan head). These objects with missing parts are intended to test the robustness of shape signatures against scanning artefacts, since real-time interaction with 3D scanners and scanned objects is expected to become part of everyday life in the near future, as suggested by the growth of Virtual and Augmented Reality and by Microsoft's new 3D platform, which will include many 3D features in its products.

Task details

A query consists of retrieving one model from the dataset, i.e. computing the dissimilarity between the query shape and all other shapes. The task consists of computing the dissimilarity between every pair of point-cloud models in the dataset, which amounts to running every model as a query. The output format is a dissimilarity matrix of size N x N (N = 100 in our case), where position (i, j) gives the dissimilarity between models i and j. The ground truth is the class label of every object. Ideally, shapes from the same class should have zero, or at least the smallest, dissimilarities.
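
As a rough illustration only (not part of the track specification), the Python sketch below shows how such an N x N dissimilarity matrix could be assembled from per-shape descriptor vectors; the descriptor dimensionality, the random placeholder descriptors and the Euclidean metric are assumptions made for the example, not a prescribed method.

import numpy as np
from scipy.spatial.distance import cdist

def dissimilarity_matrix(descriptors):
    """Build an N x N matrix whose entry (i, j) is the distance
    between the descriptors of shapes i and j."""
    # Euclidean distance is only an example; any dissimilarity can be used.
    return cdist(descriptors, descriptors, metric="euclidean")

# Example with random placeholder descriptors for N = 100 shapes:
rng = np.random.default_rng(0)
D = dissimilarity_matrix(rng.standard_normal((100, 64)))
print(D.shape)  # (100, 100); D[i, j] is the dissimilarity of models i and j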

Collection

Our dataset contains 100 different models derived from 10 real objects. Each object was scanned in 10 distinct poses obtained by articulating it around its joints. Objects were scanned with the Head & Face Color 3D Scanner. The point clouds acquired by these scans suffer from missing parts resulting from self-occlusions of the objects. The scanned point clouds were randomly rotated about the X, Y and Z axes, and each contains on average 4K uncolored points. The files are stored in the ASCII Object File Format (*.off) and contain only vertex information.
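
For readers unfamiliar with the format, here is a minimal sketch (an example under stated assumptions, not official reader code) for loading such a vertex-only ASCII OFF file into a NumPy array; it assumes the standard layout of an 'OFF' keyword followed by a vertex/face/edge count line, and the file name in the usage comment is hypothetical.

import numpy as np

def load_off_vertices(path):
    """Read vertex coordinates from an ASCII OFF file, ignoring any faces.
    Assumes the usual layout: an 'OFF' keyword, a 'num_vertices num_faces
    num_edges' line, then one 'x y z' line per vertex."""
    with open(path) as f:
        tokens = f.read().split()
    if tokens[0] != "OFF":
        raise ValueError("not an ASCII OFF file")
    n_vertices = int(tokens[1])  # tokens[2] and tokens[3] are face/edge counts
    coords = np.array(tokens[4:4 + 3 * n_vertices], dtype=float)
    return coords.reshape(n_vertices, 3)

# points = load_off_vertices("model_001.off")  # hypothetical file name
# print(points.shape)  # roughly (4000, 3)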

Evaluation

The evaluation methodology is as follows. We ask participants to submit up to 6 dissimilarity matrices per group. These matrices can be the result of different algorithms or different parameter settings, at the choice of the participant. An example of a dissimilarity matrix file can be seen here. Seven standard evaluation measures will be computed: Precision-Recall curve (PR curve), mean Average Precision (mAP), E-Measure (E), Discounted Cumulative Gain (DCG), Nearest Neighbor (NN), First-Tier (FT) and Second-Tier (ST). More details are given on the Evaluation webpage.
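
All of these measures are derived from the ranking of the other models induced by each row of the dissimilarity matrix. As a rough, unofficial illustration (the track itself relies on the PSB utility code mentioned below), Nearest Neighbor, First-Tier and Second-Tier could be computed along the following lines, assuming an integer class label per model.

import numpy as np

def nn_ft_st(D, labels):
    """Nearest Neighbor, First-Tier and Second-Tier from an N x N
    dissimilarity matrix D and per-model integer class labels.
    Rough illustration only; the track uses the PSB utility code."""
    labels = np.asarray(labels)
    N = len(labels)
    nn = ft = st = 0.0
    for i in range(N):
        order = np.argsort(D[i])
        order = order[order != i]          # drop the query itself
        same = labels[order] == labels[i]  # relevance of each ranked model
        c = same.sum()                     # class size minus the query
        nn += same[0]                      # is the top match from the same class?
        ft += same[:c].sum() / c           # recall within the first tier
        st += same[:2 * c].sum() / c       # recall within the second tier
    return nn / N, ft / N, st / N

# Usage: nn, ft, st = nn_ft_st(D, labels), where labels[i] is the class of model i.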

Procedure

The following list describes the activities to be carried out by the participants:

  • Participants must register by sending a registration email to pronto-group@york.ac.uk with participant names and affiliations. Please register as soon as possible.
  • The complete dataset (test dataset) will be made available on this website when the contest starts. Some models will be made available for download earlier (sample dataset) so that participants can start testing their methods.
  • Participants submit their dissimilarity matrices for the test dataset. As described above, participants can send up to 6 dissimilarity matrices.
  • Participants write a one-page description of their method and submit it to the organizers, ideally in LaTeX format (.tex).
  • Organizers verify the submitted results and evaluate them automatically using the PSB Utility Code and the Evaluation Software from SHREC'15 Track: Non-rigid 3D Shape Retrieval, 3DOR'15, 2015.
  • The organizers release the results of all participants' runs on this page, showing all evaluation measures.
  • The track results are combined into a joint paper, published in the proceedings of the Eurographics Workshop on 3D Object Retrieval.
  • The paper is presented at the Eurographics Workshop on 3D Object Retrieval (23-24 April 2017).

Schedule
January 16, 2017 → Start of registration. Call for participation.
January 16, 2017 → 10 sample models of the database will be released online.
January 23, 2017 → Distribution of the test database. Participants can start the retrieval.
January 25, 2017 → End of registration (25/01/2017 23:59:59 UTC). Please register before this date.
February 7, 2017 → Deadline for participants to submit dissimilarity matrices and a one-page description of their method(s).
February 9, 2017 → Release of the evaluation scores on this website.
February 16, 2017 → Deadline for participants to submit their final one-page descriptions for the proceedings.
February 21, 2017 → Track is finished, and results are ready for inclusion in a track report.
February 28, 2017 → Camera-ready track papers are submitted.
April 23-24, 2017 → SHREC'17 at the EUROGRAPHICS Workshop on 3D Object Retrieval (3DOR).
Organizers

Frederico A. Limberger and Richard C. Wilson

Cite this paper

@inproceedings {3dor.20171056,
booktitle = {Eurographics Workshop on 3D Object Retrieval},
editor = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
title = {{Point-Cloud Shape Retrieval of Non-Rigid Toys}},
author = {Limberger, F. A. and Wilson, R. C. and Aono, M. and Audebert, N. and Boulch, A. and Bustos, B. and Giachetti, A. and Godil, A. and Saux, B. Le and Li, B. and Lu, Y. and Nguyen, H.-D. and Nguyen, V.-T. and Pham, V.-K. and Sipiran, I. and Tatsuma, A. and Tran, M.-T. and Velasco-Forero, S.},
year = {2017},
publisher = {The Eurographics Association},
ISSN = {1997-0471},
ISBN = {978-3-03868-030-7},
DOI = {10.2312/3dor.20171056}
}

Publication

DOI website
Pre-print paper


Department of Computer Science, University of York
Deramore Lane, Heslington, York, YO10 5GH, UK
Tel: +44 1904 325500
Fax: +44 1904 325599