ReID and MTMCT


Dates:
Paper submission deadline: 2019 March 26
Final decisions to authors: 2019 April 6
Camera ready deadline: 2019 April 10

Challenge kick-off: 2019 March 1
Testing data release: 2019 May 15
Result submission deadline: 2019 June 1
Code release deadline: 2019 June 9


2nd Workshop and Challenge on

Target Re-identification and
Multi-Target Multi-Camera Tracking


In conjunction with CVPR 2019
June 2019, Long Beach, California

The 1st MTMCT and ReID workshop was successfully held at CVPR 2017. In the past two years, the MTMCT and ReID community has grown rapidly. We are therefore organizing this workshop for a second time, aiming to gather state-of-the-art technologies and brainstorm future directions. We especially welcome ideas and contributions that embrace the relationship and future of MTMCT and ReID, two deeply connected domains. A distinct feature of this workshop is an MTMCT and ReID challenge, which consists of three tracks: multi-target multi-camera tracking, person ReID with the same train/test domains, and person ReID with distinct train/test domains. This workshop aims to encourage lively discussion on shaping future research directions for both academia and industry.



MTMCT and ReID Challenge

This workshop will hold an international challenge on the topics of person re-identification and multi-target multi-camera tracking (MTMCT). There will be three competition tracks which are briefly described below. The challenge rules are still being updated.

Track 1: Multi-Target Multi-Camera Tracking (MTMCT)
This track aims to further push the state of the art in multi-target multi-camera tracking. It features videos from 8 synchronized, disjoint cameras recorded on the Duke University campus, split into training/validation and test data. The goal is to correctly track people within and across cameras, with emphasis on correct identification. All participants are encouraged to submit their results to the DukeMTMCT Challenge on MOTChallenge. The entry with the highest multi-camera IDF1 score on the test-hard sequence will be selected as the winner. Sample code from the DeepCC tracker is available.
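For readers unfamiliar with the winning criterion, IDF1 is the harmonic mean of identification precision and identification recall, computed from identity-level true positives, false positives, and false negatives (how those counts are obtained from an optimal truth-to-hypothesis ID matching is defined by the benchmark; the numbers below are made up for illustration):

```python
def idf1(idtp: int, idfp: int, idfn: int) -> float:
    """IDF1 = 2*IDTP / (2*IDTP + IDFP + IDFN), the harmonic mean of
    ID precision (IDTP / (IDTP + IDFP)) and ID recall (IDTP / (IDTP + IDFN))."""
    denom = 2 * idtp + idfp + idfn
    return 2 * idtp / denom if denom else 0.0

# Hypothetical counts: 900 correctly identified detections,
# 50 identity false positives, 150 identity false negatives.
print(idf1(900, 50, 150))  # → 0.9 (= 1800 / 2000)
```

A tracker can therefore score well on IDF1 only if it keeps identities consistent across cameras, not merely if it detects people reliably.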


Track 2: Person Re-identification with the Same Train / Test Domain
This track evaluates the scalability of ReID models. The setup consists of person images from the DukeMTMC dataset, split into a training/validation set and a test set. For each query image, participating entries are expected to rank the gallery set in decreasing order of similarity, so that highly ranked images depict the same identity as the query. The method with the highest mAP score will be selected as the winner; rank-1 accuracy will be used as a tie-breaker if necessary.
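The two evaluation numbers can be sketched as follows: given each query's ranked gallery as a list of match flags (1 = same identity), mAP averages per-query average precision, and rank-1 accuracy is the fraction of queries whose top-ranked image is a correct match. The example data here is hypothetical, not from the challenge:

```python
def average_precision(ranked_matches):
    """AP for one query: ranked_matches holds one 0/1 flag per gallery
    image, in ranked order (1 = same identity as the query)."""
    hits, precisions = 0, []
    for rank, is_match in enumerate(ranked_matches, start=1):
        if is_match:
            hits += 1
            precisions.append(hits / rank)  # precision at each correct hit
    return sum(precisions) / hits if hits else 0.0

def mean_ap_and_rank1(all_ranked_matches):
    """mAP over all queries, plus rank-1 accuracy."""
    aps = [average_precision(m) for m in all_ranked_matches]
    rank1 = sum(m[0] for m in all_ranked_matches) / len(all_ranked_matches)
    return sum(aps) / len(aps), rank1

# Two hypothetical queries: the first ranks its two matches at positions
# 1 and 3; the second ranks its single match at position 2.
mAP, r1 = mean_ap_and_rank1([[1, 0, 1, 0], [0, 1, 0, 0]])
print(round(mAP, 4), r1)  # → 0.6667 0.5
```

mAP rewards placing all co-identity images near the top, while rank-1 only checks the single best match, which is why the former is the primary criterion here.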


Track 3: Person Re-identification with Different Train / Test Domains
To evaluate the generalization ability of algorithms, this track asks teams to use the publicly available Market-1501 dataset for training and validation. The test set consists of person images from DukeMTMC with withheld labels (to be released). For each query image in the test set, participating entries are expected to rank the gallery set in decreasing order of similarity, so that highly ranked images depict the same identity as the query. The method with the highest mAP score will be selected as the winner; rank-1 accuracy will be used as a tie-breaker if necessary.



Call for papers

In target re-identification, we define a query as a bounding box of a target of interest, such as a pedestrian or a vehicle, and a database as a collection of bounding boxes of arbitrary pedestrians or vehicles. Target re-identification aims to find all database images that contain the same target as the query. In multi-target multi-camera tracking, we use videos captured by multiple cameras. This task aims to place tight bounding boxes around all the targets (e.g. pedestrians) and partition those boxes into trajectories: sets of boxes that each bound a unique target, ordered by time. In this full-day workshop, we will have invited speakers, poster sessions, oral presentations, and a summary of the challenge. We encourage authors to explore the connections between the fields of ReID and MTMCT as well as novel ideas.
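The retrieval step at the heart of this definition is simply nearest-neighbor ranking in a feature space. A minimal sketch, assuming each image has already been embedded into a feature vector (in practice by a learned model; the toy 2-D features below are invented for illustration), ranks the database by cosine similarity to the query:

```python
import math

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by decreasing cosine similarity
    to the query feature vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a))
                      * math.sqrt(sum(x * x for x in b)))
    sims = [cos(query_feat, g) for g in gallery_feats]
    return sorted(range(len(gallery_feats)), key=lambda i: -sims[i])

# Toy example: gallery item 1 is most similar to the query, item 0 least.
query = [1.0, 0.0]
gallery = [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
print(rank_gallery(query, gallery))  # → [1, 2, 0]
```

Everything interesting in a ReID method lives in how the embedding is learned; once features are fixed, evaluation reduces to this ranking.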

Submission

To submit a new paper to the workshop, please use the CMT website. Workshop paper submissions should follow the same format as the main conference; refer to the CVPR 2019 author guidelines for details.

Invited speakers

Rama Chellappa
University of Maryland


Kristen Grauman
Facebook


Ying Wu
Northwestern University


Liang Wang
Institute of Automation, Chinese Academy of Sciences



People involved

Organizers:

Ergys Ristani (Duke University)
Liang Zheng (Australian National University)
Xiatian Zhu (Vision Semantics Limited)
Jingdong Wang (Microsoft Research)
Shiliang Zhang (Peking University)
Shaogang Gong (Queen Mary University of London)
Qi Tian (Noah's Ark Lab, Huawei)
Carlo Tomasi (Duke University)
Richard Hartley (Australian National University)

Program committee members: Dapeng Chen, Wei-Shi Zheng, Francesco Solera, Weiyao Lin, Giuseppe Lisanti, Slawomir Bak, Eyasu Zemene Mequanit, Li Zhang, Elyor Kodirov, Ying Zhang, Mang Ye, Zhedong Zheng, Zhun Zhong, Yonatan Tariku Tesfaye, Wenhan Luo, Srikrishna Karanam, Hanxiao Wang.

Contact

For any information, please send an e-mail to Ergys Ristani, Xiatian Zhu, Liang Zheng, or Jingdong Wang.