2nd Workshop and Challenge on
Target Re-identification and
Multi-Target Multi-Camera Tracking
In conjunction with CVPR 2019
June 2019, Long Beach
The 1st MTMCT and ReID workshop was successfully held at CVPR 2017.
In the past two years, the MTMCT and ReID community has grown rapidly. As such, we are organizing this workshop
for a second time, aiming to gather the state-of-the-art technologies and brainstorm future directions.
We especially welcome ideas and contributions that embrace the relationship and future of MTMCT and ReID,
two deeply connected domains.
A distinct feature of this workshop is an MTMCT and ReID challenge. The challenge consists of three tracks: multi-target
multi-camera tracking, person ReID with the same train/test domain, and person ReID with distinct train/test domains.
This workshop aims to encourage lively discussions on shaping future research directions for both academia and the industry.
MTMCT and ReID Challenge
This workshop will hold an international challenge on the topics of person re-identification and multi-target multi-camera tracking (MTMCT). There will be three
competition tracks, briefly described below. The challenge webpage is under construction.
Track 1: Multi-Target Multi-Camera Tracking (MTMCT)
In this track, the large-scale training / validation sets will be based on the publicly available DukeMTMC dataset. In testing, participating teams will
have access to our testing set, whose labels are known only to the organizers. This testing set is part of the original DukeMTMC dataset,
so its data distribution is similar to that of the training / validation sets.
Track 2: Person Re-identification with the Same Train / Test Domain
This track will evaluate the scalability of ReID models. Training and validation data will be made available
from the DukeMTMC-reID dataset, which has 16,522 training images of 702 identities, 2,228 query images of another 702 identities,
and 17,661 gallery images (702 identities + 408 distractor identities). For testing, teams will access our testing set, which has a data
distribution similar to that of the training / validation sets. The test set is part of the original DukeMTMC dataset.
Track 3: Person Re-identification with Different Train / Test Domains
To evaluate the generalization ability of algorithms, this track will ask teams to use the publicly available Market-1501 dataset
for training / validation. This dataset has 32,668 annotated bounding boxes of 1,501 identities. The training and validation sets
are roughly equal in size. In testing, the same test set as in Tracks 1 and 2 will be used. Since the training / validation
sets have data distributions different from that of the test set, Track 3 will assess cross-domain generalization ability.
In target re-identification, we define a query as a bounding box of a target-of-interest such as a pedestrian or a vehicle. We define a database
as a collection of image bounding boxes of arbitrary pedestrians or vehicles. Target re-identification aims to find all the database images
of the same target as the query. In multi-target multi-camera tracking, we use videos captured by multiple cameras. This task aims to place tight
bounding boxes around all the targets (e.g. pedestrians). The bounding boxes are partitioned into trajectories, each a set of boxes that bound a
unique target, ordered by time.
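To make the ReID definition above concrete, a typical system embeds the query and every database (gallery) image into a feature space and ranks the gallery by similarity to the query. The following is a minimal sketch, assuming the embeddings already exist; the feature extractor, the function name, and the toy vectors are all illustrative, not part of any challenge protocol:

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery images by cosine similarity to the query embedding.

    query_feat: 1-D feature vector for the query bounding box.
    gallery_feats: 2-D array, one feature vector per gallery bounding box.
    Returns gallery indices ordered from best to worst match.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    similarity = g @ q               # cosine similarity per gallery image
    return np.argsort(-similarity)   # most similar first

# Toy example: 4-dimensional embeddings, 3 gallery images (hypothetical values).
query = np.array([1.0, 0.0, 0.0, 0.0])
gallery = np.array([
    [0.9, 0.1, 0.0, 0.0],   # near-duplicate of the query
    [0.0, 1.0, 0.0, 0.0],   # different identity
    [0.7, 0.3, 0.0, 0.0],   # same identity, different view
])
print(rank_gallery(query, gallery))  # → [0 2 1]
```

In practice the embeddings come from a learned model, and the same ranking step underlies the data association in MTMCT, where detections are linked into time-ordered trajectories by appearance similarity.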
In this full-day workshop, we will have invited speakers, poster sessions, oral presentations, as well as a summary of the challenge.
Call for Papers
To submit a new paper to the workshop, please do so through the CMT Website (link TBD).
We encourage authors to explore the connections between the fields of ReID and MTMCT, as well as novel ideas. Examples of such questions are:
- How to define and improve the scalability of an MTMCT or ReID system?
- How to deal with large-scale indexing and optimization in ReID and MTMCT?
- How much do initial detections influence performance in MTMCT or ReID?
- How to improve the generalization ability of an MTMCT or ReID system?
- How and which ReID descriptors can be integrated into MTMCT systems?
- What can we learn by evaluating an MTMCT system in terms of ReID (and vice-versa)?
- How can ReID and MTMCT benefit each other?
- How can MTMCT and ReID capitalize on recent large-scale datasets?
- Do semantic attributes help in matching identities in ReID and MTMCT?
Ergys Ristani (Duke University)
Liang Zheng (Australian National University)
Xiatian Zhu (Vision Semantics Limited)
Jingdong Wang (Microsoft Research)
Shiliang Zhang (Peking University)
Shaogang Gong (Queen Mary University of London)
Qi Tian (Noah's Ark Lab, Huawei)
Carlo Tomasi (Duke University)
Richard Hartley (Australian National University)
Program committee members: Dapeng Chen, Wei-Shi Zheng, Francesco Solera, Weiyao Lin, Giuseppe Lisanti, Slawomir Bak,
Eyasu Zemene Mequanit, Li Zhang, Elyor Kodirov, Ying Zhang, Mang Ye, Zhedong Zheng, Zhun Zhong, Yonatan Tariku Tesfaye.
Contact
For any information, please send an e-mail to Ergys Ristani, Xiatian Zhu, Liang Zheng and Jingdong Wang.