
Try SiamMOT demo
Please refer to INSTALL.md for installation instructions. For demo purposes, we provide two tracking models: one that tracks person (visible part) and one that jointly tracks person and vehicles (bus, car, truck, motorcycle, etc.). The person tracking model is trained on COCO-17 and CrowdHuman, while the latter model is trained on COCO-17 and VOC12. Currently, both demo models use EMM as their motion model, which performs best among the alternatives. In order to run the demo, use the following command:
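The exact command is not included in this excerpt. As an illustration only, a demo invocation might look like the sketch below; the script path (`demos/demo.py`), flag names, and values are assumptions, so consult the repository for the actual command:

```shell
# Hypothetical invocation sketch -- the script path and flags here are
# assumptions for illustration, not taken from this excerpt.
VIDEO=input_video.mp4       # video to run tracking on
TRACK_CLASS=person          # or the joint person/vehicle model
CMD="python demos/demo.py --demo-video $VIDEO --track-class $TRACK_CLASS --dump-video True"
echo "$CMD"
```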
Amazon truck tracking 720p
We carry out extensive quantitative experiments on three different MOT datasets: MOT17, TAO-person and Caltech Roadside Pedestrians, showing the importance of motion modelling for MOT and the ability of SiamMOT to substantially outperform the state of the art. SiamMOT also outperforms the winners of the ACM MM'20 HiEve Grand Challenge on the HiEve dataset. Moreover, SiamMOT is efficient: it runs at 17 FPS for 720p videos on a single modern GPU.

SiamMOT includes a motion model that estimates each instance's movement between two frames so that detected instances can be associated. To explore how motion modelling affects its tracking capability, we present two variants of the Siamese tracker: one that models motion implicitly and one that models it explicitly.
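The role the motion model plays in association can be sketched as follows. This is an illustrative simplification, not the repository's implementation: each track's box is propagated to the next frame by a motion estimate, and new detections are matched to the propagated boxes by IoU.

```python
# Illustrative sketch of motion-based association (not the SiamMOT code).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def propagate(box, velocity):
    """Constant-velocity stand-in for a learned motion model:
    shift the box by the track's last observed per-frame velocity."""
    dx, dy = velocity
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def associate(propagated, detections, iou_thresh=0.5):
    """Greedily match propagated track boxes to detections by IoU.

    propagated: {track_id: box}; detections: list of boxes.
    Returns {track_id: detection_index} for matches above the threshold.
    """
    matches, used = {}, set()
    for tid, pbox in propagated.items():
        best, best_iou = None, iou_thresh
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(pbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Example: a track at (0, 0, 10, 10) moving 5 px/frame to the right is
# propagated to (5, 0, 15, 10) and matched to the detection found there.
tracks = {1: ((0, 0, 10, 10), (5, 0))}
detections = [(5, 0, 15, 10)]
propagated = {tid: propagate(box, vel) for tid, (box, vel) in tracks.items()}
matches = associate(propagated, detections)  # {1: 0}
```

In SiamMOT the motion estimate is learned rather than assumed: the implicit variant (IMM) and the explicit variant (EMM) replace the constant-velocity `propagate` step above.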

In this paper, we focus on improving online multi-object tracking (MOT). In particular, we introduce a region-based Siamese Multi-Object Tracking network, which we name SiamMOT.
