ECCV 2020
End-to-End Object Detection with Transformers
We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture.
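To illustrate the bipartite-matching idea mentioned above, here is a minimal sketch, not DETR's official implementation: each prediction is assigned to at most one ground-truth object by solving a linear sum assignment over a pairwise cost matrix. The function name, the cost weights, and the simplified cost (class probability plus an L1 box term, omitting DETR's generalized-IoU term) are illustrative assumptions.

```python
# Hedged sketch of set-based bipartite matching for object detection.
# Not DETR's reference code; cost terms and weights are simplified assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_predictions_to_targets(pred_logits, pred_boxes, tgt_labels, tgt_boxes):
    """Return (pred_idx, tgt_idx) giving a one-to-one assignment.

    pred_logits: (N, num_classes) unnormalized class scores
    pred_boxes:  (N, 4) predicted boxes, e.g. normalized (cx, cy, w, h)
    tgt_labels:  (M,)  ground-truth class indices
    tgt_boxes:   (M, 4) ground-truth boxes in the same format
    """
    # Softmax over classes, then take each target class's probability.
    probs = np.exp(pred_logits - pred_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    cost_class = -probs[:, tgt_labels]                              # (N, M)

    # L1 distance between every predicted box and every target box.
    cost_bbox = np.abs(pred_boxes[:, None, :] - tgt_boxes[None, :, :]).sum(-1)

    cost = cost_class + 5.0 * cost_bbox                             # illustrative weights
    pred_idx, tgt_idx = linear_sum_assignment(cost)                 # Hungarian matching
    return pred_idx, tgt_idx


# Toy usage: 3 predictions ("queries"), 2 ground-truth objects.
pred_logits = np.array([[2.0, 0.1, 0.1], [0.1, 3.0, 0.1], [0.1, 0.1, 0.2]])
pred_boxes = np.array([[0.2, 0.2, 0.1, 0.1], [0.7, 0.7, 0.2, 0.2], [0.5, 0.5, 0.3, 0.3]])
tgt_labels = np.array([1, 0])
tgt_boxes = np.array([[0.7, 0.7, 0.2, 0.2], [0.2, 0.2, 0.1, 0.1]])
print(match_predictions_to_targets(pred_logits, pred_boxes, tgt_labels, tgt_boxes))
```

Because each ground-truth object receives exactly one matched prediction, the loss computed on these pairs penalizes duplicates directly, which is what lets the pipeline drop non-maximum suppression.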