Task
The next generation of perception systems should understand complex, free-form object descriptions rather than a fixed set of categories. To accelerate this vision, we propose a novel and challenging benchmark. Check out our task description and paper for more details.
Toolkit
How do you evaluate your method? We provide a simple Python toolkit that lets you interact with the data, visualize samples, compute statistics, and evaluate your method.
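The snippet below is a minimal sketch of the intended workflow (load ground truth, load predictions, evaluate). The module, class, and method names (`omnilabeltools`, `OmniLabel`, `OmniLabelEval`, and the file paths) are assumptions for illustration, not a verified API; please refer to the toolkit's GitHub documentation for the actual interface.

```python
# Minimal sketch of evaluating a model with the toolkit.
# NOTE: the names below (omnilabeltools, OmniLabel, OmniLabelEval, load_res)
# are assumptions for illustration -- check the toolkit repo for the real API.
from omnilabeltools import OmniLabel, OmniLabelEval  # hypothetical imports

# Load the ground-truth annotations (images, free-form object descriptions, boxes).
gt = OmniLabel("path/to/dataset_all.json")

# Load your model's predictions: scored boxes matched to object descriptions.
res = gt.load_res("path/to/predictions.json")

# Run the COCO-style evaluation loop and print summary metrics.
evaluator = OmniLabelEval(gt, res)
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
```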
Challenge
We are organizing a challenge in conjunction with our ECCV'24 workshop. We'd love to see you participate and compare your method against others ...
Results of last year's challenge at our CVPR'23 workshop are here.
Announcements
[05/29/24] Evaluation server for the workshop challenge is online!
[04/30/24] Our 2nd OmniLabel workshop got accepted at ECCV'24. Check out the workshop website and see you soon in Milano, Italy!
[06/27/23] The leaderboard is open again - anyone can evaluate their models on the test set now!
[06/06/23] The challenge has ended! We thank all the participants for their efforts in pushing the state of the art in language-based detection. Please find the results here!
[05/04/23] The test set for the OmniLabel challenge has been released! Download instructions are here.
[04/28/23] Our paper describing the benchmark is now available on arXiv.
[04/05/23] IMPORTANT UPDATE: We changed the track definitions to better match the training dataset settings of existing works such as GLIP and MDETR.
[03/29/23] Evaluation server for the workshop challenge is online! We also updated the validation set with cleaner annotations (see the download site). Get the new annotations and the updated code from GitHub, and participate in the challenge.
[02/07/23] Initial release of our novel benchmark and corresponding dataset. Please explore some dataset samples and download the full dataset to evaluate your own model. Along with the dataset, we also released a Python toolkit to work with the data (visualization, evaluation, statistics, ...).
[12/15/22] The OmniLabel workshop got accepted to CVPR 2023! This workshop will use this benchmark for an exciting new challenge ... stay tuned for more details soon.