Supports inference and evaluation of the multimodal algorithms GLIP and XDecoder, together with datasets such as COCO semantic segmentation, COCO Caption, ADE20k general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
Provides a Gradio demo for the image-input tasks of MMDetection, making it easy for users to try them out.
Exciting Features
GLIP inference and evaluation
As multimodal vision algorithms continue to evolve, MMDetection now supports such algorithms as well. This section demonstrates how to use the demo and evaluation scripts of multimodal algorithms, taking GLIP as the example. Moreover, MMDetection integrates a gradio_demo project, which allows developers to quickly try out all the image-input tasks of MMDetection on their local devices. Check the document for more details.
Preparation
Please first make sure that you have the correct dependencies installed:
# if installed from source
pip install -r requirements/multimodal.txt
# if installed as a wheel
mim install mmdet[multimodal]
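To verify that the extra dependencies are available, you can run a quick sanity check (this assumes transformers is among the packages listed in requirements/multimodal.txt; adjust to your environment):
python -c "import mmdet; print(mmdet.__version__)"
python -c "import transformers; print(transformers.__version__)"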
MMDetection has already implemented the GLIP algorithm and provides pre-trained weights, which you can download directly:
cd mmdetection
wget https://download.openmmlab.com/mmdetection/v3.0/glip/glip_tiny_a_mmdet-b3654169.pth
Inference
Once the model is successfully downloaded, you can use the demo/image_demo.py script to run inference.
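A typical invocation looks like the following (demo/demo.jpg ships with the repository; --texts supplies the open-vocabulary prompt). This is a sketch: it assumes the inferencer can resolve the config from the checkpoint metadata; if it cannot, pass the GLIP config file explicitly as the second argument.
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts bench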
XDecoder inference and evaluation
The XDecoder weights used below are copied directly from the official release without any modification. The specific source is https://github.com/microsoft/X-Decoder
For convenience of demonstration, please download the weight file xdecoder_focalt_last_novg.pt used by the commands below and place it in the root directory of mmdetection.
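For example (this URL assumes the official Hugging Face release layout of X-Decoder; verify it against the repository linked above):
cd mmdetection
wget https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focalt_last_novg.pt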
(1) Open Vocabulary Semantic Segmentation
cd projects/XDecoder
python demo.py ../../images/animals.png configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts zebra.giraffe
(2) Open Vocabulary Instance Segmentation
cd projects/XDecoder
python demo.py ../../images/owls.jpeg configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts owl
(4) Referring Expression Segmentation
cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py --weights ../../xdecoder_focalt_last_novg.pt --texts "The larger watermelon. The front white flower. White tea pot."
(5) Image Caption
cd projects/XDecoder
python demo.py ../../images/penguin.jpeg configs/xdecoder-tiny_zeroshot_caption_coco2014.py --weights ../../xdecoder_focalt_last_novg.pt
(7) Text Image Region Retrieval
cd projects/XDecoder
python demo.py ../../images/coco configs/xdecoder-tiny_zeroshot_text-image-retrieval.py --weights ../../xdecoder_focalt_last_novg.pt --texts 'pizza on the plate'
The output will look similar to:
The image that best matches the given text is ../../images/coco/000.jpg and probability is 0.998
We have also prepared a Gradio program in the projects/gradio_demo directory, which lets you interactively run all of the inference tasks supported by MMDetection in your browser.
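To start it, something like the following should work (the entry script name launch.py is an assumption; check the projects/gradio_demo directory and its README for the actual script):
cd mmdetection
pip install gradio
python projects/gradio_demo/launch.py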
Evaluation notes
Since semantic segmentation is a pixel-level task, we do not need a confidence threshold to filter out low-confidence predictions, so we set model.test_cfg.use_thr_for_mc=False in the test command.
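A sketch of such a test command, using tools/test.py and the semantic segmentation config from above (the paths are illustrative and assume you run from the mmdetection root):
python tools/test.py projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py xdecoder_focalt_last_novg.pt --cfg-options model.test_cfg.use_thr_for_mc=False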
If you set the scale of Resize in the test pipeline to (1024, 512), the result will be 57.69.
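The corresponding pipeline entry would look roughly like this (a sketch; check the config's test_pipeline for the exact transform arguments, keep_ratio in particular is an assumption):
dict(type='Resize', scale=(1024, 512), keep_ratio=False)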
text_mode is a parameter of RefCocoDataset in MMDetection; it determines which texts are loaded into the data list. It can be set to select_first, original, concat, or random (see the config sketch after this list).
select_first: select the first text in the text list as the description of an instance.
original: use all texts in the text list as the description of an instance.
concat: concatenate all texts in the text list into a single description of an instance.
random: randomly select one text from the text list as the description of an instance; usually used for training.
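A minimal sketch of selecting the text mode in a dataset config (fields other than type and text_mode are omitted; a real config also needs data_root, ann_file, and the other usual dataset fields):
val_dataloader = dict(
    dataset=dict(
        type='RefCocoDataset',
        text_mode='select_first',  # one of: select_first, original, concat, random
    ))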