**Improving Few-Shot Part Segmentation using Coarse Supervision**
[Oindrila Saha](http://oindrilasaha.github.io), [Zezhou Cheng](http://sites.google.com/site/zezhoucheng/), [Subhransu Maji](http://people.cs.umass.edu/~smaji/)
_University of Massachusetts - Amherst_
![Figure [coarsesup_splash]: **Overview of CoarseSup**: The graphical model over an image x, part labels y, and coarse labels y1, ..., yn is shown on the left. Coarse labels such as bounding boxes, figure-ground masks, or keypoints are easier to annotate than per-pixel part labels, and our learning framework can utilize datasets with coarse labels to train part segmentation models, outperforming prior work such as PointSup.](./graph_coarsesup.jpg)
A significant bottleneck in training deep networks for part segmentation is the cost of obtaining detailed annotations. We propose a framework that exploits coarse labels, such as figure-ground masks and keypoint locations, which are readily available for some categories, to improve part segmentation models. A key challenge is that these annotations were collected for different tasks and with different labeling styles, so they cannot be readily mapped to the part labels. To this end, we propose to jointly learn the dependencies between labeling styles and the part segmentation model, allowing us to utilize supervision from diverse labels. To evaluate our approach, we develop a benchmark on the Caltech-UCSD Birds and OID Aircraft datasets. Our approach outperforms baselines based on multi-task learning and semi-supervised learning, as well as competitive methods that rely on loss functions manually designed to exploit coarse supervision.
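At a high level, the framework treats each coarse labeling style as a (learned) function of the underlying part segmentation. The sketch below illustrates this idea in PyTorch; it is *not* the released implementation, and all architectures, head designs, shapes, and loss weights are illustrative. A segmentation network predicts per-pixel part logits, and small learned heads translate the predicted part probabilities into each coarse label space (here, a figure-ground mask and keypoint heatmaps), so images carrying only coarse annotations still provide gradients to the part model.

```python
# Minimal sketch of joint training with part and coarse supervision.
# Assumptions (ours, not from the paper): a toy backbone, a 1x1-conv
# figure-ground head, a 3x3-conv keypoint-heatmap head, and BCE/MSE losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PARTS = 10       # part classes incl. background (illustrative)
NUM_KEYPOINTS = 15   # e.g., bird keypoints (illustrative)

class PartSegNet(nn.Module):
    """Toy encoder standing in for the real segmentation backbone."""
    def __init__(self, num_parts):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_parts, 1),
        )
    def forward(self, x):
        return self.body(x)  # per-pixel part logits

# Learned "labeling style" heads: the dependency from part labels to each
# coarse label is trained jointly with the model rather than hand-coded.
fg_head = nn.Conv2d(NUM_PARTS, 1, 1)                          # parts -> figure-ground
kp_head = nn.Conv2d(NUM_PARTS, NUM_KEYPOINTS, 3, padding=1)   # parts -> keypoint heatmaps

net = PartSegNet(NUM_PARTS)
params = list(net.parameters()) + list(fg_head.parameters()) + list(kp_head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

def training_step(x_part, y_part, x_coarse, y_fg, y_kp, w=1.0):
    # Few images with full per-pixel part labels: standard supervised loss.
    loss = F.cross_entropy(net(x_part), y_part)
    # Many images with coarse labels: supervise through the learned heads.
    p = net(x_coarse).softmax(dim=1)
    loss = loss + w * F.binary_cross_entropy_with_logits(fg_head(p), y_fg)
    loss = loss + w * F.mse_loss(kp_head(p), y_kp)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy batches, just to show the interfaces:
x_part = torch.randn(2, 3, 64, 64); y_part = torch.randint(0, NUM_PARTS, (2, 64, 64))
x_coarse = torch.randn(4, 3, 64, 64)
y_fg = torch.rand(4, 1, 64, 64); y_kp = torch.rand(4, NUM_KEYPOINTS, 64, 64)
print(training_step(x_part, y_part, x_coarse, y_fg, y_kp))
```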
PUBLICATION
==========================================================================================
**Improving Few-Shot Part Segmentation using Coarse Supervision**
Oindrila Saha, Zezhou Cheng, Subhransu Maji
European Conference on Computer Vision (ECCV), 2022.
[[arXiv](https://arxiv.org/abs/2204.05393)]
DATA
===============================================================================
The PASCUB bird part segmentation dataset presented in the paper is available [here](https://drive.google.com/drive/folders/1zCOlttyhv1z9AALGTBtBVXYLtBO3-clQ?usp=sharing).
CODE
===============================================================================
The code for reproducing our results, along with pretrained models, is available [here](https://github.com/oindrilasaha/CoarseSup).
RESULTS
===============================================================================
![Figure [qualitative results]: Visualization of qualitative results on the PASCUB Birds dataset. The examples bordered in red show two failure cases.](./birds_viz.png)
**Table 1**: *Comparison of our method with baselines on the CUB Part test set. Columns denote the initialization of the segmentation network; higher is better. Please refer to the paper for details.*
Method | Random |Keypoint | ImageNet
:--------------------:|:--------:|:---------:|:----------:
Fine-tuning | 29.88 | 41.12 | 45.37
MultiTask | 36.96 | 38.00 | 41.27
PseudoSup | 30.77 | 41.62 | 46.01
PointSup | 35.18 | 46.45 | 46.76
Ours | **37.98**| **49.25** |**48.05**
ACKNOWLEDGEMENTS
===============================================================================
This research is supported in part by NSF grants #1749833 and #1908669. Our experiments were performed on the University of Massachusetts GPU cluster funded by the Massachusetts Technology Collaborative.
**Cite us:**
(embed bib.txt height=115px here)
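If the embedded bib.txt above does not render, a minimal BibTeX entry reconstructed from the publication details on this page is (the entry key is our choice; treat bib.txt as canonical):

```bibtex
@inproceedings{saha2022improving,
  title     = {Improving Few-Shot Part Segmentation using Coarse Supervision},
  author    = {Saha, Oindrila and Cheng, Zezhou and Maji, Subhransu},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022}
}
```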