Meta's Segment Anything Model (SAM)

It seems unlikely anyone will outperform Meta's new Segment Anything Model (SAM), a foundation model for segmentation. It is free and available at the links below [1,2]. In light of this development, are there any changes to the rules?

Thanks!

[1] https://segment-anything.com/
[2] https://github.com/facebookresearch/segment-anything — code for running inference with SAM, links to the trained model checkpoints, and example notebooks showing how to use the model.


The competition focuses on specialization, while SAM is more of a generalist model. As of now, there are no guidelines on using it as a downstream component. Out of curiosity, have you already tried it in the competition?

Yes, it is amazing. Try the interactive demo here: https://segment-anything.com/

I have not made any submissions, though. SAM's zero-shot generalization, gained from extensive pretraining on diverse data, should let it outperform models trained on smaller datasets: it can accurately segment unfamiliar objects without any additional training, which makes it arguably the strongest contender in this competition.
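For anyone who wants to try it locally rather than in the web demo, here is a minimal zero-shot inference sketch against the official segment-anything package [2]. The checkpoint filename and image path are placeholders to swap for your own; the rest follows the repo's documented API.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H checkpoint (download links are in the GitHub repo [2]).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")  # optional; CPU works too, just slower

predictor = SamPredictor(sam)

# SAM expects an HxWx3 uint8 RGB image.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground click (label 1 = foreground, 0 = background).
point = np.array([[500, 375]])
label = np.array([1])
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,  # returns three candidate masks at different scales
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array
```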


I have been experimenting with Segment Anything over the last week. I agree that this model is a game changer, but it is not trivial to apply to this competition: SAM doesn't output class labels, and the CLIP-based text prompting described in the paper is not included in the release.
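To make the missing-labels point concrete, here is what the automatic mode actually returns; a sketch using the repo's SamAutomaticMaskGenerator, with the checkpoint and image as placeholders. Every record describes a region, but none of them carries a class:

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each record has keys like 'segmentation' (bool HxW), 'area', 'bbox' (XYWH),
# 'predicted_iou' and 'stability_score' -- but no class label field.
print(len(masks), sorted(masks[0].keys()))
```

Any submission would still need a separate classifier (or some prompt-matching scheme) on top of these masks.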

There are some projects that use it for mask refinement, for example, but their results don't seem to be state of the art, at least on ADE20K and Cityscapes.
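One refinement recipe along those lines is to feed a baseline model's coarse prediction back into SAM as a box prompt and keep the class label from the baseline. A sketch of the idea, where the zero-filled `image` and `coarse_mask` are stand-ins for your own data:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Stand-ins: `image` is an HxWx3 uint8 RGB frame and `coarse_mask` a boolean
# HxW prediction from whatever segmentation model you already have.
image = np.zeros((480, 640, 3), dtype=np.uint8)
coarse_mask = np.zeros((480, 640), dtype=bool)
coarse_mask[100:300, 200:400] = True

predictor.set_image(image)

# Turn the coarse mask into an XYXY box prompt.
ys, xs = np.nonzero(coarse_mask)
box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])

refined, _, _ = predictor.predict(
    box=box,
    multimask_output=False,  # a box prompt is usually unambiguous
)
refined_mask = refined[0]  # sharper boundary, still class-agnostic: reuse the
                           # label from the baseline prediction
```

The key point is that SAM only sharpens the geometry here; the semantics still have to come from the baseline model.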
