Paper Detail

PMT: Plain Mask Transformer for Image and Video Segmentation with Frozen Vision Encoders

Niccolò Cavagnero, Narges Norouzi, Gijs Dubbelman, Daan de Geus

Hugging Face Score 6.8

Published 2026-03-26 · First seen 2026-03-27

General AI

Abstract

Vision Foundation Models (VFMs) pre-trained at scale enable a single frozen encoder to serve multiple downstream tasks simultaneously. Recent VFM-based encoder-only models for image and video segmentation, such as EoMT and VidEoMT, achieve competitive accuracy with remarkably low latency, yet they require finetuning the encoder, sacrificing the multi-task encoder sharing that makes VFMs practically attractive for large-scale deployment. To reconcile encoder-only simplicity and speed with frozen VFM features, we propose the Plain Mask Decoder (PMD), a fast Transformer-based segmentation decoder that operates on top of frozen VFM features. The resulting model, the Plain Mask Transformer (PMT), preserves the architectural simplicity and low latency of encoder-only designs while keeping the encoder representation unchanged and shareable. The design seamlessly applies to both image and video segmentation, inheriting the generality of the encoder-only framework. On standard image segmentation benchmarks, PMT matches the frozen-encoder state of the art while running up to ~3x faster. For video segmentation, it even performs on par with fully finetuned methods, while being up to 8x faster than state-of-the-art frozen-encoder models. Code: https://github.com/tue-mps/pmt.

Workflow Status

Review status
pending
Role
unreviewed
Read priority
later
Vote
Not set.
Saved
no
Collections
Not filed yet.
Next action
Not filled yet.

Reading Brief

No structured notes yet. Add `summary_sections`, `why_relevant`, `claim_impact`, or `next_action` in `papers.jsonl` to enrich this view.

Why It Surfaced

No ranking explanation is available yet.

Tags

No tags.

BibTeX

@misc{cavagnero2026pmt,
  title = {PMT: Plain Mask Transformer for Image and Video Segmentation with Frozen Vision Encoders},
  author = {Niccolò Cavagnero and Narges Norouzi and Gijs Dubbelman and Daan de Geus},
  year = {2026},
  abstract = {Vision Foundation Models (VFMs) pre-trained at scale enable a single frozen encoder to serve multiple downstream tasks simultaneously. Recent VFM-based encoder-only models for image and video segmentation, such as EoMT and VidEoMT, achieve competitive accuracy with remarkably low latency, yet they require finetuning the encoder, sacrificing the multi-task encoder sharing that makes VFMs practically attractive for large-scale deployment. To reconcile encoder-only simplicity and speed with frozen VFM features, we propose the Plain Mask Decoder (PMD), a fast Transformer-based segmentation decoder that operates on top of frozen VFM features. The resulting model, the Plain Mask Transformer (PMT), preserves the architectural simplicity and low latency of encoder-only designs while keeping the encoder representation unchanged and shareable. The design seamlessly applies to both image and video segmentation, inheriting the generality of the encoder-only framework. On standard image segmentation benchmarks, PMT matches the frozen-encoder state of the art while running up to ~3x faster. For video segmentation, it even performs on par with fully finetuned methods, while being up to 8x faster than state-of-the-art frozen-encoder models.},
  url = {https://huggingface.co/papers/2603.25398},
  keywords = {Vision Foundation Models, encoder-only models, image segmentation, video segmentation, Transformer-based decoder, frozen VFM features, Plain Mask Decoder, Plain Mask Transformer, code available, huggingface daily},
  eprint = {2603.25398},
  archiveprefix = {arXiv},
}

Metadata

{}