Paper Detail

R-C2: Cycle-Consistent Reinforcement Learning Improves Multimodal Reasoning

Zirui Zhang, Haoyu Dong, Kexin Pei, Chengzhi Mao

arXiv Score 15.6

Published 2026-03-26 · First seen 2026-03-27

General AI

Abstract

Robust perception and reasoning require consistency across sensory modalities. Yet current multimodal models often violate this principle, yielding contradictory predictions for visual and textual representations of the same concept. Rather than masking these failures with standard voting mechanisms, which can amplify systematic biases, we show that cross-modal inconsistency provides a rich and natural signal for learning. We introduce RC2, a reinforcement learning framework that resolves internal conflicts by enforcing cross-modal cycle consistency. By requiring a model to perform backward inference, switch modalities, and reliably reconstruct the answer through forward inference, we obtain a dense, label-free reward. This cyclic constraint encourages the model to align its internal representations autonomously. Optimizing for this structure mitigates modality-specific errors and improves reasoning accuracy by up to 7.6 points. Our results suggest that advanced reasoning emerges not only from scaling data, but also from enforcing a structurally consistent understanding of the world.

Workflow Status

Review status
pending
Role
unreviewed
Read priority
now
Vote
Not set.
Saved
no
Collections
Not filed yet.
Next action
Not filled yet.

Reading Brief

No structured notes yet. Add `summary_sections`, `why_relevant`, `claim_impact`, or `next_action` in `papers.jsonl` to enrich this view.

Why It Surfaced

No ranking explanation is available yet.

Tags

No tags.

BibTeX

@article{zhang2026r,
  title = {R-C2: Cycle-Consistent Reinforcement Learning Improves Multimodal Reasoning},
  author = {Zirui Zhang and Haoyu Dong and Kexin Pei and Chengzhi Mao},
  year = {2026},
  abstract = {Robust perception and reasoning require consistency across sensory modalities. Yet current multimodal models often violate this principle, yielding contradictory predictions for visual and textual representations of the same concept. Rather than masking these failures with standard voting mechanisms, which can amplify systematic biases, we show that cross-modal inconsistency provides a rich and natural signal for learning. We introduce RC2, a reinforcement learning framework that resolves internal conflicts by enforcing cross-modal cycle consistency. By requiring a model to perform backward inference, switch modalities, and reliably reconstruct the answer through forward inference, we obtain a dense, label-free reward. This cyclic constraint encourages the model to align its internal representations autonomously. Optimizing for this structure mitigates modality-specific errors and improves reasoning accuracy by up to 7.6 points. Our results suggest that advanced reasoning emerges not only from scaling data, but also from enforcing a structurally consistent understanding of the world.},
  url = {https://arxiv.org/abs/2603.25720},
  keywords = {cs.AI, cs.CV},
  eprint = {2603.25720},
  archiveprefix = {arXiv},
}

Metadata

{}