MMCP-GEN: A Modality-Extensible Diffusion Language Model for Conditional Protein Sequence Generation
Zeyu An ⋅ Wanyu Lin ⋅ Feng Tan ⋅ Shujun Wang
Abstract
Recent advances in diffusion-based language models (DLMs) have shown remarkable potential for de novo protein design. However, enabling controllable protein generation requires integrating diverse biological conditions, such as structure, function, and chemical interactions, each represented in a distinct modality. Existing approaches typically support only a single condition or handle multiple conditions through separate, modality-specific encoders. This isolation limits cross-modal interaction, reduces generation quality, and makes it hard to incorporate new conditions without retraining or redesigning the backbone. To address these limitations, we introduce $\textbf{MMCP-GEN}$, a DLM for $\textbf{M}$ulti-$\textbf{M}$odal, Multi-$\textbf{C}$ondition $\textbf{P}$rotein sequence $\textbf{GEN}$eration. $\textbf{MMCP-GEN}$ establishes a new paradigm for controllable protein generation under complex multimodal constraints. Its core is a modality-composable and extensible conditioning mechanism that fuses heterogeneous biological conditions via learnable queries and modality-indicator heads, enabling disentangled, extensible, and cross-modal condition integration without retraining the backbone. A joint generation-and-scoring objective further aligns sequence recovery with structural fidelity. Empirically, $\textbf{MMCP-GEN}$ achieves state-of-the-art performance across structure-, function-, and ligand-conditioned tasks, improving sequence recovery by up to 5\% and outperforming attentive baselines on diverse functional annotation tasks. These results establish $\textbf{MMCP-GEN}$ as a general and high-fidelity framework for controllable protein generation.
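The conditioning mechanism described above can be illustrated with a minimal sketch: a fixed set of learnable query tokens cross-attends over the concatenated embeddings of whatever condition modalities are supplied, each tagged with a learned modality-indicator vector. All class names, dimensions, and design details below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ModalityComposableConditioner(nn.Module):
    """Hypothetical sketch of learnable-query condition fusion.

    Per-modality condition embeddings (e.g. structure, function, ligand)
    are tagged with a learned modality-indicator vector and concatenated;
    a shared set of learnable queries then cross-attends over them to
    produce a fixed-size block of conditioning tokens.
    """

    def __init__(self, d_model: int = 64, n_queries: int = 8,
                 n_modalities: int = 3, n_heads: int = 4):
        super().__init__()
        # learnable query tokens that summarize all supplied conditions
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        # one indicator embedding per modality, added to its token stream
        self.indicators = nn.Embedding(n_modalities, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, conditions):
        # conditions: list of (modality_id, tensor[batch, length, d_model]);
        # any non-empty subset of modalities may be passed (composability).
        tagged = [emb + self.indicators.weight[mod_id]
                  for mod_id, emb in conditions]
        kv = torch.cat(tagged, dim=1)  # concatenate along the token axis
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        fused, _ = self.attn(q, kv, kv)
        return fused  # [batch, n_queries, d_model] conditioning tokens
```

Because the backbone only ever sees the fixed-size fused tokens, a new condition modality can in principle be added by training a new encoder and indicator embedding while the generation backbone stays frozen, which matches the extensibility claim in the abstract.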