PEACCEL at EurIPS 2025 (Copenhagen): De-risking Generative Protein Design
26 December 2025

Copenhagen, December 2025. PEACCEL presented new research at EurIPS 2025, a NeurIPS-endorsed European conference that brings world-class generative AI research closer to the European ecosystem.
See: https://eurips.cc/
Why this matters for drug discovery and protein design
Generative models for proteins are moving quickly from academic benchmarks toward real R&D workflows: targeted binder design, enzyme engineering, developability-aware candidate generation, and design–build–test–learn (DBTL) acceleration. However, one bottleneck remains under-addressed: evaluation.
For pharma R&D teams, the key question is not only “Can the model generate plausible structures?”, but:
- Does it generate diverse candidates without collapsing?
- Are candidates stable enough to be worth expensive downstream work?
- Is generation efficiency acceptable when compute becomes a real budget line?
For investors, rigorous evaluation is equally central: it is a direct lever for de-risking, repeatability, and time-to-value in a platform that will be judged on outcomes and scalability, not just novelty.
What we presented: FrameBench
We introduce FrameBench: A Principled, Symmetry-Aware Evaluation of SE(3) Protein Generators.
See: https://openreview.net/forum?id=4sOs6ca1Tt
Core idea: FrameBench evaluates SE(3)-equivariant protein generators by making trade-offs explicit across three decision-critical dimensions:
- Diversity (how broad and non-redundant the generated designs are)
- Stability / domain validity (how plausible and usable designs are for downstream pipelines)
- Compute-normalized efficiency (what you get per unit of compute, which matters in real industrial cycles)
In the paper, we instantiate this framework to compare representative SE(3) protein generative approaches (including diffusion and flow-matching families) and show how FrameBench surfaces actionable trade-offs that single-number reporting often obscures.
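To make the reporting idea concrete, here is a minimal, hypothetical sketch of what a multi-axis evaluation summary in this spirit could look like. It is not the FrameBench implementation: the metric choices (mean pairwise structural similarity for diversity, a binary stability/validity filter, valid designs per GPU-hour for efficiency) and all numbers are illustrative assumptions.

```python
"""Illustrative sketch only (not the FrameBench code): report diversity,
validity, and compute-normalized efficiency side by side instead of
collapsing them into a single score. All metrics and numbers are
hypothetical placeholders."""
from dataclasses import dataclass
from typing import Sequence


@dataclass
class GeneratorRun:
    name: str                            # e.g. "diffusion-A", "flow-match-B"
    pairwise_similarity: Sequence[float] # structural similarity between generated pairs, in [0, 1]
    validity_flags: Sequence[bool]       # per-design pass/fail of a stability / domain-validity filter
    gpu_hours: float                     # total compute spent producing the batch


def diversity(run: GeneratorRun) -> float:
    # Higher mean pairwise dissimilarity -> broader, less redundant designs.
    return 1.0 - sum(run.pairwise_similarity) / len(run.pairwise_similarity)


def validity_rate(run: GeneratorRun) -> float:
    # Fraction of designs plausible enough to justify downstream work.
    return sum(run.validity_flags) / len(run.validity_flags)


def valid_designs_per_gpu_hour(run: GeneratorRun) -> float:
    # Compute-normalized efficiency: usable candidates per unit of compute.
    return sum(run.validity_flags) / run.gpu_hours


def report(runs: list[GeneratorRun]) -> None:
    # Keep the three axes visible side by side; no single aggregate number.
    print(f"{'model':<16}{'diversity':>10}{'validity':>10}{'valid/GPUh':>12}")
    for r in runs:
        print(f"{r.name:<16}{diversity(r):>10.2f}{validity_rate(r):>10.2f}"
              f"{valid_designs_per_gpu_hour(r):>12.2f}")


if __name__ == "__main__":
    # Toy numbers, just to show the shape of the comparison.
    report([
        GeneratorRun("diffusion-A", [0.35, 0.42, 0.38], [True, True, False, True], gpu_hours=2.0),
        GeneratorRun("flow-match-B", [0.55, 0.60, 0.58], [True, False, False, True], gpu_hours=0.8),
    ])
```

The point of the sketch is the reporting shape, not the specific metrics: keeping diversity, validity, and compute cost visible together is what makes the trade-offs between generator families explicit.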
Where we presented it
FrameBench was presented at PriGM@EurIPS 2025 (Principles of Generative Modeling), a workshop focused on synthesizing the foundational principles behind modern generative AI and exactly the type of venue where evaluation methodology and scientific rigor are central.
PriGM workshop: https://sites.google.com/view/prigm-eurips-2025/home
A true team effort
This work reflects a joint effort between PEACCEL and our academic collaborators. We extend a warm thank-you to our partners in the USA, China, and France for the sustained collaboration that made this result possible.
What this enables next
FrameBench is part of PEACCEL’s broader effort to make protein generation more predictable, more testable, and more deployable, so that generative models become reliable engines for discovery rather than research prototypes.
Concretely, we believe principled evaluation frameworks are a key step toward:
- Reducing model selection risk in production discovery pipelines
- Improving reproducibility across targets and design objectives
- Aligning scientific metrics with business outcomes (time, cost, probability of success)
Let’s connect
If you are:
- a pharma/biotech R&D leader exploring generative protein design (or evaluating vendors/models), or
- a VC/investor focused on AI-native drug discovery platforms with defensible methodology and scalable execution,
we would welcome a discussion.
For more information:
PEACCEL
Making the world disease-free
Contact: AI-team@peaccel.com