GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks

Abstract

Understanding complex three-dimensional (3D) structures of graphs is essential for accurately modeling various properties, yet many existing approaches struggle to fully capture the intricate spatial relationships and symmetries inherent in such systems, especially in large-scale, dynamic molecular datasets. These methods often must balance trade-offs between expressiveness and computational efficiency, limiting their scalability. To address this gap, we propose a novel Geometric Tensor Network (GotenNet) that effectively models the geometric intricacies of 3D graphs while ensuring strict equivariance under the Euclidean group E(3). Our approach directly tackles the expressiveness-efficiency trade-off by leveraging effective geometric tensor representations without relying on irreducible representations or Clebsch-Gordan transforms, thereby reducing computational overhead. We introduce a unified structural embedding, incorporating geometry-aware tensor attention and hierarchical tensor refinement that iteratively updates edge representations through inner product operations on high-degree steerable features, allowing for flexible and efficient representations for various tasks. We evaluate GotenNet on the QM9, rMD17, MD22, and Molecule3D datasets, where it consistently outperforms state-of-the-art methods on both scalar and high-degree property predictions, demonstrating exceptional robustness across diverse datasets and establishing GotenNet as a versatile and scalable framework for 3D equivariant Graph Neural Networks.

Understanding GotenNet

GotenNet introduces several groundbreaking innovations in molecular modeling that enable state-of-the-art performance while maintaining computational efficiency.
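One idea worth unpacking from the abstract is the use of inner products on high-degree steerable features to produce invariant edge updates. As a rough, self-contained illustration (the function names and shapes here are hypothetical, not taken from the paper's code), contracting two features that rotate with the same orthogonal representation yields rotation-invariant scalars:

```python
import numpy as np

def invariant_inner_product(x, y):
    """Contract two steerable features of the same degree (shape: [components, channels])
    over their components, yielding one invariant scalar per channel.

    If both features transform as x -> D @ x with the same orthogonal matrix D,
    then (D x)^T (D y) = x^T (D^T D) y = x^T y, so the result is invariant.
    """
    return np.einsum('mc,mc->c', x, y)

# Demo with degree-1 (vector) features, which rotate with an ordinary 3x3 rotation.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))   # 3 components, 4 channels
y = rng.normal(size=(3, 4))

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

before = invariant_inner_product(x, y)
after = invariant_inner_product(Q @ x, Q @ y)
assert np.allclose(before, after)  # invariance under the rotation
```

This is only a sketch of why inner products are a cheap route to invariants; GotenNet's actual refinement operates on learned high-degree tensor features inside its attention and edge-update blocks.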

Performance and Results

QM9 Dataset: The QM9 dataset is a well-studied dataset that contains 130k small molecules and twelve distinct molecular properties.

The interactive radar chart on the right demonstrates GotenNet's performance across all 12 molecular properties in the QM9 dataset. You can hover over different points to see the exact performance metrics for each property. The visualization compares three variants of our model (S, B, and L) against state-of-the-art baselines.

Key observations from the visualization: The outer edge of the radar chart represents better performance (lower error), and our largest model GotenNet-L consistently reaches these outer edges, demonstrating superior performance. Notably, even GotenNet-S, our smallest model variant, outperforms many baselines, highlighting the efficiency of our approach. For interactive exploration, you can click on different models in the legend to show or hide their performance curves, enabling direct visual comparisons.

Notable improvements shown in the visualization (click to compare): All our model variants achieve strong performance improvements. Our largest model, GotenNet-L, achieves state-of-the-art performance with a 30% error reduction on α (polarizability), a 33% improvement on μ (dipole moment), 32% better accuracy on the HOMO-LUMO gap, and a 24% error reduction on internal energy at 0 K. Even our base model GotenNet-B shows significant gains, with an 18% reduction in polarizability error, a 28% improvement in dipole moment, 29% better HOMO-LUMO accuracy, and a 23% reduction in U₀ error. Our smallest model GotenNet-S maintains competitive performance, with an 18% reduction in polarizability error, a 25% improvement in dipole moment, 27% better HOMO-LUMO accuracy, and a 16% reduction in U₀ error.
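The percentage figures above follow the usual relative-error-reduction convention, comparing a model's mean absolute error against the best baseline's. As a sketch (the helper name and the sample numbers below are illustrative only, not values from the paper's tables):

```python
def error_reduction(baseline_mae, model_mae):
    """Relative improvement: the fraction of the baseline's error removed,
    expressed as a percentage. Positive means the model is better."""
    return 100.0 * (baseline_mae - model_mae) / baseline_mae

# Illustrative numbers: a baseline MAE of 0.050 improved to 0.035
# corresponds to a 30% error reduction.
print(round(error_reduction(0.050, 0.035)))  # -> 30
```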

This interactive visualization helps demonstrate how GotenNet achieves consistent improvements across all properties, rather than trading off performance between different targets.

Attention Visualization on QM9


The molecule visualizer above shows the computed attention between atoms within a given molecule. You can inspect the attention map for different layers and for different attention heads within those layers, and explore how the attention patterns differ across molecular properties.

MD22 Dataset: MD22 consists of molecular dynamics (MD) trajectories of four major classes of biomolecules and supramolecules, ranging from a small peptide with 42 atoms to a double-walled nanotube with 370 atoms. The simulation trajectories are sampled at 400 K and 500 K with a resolution of 1 fs. Potential energy and forces are computed using the PBE+MBD level of theory.
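For readers new to these benchmarks: the force targets in datasets like MD22 are the negative gradients of the potential energy with respect to atomic positions, which is why models are evaluated on both quantities jointly. A minimal finite-difference sketch on a toy pair potential (everything here is hypothetical and purely illustrative, not a real PES or the paper's method):

```python
import numpy as np

def toy_energy(positions):
    """Toy pairwise potential: sum of squared interatomic distances.
    Each ordered pair is counted twice, hence the 0.25 factor. Not a real PES."""
    diff = positions[:, None, :] - positions[None, :, :]
    return 0.25 * np.sum(diff ** 2)

def numerical_forces(positions, h=1e-5):
    """Force = -dE/dr, approximated with central finite differences."""
    forces = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for k in range(3):
            plus = positions.copy();  plus[i, k] += h
            minus = positions.copy(); minus[i, k] -= h
            forces[i, k] = -(toy_energy(plus) - toy_energy(minus)) / (2 * h)
    return forces

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
F = numerical_forces(pos)
# Internal forces on an isolated system sum to (numerically) zero,
# a consequence of the energy's translational invariance.
assert np.allclose(F.sum(axis=0), 0.0, atol=1e-6)
```

In practice, force-field models obtain forces analytically by differentiating the predicted energy, but the finite-difference view makes the energy-force relationship concrete.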

The MD22 results demonstrate consistent improvements across different molecular scales. For Tetrapeptide, GotenNet-B improves upon MACE's energy predictions by 19.2% and QUINNet's force predictions by 11.9%. On DHA, we see a 34.1% improvement over ViSNet-LSRM in energy and 16.8% over Equiformer in forces. For larger molecules like Stachyose, GotenNet-B achieves a 36.2% improvement over ViSNet-LSRM in energy and 21.4% over QUINNet in forces. The improvements become even more pronounced for complex structures: AT-AT shows 27.8% better energy predictions than ViSNet-LSRM and 23.6% better force predictions than QUINNet. For AT-AT-CG-CG, we improve upon ViSNet-LSRM by 15.5% in energy and 27.3% in forces. On the challenging Buckyball catcher, GotenNet-B outperforms Equiformer by 22.4% in energy and ViSNet-LSRM by 22.3% in forces. Finally, for the largest system, the Double-walled nanotube, we achieve a remarkable 35.8% improvement over ViSNet in energy and 31.3% over Equiformer in force predictions.

Notably, even our smaller model variant GotenNet-S demonstrates remarkable improvements over existing baselines. For Stachyose, GotenNet-S achieves a 28.8% improvement over ViSNet-LSRM in energy predictions while maintaining better force predictions than QUINNet, with a 5.7% improvement. On DHA, it shows a substantial 26.5% improvement over ViSNet-LSRM in energy while matching Equiformer's performance in forces. For larger systems like AT-AT-CG-CG, GotenNet-S outperforms ViSNet-LSRM by 15.1% in energy and 22.5% in forces. Most impressively, even on the challenging Double-walled nanotube, our smaller model achieves a 29.6% improvement over ViSNet in energy predictions while maintaining better force predictions than Equiformer.


Please consider citing the works below if this project is helpful:

@inproceedings{aykent2025gotennet,
  author    = {Aykent, Sarp and Xia, Tian},
  title     = {{GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks}},
  booktitle = {The Thirteenth International Conference on Learning Representations},
  year      = {2025},
  url       = {https://openreview.net/forum?id=5wxCQDtbMo},
}

Citation Formats

APA

Aykent, S., & Xia, T. (2025). GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks. The Thirteenth International Conference on Learning Representations. https://openreview.net/forum?id=5wxCQDtbMo

Vancouver

Aykent S, Xia T. GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks. In: The Thirteenth International Conference on Learning Representations [Internet]. 2025. Available from: https://openreview.net/forum?id=5wxCQDtbMo

Harvard

Aykent, S. and Xia, T. (2025) “GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks,” in The Thirteenth International Conference on Learning Representations. Available at: https://openreview.net/forum?id=5wxCQDtbMo.

MLA

Aykent, Sarp, and Tian Xia. "GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks." The Thirteenth International Conference on Learning Representations, 2025, https://openreview.net/forum?id=5wxCQDtbMo.
