atomicarchitects / equiformer_v2

[ICLR'24] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations

Home page: https://arxiv.org/abs/2306.12059


Can this model be used for molecule generation?

Atomu2014 opened this issue · comments

Hi,

I've heard of this strong model, which can learn atomic coordinates. Now I want to adapt it for my project, but I find the code a bit complicated and hard to follow. (The paper also contains some concepts that are hard to understand.)

I want to make sure, if this model can learn the following task:

**INPUTS**: protein_atom_pos, protein_atom_types, init_molecule_pos, init_molecule_types
**OUTPUTS**: molecule_pos, molecule_types

In other words, I want to learn the molecule given the protein pocket, a task known as Structure-Based Drug Design (SBDD).
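For concreteness, the task above could be framed with tensors like the following (all names and shapes here are hypothetical, just restating the inputs/outputs; none of this comes from the EquiformerV2 codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
n_protein, n_mol, n_types = 200, 30, 10  # hypothetical sizes

# INPUTS: pocket atoms (fixed) plus an initial guess for the ligand
protein_atom_pos = rng.normal(size=(n_protein, 3))            # Cartesian coords
protein_atom_types = rng.integers(0, n_types, size=n_protein)  # element indices
init_molecule_pos = rng.normal(size=(n_mol, 3))
init_molecule_types = rng.integers(0, n_types, size=n_mol)

# OUTPUTS the network should predict:
#   molecule_pos   -> (n_mol, 3): type-1, rotates with the input frame
#   molecule_types -> (n_mol, n_types) logits: type-0, rotation-invariant
molecule_pos_shape = (n_mol, 3)
molecule_types_shape = (n_mol, n_types)
```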

Please help me verify whether this model is suitable for SE(3)-invariant molecule learning, and point me to the relevant pieces of code.

Thanks!

Hi @Atomu2014

Yes, the model is applicable to SE(3)-invariant molecule learning, as are other SE(3)-equivariant networks.

I think it would be great for you to start with the example in the codebase to understand how the code works.
To adapt it to the molecule-learning task defined by your inputs/outputs, you can change the outputs of EquiformerV2 (which for OC20 are energy and forces) to atom types and atomic positions. Energy and atom types are both type-0 (invariant) vectors, while atomic positions and forces are type-1 (equivariant) vectors, so this would require only minimal changes to the code.
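A rough NumPy sketch of that idea (not the actual EquiformerV2 code; the feature names, shapes, and heads below are all assumptions): suppose the backbone produces per-atom type-0 features `x0` of shape `(N, C0)` and type-1 features `x1` of shape `(N, C1, 3)`. A linear type-0 head gives atom-type logits, and an equivariant linear map that mixes only the channel axis of `x1` gives a position update. Rotating the input rotates the position output but leaves the logits unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C0, C1, n_types = 30, 16, 8, 10  # hypothetical feature sizes

x0 = rng.normal(size=(N, C0))      # type-0 (invariant) per-atom features
x1 = rng.normal(size=(N, C1, 3))   # type-1 (equivariant) per-atom features

W_type = rng.normal(size=(C0, n_types))  # type-0 head -> atom-type logits
w_pos = rng.normal(size=C1)              # type-1 head: mixes channels only

def heads(x0, x1):
    logits = x0 @ W_type                            # (N, n_types), invariant
    delta_pos = np.einsum("c,ncd->nd", w_pos, x1)   # (N, 3), equivariant
    return logits, delta_pos

# A rotation acts only on the Cartesian index of type-1 features.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

logits, delta = heads(x0, x1)
logits_r, delta_r = heads(x0, np.einsum("de,nce->ncd", R, x1))

assert np.allclose(logits, logits_r)      # type-0 output: invariant
assert np.allclose(delta_r, delta @ R.T)  # type-1 output: equivariant
```

The key design point is that the type-1 head never mixes the three Cartesian components, only the channels; that is what keeps the position prediction equivariant, exactly as the force head already does for OC20.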

Let me know if you need more details.