lucidrains / vit-pytorch

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch


What is the meaning of num_landmarks in the Nyström-based algorithm?

khawar-islam opened this issue · comments

I am implementing the algorithm in the vision transformer, where we have an mlp_dim parameter. In the Nyström-based algorithm, we have num_landmarks instead. I treated num_landmarks as if it were mlp_dim, but it does not work. Any guidance?

from nystrom_attention import Nystromformer

efficient_transformer = Nystromformer(
    dim=512,
    depth=20,
    heads=8,
    num_landmarks=256,
    dim_head=8
)

# entry from a model dict; ViT here is a modified wrapper
# (vit_pytorch's ViT does not take GPU_ID or loss_type)
'VITs_Eff': ViT(
    GPU_ID=GPU_ID,
    loss_type=HEAD_NAME,
    image_size=112,
    patch_size=8,
    dim=512,
    num_classes=NUM_CLASS,
    transformer=efficient_transformer
)

Traceback
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
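
For context: exit code 137 means the process was killed with SIGKILL (128 + 9), which on Linux most often comes from the kernel's out-of-memory killer rather than from a Python error. Separately, num_landmarks is not the analogue of mlp_dim: in the Nyström method it is the number of landmark tokens used to approximate the full softmax attention matrix, while mlp_dim is the hidden width of a transformer's feedforward layers. Below is a minimal runnable sketch following the vit-pytorch README pattern for plugging a Nystromformer into the efficient ViT wrapper; image_size and patch_size are taken from the snippet above, while depth=6, num_classes=1000, and num_landmarks=64 are illustrative assumptions (64 keeps the landmark count below the 14 × 14 patches + 1 CLS token = 197-token sequence length).

import torch
from nystrom_attention import Nystromformer
from vit_pytorch.efficient import ViT

# num_landmarks sets how many landmark tokens the Nystrom method uses
# to approximate full self-attention; it is unrelated to mlp_dim.
efficient_transformer = Nystromformer(
    dim=512,
    depth=6,            # illustrative; the snippet above used 20
    heads=8,
    num_landmarks=64    # kept below the 197-token sequence length
)

v = ViT(
    dim=512,
    image_size=112,
    patch_size=8,
    num_classes=1000,   # placeholder for NUM_CLASS
    transformer=efficient_transformer
)

img = torch.randn(1, 3, 112, 112)
preds = v(img)  # logits of shape (1, 1000)

If training still dies with exit code 137, reducing the batch size or the transformer depth is the usual first step, since a SIGKILL at that point typically means the host ran out of memory.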