HazyResearch / safari

Convolutions for Sequence Modeling

Configs for Hyena WikiText-103 experiments

xiaobo-guo opened this issue

Your work is excellent! I am trying to follow it and running into some problems. Would you be able to share the config for Hyena on the WikiText-103 dataset? I ran experiments with the 125-slim config, but the test perplexity is higher than the reported result (about 21 with Hyena). I am also wondering whether removing flash-attn will influence the result.

Can you share the config? Wikitext is quite sensitive to a few hyperparameters. Flash attention will not affect the result for Hyena.

Thanks for your response.

I've attached the config file:
config.txt

You should set the dropouts to 0.2 as a first step. After you get to sub-19 ppl you will be in tuning range.
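For reference, that change would look roughly like the following in the model section of the experiment YAML; the key names here are a guess on my part, so double-check them against the model configs in this repo:

```yaml
# Rough sketch of the dropout override; the field names (embed_dropout / resid_dropout)
# are assumptions, not copied from the released configs.
model:
  embed_dropout: 0.2
  resid_dropout: 0.2
```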

Thank you. Shall I also set the order to 3 in the Hyena layer?
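For context, I believe the order hyperparameter sits in the Hyena layer block of the model config, roughly like this (the nesting and the other keys are my assumptions, not taken from the released configs):

```yaml
# Hypothetical Hyena layer block with order raised to 3; only `order` is the point here.
model:
  layer:
    _name_: hyena
    order: 3
```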

Could you please put the configs you used in configs/experiment/wt103? That would be super helpful!

Did you reproduce the 19 ppl result using dropout=0.2? I still get 22

I set the dropout to 0.2 and the order to 3 and get about 20, but still cannot reach the reported result.

You can look at this config for an independent reproduction that gets to sub-19. Let me know if after this you still have issues with the loss being too high, and I'll rerun experiments in the new codebase.

Question:
Did you change attn_layer_idx? It seems that in your attached config there are attention layers at layers 1 and 8 (inherited from base.yaml).
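If the goal is a pure-Hyena model, that inherited setting would presumably need to be overridden, roughly like this (the exact key name and the empty-list convention are assumptions on my part):

```yaml
# Hypothetical override to drop the attention layers inherited from base.yaml
# (attention at layers 1 and 8), so that every block is a Hyena layer.
model:
  attn_layer_idx: []
```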

Thanks for the helpful reference. However, I checked that repo and the released [log from S5](https://wandb.ai/jimmysmith1919/S5_ICL/reports/Hyena-red-and-Hyena-S5-blue-on-WikiText-103--Vmlldzo0MTkwODEx?accessToken=pk0zw5w75uo1s4zkn3kh7koum902t4q2yzbm28xk0olzzgxuskoq0g1iyauixlob), which shows Hyena reaching a test perplexity of 19.094.

It would be very helpful if you could share the detailed configuration for Hyena on WikiText-103.