lean-dojo / ReProver

Retrieval-Augmented Theorem Provers for Lean

Home Page: https://leandojo.org


Runtime Init Error

ethanlabelle opened this issue · comments

Here is the trace:

Traceback (most recent call last):
  File "/scratch/elabelle/ReProver/prover/evaluate.py", line 237, in <module>
    main()
  File "/scratch/elabelle/ReProver/prover/evaluate.py", line 214, in main
    pass_1 = evaluate(
  File "/scratch/elabelle/ReProver/prover/evaluate.py", line 110, in evaluate
    prover = DistributedProver(
  File "/scratch/elabelle/ReProver/prover/proof_search.py", line 392, in __init__
    tac_gen = RetrievalAugmentedGenerator.load(
  File "/scratch/elabelle/ReProver/generator/model.py", line 126, in load
    return load_checkpoint(cls, ckpt_path, device, freeze)
  File "/scratch/elabelle/ReProver/common.py", line 471, in load_checkpoint
    model = model_cls.load_from_checkpoint(path, strict=False)
  File "/scratch/elabelle/miniconda3/envs/ReProver/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1537, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "/scratch/elabelle/miniconda3/envs/ReProver/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 91, in _load_from_checkpoint
    model = _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/scratch/elabelle/miniconda3/envs/ReProver/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 144, in _load_state
    obj = cls(**_cls_kwargs)
TypeError: RetrievalAugmentedGenerator.__init__() missing 2 required positional arguments: 'max_inp_seq_len' and 'max_oup_seq_len'

I have been working on a branch that was 50 commits behind main and recently merged the new commits. This is a new error. It seems the parameters from the config *.yaml could be used, but for now I will just pass a value of 1024 for these two arguments.
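The stopgap described above can be sketched as a small hparam-patching step before the module is constructed. This is a hypothetical helper, not ReProver code: the function name and the 1024 default are assumptions, and in practice the patched dict would be the checkpoint's saved hyperparameters.

```python
# Hypothetical sketch of the workaround above: fill in the two new
# hyperparameters with a default (1024 here, as in the comment) when an
# old checkpoint's saved hparams don't contain them. The helper name and
# default value are assumptions, not part of ReProver.

DEFAULT_SEQ_LEN = 1024

def patch_missing_seq_lens(hparams: dict, default: int = DEFAULT_SEQ_LEN) -> dict:
    """Return a copy of the hparams with the new keys filled in if absent."""
    patched = dict(hparams)
    for key in ("max_inp_seq_len", "max_oup_seq_len"):
        # setdefault leaves the value alone if the checkpoint already has it
        patched.setdefault(key, default)
    return patched

# Example: hparams saved by the old code only carry the single max_seq_len.
old_hparams = {"model_name": "google/byt5-small", "max_seq_len": 1024}
print(patch_missing_seq_lens(old_hparams))
```

With the two keys present, `RetrievalAugmentedGenerator.__init__()` would receive all of its required arguments and the `TypeError` should not be raised.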

You might be using the new code to load a model checkpoint trained by the old code? If so, this error is expected. We used to have the max_seq_len hyperparameter but later split it into max_inp_seq_len and max_oup_seq_len.

I guess they might be using the checkpoints from https://huggingface.co/kaiyuy/leandojo-pl-ckpts/tree/main. Do you have plans to update them?

Oh, indeed. I'll re-train the model and update those checkpoints. I've been a bit swamped recently, but I hope to do that in the next month. Until the checkpoints are updated, you can modify the code manually to load the current checkpoints.
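One way to carry out the manual change suggested above, sketched under the assumption that both new limits should inherit the old single value: migrate the checkpoint's saved hyperparameters by splitting max_seq_len into the two new keys. The function below is a hypothetical illustration, not the maintainers' recipe.

```python
def migrate_hparams(hparams: dict) -> dict:
    """Split the old max_seq_len hyperparameter into max_inp_seq_len and
    max_oup_seq_len, as described in the comment above. Assumes both new
    limits should default to the old single value (an assumption, not a
    documented migration)."""
    migrated = dict(hparams)
    if "max_seq_len" in migrated:
        old_len = migrated.pop("max_seq_len")
        # keep any value already present; otherwise reuse the old limit
        migrated.setdefault("max_inp_seq_len", old_len)
        migrated.setdefault("max_oup_seq_len", old_len)
    return migrated
```

In practice, a Lightning checkpoint would be opened with torch.load, its checkpoint["hyper_parameters"] dict passed through a function like this, and the result re-saved with torch.save before calling load_from_checkpoint.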

Thank you for your unwavering devotion to open source.

Should be fixed.