CUNY-CL / yoyodyne

Small-vocabulary sequence-to-sequence generation with optional feature conditioning

Torch and Lightning 2.0.0

kylebgorman opened this issue

The library does not work with Lightning >= 2.0.0 (and one suspects that Torch >= 2.0.0 is also an issue). The first problem I hit when running yoyodyne-train with no arguments is a change in how Lightning handles command-line arguments, and I suspect there are at least a few more.

So that the library is not broken at head---which I consider unacceptable---I have pinned as follows:

pytorch-lightning>=1.7.0,<2.0.0
torch>=1.11.0,<2.0.0

What we need to do is migrate to 2.0.0, fixing Lightning (and any Torch) bugs until things work, and then re-pin both dependencies to >=2.0.0. I have initially assigned myself, but I would welcome help.
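Once that lands, the pins would presumably flip to something like the following (exact lower bounds to be confirmed):

pytorch-lightning>=2.0.0
torch>=2.0.0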

@Adamits I have assigned this to you since you wanna take a stab.

There is a new interface for CLI stuff (probably an improvement over the somewhat ad hoc one they gave us):

https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html
https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_expert.html

Looks like an improvement, but it will take some work to make things conformant.

I have read these docs and have the basic lay of the land now. The migration should go in two steps:

  1. First, we should implement a subclass of LightningDataModule (this is available in Lightning pre-2.0 too). Its train_dataloader method should return the training dataloader (naturally), and likewise val_dataloader (for the dev set) and test_dataloader. (Even though we don't need access to train, dev, and test in any one process, we can probably set things up so the same module serves both training and inference.) Most of the logic of get_datasets and get_loaders would move into this class, so this step is mostly cleanup; see the sketch after this list.
  2. Then we move argument parsing to LightningCLI and migrate Torch and Lightning to >=2.0. This is a bit magical (it introspects the data module and models to find their flags), but our house is mostly in order, so it might just work. At that point we will probably also hit one or two low-level Torch issues, which we can handle as they arise.
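Here is a rough sketch of what the step-1 data module could look like. The _TSVDataset class is just a placeholder for whatever we carry over from get_datasets (a real module would also attach the collator); the LightningDataModule hooks are the point.

import pytorch_lightning as pl
from torch.utils.data import DataLoader, Dataset


class _TSVDataset(Dataset):
    """Placeholder dataset: one (source, target) pair per TSV line."""

    def __init__(self, path):
        with open(path) as source:
            self.pairs = [line.rstrip("\n").split("\t") for line in source]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        return self.pairs[idx]


class WordPairDataModule(pl.LightningDataModule):
    """Wraps the train/dev/test splits behind the DataModule API."""

    def __init__(self, train_path, dev_path, test_path, batch_size=32):
        super().__init__()
        self.train_path = train_path
        self.dev_path = dev_path
        self.test_path = test_path
        self.batch_size = batch_size

    def setup(self, stage=None):
        # Instantiate only the splits the current stage needs.
        if stage in ("fit", "validate", None):
            self.train_set = _TSVDataset(self.train_path)
            self.dev_set = _TSVDataset(self.dev_path)
        if stage in ("test", None):
            self.test_set = _TSVDataset(self.test_path)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.dev_set, batch_size=self.batch_size)

    def test_dataloader(self):
        return DataLoader(self.test_set, batch_size=self.batch_size)

The same module instance can then be handed to both Trainer.fit and Trainer.test, which covers the point about using one object for training and inference.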

Sounds good, I think I understand, though I feel a bit annoyed that we have to use LightningCLI instead of the standard Python libraries... I will try to prioritize it this week.

I'm working on the data module part of things in the next few days---haven't thought about the model side yet.

FWIW, LightningCLI is basically a tool that introspects our classes to populate the argparse parser, which should help us enormously with the kind of bug we saw when the label smoothing flag wasn't being passed around.
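For concreteness, a minimal entry point would look something like the sketch below. TinyModel is a stand-in, not one of our models; the point is just that LightningCLI reads the constructor signature and turns every argument into a flag.

import pytorch_lightning as pl
import torch
from pytorch_lightning.cli import LightningCLI
from torch import nn


class TinyModel(pl.LightningModule):
    """Stand-in model; only the constructor signature matters here."""

    def __init__(self, embedding_size: int = 128, label_smoothing: float = 0.0):
        super().__init__()
        self.save_hyperparameters()
        self.loss = nn.CrossEntropyLoss(label_smoothing=label_smoothing)
        self.layer = nn.Linear(embedding_size, embedding_size)

    def training_step(self, batch, batch_idx):
        ...  # Stub; a real model computes and returns the loss here.

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


if __name__ == "__main__":
    # LightningCLI introspects TinyModel.__init__ (and a data module's
    # __init__, if one is passed) and exposes each argument as a flag,
    # e.g. --model.embedding_size and --model.label_smoothing, along
    # with fit/validate/test/predict subcommands and all Trainer flags.
    LightningCLI(TinyModel)

Running the script with "fit --help" then lists the --model.* flags automatically, so a flag like label_smoothing can no longer silently fail to reach the model.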

@Adamits believes it is possible to migrate to Torch 2 while staying on the 1.x branch of PTL. Being able to complete the migration in two steps (first to Torch 2, then to PTL 2, focusing on the CLI changes) would make this much easier.

Draft of the migration to PyTorch 2 is in #173. Testing now, focusing on the transformer, since we can hopefully use the causal mask flag there.
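Assuming the flag in question is the is_causal argument of torch.nn.functional.scaled_dot_product_attention, new in Torch 2 (an assumption on my part), the idea is roughly:

import torch
import torch.nn.functional as F

# Toy shapes: (batch, heads, sequence length, head dimension).
query = torch.randn(2, 4, 10, 16)
key = torch.randn(2, 4, 10, 16)
value = torch.randn(2, 4, 10, 16)

# The explicit upper-triangular -inf mask we would otherwise build by
# hand (the kind nn.Transformer expects as tgt_mask).
causal_mask = torch.triu(torch.full((10, 10), float("-inf")), diagonal=1)
out_explicit = F.scaled_dot_product_attention(
    query, key, value, attn_mask=causal_mask
)

# Torch 2: let the kernel apply the causal mask itself.
out_flagged = F.scaled_dot_product_attention(query, key, value, is_causal=True)

# The two agree up to kernel-dependent rounding.
print((out_explicit - out_flagged).abs().max())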