Lightning-AI / pytorch-lightning

Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.

Home Page: https://lightning.ai

Please make it simple!

chengmengli06 opened this issue · comments

Outline & Motivation

One area where TensorFlow falls behind PyTorch is its overly complex design, while PyTorch is much simpler. But when I started using pytorch-lightning, I felt it was turning into another TensorFlow. So I beg you to keep things simple. For something as simple as saving a checkpoint, I had to trace the code from ModelCheckpoint to trainer.save_checkpoint, then to checkpoint_connector.save_checkpoint, and then to trainer.strategy.save_checkpoint. Where does it end? How can correctness be ensured under such a complex design? Please make it simple!

Pitch

The strategy design in TensorFlow is too complex: DDP is just a simple all-reduce of gradients, but inside the Strategy or Keras code things become very complicated. The function call stacks are so deep that it is hard to understand where the actual all-reduce happens. Even users who spend weeks on it may not figure out what is actually going on, because a call goes from module a to b, then to c, then back to a, then to b, and at that point I give up.
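For reference, the "simple all-reduce of gradients" mentioned above looks roughly like the following in plain PyTorch. This is only a minimal sketch of the idea, not Lightning's actual implementation, and it assumes the torch.distributed process group has already been initialized (e.g. via torchrun):

```python
import torch
import torch.distributed as dist

def allreduce_gradients(model: torch.nn.Module) -> None:
    """Average the gradients across all ranks; call after loss.backward()."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum the gradient from every process, then divide to get the average.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```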

Additional context

I suggest implementing things as what they are: stop over-encapsulating, follow the design patterns of PyTorch and Caffe, and stop making simple functions complicated.

cc @justusschock @awaelchli

You can checkpoint manually with trainer.save_checkpoint("filepath"), and you can use the ModelCheckpoint callback to take care of creating checkpoints automatically.
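For illustration, both options can be used as in the sketch below. The exact import path depends on the installed version (pytorch_lightning vs. lightning.pytorch), and the monitored metric name is an assumption about what the LightningModule logs:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Option 1: automatic checkpointing via the ModelCheckpoint callback.
checkpoint_cb = ModelCheckpoint(
    dirpath="checkpoints/",   # where checkpoints are written
    monitor="val_loss",       # assumes the LightningModule logs "val_loss"
    save_top_k=1,             # keep only the best checkpoint
)
trainer = Trainer(max_epochs=10, callbacks=[checkpoint_cb])
# trainer.fit(model, datamodule=dm)  # model and dm are user-defined

# Option 2: save a checkpoint manually at any point.
trainer.save_checkpoint("manual_checkpoint.ckpt")
```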

Lightning's implementation allows for simplified use across different accelerators and multiple devices. Many tedious details around strategies, callbacks, logging and more are taken care of automatically, leading to a large reduction in boilerplate code for the user.
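As a small illustration of that claim (a sketch, not a complete training script): the same LightningModule can be moved from CPU to a single GPU to multi-GPU DDP purely by changing Trainer arguments, with no changes to the training code itself:

```python
from pytorch_lightning import Trainer

trainer_cpu = Trainer(accelerator="cpu", max_epochs=1)
trainer_gpu = Trainer(accelerator="gpu", devices=1, max_epochs=1)
trainer_ddp = Trainer(accelerator="gpu", devices=2, strategy="ddp", max_epochs=1)
# trainer_ddp.fit(model)  # the LightningModule code itself is unchanged
```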