karpathy / micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API


_backward as lambdas?

ondras opened this issue

Hi @karpathy,

congratulations on this repo/talk. The educational value is truly immense. Good job!

Can you please explain the main motivation for implementing the _backward methods as lambdas, as opposed to a single regular method that starts with a hypothetical switch on self._op and contains the implementation for all the arithmetic cases?
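To make the question concrete, here is a rough sketch of what I mean. This is hypothetical code, not micrograd's actual implementation; the Value / _prev / _op / grad names just mirror micrograd's conventions:

```python
# Hypothetical alternative: one regular _backward method that switches on
# self._op. Not the actual micrograd code, just an illustration of the idea.
class Value:
    def __init__(self, data, _children=(), _op=''):
        self.data = data
        self.grad = 0.0
        self._prev = tuple(_children)
        self._op = _op

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), '+')

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other), '*')

    def _backward(self):
        # every op has to unpack its children from self._prev first
        if self._op == '+':
            a, b = self._prev
            a.grad += self.grad
            b.grad += self.grad
        elif self._op == '*':
            a, b = self._prev
            a.grad += b.data * self.grad
            b.grad += a.data * self.grad
        # ... one branch per supported operation
```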

@ondras, that is a very interesting question. IMHO, it is less about lambdas and more about closures. The alternative approach would need some tedious unpacking of the child nodes, while the closure implementation has the advantage of being more concise, since all the relevant variables (typically self/x, other/y, and out) are conveniently available for the gradient updates.
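For comparison, here is a minimal sketch of the closure style. It only approximates micrograd and may differ from the actual code in detail, but it shows the point: self, other, and out are captured directly by the inner function, so nothing has to be unpacked from _prev when the gradient is propagated.

```python
# Closure-style backward: each op builds its own _backward that closes over
# self, other, and out (a sketch approximating micrograd, not the exact code).
class Value:
    def __init__(self, data, _children=(), _op=''):
        self.data = data
        self.grad = 0.0
        self._prev = tuple(_children)
        self._op = _op
        self._backward = lambda: None  # leaf nodes have nothing to propagate

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other), '*')

        def _backward():
            # self, other, and out are all visible here via the closure
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward

        return out


# tiny usage example
x, y = Value(2.0), Value(3.0)
z = x * y
z.grad = 1.0
z._backward()
print(x.grad, y.grad)  # 3.0 2.0
```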

For the alternative implementation, please refer to my repo (WIP): https://github.com/steve-z-seattle/undergrad. It does not use lambdas/closures. Work in progress, though.

At this point, I do think the closure implementation is more elegant.

Hi @steve-z-seattle,

thanks for your opinion. I will have a look at your repo.