peterwittek / somoclu

Massively parallel self-organizing maps: accelerate training on multicore CPUs, GPUs, and clusters

Home Page: https://peterwittek.github.io/somoclu/

How does batch parallel training actually work?

Tenceto opened this issue

I was reading the Somoclu documentation, which states that in order to make the algorithm parallelizable, a batch training mode is used. The weight update equation is given on page 4 of the document; I reproduce it below as I understand it.
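
For reference, my reading is that this is the standard batch SOM formulation (my own notation, which may differ from the paper's):

$$
w_k \leftarrow \frac{\sum_{i=1}^{N} h_{c(i),k}\, x_i}{\sum_{i=1}^{N} h_{c(i),k}},
\qquad c(i) = \operatorname{arg\,min}_j \lVert x_i - w_j \rVert,
$$

where $c(i)$ is the best-matching unit of sample $x_i$ and $h$ is the neighborhood function over the map grid.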

I don't understand why that formulation converges. Shouldn't the update use the difference (x - w), as in the online rule, rather than x alone? What troubles me is that the previous values of the weights do not appear anywhere in the update rule.

Does that equation make sense? If it does, why?
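
To make the question concrete, here is a minimal NumPy sketch of one batch epoch as I understand the rule. This is my own illustration, not somoclu's actual implementation; the function name, the Gaussian neighborhood, and the array shapes are all assumptions on my part:

```python
import numpy as np

def batch_som_epoch(X, W, grid, sigma):
    """One epoch of the batch SOM rule (illustrative sketch, not somoclu's code).

    X:     (n_samples, dim) data
    W:     (n_nodes, dim) current codebook
    grid:  (n_nodes, 2) map coordinates of each node
    sigma: neighborhood radius
    """
    # The current weights DO enter here: the best-matching unit (BMU)
    # of each sample is computed against the present codebook W.
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    bmu = dists.argmin(axis=1)                        # (n_samples,)

    # Gaussian neighborhood of each sample's BMU over the map grid
    # (assumed kernel; the paper may use a different one).
    grid_d = np.linalg.norm(grid[bmu][:, None, :] - grid[None, :, :], axis=2)
    h = np.exp(-grid_d**2 / (2.0 * sigma**2))         # (n_samples, n_nodes)

    # The new weights are a neighborhood-weighted average of the data:
    # no (x - w) term, and no explicit old weights in the formula.
    num = h.T @ X                                     # (n_nodes, dim)
    den = h.sum(axis=0)[:, None] + 1e-12              # avoid divide-by-zero
    return num / den
```

So, restating my confusion: the returned codebook is just a weighted average of the $x_i$, and the old weights only influence the result through the BMU assignment. Where does the convergence behavior of the online rule, $w_j \leftarrow w_j + \alpha\, h_{c,j}\,(x - w_j)$, come from in this formulation?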