litian96 / TERM

Tilted Empirical Risk Minimization (ICLR '21)

A question about the federated learning experiment code

Wongcheukwai opened this issue · comments

commented

Hi,
Regarding line 68 of TERM/fair_flearn/flearn/trainers/tilting.py: should a 1/t * log be added in front of it? That seems to be how Algorithm 4 is written.

We estimate $e^{t\tilde{R}}$ throughout the code (which is equivalent to tracking $\tilde{R}$, but slightly more convenient to implement). If we added an additional 1/t * log at line 68 (say $v = \frac{1}{t}\log(X)$), then the normalizer in the weights would be $e^{tv} = X$. So we directly maintain the sequence of $X$ instead of $v$.
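In code, a minimal sketch of that bookkeeping (illustrative names, not the repo's exact implementation; `losses` stands for the per-sample or per-group losses of the current round):

```python
import numpy as np

def update_normalizer(estimates, losses, t):
    # Batch estimate of e^{t * R}: average of e^{t * f(x; theta)} over the round.
    new = np.mean(np.exp(t * losses))
    # Moving-average update of X = e^{t * R_tilde}; this is the quantity the code maintains.
    return 0.5 * estimates + 0.5 * new

def tilted_weights(losses, estimates, t):
    # Weights w_{t,x} = e^{t * f(x; theta)} / e^{t * R_tilde_t}; the normalizer is used as-is.
    return np.exp(t * losses) / estimates
```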

commented

Thank you for your reply.
I understand you are estimating $e^{t\tilde{R}}$ just like Algorithm 4. However, at line 68, when you calculate $\tilde{R}_t$, shouldn't you add 1/t * log in front of (estimates * 0.5 + new * 0.5), just like the third-to-last line of Algorithm 4?

Line 68 here (https://github.com/litian96/TERM/blob/master/fair_flearn/flearn/trainers/tilting.py#L68) corresponds to $e^{t\tilde{R}_t}$ in Algorithm 4 of the paper. The two bookkeeping choices are equivalent: (a) maintain $\tilde{R}_t$ itself, i.e. estimates = 1/t * log(e^{t * estimates} * 0.5 + new * 0.5), and use $e^{t \cdot \text{estimates}}$ as the denominator in the weights $w_{t,x} = \frac{e^{t f(x;\theta)}}{e^{t\tilde{R}_t}}$; or (b) maintain $e^{t\tilde{R}_t}$ directly, i.e. estimates = estimates * 0.5 + new * 0.5, and use estimates as the denominator. Line 68 is the latter. Does this answer your question?
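For example (made-up numbers, just to illustrate that (a) and (b) give the same weights):

```python
import numpy as np

t = 2.0
losses = np.array([0.3, 1.2, 0.7])          # current-round losses (made up)
new = np.mean(np.exp(t * losses))           # batch estimate of e^{t * R}
X_prev = 3.0                                # previous estimate of e^{t * R_tilde}

# (b) maintain e^{t * R_tilde} directly, as line 68 does
X = 0.5 * X_prev + 0.5 * new
w_b = np.exp(t * losses) / X

# (a) maintain R_tilde itself and exponentiate it when forming the weights
v = np.log(0.5 * X_prev + 0.5 * new) / t
w_a = np.exp(t * losses) / np.exp(t * v)

assert np.allclose(w_a, w_b)                # identical weights either way
```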

commented

Yes, thank you for your patience!