mir-group / flare

An open-source Python package for creating fast and accurate interatomic potentials.

Home Page: https://mir-group.github.io/flare

Setting `max_iterations` does not seem to matter.

ThePauliPrinciple opened this issue · comments

Describe the bug
Setting `max_iterations` does not seem to matter.

To Reproduce
Based on the Al example in https://colab.research.google.com/drive/1rZ-p3kN5CJbPJgD8HuQHSc7ecmwZYse6: inside the Colab environment, changing `max_iterations` to 1 and setting `train_hyps: [1,inf]` in the yaml file still produced more than one iteration of hyperparameter optimization.
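For reference, the two settings changed in the yaml file (a minimal hypothetical fragment; only the two keys mentioned above are shown, the rest of the colab example's config is omitted):

```yaml
max_iterations: 1
train_hyps: [1, inf]
```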

Expected behavior
I expected `max_iterations` to set the maximum number of optimization steps.

stdout
Done precomputing. Time: 6.437301635742188e-05
Hyperparameters:
[2.0e+00 9.6e-02 5.0e-02 1.0e-03]
Likelihood gradient:
[-5.97071350e+00 -4.08824401e-04 -5.58846922e+03 -5.25636069e+03]
Likelihood:
595.7040251262185


Hyperparameters:
[ 1.99921398  0.09599995 -0.68570348 -0.69098249]
Likelihood gradient:
[-4.32751809e+00  5.94168133e-04  4.13208452e+02  8.68227885e+00]
Likelihood:
-178.0522934903641


Hyperparameters:
[ 1.99977764  0.09599998 -0.15812563 -0.19475725]
Likelihood gradient:
[-5.76411146e+00  1.16199835e-04  1.77287920e+03  3.08049561e+01]
Likelihood:
242.6429863343157


Hyperparameters:
[ 1.9999397   0.096      -0.00643517 -0.05208138]
Likelihood gradient:
[-5.99611870e+00  3.46504098e-02  3.13670009e+04  1.15202984e+02]
Likelihood:
1108.2431560056393


Hyperparameters:
[ 1.99998488  0.096       0.03585022 -0.0123089 ]
Likelihood gradient:
[-5.98282890e+00 -7.85719340e-04 -7.74096843e+03  4.86836557e+02]
Likelihood:
673.7791101408982


Precomputing KnK for hyps optimization
Done precomputing. Time: 0.0073621273040771484
Hyperparameters:
[ 1.99998488  0.096       0.03585022 -0.0123089 ]
Likelihood gradient:
[-1.22840935e+01 -7.89390086e+01 -1.47486563e+04  9.68582513e+02]
Likelihood:
1327.7133986633962


Hyperparameters:
[ 1.99914548  0.09060589 -0.97196407  0.05387688]
Likelihood gradient:
[   7.22638247  -66.03626756  586.9299994  -218.4869108 ]
Likelihood:
-514.7678474034591


Hyperparameters:
[ 1.99975645  0.09453204 -0.23841811  0.00570302]
Likelihood gradient:
[    7.62906939   -48.05697882  2340.97867862 -1938.19633388]
Likelihood:
304.7341813958734


Hyperparameters:
[ 1.99993087  0.0956529  -0.02900133 -0.00804993]
Likelihood gradient:
[-1.49334194e+01  7.14446692e+01  1.80765470e+04  1.47480182e+03]
Likelihood:
1444.4010489711973


Hyperparameters:
[ 1.99996048  0.09584321  0.00655636 -0.01038509]
Likelihood gradient:
[-1.63325029e+01 -7.95490659e+01 -6.70436034e+04  1.15465848e+03]
Likelihood:
2177.3733097211352


Hyperparameters:
[ 1.99993713  0.09569311 -0.02148864 -0.00854331]
Likelihood gradient:
[-1.89296984e+01 -1.63432306e+01  2.40435451e+04  1.39549836e+03]
Likelihood:
1599.7744888700918


Hyperparameters:
[ 1.99995497e+00  9.58077877e-02 -6.17839203e-05 -9.95046260e-03]
Likelihood gradient:
[ 1.33440496e+02  5.81299936e+01 -1.11332207e+10  1.20571656e+03]
Likelihood:
-339767.33208436455


Hyperparameters:
[ 1.99996048  0.09584321  0.00655634 -0.01038509]
Likelihood gradient:
[-2.11143048e+01 -1.94378927e+01 -6.70466337e+04  1.15465865e+03]
Likelihood:
2177.3746298495626


Hyperparameters:
[ 1.99996048  0.09584321  0.00655632 -0.01038509]
Likelihood gradient:
[-1.99761999e+01 -4.20347674e+01 -6.70482565e+04  1.15465874e+03]
Likelihood:
2177.3759507802033


Hyperparameters:
[ 1.99995773  0.0958255   0.00324727 -0.01016778]
Likelihood gradient:
[-3.17010520e+01 -1.60943858e+01 -7.20899310e+04  1.17993430e+03]
Likelihood:
2429.530197320569


Hyperparameters:
[ 1.99995773  0.0958255   0.00324726 -0.01016778]
Likelihood gradient:
[-2.65929577e+01 -4.64543845e+01 -7.21216341e+04  1.17993439e+03]
Likelihood:
2429.53094884337


Hyperparameters:
[ 1.99995773  0.0958255   0.00324725 -0.01016778]
Likelihood gradient:
[-2.90717041e+01 -2.75059593e+01 -7.20910585e+04  1.17993447e+03]
Likelihood:
2429.531700564369


Hyperparameters:
[ 1.99995635e+00  9.58166431e-02  1.59273224e-03 -1.00591190e-02]
Likelihood gradient:
[-1.42704981e+01 -2.68083242e+01  3.59910229e+05  1.19286162e+03]
Likelihood:
2372.984493185718


Hyperparameters:
[ 1.99995712  0.09582158  0.0025159  -0.01011975]
Likelihood gradient:
[-2.17234731e+01 -4.39776479e+01 -2.41700772e+04  1.18563213e+03]
Likelihood:
2468.9546519791297


Hyperparameters:
[ 1.99995696  0.09582054  0.00232142 -0.01010697]
Likelihood gradient:
[-2.81764136e+01 -2.80867191e+00  6.13757072e+03  1.18715272e+03]
Likelihood:
2470.8998152690756


Precomputing KnK for hyps optimization
Done precomputing. Time: 0.006013393402099609
Hyperparameters:
[ 1.99995696  0.09582054  0.00232142 -0.01010697]
Likelihood gradient:
[-2.46709897e+01 -5.81064745e+01  1.32201816e+05  1.78080994e+03]
Likelihood:
3669.3402446768005


Hyperparameters:
[1.99976849 0.09537666 1.01222969 0.00349688]
Likelihood gradient:
[   18.4203174    -75.61405461  -850.84400392 -4616.27705323]
Likelihood:
-752.4722706853757


Hyperparameters:
[ 1.99989784  0.09568131  0.31910314 -0.0058398 ]
Likelihood gradient:
[   10.96247343  -111.15470367 -2662.21413006  2896.94034854]
Likelihood:
227.02418583611643


Hyperparameters:
[ 1.99993988  0.09578032  0.0938379  -0.00887421]
Likelihood gradient:
[  -16.75757053    26.41872001 -8861.70131013  1983.8392221 ]
Likelihood:
1249.1637393486324


Hyperparameters:
[ 1.99995278  0.0958107   0.024708   -0.00980542]
Likelihood gradient:
[-2.37286168e+01 -2.61760740e+01 -3.23998339e+04  1.82834067e+03]
Likelihood:
2338.8988635886562


Hyperparameters:
[ 1.99995616  0.09581867  0.0065738  -0.01004969]
Likelihood gradient:
[-1.99503211e+01 -1.98125045e+01 -9.91247205e+04  1.79029535e+03]
Likelihood:
3333.124250652385


Hyperparameters:
[ 1.99995681  0.09582019  0.00312916 -0.01009609]
Likelihood gradient:
[-3.28294780e+01  2.14910580e+01 -5.70259175e+04  1.78264945e+03]
Likelihood:
3681.503286358271

Hi @ThePauliPrinciple, `max_iterations` controls how many BFGS iterations are performed. However, within each BFGS iteration, multiple line-search steps may be taken, and each step evaluates the likelihood and/or its gradient. That is why the likelihood and its gradient are printed many more times than `max_iterations`.
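The distinction between iterations and function evaluations can be demonstrated with SciPy's BFGS optimizer (a common backend for this kind of hyperparameter fit; the quadratic objective below is a stand-in for illustration, not FLARE's actual likelihood):

```python
import numpy as np
from scipy.optimize import minimize

evals = {"n": 0}

def neg_likelihood(x):
    # Count every call so we can compare evaluations to iterations.
    evals["n"] += 1
    return np.sum((x - 3.0) ** 2)

# maxiter=1 caps the number of BFGS *iterations*, but the internal
# line search (plus finite-difference gradients, since no jac is
# supplied) still evaluates the objective several times per iteration.
res = minimize(neg_likelihood, x0=np.array([0.0, 0.0]),
               method="BFGS", options={"maxiter": 1})

print(res.nit)     # BFGS iterations: 1
print(evals["n"])  # total objective evaluations: more than res.nit
```

So a log showing many likelihood/gradient printouts is consistent with `max_iterations: 1`, as long as each printout corresponds to a line-search or gradient evaluation rather than a full BFGS step.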