CMA-ES / pycma

Python implementation of CMA-ES

Test output changes with Python installation

nikohansen opened this issue

I am running the following file, adapted from a test case,

import sys
print('Python ' + sys.version)

import numpy as np
print('numpy ' + np.version.version)
np.random.seed(1)
print('randn(seed=1) =', np.random.randn())

import cma
import cma.fitness_models as fm
from cma.fitness_transformations import Function as FFun  # adds evaluations attribute

# fm.Logger, Logger = fm.LoggerDummy, fm.Logger
surrogate = fm.SurrogatePopulation(cma.ff.elli)  # redundant: reassigned inside the loop below
for fitfun in [FFun(cma.ff.elli), FFun(cma.ff.sectorsphere)]:
    es = cma.CMAEvolutionStrategy(5 * [1], 2.2,
                   {'CMA_injections_threshold_keep_len': 1,
                    'ftarget': 1e-9, 'verbose': -9, 'seed': 3})
    surrogate = fm.SurrogatePopulation(fitfun)
    while not es.stop():
        X = es.ask()
        es.tell(X, surrogate(X))  # surrogate evaluation
        es.inject([surrogate.model.xopt])
        # es.disp(); es.logger.add()  # ineffective with verbose=-9
    print(fitfun.evaluations)  # was: (sig=2.2) 12 161, 18 131, 18 150, 18 82, 15 59, 15 87, 15 132, 18 83, 18 55, 18 68
    assert 'ftarget' in es.stop()

with two (somewhat) different Python installations and get:

(py310) 13:51:46 0 4 src% python fix-test-difference.py  # oldish installation
Python 3.10.2 | packaged by conda-forge | (main, Mar  8 2022, 16:02:23) [Clang 11.1.0 ]
numpy 1.22.3
randn(seed=1) = 1.6243453636632417
18
68

(py3102np223) 15:46:41 0 29 src% python fix-test-difference.py  # installed right before the test in Jan 2023
Python 3.10.2 | packaged by conda-forge | (main, Mar  8 2022, 16:02:23) [Clang 11.1.0 ]
numpy 1.22.3
randn(seed=1) = 1.6243453636632417
18
291

I get the latter result with any new Python installation, irrespective of the Python version (and I see 54 instead of 291 with seed 4). I get the same results when ensuring that scipy=1.7.3 is installed in both cases.
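
To locate where the two installations first diverge, one option is a minimal diagnostic sketch like the following (not part of the test; the 20-generation cap and the md5 digest are arbitrary choices of mine). Run it in both environments and diff the printed lines: the first generation with a differing digest shows where the numerical paths split, for example through last-bit differences in the linked BLAS/LAPACK.

import hashlib
import numpy as np
import cma

es = cma.CMAEvolutionStrategy(5 * [1], 2.2, {'seed': 3, 'verbose': -9})
gen = 0
while not es.stop() and gen < 20:
    X = es.ask()
    # checksum of the sampled population; diff these digests between the
    # two installations to find the first generation that differs
    print(gen, hashlib.md5(np.asarray(X).tobytes()).hexdigest()[:12])
    es.tell(X, [cma.ff.elli(x) for x in X])
    gen += 1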

  • Where does the difference come from?
  • How can we make the test robust to this change of installation? (One possible direction is sketched below.)
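
For the second question, one possible direction is sketched here, assuming we only need to catch regressions rather than exact reproducibility: assert that the target is reached within a tolerant evaluation budget instead of comparing exact counts. The bound of 500 is an assumption picked loosely from the runs above, not a validated threshold.

import cma
import cma.fitness_models as fm
from cma.fitness_transformations import Function as FFun

fitfun = FFun(cma.ff.elli)
surrogate = fm.SurrogatePopulation(fitfun)
es = cma.CMAEvolutionStrategy(5 * [1], 2.2,
               {'CMA_injections_threshold_keep_len': 1,
                'ftarget': 1e-9, 'verbose': -9, 'seed': 3})
while not es.stop():
    X = es.ask()
    es.tell(X, surrogate(X))  # surrogate evaluation as in the script above
    es.inject([surrogate.model.xopt])

assert 'ftarget' in es.stop()
# tolerant upper bound instead of an exact count; 500 is an assumed loose
# cap derived from the runs shown above (55 to 291 evaluations observed)
assert fitfun.evaluations < 500, fitfun.evaluations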