The Multi-Objective Self-Adaptive EA (MOSA-EA) (Lehre and Qin, 2022) was proposed to optimise single-objective pseudo-Boolean functions; it treats parameter control as a multi-objectivisation problem. The algorithm maximises fitness and mutation rate simultaneously, allowing individuals in "dense" fitness valleys and on "sparse" local optima to co-exist on a non-dominated Pareto front.
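The self-adaptation mechanism can be illustrated with a minimal Python sketch. This is not the repository's implementation, only an illustration of the idea: each individual carries its own mutation rate parameter χ, which is multiplied by the factor A with probability pinc and divided by A otherwise, never dropping below χmin; the individual is then mutated bitwise with rate χ/n. The function names are our own.

```python
import random

def self_adapt_chi(chi, A, p_inc, chi_min):
    """Self-adapt the mutation rate parameter: multiply by A with
    probability p_inc, otherwise divide by A, clamped at chi_min."""
    chi = chi * A if random.random() < p_inc else chi / A
    return max(chi, chi_min)

def mutate(bits, chi):
    """Standard bitwise mutation: flip each bit with probability chi/n."""
    n = len(bits)
    return [b ^ (random.random() < chi / n) for b in bits]
```

In MOSA-EA, fitness and χ are then compared jointly via non-dominated sorting, so high-mutation individuals survive alongside high-fitness ones.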
Please find more details in the publications cited below.
First install IOHprofiler via pip. IOHprofiler is a benchmarking platform for evaluating the performance of iterative optimisation heuristics.
pip install ioh
The arguments of the Python code:
-n is the problem size.
-f is the id of the fitness function (an int value from 1 to 25; here we use the PBO problem set in IOHprofiler, for information on the function ids please see IOHproblem).
-A is A in MOSA-EA (a float value greater than 1; we recommend setting it to 1.01).
-p is pinc in MOSA-EA (a float value between 0 and 1; we recommend setting it to 0.4).
-c is the minimal mutation rate parameter χmin in MOSA-EA (a float value greater than 0; we recommend setting it to 0.5/log(n) under noise).
-l is the population size (λ) in MOSA-EA (an int value greater than 1; we recommend setting it to 5000 log(n)).
-m is μ in (μ,λ) selection in MOSA-EA (an int value greater than 1 and less than λ; we recommend setting it to λ/8).
-e is the maximum number of evaluations.
-r is the number of runs.
-log turns on the logger to record optimisation progress via IOHprofiler (1 is on and 0 is off).
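The recommended settings above can be computed from the problem size n. A small sketch, assuming log denotes the natural logarithm (the function name below is ours, not part of the repository):

```python
import math

def recommended_settings(n):
    """Compute the recommended MOSA-EA parameters for problem size n:
    chi_min = 0.5/log(n), lambda = 5000 log(n), mu = lambda/8."""
    chi_min = 0.5 / math.log(n)
    lam = int(5000 * math.log(n))
    mu = lam // 8
    return chi_min, lam, mu
```

For example, n = 50 gives χmin ≈ 0.128, which is close to the -c 0.1 used in the run command below.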
To run it:
cd mosaea-python-version
python mosa-ea.py -n 50 -f 2 -A 1.01 -p 0.4 -c 0.1 -l 5000 -m 625 -e 100000000 -r 1 -log 0
If you turn the logger on, you can upload the data to IOHanalyzer to analyse the performance of MOSA-EA. The IOHanalyzer guide is available here.
To use a customised fitness function, please read here.
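As an illustration of what a customised pseudo-Boolean fitness function might look like, here is a plain Python example (how it is wired into mosa-ea.py or registered with IOHprofiler depends on the repository's guide and the installed ioh version, so we show only the function itself):

```python
def leading_ones(bits):
    """Example custom fitness: number of leading 1-bits (to be maximised),
    i.e. the classic LeadingOnes benchmark."""
    count = 0
    for b in bits:
        if b != 1:
            break
        count += 1
    return count
```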
To compile it:
cd mosaea-cpp-version
make
The arguments of the C++ code:
-n is the problem size.
-f is the id of the fitness function (1 is OneMax, and 2 is LeadingOnes).
-A is A in MOSA-EA (a float value greater than 1; we recommend setting it to 1.01).
-p is pinc in MOSA-EA (a float value between 0 and 1; we recommend setting it to 0.4).
-c is the minimal mutation rate parameter χmin in MOSA-EA (a float value greater than 0; we recommend setting it to 0.5/log(n)).
-l is the population size (λ) in MOSA-EA (an int value greater than 1; we recommend setting it to 5000 log(n)).
-m is μ in (μ,λ) selection in MOSA-EA (an int value greater than 1 and less than λ; we recommend setting it to λ/8).
-e is the maximum number of evaluations.
To run it:
./mosa-ea -n 50 -f 2 -A 1.01 -p 0.4 -c 0.1 -l 5000 -m 625 -e 100000000
which optimises LeadingOnes (n=50) within 100000000 evaluations using the recommended parameter settings.
You can also add a customised fitness function by adding a global function and giving it an id in the Evaluation class. Please follow the examples of OneMax and LeadingOnes in the code.
If you have any questions, comments or suggestions, please don't hesitate to contact us:
- Xiaoyu Qin, School of Computer Science, University of Birmingham.
- Per Kristian Lehre, School of Computer Science, University of Birmingham.
When using MOSA-EA or parts thereof, please kindly cite this work as:
@inproceedings{mosa-ea-theory,
title = {Self-adaptation to Multi-objectivisation: A Theoretical Study},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
publisher = {ACM},
author = {Lehre, Per Kristian and Qin, Xiaoyu},
year = {2022},
}
@inproceedings{mosa-ea-empirical,
title = {Self-adaptation to Multi-objectivisation: An Empirical Study},
booktitle = {Proceedings of the Parallel Problem Solving from Nature},
publisher = {Springer},
author = {Qin, Xiaoyu and Lehre, Per Kristian},
year = {2022},
}
MOSA-EA is released under the MIT license. See LICENSE for additional details.