This is the main repository of the optimizationBenchmarking.org tool suite. The optimizationBenchmarking.org tool suite supports researchers in evaluating and comparing the performance of (anytime) optimization algorithms, such as [Local Search](http://en.wikipedia.org/wiki/Local_search_%28optimization%29), Evolutionary Algorithms, Swarm Intelligence methods, Branch and Bound, and virtually all other metaheuristics.
Optimization algorithms are algorithms which can find (approximate) solutions for computationally hard (e.g., NP-hard) problems, such as the Traveling Salesman Problem, the Maximum Satisfiability Problem, or the Bin Packing Problem. For such problems, no solver can guarantee to find the globally best possible solution within feasible time. In order to solve them in practice, solution quality has to be traded off against runtime.
Anytime optimization algorithms make this trade-off by starting with a more or less random (and hence usually bad) approximate solution and improving it step by step over their course. Comparing two such algorithms is not trivial, since it requires comparing their behavior over runtime rather than just their final results.
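To make the idea concrete, here is a minimal, purely illustrative sketch (not part of the tool suite) of an anytime optimizer: a hill climber maximizing the number of 1-bits in a bit string (the toy "OneMax" problem). It records a trace of (step, best quality) pairs, which is exactly the kind of quality-over-runtime data such benchmarking compares. The function name and problem are hypothetical choices for illustration.

```python
import random


def anytime_hill_climber(n_bits=32, max_steps=200, seed=1):
    """Illustrative anytime optimizer (hypothetical example, not from
    the optimizationBenchmarking.org code base): maximize the number
    of 1-bits by flipping one random bit per step.  Returns the trace
    of (step, best_quality) improvements collected over its run."""
    rng = random.Random(seed)
    # Start from a random (and hence usually bad) approximate solution.
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best = sum(x)
    trace = [(0, best)]  # quality-over-runtime log, one entry per improvement
    for step in range(1, max_steps + 1):
        i = rng.randrange(n_bits)
        x[i] ^= 1            # tentatively flip one bit
        quality = sum(x)
        if quality > best:   # keep the change only if it improves the solution
            best = quality
            trace.append((step, best))
        else:
            x[i] ^= 1        # revert the flip
    return trace
```

The trace is monotonically improving: interrupted at any step, the algorithm can return its best solution so far. Comparing two anytime algorithms means comparing such traces, not only their endpoints.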
In this project, we try to provide a set of tools that make this evaluation process easier.