Tuura

A platform for collaborative evaluation of algorithms.

Tuura (Kyrgyz: ту́ура, meaning correct) is a set of open-source software tools for automating the collaborative testing of the correctness and performance of algorithms. Testing takes the form of Tuura tournaments (or tuurnaments): algorithmic competitions whose participants compete against each other in problem-solving and testing skills.

This project has two primary goals: to provide a platform for organising online algorithmic contests, and to provide a way to collaboratively test and compare different solutions to an algorithmic problem.

Rules

The rules are not yet fully defined and may change; please see the project forum for current discussions. The following summarises the main idea behind the project.

Writing a problem

Any participant can write a problem for a tuurnament. A problem description consists of the following (a minimal sketch follows the list):

  • a problem statement specifying the algorithmic task to be solved;
  • an optional set of tests: input files to be processed by a solution;
  • a test validator: a program that checks the correctness of a test;
  • a reference solution to the problem, used to verify the correctness of the output files generated by all other solutions.
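To make the structure concrete, here is a minimal sketch of how a problem description might be represented. All names here (Problem, validate, reference, and so on) are hypothetical illustrations, not Tuura's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Problem:
    """Hypothetical sketch of a Tuura problem description."""
    statement: str                                   # the algorithmic task to solve
    time_limit: float                                # per-test limit (seconds)
    tests: List[str] = field(default_factory=list)   # optional initial input files
    # The validator decides whether a submitted test (input file) is well-formed.
    validate: Callable[[str], bool] = lambda test: True
    # The reference solution maps an input file to the expected output.
    reference: Callable[[str], str] = lambda test: ""
```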

Solving a problem

Any participant can write a solution to a problem in any supported programming language. A solution is accepted as correct (tuura!) if it passes all currently available tests; if it fails even a single test, it is rejected (tuura emes, meaning incorrect). A solution passes a test if it produces a correct output file within the time limit specified in the problem statement.
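A minimal sketch of this acceptance check, assuming a solution is an external program that reads a test from stdin and writes its answer to stdout; the judge function and the plain string comparison of outputs are illustrative assumptions, not Tuura's actual judge.

```python
import subprocess
import time

def judge(solution_cmd, tests, reference, time_limit):
    """Return (accepted, total_runtime); reject on the first failed test.

    solution_cmd -- command line of the contestant's program (hypothetical)
    tests        -- list of test inputs (strings)
    reference    -- function producing the correct output for a test
    time_limit   -- per-test limit in seconds, from the problem statement
    """
    total = 0.0
    for test in tests:
        start = time.monotonic()
        try:
            result = subprocess.run(solution_cmd, input=test,
                                    capture_output=True, text=True,
                                    timeout=time_limit)
        except subprocess.TimeoutExpired:
            return False, total            # tuura emes: time limit exceeded
        total += time.monotonic() - start
        if result.stdout != reference(test):
            return False, total            # tuura emes: wrong output
    return True, total                     # tuura: all tests passed
```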

Accepted solutions are ranked according to the total runtime spent on solving all tests.
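Ranking is then a plain ascending sort over accumulated runtimes; a tiny illustration with made-up data:

```python
# Hypothetical leaderboard: (participant, total_runtime) pairs for accepted
# solutions only; a lower total runtime ranks higher.
accepted = [("alice", 3.42), ("bob", 1.97), ("carol", 2.55)]
for rank, (who, runtime) in enumerate(sorted(accepted, key=lambda e: e[1]), 1):
    print(f"{rank}. {who}: {runtime:.2f}s")
```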

Testing solutions

Any participant can add a test to the shared pool of tests available for a problem. The test is checked by the validator provided by the problem's author. All previously accepted solutions are then run on the new test and either remain accepted (with their total runtimes increased accordingly) or become rejected.
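A sketch of that update step, with the time-limit and runtime bookkeeping elided for brevity; modelling solutions as plain functions is an assumption for illustration only.

```python
def add_test(validate, reference, pool, new_test, solutions, status):
    """Validate a submitted test and re-run all currently accepted solutions.

    validate  -- the problem author's test validator
    reference -- the reference solution (input -> correct output)
    pool      -- shared list of tests (mutated when the new test is valid)
    solutions -- dict: participant -> solve function (input -> output)
    status    -- dict: participant -> True if currently accepted
    Returns the number of solutions the new test fails.
    """
    if not validate(new_test):
        return 0                         # rejected by the validator, not added
    pool.append(new_test)
    failed = 0
    for who, solve in solutions.items():
        if not status[who]:
            continue                     # already-rejected solutions stay out
        if solve(new_test) != reference(new_test):
            status[who] = False          # accepted -> rejected by the new test
            failed += 1
        # Otherwise the solution stays accepted and, in the real system, its
        # total runtime would grow by the time spent on the new test.
    return failed
```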

Tests are ranked according to the number of solutions they fail.
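This mirrors the solution ranking, but sorts in descending order; again with made-up data:

```python
# Hypothetical tally: test name -> number of solutions it has failed.
fails = {"edge-cases.txt": 7, "big-random.txt": 3, "sample.txt": 0}
# Tests that break more solutions rank higher.
for rank, (test, n) in enumerate(sorted(fails.items(), key=lambda kv: -kv[1]), 1):
    print(f"{rank}. {test}: fails {n} solution(s)")
```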
