MisterZurg / ITMO_Evolutionary_Computing

🧬 Labs from ITMO; Dis - EC


Note

Here are our labs from the third semester at ITMO.

Discipline

Evolutionary Computing

Instructors

Mikhail Melnik, Associate Professor of Digital Transformation

Labs

Lab 1 Introduction to evolutionary computation

The goal is to edit the Genetic Algorithm setup so that the algorithm can solve a classic optimization function (the shifted, reversed Rastrigin function). You need to set the dimension to 100; don't forget this, as it increases the difficulty of the function. The maximum value you can achieve is 10.0, but aim for results with quality > 9.5.
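A minimal sketch of the kind of GA setup the assignment asks for, in plain NumPy. The shift value, the scaling that puts the maximum at 10.0, and all hyperparameters below are assumptions for illustration; the actual function definition and setup come from the course template.

```python
import numpy as np

rng = np.random.default_rng(42)

DIM = 100    # required dimension
SHIFT = 1.0  # hypothetical shift; the course template defines the real one

def fitness(x):
    """Shifted, reversed Rastrigin, scaled (assumed) so the maximum is 10.0."""
    z = x - SHIFT
    rastrigin = 10.0 * DIM + np.sum(z**2 - 10.0 * np.cos(2.0 * np.pi * z))
    return 10.0 - rastrigin / DIM  # reversed: higher is better, 10.0 at z = 0

def run_ga(pop_size=200, generations=300, sigma=0.3, elite=2):
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, DIM))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)[::-1]]      # sort best-first
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(DIM) < 0.5         # uniform crossover
            child = np.where(mask, a, b)
            # Gaussian mutation on ~5% of the genes
            child = child + rng.normal(0, sigma, DIM) * (rng.random(DIM) < 0.05)
            children.append(np.clip(child, -5.12, 5.12))
        pop = np.vstack([pop[:elite], np.array(children)])  # keep the elite
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best, score = run_ga()
print(f"best fitness: {score:.3f}")
```

With dimension 100 a basic setup like this will plateau well below 9.5; the assignment is precisely about tuning selection, mutation strength, and population size until it doesn't.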

Lab 2 Designing an evolutionary algorithm for the queen placement problem

Implement a solver for the Queens Puzzle using an evolutionary algorithm, for example a genetic algorithm. The goal is to design an algorithm that solves the queens puzzle on at least an 8x8 board, but you can try bigger sizes. As a pattern you can use the same script as for the 1st assignment, modifying everything required to work with a discrete problem. It is important to design how you will encode a solution (how you represent a solution in code) and to design the corresponding mutation/crossover operators. Print your best found solution to the console at the end of your script.
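One common encoding, shown here as a sketch: the genome is a permutation of row indices (gene i = row of the queen in column i), which rules out row and column conflicts by construction, so fitness only has to count diagonal attacks. The operators (ordered crossover, swap mutation) and all parameters below are one reasonable choice, not the required one.

```python
import random

N = 8  # board size; larger boards work too

def conflicts(perm):
    """Count attacking queen pairs. Only diagonals can conflict, because a
    permutation genome already guarantees unique rows and columns."""
    return sum(1 for i in range(N) for j in range(i + 1, N)
               if abs(perm[i] - perm[j]) == j - i)

def ordered_crossover(a, b):
    """OX: copy a random slice from parent a, fill the rest in parent b's
    order, so the child is still a valid permutation."""
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def swap_mutation(perm, rate=0.3):
    """Swap two genes; swapping preserves the permutation property."""
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def solve(pop_size=100, max_gens=5000):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(max_gens):
        pop.sort(key=conflicts)
        if conflicts(pop[0]) == 0:
            return pop[0]
        parents = pop[: pop_size // 2]
        pop = pop[:2] + [  # elitism: keep the two best
            swap_mutation(ordered_crossover(*random.sample(parents, 2)))
            for _ in range(pop_size - 2)
        ]
    return pop[0]

solution = solve()
print(solution)       # row index of the queen in each column
for row in range(N):  # print the best found board
    print("".join("Q" if solution[c] == row else "." for c in range(N)))
```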

Lab 3 CartPole Left Right

You need to change the classic CartPole environment in such a way that it has a goal (0 or 1) and the Cart balances the Pole on the corresponding side of the platform (0 - left, 1 - right). The goal implementation is already provided for you. The main task is to implement a reward function that helps the agent do this.

  1. You can find a template project with scripts for:
    • CartPoleEnv with implemented goal (left or right)
    • Train script with RLLib
    • Script for replaying rllib checkpoint
  2. You need to implement the reward function in the CartPoleEnv script in such a way that your Cart balances the Pole on a specific side of the platform depending on the goal. For example, if the goal is 0, the Pole must be balanced on the left side. There are no hard restrictions on how far it should be, but farther is better (in the range -2.4 to 2.4).
  3. You need to train it with RLLib (template is given) and send me a simple report with plots from TensorBoard (generated by RLLib) and a video with your result. (Demo is attached here.)
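One way the reward shaping in step 2 could look, as a standalone sketch. The function name, signature, and the alive-bonus-plus-progress split are all assumptions here; the real CartPoleEnv template dictates where the reward is actually computed and what state it has access to.

```python
def goal_reward(x, goal, x_threshold=2.4, alive_bonus=1.0):
    """Goal-conditioned reward sketch for the modified CartPole.

    x    : cart position, in [-x_threshold, x_threshold]
    goal : 0 = balance the Pole on the left side, 1 = on the right side

    Returns the usual per-step alive bonus plus a shaping term in [0, 1]
    that grows as the cart moves farther toward the requested side, so
    "farther is better" while the pole stays up.
    """
    side = -1.0 if goal == 0 else 1.0            # desired sign of x
    progress = max(0.0, side * x) / x_threshold  # 0 at centre, 1 at the edge
    return alive_bonus + progress
```

Keeping the alive bonus dominant matters: if the positional term outweighs survival, the agent can learn to crash the cart into the correct side instead of balancing there.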



Languages

Jupyter Notebook 100.0%