
HyperSound

Source code for the paper "Hypernetworks build Implicit Neural Representations of Sounds" (arXiv).

Setup

Set up the conda environment:

conda env create -f environment.yml

Populate the .env file with settings based on .env.example, e.g.:

DATA_DIR=~/datasets
RESULTS_DIR=~/results
WANDB_ENTITY=hypersound
WANDB_PROJECT=hypersound
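The .env format above is the usual KEY=VALUE convention. The repository's actual loader is not shown in this README; the sketch below is a minimal, hypothetical parser for illustration (the function name is not part of the repo), with home-directory expansion for paths like ~/datasets:

```python
import os
from pathlib import Path

def parse_env_file(path):
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Expand '~' so DATA_DIR=~/datasets resolves to an absolute path.
        settings[key.strip()] = os.path.expanduser(value.strip())
    return settings
```

In practice a library such as python-dotenv handles this, but the sketch shows what the file is expected to contain.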

Make sure that the installed pytorch-yard version matches the one required in train.py. If it does not, correct the package version with something like:

pip install --force-reinstall pytorch-yard==2022.9.1
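The version check can also be scripted with the standard library. A minimal sketch (the helper names are hypothetical, and the pinned version is only the example from the command above; check train.py for the authoritative value):

```python
from importlib.metadata import PackageNotFoundError, version

REQUIRED = "2022.9.1"  # example pin from this README; train.py is authoritative

def installed_version(package):
    """Return the installed version of `package`, or None if it is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

def needs_reinstall(package, required):
    """True when the package is missing or its version differs from `required`."""
    return installed_version(package) != required

if needs_reinstall("pytorch-yard", REQUIRED):
    print(f"run: pip install --force-reinstall pytorch-yard=={REQUIRED}")
```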

Experiments

Default experiment:

python train.py

Custom settings:

python train.py cfg.learning_rate=0.01 cfg.pl.max_epochs=100

Isolated training of a target network on a single recording:

python train_inr.py
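A target network here is an implicit neural representation: a small network mapping a time coordinate to an amplitude, fitted to one recording. The repository's actual architecture is not shown in this README; the sketch below is an illustrative SIREN-style forward pass in NumPy (all names, sizes, and the omega value are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_init(in_dim, out_dim, omega=30.0, first=False):
    """SIREN-style uniform init; the first layer uses a wider bound."""
    bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / omega
    W = rng.uniform(-bound, bound, size=(in_dim, out_dim))
    b = rng.uniform(-bound, bound, size=out_dim)
    return W, b

def inr_forward(t, layers, omega=30.0):
    """Map time coordinates t of shape (N, 1) to amplitudes of shape (N, 1)."""
    x = t
    for W, b in layers[:-1]:
        x = np.sin(omega * (x @ W + b))  # sine activations on hidden layers
    W, b = layers[-1]
    return x @ W + b  # linear output layer

# Tiny INR: 1 -> 64 -> 64 -> 1, evaluated at 16000 normalized sample positions.
layers = [siren_init(1, 64, first=True), siren_init(64, 64), siren_init(64, 1)]
t = np.linspace(-1.0, 1.0, 16000).reshape(-1, 1)
audio = inr_forward(t, layers)
```

Training such a network on a single recording means regressing `audio` onto the recording's waveform; the hypernetwork in the paper instead predicts the target network's weights directly.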
