ltgoslo / factorizer

How to train

jcuenod opened this issue · comments

Hi, thanks for sharing the code for both de/encoding and training. Could you put up a readme for training on new data? I would like to try this out on Greek and Hebrew text.

Hi, thanks for your interest! I'm not sure if I'll have time to write a comprehensive training readme, but I'm happy to help you with training on these new languages! Please let me know here or at davisamu@ifi.uio.no if you run into any issues.

The first thing you will need is a word-frequency list for each language. The Dataset class expects a tab-separated file with words sorted by frequency (f"{word}\t{frequency}"). Specifically, you should create these three files (one way to produce them is sketched below the list):

  • f"data/{language}_train_word_freq.tsv" for training
  • f"data/{language}_valid_word_freq.tsv" with unseen words for validation
  • f"data/{language}_frequent_word_freq.tsv" with the most common (seen) words for validation

Evaluation of these models is not easy (without training an expensive language model), but the two validation files are at least somewhat useful for sanity checking the training.

Thanks! I'll give it a go when I have a chance, and email you if I get stuck :)

Hi @davda54, it appears the following imports in vq-vae/train.py refer to modules that are missing from the repository:

from lazy_adam import LazyAdamW
from random_sampler import WeightedRandomSampler
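
If those modules aren't meant to ship with the repo, I assume something like the following could stand in (plain AdamW instead of a lazy variant, and PyTorch's built-in weighted sampler); I haven't verified that they match the original interfaces:

```python
# Hedged stand-ins, assuming train.py only needs an AdamW-style optimizer
# and a sampler with PyTorch's WeightedRandomSampler interface.
from torch.optim import AdamW as LazyAdamW          # dense AdamW in place of a lazy variant
from torch.utils.data import WeightedRandomSampler  # PyTorch's built-in weighted sampler
```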

Update: I was able to run train.py and get a model. How can we now convert this model to the .dawg file used in the example code? @davda54

@davda54, a reminder here a few months later. Could you please help us convert the trained model into a .dawg file?

Just saw this comment; did you manage to make it run with the build.py file?