CUNY-CL / yoyodyne

Small-vocabulary sequence-to-sequence generation with optional feature conditioning


Generalization of `expert`, `teacher_forcing`, and `monotonicity` across model architectures

bonham79 opened this issue · comments

Something I've been thinking about with the expansion of the library: a decent amount of the work we've been doing involves applying inductive biases and teacher-guided training to particular model architectures. Currently we have:

  • Teacher-student forcing: LSTMs and transformers
  • Expert curricular training: edit-action transducer
  • Monotonicity: hard attention LSTM
  • Hard alignment: also the hard attention LSTM

One thing I would like to do in the next overhaul is modularize these beyond their respective models (like we're trying to do with #77 for teacher forcing) so that they can be 'dropped in' wherever; see the sketch after this list. This would allow 'fun' combinations such as:

  • Feature-invariant transformer with monotonic assumptions and hard alignment
  • Hard attention transducer using SED alignments as a curricular guide.
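
Without committing to an API, here is a minimal sketch of what 'drop-in' could look like, assuming a mixin-style design over plain PyTorch. Every name here (`TeacherForcingMixin`, `MonotonicityMixin`, `teacher_forcing_ratio`, `MonotonicTransformer`) is hypothetical, not current yoyodyne API:

```python
import torch
from torch import nn


class TeacherForcingMixin:
    """Hypothetical drop-in teacher forcing.

    At each decoding step, feeds the gold symbol with probability
    `teacher_forcing_ratio` and the model's own prediction otherwise
    (scheduled-sampling-style training).
    """

    teacher_forcing_ratio: float = 1.0

    def next_decoder_input(
        self, gold: torch.Tensor, predicted: torch.Tensor
    ) -> torch.Tensor:
        if torch.rand(()).item() < self.teacher_forcing_ratio:
            return gold
        return predicted


class MonotonicityMixin:
    """Hypothetical drop-in monotonicity constraint.

    Masks attention logits so the decoder cannot attend to source
    positions left of the previously attended index.
    """

    def monotonic_mask(
        self, logits: torch.Tensor, prev_index: int
    ) -> torch.Tensor:
        mask = torch.full_like(logits, float("-inf"))
        mask[..., prev_index:] = 0.0
        return logits + mask


class MonotonicTransformer(TeacherForcingMixin, MonotonicityMixin, nn.Module):
    """A 'fun' combination: a transformer that inherits both behaviors
    without either one being baked into the base architecture."""
```

The point is that the base architecture never has to know which biases are stacked on top of it; a scheduler can decay `teacher_forcing_ratio` or swap masks without touching model code.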

A lot of these things won't necessarily click, but I believe adding this modularity layer would make curriculum learning and exploration scheduling, which aren't easy to implement in other libraries, straightforward to use here, expanding the library's utility.
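
Since training runs through PyTorch Lightning, the scheduling half of this could live in an ordinary callback rather than inside any one model. A sketch, again with `TeacherForcingDecay` and the `teacher_forcing_ratio` attribute as assumptions rather than existing API:

```python
import pytorch_lightning as pl


class TeacherForcingDecay(pl.Callback):
    """Hypothetical curriculum callback: linearly decays the teacher
    forcing ratio over training, independently of model class."""

    def __init__(self, total_epochs: int):
        self.total_epochs = total_epochs

    def on_train_epoch_start(self, trainer, pl_module) -> None:
        # Assumes the model exposes `teacher_forcing_ratio`, e.g. via
        # the mixin sketched above; not a real yoyodyne attribute.
        frac = trainer.current_epoch / max(1, self.total_epochs)
        pl_module.teacher_forcing_ratio = max(0.0, 1.0 - frac)
```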

(This is a down-the-road thought. Post-beta.)


Without thinking through how these combinations would work too much, this sounds exciting and like a good idea! I am on board.

Yeah that sounds like a Johns Hopkins PhD dissertation ;)

Am I missing a reference for the JHU?

No, it just used to be the home of this sort of thing.