satwikkottur / VisualWord2Vec

Learning visually grounded word embeddings using Abstract scenes

Home Page: http://satwikkottur.github.io/VisualWord2Vec/

Choosing C for hybrid model

satwikkottur opened this issue · comments

Hi @ramakrishnavedantam928,

Looking at these lines in the original code, the choice of C is made based on validation. However, I get the following validation mean accuracies, which make it unclear how C was actually chosen.

Val (visual+textual) 0.001000 : 0.686750
Val (visual+textual) 0.010000 : 0.687300
Val (visual+textual) 0.100000 : 0.687288
Val (visual+textual) 1.000000 : 0.689349
Val (visual+textual) 10.000000 : 0.700442
Val (visual+textual) 100.000000 : 0.729081
Val (visual+textual) 1000.000000 : 0.745161
Val (visual+textual) 10000.000000 : 0.747387
Val (visual+textual) 100000.000000 : 0.747454
Val (visual+textual) 1000000.000000 : 0.746684

Just for fun, the corresponding test accuracies (same C values, in the same order):
Test (visual+textual) : 0.680495
Test (visual+textual) : 0.680593
Test (visual+textual) : 0.680665
Test (visual+textual) : 0.682826
Test (visual+textual) : 0.694478
Test (visual+textual) : 0.720002
Test (visual+textual) : 0.731483
Test (visual+textual) : 0.732650
Test (visual+textual) : 0.732648
Test (visual+textual) : 0.732845
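For reference, here is a minimal sketch of the kind of C sweep described above: train a linear classifier on the combined visual+textual features for each C, keep the C with the highest validation mean accuracy, and only then look at test. This is an assumption-laden illustration, not the repo's code; the scikit-learn LinearSVC, the `choose_c` helper, and the feature arrays are all hypothetical stand-ins.

```python
# Sketch of validation-based selection of C (hypothetical; the repo uses its
# own classifier and data pipeline).
import numpy as np
from sklearn.svm import LinearSVC

def choose_c(train_x, train_y, val_x, val_y,
             c_grid=(1e-3, 1e-2, 1e-1, 1, 10, 100, 1e3, 1e4, 1e5, 1e6)):
    """Return (best_c, val_scores), where best_c maximizes validation accuracy."""
    val_scores = {}
    for c in c_grid:
        clf = LinearSVC(C=c)
        clf.fit(train_x, train_y)
        val_scores[c] = clf.score(val_x, val_y)  # mean accuracy on the validation split
        print("Val (visual+textual) %f : %f" % (c, val_scores[c]))
    best_c = max(val_scores, key=val_scores.get)
    return best_c, val_scores
```

With the validation numbers above, such a rule would pick C = 100000 (0.747454), but everything from C = 1000 upward differs only in the third decimal place, which is exactly why the choice looks inconclusive.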