google-research / robotics_transformer

Need Universal Sentence Encoder model for natural language embedding

destroy314 opened this issue · comments

The policy takes a 512-d natural_language_embedding as input. Can I just load the Universal Sentence Encoder from TF Hub (https://tfhub.dev/google/universal-sentence-encoder/4) and embed my sentence, or could you please share the model checkpoint you used?

Hi, did you find any workaround for this?

I've confirmed that the USE encoder on TF Hub (https://tfhub.dev/google/universal-sentence-encoder/4) does not generate the same natural_language_embedding as in the datasets used (https://docs.google.com/spreadsheets/d/1rPBD77tk60AEIGZrGSODwyyzs5FgCU9Uz3h-3_t2A9g/edit#gid=0). Can anyone share how to generate these embeddings for use with this project? Thank you!

After checking the RT-1-X example, I found that it uses the large USE encoder (https://tfhub.dev/google/universal-sentence-encoder-large/5), which generates the same embeddings as those in the RT-1 dataset.
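For anyone wanting to verify which encoder matches, one way is to embed an instruction and compare it against the stored natural_language_embedding with cosine similarity: if the same encoder produced both, the similarity should be ~1.0. Below is a minimal numpy sketch of that check; the tensorflow_hub usage in the comments follows the large/5 model mentioned above, but the exact instruction string and variable names are illustrative assumptions, not taken from the repo.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# To compute a candidate embedding (assuming tensorflow_hub is installed):
#   import tensorflow_hub as hub
#   embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
#   vec = embed(["pick up the block"]).numpy()[0]   # 512-d vector
# Then compare against the natural_language_embedding stored in the dataset:
#   cosine_similarity(vec, stored_embedding)  # ~1.0 if the encoders match
```

This avoids relying on exact float equality, which can differ slightly across TF versions and hardware even for the same model.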