Out-of-the-box multilingual sentence embeddings.
laserembeddings is a pip-packaged, production-ready port of Facebook Research's LASER (Language-Agnostic SEntence Representations) to compute multilingual sentence embeddings.
✨ Version 1.0.1 is here! What's new?
- The encoder was fixed to remove an innocuous warning message that would sometimes appear when using PyTorch 1.4
- The Japanese extra is now disabled on Windows (sorry) to prevent installation issues and computation failures in other languages
LASER is a collection of scripts and models created by Facebook Research to compute multilingual sentence embeddings for zero-shot cross-lingual transfer.
What does that mean? LASER is able to transform sentences into language-independent vectors. Similar sentences get mapped to close vectors (in terms of cosine distance), regardless of the input language.
That is great, especially if you don't have training sets for the language(s) you want to process: you can build a classifier on top of LASER embeddings, train it on whatever language(s) you have in your training data, and let it classify texts in any language.
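To make this concrete, here is a minimal sketch of that zero-shot workflow. It assumes scikit-learn is installed; the training sentences, labels, and the French test sentence are made up for the example:

```python
from laserembeddings import Laser
from sklearn.linear_model import LogisticRegression

laser = Laser()

# Hypothetical English-only training data
train_texts = ['I really enjoyed this movie.', 'What a waste of time.']
train_labels = ['positive', 'negative']

# Train a classifier on English embeddings...
X_train = laser.embed_sentences(train_texts, lang='en')
clf = LogisticRegression().fit(X_train, train_labels)

# ...then classify a French sentence, with zero French training data
X_test = laser.embed_sentences(["J'ai adoré ce film."], lang='fr')
print(clf.predict(X_test))  # expected: ['positive']
```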
The aim of the package is to make LASER as easy to use and easy to deploy as possible: zero-config, production-ready, and installable in just two commands.
For detailed information, have a look at the amazing LASER repository, read its presentation article and its research paper.
You'll need Python 3.6 or higher.
```
pip install laserembeddings
```
To install laserembeddings with extra dependencies:
```
# if you need Chinese support:
pip install laserembeddings[zh]

# if you need Japanese support (not available on Windows):
pip install laserembeddings[ja]

# or both:
pip install laserembeddings[zh,ja]
```
```
python -m laserembeddings download-models
```
This will download the models to the default data directory, next to the source code of the package. Use `python -m laserembeddings download-models path/to/model/directory` to download the models to a specific location.
```python
from laserembeddings import Laser

laser = Laser()

# if all sentences are in the same language:

embeddings = laser.embed_sentences(
    ['let your neural network be polyglot',
     'use multilingual embeddings!'],
    lang='en')  # lang is only used for tokenization

# embeddings is a N*1024 (N = number of sentences) NumPy array
```
If the sentences are not in the same language, you can pass a list of language codes:
```python
embeddings = laser.embed_sentences(
    ['I love pasta.',
     "J'adore les pâtes.",
     'Ich liebe Pasta.'],
    lang=['en', 'fr', 'de'])
```
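Since translations should land close to each other in the embedding space, the cross-lingual property is easy to sanity-check with plain NumPy. This sketch reuses the embeddings array from the snippet above; the exact values depend on the model, but they should be close to 1:

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine similarity between two 1-D vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings[0], embeddings[1]))  # en vs. fr
print(cosine_similarity(embeddings[0], embeddings[2]))  # en vs. de
```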
If you downloaded the models into a specific directory:
```python
from laserembeddings import Laser

path_to_bpe_codes = ...
path_to_bpe_vocab = ...
path_to_encoder = ...

laser = Laser(path_to_bpe_codes, path_to_bpe_vocab, path_to_encoder)

# you can also supply file objects instead of file paths
```
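For example, assuming the models were downloaded with `python -m laserembeddings download-models ./models`, the constructor call could look like this (the filenames below are the ones used by the 93-language LASER model; double-check them against your model directory):

```python
from laserembeddings import Laser

# Assumed filenames; verify against the contents of ./models
laser = Laser(
    './models/93langs.fcodes',
    './models/93langs.fvocab',
    './models/bilstm.93langs.2018-12-26.pt',
)
```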
If you want to pull the models from S3:
```python
from io import BytesIO, StringIO
from laserembeddings import Laser
import boto3

s3 = boto3.resource('s3')
MODELS_BUCKET = ...

f_bpe_codes = StringIO(s3.Object(MODELS_BUCKET, 'path_to_bpe_codes.fcodes').get()['Body'].read().decode('utf-8'))
f_bpe_vocab = StringIO(s3.Object(MODELS_BUCKET, 'path_to_bpe_vocabulary.fvocab').get()['Body'].read().decode('utf-8'))
f_encoder = BytesIO(s3.Object(MODELS_BUCKET, 'path_to_encoder.pt').get()['Body'].read())

laser = Laser(f_bpe_codes, f_bpe_vocab, f_encoder)
```
Some dependencies of the original project have been replaced with pure-Python dependencies to make this package easy to install and deploy.
Here's a summary of the differences:
| Part of the pipeline | LASER dependency (original project) | laserembeddings dependency (this package) | Reason |
|---|---|---|---|
| Normalization / tokenization | Moses | Sacremoses | Moses is implemented in Perl |
| BPE encoding | fastBPE | subword-nmt | fastBPE cannot be installed via pip and requires compiling C++ code |
| Japanese segmentation (optional) | MeCab / JapaneseTokenizer | mecab-python3 | mecab-python3 comes with wheels for major platforms (no compilation needed) |
Are the embeddings the same as the ones generated by the original LASER?

For most languages, in most cases, yes.

Some slight (and not-so-slight) differences exist for some languages, due to differences in the tokenizer implementations.
An exhaustive comparison of the embeddings generated with LASER and laserembeddings is automatically generated and will be updated for each new release.
How can I train the encoder?
You can't. LASER models are pre-trained and do not need to be fine-tuned. The embeddings are generic and perform well without fine-tuning. See facebookresearch/LASER#3 (comment).
Thanks a lot to the creators of LASER for open-sourcing the code and releasing the pre-trained models. All the kudos should go to them.
A big thanks to the creators of Sacremoses and Subword Neural Machine Translation for their great packages.
The first thing you'll need is Poetry. Please refer to the installation guidelines.
Clone this repository and install the project:
```
poetry install
```
To run the tests:
```
poetry run pytest
```
First, install the project with the extra dependencies (Chinese and Japanese support):
```
poetry install -E zh -E ja
```
Then, download the test data:
```
poetry run python -m laserembeddings download-test-data
```
If you want to know more about the contents and the generation of the test data, check out the laserembeddings-test-data repository.
Then, run the test with the `SIMILARITY_TEST` environment variable set to 1:

```
SIMILARITY_TEST=1 poetry run pytest tests/test_laser.py
```
Now, have a coffee ☕ and wait for the test to finish.
The similarity report will be generated in `tests/report/comparison-with-LASER.md`.