Spanish Word Embeddings

Below you will find links to Spanish word embeddings computed with different methods and from different corpora. Whenever possible, a description of the parameters used to compute the embeddings is included, together with simple statistics of the vectors and vocabulary, and a description of the corpus from which the embeddings were computed. Direct links to the embeddings are provided, but please refer to the original sources for proper citation. (An example of the use of some of these embeddings can be found here.)
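As a minimal sketch of how vectors in the word2vec text format (such as the .vec files linked below) can be queried, assuming gensim is installed; the filename is a placeholder for whichever file you download:

```python
# Minimal sketch: querying embeddings in word2vec text format with
# gensim (pip install gensim). "embeddings.vec" is a placeholder for
# whichever file you download below.
from gensim.models import KeyedVectors

vecs = KeyedVectors.load_word2vec_format("embeddings.vec", binary=False)

# Nearest neighbours by cosine similarity.
print(vecs.most_similar("azul", topn=5))

# Analogy queries: rey - hombre + mujer ≈ reina.
print(vecs.most_similar(positive=["rey", "mujer"], negative=["hombre"], topn=1))
```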

FastText embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855380):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters left at their default values (a training sketch with these settings follows this list)
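
For concreteness, here is a minimal sketch of how an equivalent model could be trained with the official fasttext Python bindings. This is an illustration of the parameters listed above, not the authors' actual script, and the corpus path is a placeholder:

```python
# Illustrative only: training FastText skipgram vectors with the
# parameters listed above, using the official `fasttext` Python
# bindings (pip install fasttext). "sbwc.txt" is a placeholder path
# to the preprocessed corpus, one sentence per line.
import fasttext

model = fasttext.train_unsupervised(
    "sbwc.txt",
    model="skipgram",
    minn=3,        # min subword-ngram
    maxn=6,        # max subword-ngram
    minCount=5,
    epoch=20,
    dim=300,
)
model.save_model("fasttext-sbwc.bin")
```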

Corpus

  • Spanish Billion Word Corpus
  • Corpus Size: 1.4 billion words
  • Post-processing: In addition to the preprocessing of the raw corpus described on the SBWCE page (which included deletion of punctuation, numbers, etc.), the following steps were applied (see the sketch after this list):
    • Words were converted to lower case letters
    • Every sequence of the 'DIGITO' keyword was replaced by a single '0'
    • All words of more than 3 characters plus a '0' were omitted (example: 'padre0')
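
A rough sketch of these three steps in Python; this is an interpretation inferred from the description above, not the original script:

```python
# Interpretation of the post-processing described above; not the
# authors' original script.
import re

def postprocess(line: str) -> str:
    # Lowercase everything.
    line = line.lower()
    # Collapse every run of the 'DIGITO' keyword (lowercased above)
    # into a single '0'.
    line = re.sub(r"(?:digito)+", "0", line)
    # Drop tokens made of more than 3 characters followed by '0',
    # e.g. 'padre0'.
    tokens = [t for t in line.split()
              if not (t.endswith("0") and len(t) > 4)]
    return " ".join(tokens)

print(postprocess("El PADRE tenia DIGITODIGITO ovejas"))
# -> 'el padre tenia 0 ovejas'   (a token like 'padre0' would be dropped)
```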

Reference

Word embeddings were computed by Jorge Pérez. You can use these vectors as you wish under the CC-BY-4.0 license. You may also want to cite the FastText paper Enriching Word Vectors with Subword Information and the Spanish Billion Word Corpus project.

GloVe embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855380):

Algorithm

  • Implementation: GloVe
  • Parameters:
    • vector-size = 300
    • iter = 25
    • min-count = 5
    • all other parameters left at their default values

Corpus

  • Spanish Billion Word Corpus

Reference

Word embeddings were computed by Jorge Pérez. You can use these vectors as you wish under the CC-BY-4.0 license. You may also want to cite the GloVe paper GloVe: Global Vectors for Word Representation and the Spanish Billion Word Corpus project.
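
A hedged loading sketch for these vectors, assuming gensim ≥ 4.0; the filename is a placeholder. Raw GloVe output has no word2vec-style header line, hence the no_header flag; if the distributed file does include a header, drop that argument:

```python
# Sketch: loading GloVe-format text vectors with gensim (>= 4.0).
# "glove-sbwc.vec" is a placeholder filename.
from gensim.models import KeyedVectors

glove = KeyedVectors.load_word2vec_format(
    "glove-sbwc.vec", binary=False, no_header=True)
print(glove.most_similar("universidad", topn=3))
```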

FastText embeddings from Spanish Wikipedia

Embeddings

Links to the embeddings (#dimensions=300, #vectors=985667):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters: FastText default parameters

Corpus

  • Spanish Wikipedia

Reference

These word embeddings were computed by the FastText team. Please refer to the FastText Pre-trained Vectors page if you want to use these vectors.
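
A hedged loading sketch for the Facebook-format binary, assuming gensim ≥ 4.0; "wiki.es.bin" is the filename usually shipped for the Spanish Wikipedia vectors, but verify it against the actual download:

```python
# Sketch: loading a Facebook-format FastText binary with gensim.
# "wiki.es.bin" is assumed; check the name of the downloaded file.
from gensim.models.fasttext import load_facebook_vectors

wiki = load_facebook_vectors("wiki.es.bin")

# Subword n-grams let FastText build vectors even for
# out-of-vocabulary words, e.g. misspellings or rare inflections.
print(wiki["computadorsita"][:5])
print(wiki.most_similar("computadora", topn=3))
```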

Word2Vec embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1000653):

Algorithm

  • Implementation: Word2Vec (see the SBWCE page for details)

Corpus

  • Spanish Billion Word Corpus

Reference

Word embeddings were computed by Cristian Cardellino. Please refer to the SBWCE page if you want to use these vectors.
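
A hedged loading sketch for the binary format; the filename is a placeholder for the file distributed on the SBWCE page:

```python
# Sketch: loading word2vec binary vectors with gensim. The filename
# is a placeholder; use the file distributed on the SBWCE page.
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format("sbw-vectors.bin", binary=True)

# Odd-one-out query as a quick sanity check.
print(w2v.doesnt_match(["lunes", "martes", "miercoles", "manzana"]))
```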
