markostam / histwords

Collection of tools for building diachronic/historical word vectors

Home Page: http://nlp.stanford.edu/projects/histwords/


# Word Embeddings for Historical Text

Author: William Hamilton (wleif@stanford.edu)

## Overview

An eclectic collection of tools for analyzing historical language change using vector space semantics.


## Pre-trained historical embeddings

Various embeddings (for many languages, constructed using different embedding approaches) are available on the project website.

Some pre-trained word2vec (i.e., SGNS) historical word vectors for multiple languages (constructed via Google N-grams) are also available here:

All except Chinese contain embeddings for the decades in the range 1800s-1990s (2000s are excluded because of sampling changes in the N-grams corpus). The Chinese data starts in 1950.

Embeddings constructed using the Corpus of Historical American English (COHA) are also available:

example.sh contains an example run showing how to download and use the embeddings. example.py shows how to use the vector representations in Python (assuming you have already run the example.sh script).
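As a minimal illustration of working with downloaded vectors, the sketch below loads a plain-text embedding file into a dictionary. The file format shown (one word per line followed by its components) and the function name are assumptions for illustration; see example.py in the repo for the actual loading code.

```python
import numpy as np

def load_embeddings(path):
    """Load a plain-text embedding file into a {word: vector} dict.

    Assumed (hypothetical) format: one word per line, followed by its
    whitespace-separated vector components.
    """
    vecs = {}
    with open(path) as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            vecs[parts[0]] = np.array(parts[1:], dtype=float)
    return vecs
```

Once loaded, the vectors can be compared with standard cosine similarity, as in the semantic-shift analyses the embeddings were built for.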

This paper describes how the embeddings were constructed. If you make use of these embeddings in your research, please cite the following:

```bibtex
@inproceedings{hamilton_diachronic_2016,
  title = {Diachronic {Word} {Embeddings} {Reveal} {Statistical} {Laws} of {Semantic} {Change}},
  url = {http://arxiv.org/abs/1605.09096},
  booktitle = {Proc. {Assoc}. {Comput}. {Ling}. ({ACL})},
  author = {Hamilton, William L. and Leskovec, Jure and Jurafsky, Dan},
  year = {2016}
}
```

## Code organization

The structure of the code (in terms of folder organization) is as follows:

Main folder for using historical embeddings:

Folders with pre-processing code and active research code (potentially unstable):

example.py shows how to compute the similarity series for two words over time; this is how we evaluated different methods against the attested semantic shifts listed in our paper.
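A similarity series of this kind can be sketched as follows. The nested-dict layout (`{decade: {word: vector}}`) is an assumption for illustration, not the repo's actual data structure; the cosine-similarity computation itself is standard.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_series(embeddings_by_decade, w1, w2):
    """Return [(decade, similarity)] for every decade in which both
    words have a vector.

    embeddings_by_decade: {decade: {word: vector}} -- a hypothetical
    layout used here only to illustrate the computation.
    """
    series = []
    for decade in sorted(embeddings_by_decade):
        vecs = embeddings_by_decade[decade]
        if w1 in vecs and w2 in vecs:
            series.append((decade, cosine(vecs[w1], vecs[w2])))
    return series
```

Plotting such a series per decade is one simple way to visualize an attested semantic shift.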

If you want to learn historical embeddings for new data, the code in the sgns directory is recommended and can be run with its default settings. As long as your corpus has at least 100 million words per time period, this is the best method. For smaller corpora, use the representations/ppmigen.py code followed by the vecanalysis/makelowdim.py code (to learn SVD embeddings). In either case, the vecanalysis/seq_procrustes.py code should be used to align the learned embeddings. The default hyperparameters should suffice for most use cases.
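The alignment step solves an orthogonal Procrustes problem: embeddings learned independently for each time period live in arbitrarily rotated spaces, so each period is rotated onto a base period before vectors are compared across time. The sketch below shows the core computation (not the repo's seq_procrustes.py code, which also handles vocabulary matching across periods); `procrustes_align` is a hypothetical name.

```python
import numpy as np

def procrustes_align(base, other):
    """Rotate `other` onto `base` via orthogonal Procrustes.

    base, other: (n_words, dim) matrices over a shared vocabulary.
    Finds the orthogonal W minimizing ||other @ W - base||_F
    (closed form via SVD) and returns the rotated matrix.
    """
    u, _, vt = np.linalg.svd(other.T @ base)
    return other @ (u @ vt)
```

Because W is orthogonal, the rotation preserves all distances and similarities within a period while making vectors comparable across periods.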

## Dependencies

Core dependencies:

You will also need Jupyter/IPython to run any of the IPython notebooks.


License: Apache License 2.0


## Languages

Python 93.6%, Shell 5.2%, Makefile 0.7%, C 0.5%