ConVec: Vector Embedding of Wikipedia Concepts and Entities

The WikipediaParser folder contains the code to extract and prepare the Wikipedia dump.
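The parser itself lives in that folder; as a rough, hypothetical sketch of what such preprocessing involves, the snippet below streams pages out of a Wikipedia XML dump and rewrites [[Target|anchor]] wiki links into single concept tokens. The dump filename, the export schema version, and the concept_ prefix are assumptions for illustration, not the repository's actual conventions.

# A minimal sketch of dump preprocessing, not the repository's actual parser.
# Streams <page> elements from a Wikipedia XML dump and rewrites wiki links
# such as [[Target|anchor]] into single concept tokens like concept_Target.
import re
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # assumed schema version
LINK = re.compile(r"\[\[([^\]|#]+)(?:\|[^\]]*)?\]\]")

def to_concept(match):
    # [[Anarchism|anarchist]] -> concept_Anarchism
    return "concept_" + match.group(1).strip().replace(" ", "_")

def pages(dump_path):
    for _, elem in ET.iterparse(dump_path):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(NS + "revision/" + NS + "text") or ""
            yield title, LINK.sub(to_concept, text)
            elem.clear()  # free memory while streaming a multi-GB dump

for title, text in pages("enwiki-latest-pages-articles.xml"):  # assumed filename
    print(title, text[:80])
    break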

The following links provide the pre-trained concept, word, and entity vectors produced by this project:
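Once downloaded, the vectors can be loaded and queried with gensim, assuming they are distributed in the standard word2vec text format and that concept tokens carry a recognizable prefix (both are assumptions; check the downloaded files for the actual filename and token conventions).

# A minimal sketch, assuming the released vectors use the word2vec text
# format and that concepts appear under tokens such as "concept_Apple_Inc."
# (both assumptions; inspect the downloaded files for the real conventions).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("ConVec_vectors.txt", binary=False)

# Nearest neighbours can mix words and concepts, since they share one space.
for token, score in vectors.most_similar("concept_Apple_Inc.", topn=5):
    print(f"{token}\t{score:.3f}")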

Please cite the following paper if you use the code, datasets, or vector embeddings:

@inproceedings{NLDBSherkat2017,
   author    = {Ehsan Sherkat and Evangelos Milios},
   title     = {Vector Embedding of Wikipedia Concepts and Entities},
   booktitle = {Natural Language Processing and Information Systems, NLDB 2017},
   year      = {2017},
   doi       = {10.1007/978-3-319-59569-6_50},
   url       = {https://arxiv.org/abs/1702.03470}
}

About

In this project, we use the skip-gram model to embed Wikipedia concepts and entities. The English version of Wikipedia contains more than five million pages, which suggests it covers a large share of English entities, phrases, and concepts. Each Wikipedia page is treated as a concept.
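As a rough sketch of this approach, the toy example below trains a skip-gram model with gensim on a corpus in which page mentions have already been replaced by concept tokens. The corpus, the concept_ prefix, and the hyperparameters are illustrative assumptions, not the settings used in this project.

# A minimal sketch, assuming gensim >= 4 and that hyperlinked mentions of
# Wikipedia pages have already been replaced by concept tokens (the
# "concept_" prefix and all hyperparameters below are illustrative only).
from gensim.models import Word2Vec

corpus = [
    ["the", "concept_Apple_Inc.", "designs", "consumer", "electronics"],
    ["an", "apple", "is", "the", "fruit", "of", "the", "concept_Apple", "tree"],
]

# sg=1 selects the skip-gram architecture; because words and concept tokens
# occur in the same corpus, they are embedded into the same vector space.
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

print(model.wv["concept_Apple_Inc."][:5])  # first 5 dimensions of one vector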
