Going Neurotic With Neural Word Embeddings… again!
July 18, 2017
1:00 pm

Word embeddings, such as Word2Vec or GloVe, are vector representations that capture lexical-semantic properties of words. They offer a practical way to transfer knowledge between machine learning models, and they can greatly reduce the training time required for many NLP tasks. There is strong practical interest in experimenting with different word embedding models, and neural models, thanks to their flexibility, provide a natural framework for such experimentation. However, that same flexibility introduces many degrees of freedom, which becomes a challenge in itself. In this talk, we will present Syntagma, a Python toolkit (still under development) that enables rapid experimentation with neural word embedding models. We will show preliminary results from varying some of the hyper-parameters of a baseline word embedding model (similar to Word2Vec), and we will discuss the next steps for Syntagma.
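Syntagma's own API is not shown in this announcement, so as a rough illustration of the kind of hyper-parameter experimentation the abstract describes, here is a minimal sketch using gensim's Word2Vec as a stand-in for the Word2Vec-like baseline; the toy corpus and the parameter grid below are illustrative assumptions, not Syntagma code.

# A hypothetical sketch of a hyper-parameter sweep over a Word2Vec-style
# baseline, using gensim as a stand-in; Syntagma's API is not public, and
# the corpus and parameter grid here are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["neural", "word", "embeddings", "capture", "lexical", "semantics"],
    ["skip", "gram", "models", "predict", "nearby", "context", "words"],
    ["embeddings", "transfer", "knowledge", "between", "models"],
]

for vector_size in (50, 100):      # embedding dimensionality
    for window in (2, 5):          # context window size
        model = Word2Vec(
            sentences=corpus,
            vector_size=vector_size,
            window=window,
            min_count=1,           # keep every word in this toy corpus
            sg=1,                  # 1 = skip-gram, 0 = CBOW
            negative=5,            # negative samples per positive pair
            epochs=50,             # extra passes, since the corpus is tiny
        )
        print(vector_size, window,
              model.wv.most_similar("embeddings", topn=2))

In a real experiment, the sweep would run over a full corpus and record an evaluation metric (e.g., word-similarity scores) for each configuration instead of printing nearest neighbors, which is presumably the kind of workflow a toolkit like Syntagma aims to streamline.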