Advanced Feature Extraction Methods - Word2Vec

So far, we have seen methods like BOW/TF-IDF that extract features from a sentence, but these representations are very sparse. In this tutorial, we will explore word vectors, which give a dense vector for each word.

There are many ways to get a dense vector representation for words. Below are some of them.

Co-occurrence Matrix and SVD

Please refer to a simple blog post on this topic.

Word2Vec

There are already many articles and videos covering the mathematics and theory of Word2Vec, so I am giving some links to explore and will focus on explaining the code to train a custom Word2Vec model. Please check the resources below.

Word2Vec videos

Please watch those videos or read the blog above before going into the coding part.

Word2Vec using Gensim

We can train Word2Vec using the gensim module with CBOW or Skip-Gram (Hierarchical Softmax/Negative Sampling). It is one of the most efficient ways to train word vectors. I am training word vectors using gensim, with IMDB reviews as the training corpus. I am not training the best possible word vectors here, only training for 10 iterations.

data_imdb is a cleaned DataFrame that contains review as a column. I got the data from this link.

   review                                               sentiment
0  one of the other reviewers has mentioned that ...    positive
1  a wonderful little production. the filming tec...    positive

To train the gensim Word2Vec model, we can give it a list of tokenized sentences or a corpus file in LineSentence format. Here I am creating a list of sentences from my corpus. If you have huge data, please use the LineSentence format to train your word vectors efficiently.
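As a rough sketch, building the sentence list could look like this (the CSV file name and the simple_preprocess tokenization are assumptions for illustration, not the exact notebook code):

```python
import pandas as pd
from gensim.utils import simple_preprocess

# hypothetical file name for the IMDB reviews dataset
data_imdb = pd.read_csv('IMDB Dataset.csv')

# list of tokenized sentences, one list of words per review
list_of_sentences = [simple_preprocess(review) for review in data_imdb['review']]
```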

Training the gensim Word2Vec model as below.
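A minimal sketch of the training call, assuming gensim 4.x (in gensim 3.x the parameters are named size and iter instead of vector_size and epochs); the exact hyperparameters in the original notebook may differ:

```python
from gensim.models import Word2Vec

# sg=1 -> Skip-Gram, hs=0 with negative=10 -> negative sampling, 10 epochs
w2v_model = Word2Vec(sentences=list_of_sentences,
                     vector_size=100,   # dimensionality of the word vectors
                     window=5,          # context window size
                     min_count=5,       # ignore words rarer than this
                     sg=1,
                     hs=0,
                     negative=10,
                     epochs=10,
                     workers=4)
```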

You can get word vectors as below
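For example (the word and the vector size are illustrative):

```python
# dense vector for the word "movie"
vector = w2v_model.wv['movie']
print(vector.shape)   # (100,) with the settings above
```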

You can get the most similar words for any given word as below.
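For example:

```python
# ten words whose vectors are closest to the vector of "good"
w2v_model.wv.most_similar(positive=['good'], topn=10)
```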

You can save your model as below
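A sketch with hypothetical file names:

```python
# save the full model (can be re-loaded with Word2Vec.load and trained further)
w2v_model.save('word2vec_imdb.model')

# or save only the vectors in the standard word2vec text format
w2v_model.wv.save_word2vec_format('word2vec_imdb.txt')
```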

You can get the complete notebook in the GitHub link below.

Word2Vec using TensorFlow (Skip-Gram, Negative Sampling)

In negative sampling, we take a positive pair of skip-grams, and for every positive pair we generate n negative pairs. I used only 10 negative pairs; in the paper, they suggest around 25. We then use these positive and negative pairs to build a classifier that differentiates positive from negative samples, and while doing this we learn the word vectors.

We have to train a classifier that differentiates positive samples from negative samples, and while doing this we learn the word embeddings. The classifier looks like the image below.

Word2Vec using NS

The above model takes two inputs, a center word and a context word, and the model output is one if those two words occur within a window, else zero. You can find the theory behind this in the video below, or you can read the blog linked above.

Preparing the Data

We have to generate the skip-gram pairs and negative samples. We can do that easily using tf.keras.preprocessing.sequence.skipgrams. It also takes a sampling table, which gives the probability of sampling each word when generating pairs, i.e., we can give a low probability to the most frequent words and a high probability to the least frequent words.

All words are converted into number sequences, with ids assigned in descending order of frequency.
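A sketch of this preparation step, assuming a Keras Tokenizer is used to build the number sequences (variable names are illustrative):

```python
import tensorflow as tf

# word ids are assigned in descending order of frequency (most frequent word -> id 1)
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(data_imdb['review'])
sequences = tokenizer.texts_to_sequences(data_imdb['review'])
vocab_size = len(tokenizer.word_index) + 1

# sampling table: low probability for frequent (low-id) words, high for rare ones
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

# positive skip-gram pairs and negative pairs for the first review
pairs, labels = tf.keras.preprocessing.sequence.skipgrams(
    sequences[0],
    vocabulary_size=vocab_size,
    window_size=2,
    negative_samples=10,          # 10 negative pairs per positive pair
    sampling_table=sampling_table)
```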

If we create all the samples at once, it may take too much RAM and give a resource-exhausted error, so I created a generator function that yields the values batch-wise.
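A sketch of such a generator (the batch size and window size are arbitrary choices here):

```python
import numpy as np
import tensorflow as tf

def skipgram_batch_generator(sequences, vocab_size, batch_size=1024,
                             window_size=2, negative_samples=10):
    """Yield ([centers, contexts], labels) batches without building all pairs in RAM."""
    while True:
        centers, contexts, labels = [], [], []
        for seq in sequences:
            pairs, pair_labels = tf.keras.preprocessing.sequence.skipgrams(
                seq, vocabulary_size=vocab_size, window_size=window_size,
                negative_samples=negative_samples)
            for (center, context), label in zip(pairs, pair_labels):
                centers.append(center)
                contexts.append(context)
                labels.append(label)
                if len(labels) == batch_size:
                    yield ([np.array(centers).reshape(-1, 1),
                            np.array(contexts).reshape(-1, 1)],
                           np.array(labels))
                    centers, contexts, labels = [], [], []
```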

Creating Model

I wrote the model as below.
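A minimal sketch of such a classifier in Keras (the layer names and embedding size are assumptions; vocab_size comes from the tokenizer step above):

```python
import tensorflow as tf

embedding_dim = 100   # assumed embedding size

# two integer inputs: the center word id and the context (or negative) word id
center_input = tf.keras.layers.Input(shape=(1,), name='center_word')
context_input = tf.keras.layers.Input(shape=(1,), name='context_word')

# separate embedding tables for center and context words
center_embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                             name='center_embedding')(center_input)
context_embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                              name='context_embedding')(context_input)

# dot product of the two vectors -> one score per pair, squashed to (0, 1)
score = tf.keras.layers.Dot(axes=2)([center_embedding, context_embedding])
score = tf.keras.layers.Flatten()(score)
output = tf.keras.layers.Activation('sigmoid')(score)

model = tf.keras.Model(inputs=[center_input, context_input], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy')
```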

Training
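A minimal training sketch using the batch generator defined earlier (the step and epoch counts are arbitrary):

```python
model.fit(skipgram_batch_generator(sequences, vocab_size),
          steps_per_epoch=1000,   # arbitrary; depends on corpus size
          epochs=5)
```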

You can check the complete code and results in my GitHub link below.

I saved the model in gensim Word2Vec format and loaded it back.
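A sketch of one way to do this, assuming the layer name and file name from the model sketch above: pull the center-word embedding matrix out of the Keras model, write it in the plain word2vec text format, and load it with gensim.

```python
from gensim.models import KeyedVectors

# learned center-word embedding matrix from the Keras model above
word_vectors = model.get_layer('center_embedding').get_weights()[0]

# word2vec text format: a header line "vocab_size dim", then one word per line
with open('tf_word2vec.txt', 'w') as f:
    f.write(f'{len(tokenizer.word_index)} {word_vectors.shape[1]}\n')
    for word, idx in tokenizer.word_index.items():
        f.write(word + ' ' + ' '.join(map(str, word_vectors[idx])) + '\n')

kv = KeyedVectors.load_word2vec_format('tf_word2vec.txt', binary=False)
```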

I checked the most similar words as below.
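For example:

```python
# ten nearest words to "good" according to the TensorFlow-trained vectors
kv.most_similar(positive=['good'], topn=10)
```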

It was giving somewhat better results, but not great. We have to train longer and with more negative samples.

Word2Vec using TensorFlow (Skip-Gram, NCE)

Let's take a model which gives a score to each skip-gram pair; we will try to maximize the difference between the score of positive pairs and the score of negative pairs for each word. We can do that directly by optimizing tf.nn.nce_loss. Please try to read its documentation. It takes a positive pair and weight vectors, generates the negative pairs based on sampled_values, and returns the loss.

Preparing the Data

We have to generate positive skip-gram pairs; we can do it in a similar way as above. I created a pipeline to generate batch-wise data as below.
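A sketch of such a pipeline using tf.data (only positive pairs are generated here, because tf.nn.nce_loss draws the negative samples itself):

```python
import tensorflow as tf

def positive_pair_generator():
    """Yield (center, context) positive skip-gram pairs one at a time."""
    for seq in sequences:
        pairs, _ = tf.keras.preprocessing.sequence.skipgrams(
            seq, vocabulary_size=vocab_size, window_size=2, negative_samples=0)
        for center, context in pairs:
            yield center, context

dataset = (tf.data.Dataset
           .from_generator(positive_pair_generator,
                           output_signature=(tf.TensorSpec(shape=(), dtype=tf.int64),
                                             tf.TensorSpec(shape=(), dtype=tf.int64)))
           .batch(1024))
```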

Creating Model

I created a model, word2vecNCS, which takes a center word and a context word and gives the NCE loss. You can check that below.
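A sketch of what such a model could look like (the exact implementation in the notebook may differ):

```python
import tensorflow as tf

class word2vecNCS(tf.keras.Model):
    """Takes batches of center and context word ids and returns the NCE loss."""

    def __init__(self, vocab_size, embedding_dim, num_negative=10):
        super().__init__()
        self.vocab_size = vocab_size
        self.num_negative = num_negative
        # center-word embedding table (these are the vectors we keep at the end)
        self.embeddings = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        # output weights and biases used internally by tf.nn.nce_loss
        self.nce_weights = self.add_weight(name='nce_weights',
                                           shape=(vocab_size, embedding_dim),
                                           initializer='glorot_uniform')
        self.nce_biases = self.add_weight(name='nce_biases',
                                          shape=(vocab_size,),
                                          initializer='zeros')

    def call(self, center_words, context_words):
        center_vectors = self.embeddings(center_words)               # (batch, dim)
        labels = tf.reshape(tf.cast(context_words, tf.int64), (-1, 1))
        loss = tf.nn.nce_loss(weights=self.nce_weights,
                              biases=self.nce_biases,
                              labels=labels,
                              inputs=center_vectors,
                              num_sampled=self.num_negative,         # negatives per positive pair
                              num_classes=self.vocab_size)
        return tf.reduce_mean(loss)
```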

Training
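Since the model already returns the loss, a custom loop with tf.GradientTape is a natural way to train it. A sketch (the epoch count is arbitrary):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()
ncs_model = word2vecNCS(vocab_size, embedding_dim=100, num_negative=10)

for epoch in range(5):
    for center_batch, context_batch in dataset:
        with tf.GradientTape() as tape:
            loss = ncs_model(center_batch, context_batch)   # forward pass returns the NCE loss
        grads = tape.gradient(loss, ncs_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, ncs_model.trainable_variables))
```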

You can check the complete code and results in my GitHub link below.

I checked the most similar words as below.

FastText Embedding (Sub-Word Embedding)

Instead of feeding individual words into the neural network, FastText breaks words into several character n-grams (sub-words). For instance, the tri-grams for the word where are <wh, whe, her, ere, re> plus the special sequence <where>. Note that the sequence <her>, corresponding to the word her, is different from the tri-gram her taken from the word where. Because of these sub-words, we can get an embedding for any word, even a misspelled one. Try to read this paper.

We can train these vectors using gensim or the official fastText implementation. I trained fastText word embeddings with gensim; you can check that below. It's a single line of code, similar to Word2Vec.
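A sketch of that single training line with gensim 4.x (the hyperparameters are illustrative):

```python
from gensim.models import FastText

# min_n / max_n set the character n-gram (sub-word) lengths
ft_model = FastText(list_of_sentences, vector_size=100, window=5,
                    min_count=5, min_n=3, max_n=6, epochs=10)

# sub-word n-grams give vectors even for misspelled or unseen words
ft_model.wv.most_similar('moive')
```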

You can get the complete code in the GitHub link below.

Pre-Trained Word Embedding

We can get pretrained word embeddings that were trained on huge datasets by Google, Stanford NLP, and Facebook.

Google Word2Vec

You can download Google's pretrained word vectors, trained on Google News data, from this link. You can load the vectors as a gensim model as below.
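For example (the file name is the standard one from the download):

```python
from gensim.models import KeyedVectors

# the downloaded file is in binary word2vec format
google_model = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

google_model.most_similar('king', topn=5)
```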

GloVe Pretrained Embeddings

You can download the GloVe embeddings from this link. There are some differences between the Google Word2Vec save format and the GloVe save format. We can convert the GloVe format to the Google format and then load it using gensim as below.
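For example, using the glove2word2vec script shipped with gensim (the file names assume the 6B/300d GloVe download):

```python
from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models import KeyedVectors

# add the word2vec-style header line (vocab size, dimension) to the GloVe file
glove2word2vec('glove.6B.300d.txt', 'glove.6B.300d.word2vec.txt')

glove_model = KeyedVectors.load_word2vec_format('glove.6B.300d.word2vec.txt',
                                                binary=False)
```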

FastText Pretrained Embeddings

You can get the fastText word embeddings from this link. You can use the fastText Python API or gensim to load the model. I am using gensim.
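A sketch, assuming the Common Crawl English .bin model was downloaded:

```python
from gensim.models.fasttext import load_facebook_vectors

# loads the official .bin model, keeping the sub-word information
ft_vectors = load_facebook_vectors('cc.en.300.bin')

ft_vectors.most_similar('word2vec', topn=5)
```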

You can check the notebook with code in the GitHub link below.

References:

  1. gensim documentation

  2. CS7015 - IIT Madras
