Basic Feature Extraction Methods
A Document-Term Matrix is a matrix whose rows correspond to the unique documents and whose columns correspond to the unique words/tokens. Let's take some sample documents and store them in `sample_documents`.
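One possible set (the exact wording is an assumption, chosen to be consistent with the matrices below):

```python
# Assumed sample documents, reconstructed to match the DTM/BOW tables below
sample_documents = [
    "This is the NLP notebook",        # Document-1
    "NLP is easy. This NLP is basic",  # Document-2
    "NLP is awesome",                  # Document-3
]
```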
In the above `sample_documents`, we have 3 documents and 8 unique words. The Document-Term Matrix (DTM) therefore has 3 rows and 8 columns, as below.

| DTM | awesome | basic | easy | is | NLP | notebook | the | this |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Document-1 |  |  |  |  |  |  |  |  |
| Document-2 |  |  |  |  |  |  |  |  |
| Document-3 |  |  |  |  |  |  |  |  |
There are many ways to determine the values in this matrix. I will discuss some of them below.
In this approach, we fill each cell with the number of times the word occurs in that document; this is the Bag of Words (BOW) representation.

| BOW | awesome | basic | easy | is | NLP | notebook | the | this |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Document-1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
| Document-2 | 0 | 1 | 1 | 2 | 2 | 0 | 0 | 1 |
| Document-3 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
If you check the above matrix, "NLP" occurs two times in Document-2, so the corresponding value is 2. If a word occurs n times in a document, the corresponding value is n.
We can do the same using `CountVectorizer` in `sklearn`.
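A minimal sketch, assuming the `sample_documents` above; the expected output is shown in comments:

```python
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
bow = count_vect.fit_transform(sample_documents)  # sparse document-term matrix

print(count_vect.get_feature_names_out())
# ['awesome' 'basic' 'easy' 'is' 'nlp' 'notebook' 'the' 'this']
print(bow.toarray())
# [[0 0 0 1 1 1 1 1]
#  [0 1 1 2 2 0 0 1]
#  [1 0 0 1 1 0 0 0]]
```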
How does `CountVectorizer` get the unique words? It first splits the documents into words and then collects the unique ones. `CountVectorizer` uses `token_pattern` or `tokenizer` for this, so we can supply our own tokenization algorithm to extract words from a sentence. Please read the `sklearn` documentation to know more about it.
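For example, a sketch of both options (the whitespace tokenizer here is only an illustration, not necessarily what the original code used):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Option 1: a custom regex via token_pattern (keep every word of one or more characters)
vect_pattern = CountVectorizer(token_pattern=r"(?u)\b\w+\b")

# Option 2: a custom callable via tokenizer (token_pattern is then set to None)
def whitespace_tokenizer(text):
    return text.lower().split()

vect_custom = CountVectorizer(tokenizer=whitespace_tokenizer, token_pattern=None)

print(vect_pattern.fit_transform(sample_documents).shape)
print(vect_custom.fit_transform(sample_documents).shape)
```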
We can also use n-grams as the vocabulary. N-grams are simply all combinations of adjacent words of length n that you can find in your source text. Please check the code below, which is written for unigrams and bi-grams.
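A sketch of that, assuming the same `sample_documents`:

```python
from sklearn.feature_extraction.text import CountVectorizer

# ngram_range=(1, 2) builds the vocabulary from unigrams and bigrams
ngram_vect = CountVectorizer(ngram_range=(1, 2))
ngram_bow = ngram_vect.fit_transform(sample_documents)

print(ngram_vect.get_feature_names_out())
# unigrams like 'nlp' plus bigrams like 'nlp is', 'is easy', ...
```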
In this approach, we fill each cell with the word's TF*IDF value.
- The TF (term frequency) of a word depends only on the particular document, not on the total corpus of documents, so the TF value of a word changes from document to document.
- The IDF (inverse document frequency) of a word depends on the total corpus of documents, so the IDF value of a word is constant across the corpus.
- You can think of IDF as the information content of the word.
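For reference, sklearn's default formulation (with `smooth_idf=True`) is:

$$
\mathrm{tf}(t, d) = \text{number of times term } t \text{ occurs in document } d
$$

$$
\mathrm{idf}(t) = \ln\!\left(\frac{1 + n}{1 + \mathrm{df}(t)}\right) + 1, \qquad \text{tf-idf}(t, d) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)
$$

where $n$ is the total number of documents and $\mathrm{df}(t)$ is the number of documents containing $t$; sklearn then L2-normalizes each document's vector by default.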
We can calculate the TFIDF vectors using `TfidfVectorizer` in `sklearn`.
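A minimal sketch on the same `sample_documents`:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vect = TfidfVectorizer()
tfidf = tfidf_vect.fit_transform(sample_documents)

print(tfidf_vect.get_feature_names_out())
print(tfidf.toarray().round(2))  # each row is L2-normalized by default
```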
With `TfidfVectorizer` we can also get the n-grams and give our own tokenization algorithm, as shown below.
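For example (the whitespace tokenizer is again only an illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def whitespace_tokenizer(text):
    return text.lower().split()

# Unigrams + bigrams, with our own tokenizer instead of the default token_pattern
tfidf_ngram_vect = TfidfVectorizer(ngram_range=(1, 2),
                                    tokenizer=whitespace_tokenizer,
                                    token_pattern=None)
print(tfidf_ngram_vect.fit_transform(sample_documents).shape)
```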
How do we limit the size of the vocab in our corpus? If we have many unique words, our BOW/TFIDF vectors will be very high dimensional, which may cause the curse of dimensionality problem. We can handle this with the methods below.
In `CountVectorizer`, we can do this using `max_features`, `min_df`, and `max_df`. You can use the `vocabulary` parameter to keep specific words only. Read the `CountVectorizer` documentation to learn more about these. You can check the sample code below.
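A sketch of those options on the same `sample_documents` (the parameter values are arbitrary illustrations):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Keep only the 5 most frequent terms across the corpus
vect_top = CountVectorizer(max_features=5)

# Ignore terms that appear in fewer than 2 documents or in more than 90% of them
vect_df = CountVectorizer(min_df=2, max_df=0.9)

# Restrict the vocabulary to an explicit list of words
vect_fixed = CountVectorizer(vocabulary=["nlp", "is", "easy"])

for vect in (vect_top, vect_df, vect_fixed):
    print(vect.fit_transform(sample_documents).shape)
```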
You can do the same thing with `TfidfVectorizer` using the same parameters. Please read the documentation.
Problems with `CountVectorizer` and `TfidfVectorizer`:

- If we have a large corpus, the vocabulary will also be large, and the `fit` function needs all the documents in RAM. This may be impossible if you don't have sufficient RAM.
- Building the vocab requires a full pass over the dataset, hence it is not possible to fit text classifiers in a strictly online manner.
- After the `fit`, we have to store the vocab dict, which takes a lot of memory. If we want to deploy in memory-constrained environments like AWS Lambda, IoT devices, mobile devices, etc., this may not be practical.
We can solve the first problem by iterating over the total data to build the vocab; then, using that vocab, we can create the BOW matrix in sparse format, and then the TFIDF vectors using `TfidfTransformer`. The sparse matrix won't take much space, so we can keep the BOW sparse matrix in RAM while creating the TFIDF sparse matrix.
I have written sample code to do that for the same data: it iterates over the data, creates the vocab, and uses that vocab to create the BOW. A much more optimized version could be written; this is just a sample to show the idea.
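A rough sketch of that idea on the same `sample_documents` (the helper names `doc_stream` and `tokenize` are illustrative, not from the original code):

```python
import re
from collections import defaultdict
from scipy.sparse import csr_matrix

def doc_stream():
    # In practice this would read documents one at a time from disk or a database.
    for doc in sample_documents:
        yield doc

def tokenize(text):
    # Mimic CountVectorizer's default tokenization: lowercase, words of 2+ characters
    return re.findall(r"\b\w\w+\b", text.lower())

# Pass 1: build the vocabulary with a streaming pass over the data
unique_words = set()
n_docs = 0
for doc in doc_stream():
    unique_words.update(tokenize(doc))
    n_docs += 1
vocab = {word: idx for idx, word in enumerate(sorted(unique_words))}

# Pass 2: fill the BOW matrix in sparse (CSR) format
rows, cols, vals = [], [], []
for row_idx, doc in enumerate(doc_stream()):
    counts = defaultdict(int)
    for token in tokenize(doc):
        counts[vocab[token]] += 1
    for col_idx, count in counts.items():
        rows.append(row_idx)
        cols.append(col_idx)
        vals.append(count)

bow_sparse = csr_matrix((vals, (rows, cols)), shape=(n_docs, len(vocab)))
print(bow_sparse.toarray())
```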
The above result is similar to the one we printed while doing the BOW; you can check that.
Using the above BOW sparse matrix and `TfidfTransformer`, we can create the TFIDF vectors; you can check the code below.
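A sketch of that step, reusing the `bow_sparse` matrix built above:

```python
from sklearn.feature_extraction.text import TfidfTransformer

tfidf_transformer = TfidfTransformer()  # smooth_idf=True, norm='l2' by default
tfidf_sparse = tfidf_transformer.fit_transform(bow_sparse)

print(tfidf_sparse.toarray().round(2))
```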
The above result is similar to the one we printed while doing the TFIDF; you can check that.
Other than our own iterator/generator, if we have the data in one file or multiple files, we can set the `input` parameter to `'file'` or `'filename'` and pass the file objects or file paths to the `fit` function. Please read the documentation.
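A sketch of the `input='filename'` variant; the file names here are hypothetical and are written from `sample_documents` just to make the example self-contained:

```python
from pathlib import Path
from sklearn.feature_extraction.text import CountVectorizer

# Write each sample document to its own (hypothetical) file
file_paths = []
for i, doc in enumerate(sample_documents, start=1):
    path = Path(f"doc{i}.txt")
    path.write_text(doc)
    file_paths.append(str(path))

# input='filename' makes fit_transform read each path from disk
file_vect = CountVectorizer(input="filename")
bow_from_files = file_vect.fit_transform(file_paths)
print(bow_from_files.toarray())
```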
Another way to solve all the above problems is hashing. We can convert a word into a fixed index using a hash function, so there is no training process to build the vocabulary and no need to save the vocab. This is implemented in `sklearn` as `HashingVectorizer`. In `HashingVectorizer`, you have to mention the number of features you need (by default it uses 2**20 features). Below you can see some code using `HashingVectorizer`. You can normalize your vectors using the `norm` parameter.
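A minimal sketch; `n_features=2**10` is an arbitrary small value chosen for illustration:

```python
from sklearn.feature_extraction.text import HashingVectorizer

# n_features sets the size of the hash space (the default is 2**20)
hash_vect = HashingVectorizer(n_features=2**10, norm="l2")
hashed = hash_vect.fit_transform(sample_documents)  # no vocabulary is learned or stored

print(hashed.shape)  # (3, 1024)
```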
Since the hash function might cause collisions between (unrelated) features, a signed hash function is used and the sign of the hash value determines the sign of the value stored in the output matrix for a feature. This way, collisions are likely to cancel out rather than accumulate error, and the expected mean of any output feature's value is zero. This mechanism is enabled by default with `alternate_sign=True` and is particularly useful for small hash table sizes (`n_features < 10000`).
We can convert the above vectors to TFIDF using `TfidfTransformer`; check the code below.
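One way to do that; using `alternate_sign=False` and `norm=None` here (so the hashed values are plain non-negative counts before TF-IDF weighting) is an illustrative choice, not necessarily the author's:

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

# Keep non-negative hashed counts, then apply TF-IDF weighting on top
hash_counts_vect = HashingVectorizer(n_features=2**10, alternate_sign=False, norm=None)
hashed_counts = hash_counts_vect.fit_transform(sample_documents)

hashed_tfidf = TfidfTransformer().fit_transform(hashed_counts)
print(hashed_tfidf.shape)
```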
This vectorizer is memory efficient, but it has some cons as well:

- There is no way to compute the inverse transform of the hashing, so there is no interpretability of the model.
- There can be collisions in the hashing.
You can get the complete code written in this blog from the GitHub link below.