Sentiment Analysis with the Naive Bayes Classifier

Posted in Machine Learning, Sentiment Analytics

From the introductory blog we know that the Naive Bayes Classifier is based on the bag-of-words model.

With the bag-of-words model we check which words of the text document appear in a positive-words list or a negative-words list. If a word appears in the positive-words list, the total score of the text is increased by 1; if it appears in the negative-words list, it is decreased by 1. If at the end the total score is positive, the text is classified as positive, and if it is negative, the text is classified as negative. Simple enough!

With the Naive Bayes model, we do not take only a small set of positive and negative words into account, but all words the NB Classifier was trained with, i.e. all words present in the training set. If a word did not appear in the training set, we have no data available and apply Laplacian smoothing: we use a count of 1 for that word, so that its factor in the product does not become zero.
The probability that a document belongs to a class C is given by the class probability P(C) multiplied by the product of the conditional probabilities of each word for that class:

P = P(C) \cdot \prod_{i=1}^n P(d_i|C) = P(C) \cdot \prod_{i=1}^n \frac{count(d_i, C)}{\sum_j count(d_j, C)} = P(C) \cdot \prod_{i=1}^n \frac{count(d_i, C)}{V_C}


Here count(d_i, C) is the number of occurrences of word d_i in class C, V_C is the total number of words in class C, and n is the number of words in the document we are currently classifying.
V_C does not change (unless the training set is expanded), so it can be placed outside of the product:

P = \frac{P(C)}{V_C^n} \cdot \prod_{i=1}^n count(d_i, C)
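As a quick sketch of this formula (the word counts, class prior, and document below are made-up values, not taken from the actual training set):

```python
# Hypothetical word counts for one class C, e.g. C = "pos"
counts = {"good": 40, "great": 25, "bad": 2}
prior = 0.4                  # assumed class prior P(C)
V_C = sum(counts.values())   # total number of words in class C

document = ["good", "great", "good"]

# P = P(C) * prod_i count(d_i, C) / V_C
p = prior
for word in document:
    p *= counts.get(word, 1) / V_C  # Laplacian smoothing: count 1 for unseen words
print(p)
```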


Handling large training sets:

In theory we want a training set that is as large as possible, since that increases the accuracy. In practice this results in large values for V_C and n. For example, for our training set with 5000 reviews we got
V_{neg} = 90,346
V_{neu} = 109,287
V_{pos} = 459,582

Taking the n-th power of such a large number will definitely result in computational problems, so we should normalize it. We can divide it by a number so that it becomes close to 1. In our case this number is 100,000, and the normalization results in:
V_{neg,norm} = 0.90346
V_{neu,norm} = 1.09287
V_{pos,norm} = 4.59582


However, if the number of words n in the document is large, raising even the normalized value to the power n can still lead to computational problems.


In Python there are a few modules which can handle large numbers (like Decimal), but a better solution is to take the logarithm. This does not affect the outcome of the classification: if a document has the highest probability for a specific class, the logarithm of the probability will also be the highest for that class.

This results in:

\log(P) = \log\left( \frac{P(C)}{V_C^n} \cdot \prod_{i=1}^n count(d_i, C) \right)
= \log\left(\frac{P(C)}{V_C^n}\right) + \log\left(\prod_{i=1}^n count(d_i, C)\right)
= \log(P(C)) - n \log(V_C) + \sum_{i=1}^n \log(count(d_i, C))
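A short sketch of this log-space computation, using made-up word counts and an assumed class prior:

```python
import math

counts = {"good": 40, "great": 25, "bad": 2}  # hypothetical counts for class C
prior = 0.4                                   # assumed class prior P(C)
V_C = sum(counts.values())

document = ["good", "great", "good"]
n = len(document)

# log(P) = log P(C) - n*log V_C + sum_i log count(d_i, C)
log_p = math.log(prior) - n * math.log(V_C)
for word in document:
    log_p += math.log(counts.get(word, 1))    # smoothing: count 1 for unseen words
print(log_p)
```

No matter how long the document is, the result stays a moderate negative number instead of underflowing to zero.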



Implementing Naive Bayes

With this information it is easy to implement a Naive Bayes Classifier algorithm.
Our training set is a list of tuples, where each tuple contains two elements: the tokenized text and the label.


The training of the Naive Bayes Classifier is done by iterating through all of the documents in the training set, keeping track of the number of occurrences of each word per class. Of course, we can exclude stopwords and include n-gram features if these options were chosen during the training process (see the previous post). Many different data containers can be used to keep track of the number of occurrences. In Python a DataFrame is very useful for this purpose. The advantage of a DataFrame is that it is easy to save (.to_csv) once the training is done; at a later time it can be loaded again with .read_csv. The code to train a NB Classifier looks like:
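A minimal sketch of such a training routine (the function name and the toy training set are illustrative, not the original code):

```python
import pandas as pd

def train_nb(training_set):
    """Build a word-count DataFrame: one row per word, one column per class."""
    counts = {}  # {word: {label: count}}
    for tokens, label in training_set:
        for word in tokens:
            row = counts.setdefault(word, {})
            row[label] = row.get(label, 0) + 1
    # Words never seen in a class get count 0 in that column
    return pd.DataFrame.from_dict(counts, orient="index").fillna(0).astype(int)

# Toy training set: (tokenized text, label) tuples
training_set = [
    (["good", "great", "good"], "pos"),
    (["bad", "awful"], "neg"),
    (["okay", "good"], "neu"),
]
df = train_nb(training_set)
df.to_csv("nb_counts.csv")  # persist the counts; reload later with pd.read_csv
```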



People who are already familiar with the pandas DataFrame will know that it looks something like this:
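A hypothetical example with made-up counts, one row per word from the training set and one column per class:

```python
import pandas as pd

# Illustrative per-class word counts after training (values are invented)
df = pd.DataFrame(
    {"neg": [12, 2, 0], "neu": [3, 5, 1], "pos": [1, 40, 25]},
    index=["awful", "good", "great"],
)
print(df)
```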


The DataFrame containing the number of occurrences of each word from the training set is actually all the training our model needs. With this DataFrame df, the algorithm for the Naive Bayes looks like:
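A sketch of the classification step, following the log formula derived earlier; the function name, priors, and toy counts are assumptions, not the original listing:

```python
import math
import pandas as pd

def classify(df, priors, tokens):
    """Return the most probable class for a tokenized document."""
    totals = df.sum()                  # V_C: total word count per class
    current_class_prob = {}
    for c in df.columns:
        # log P(C) - n*log V_C + sum_i log count(d_i, C)
        log_p = math.log(priors[c]) - len(tokens) * math.log(totals[c])
        for word in tokens:
            count = df.at[word, c] if word in df.index else 0
            log_p += math.log(count if count > 0 else 1)  # Laplacian smoothing
        current_class_prob[c] = log_p
    # Most probable class: key with the maximum value in current_class_prob
    return max(current_class_prob, key=current_class_prob.get)

# Toy counts and priors for illustration
df = pd.DataFrame(
    {"neg": [12, 2, 0], "neu": [3, 5, 1], "pos": [1, 40, 25]},
    index=["awful", "good", "great"],
)
priors = {"neg": 0.3, "neu": 0.3, "pos": 0.4}
print(classify(df, priors, ["good", "great"]))
```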


As we can see, the Naive Bayes Classifier is easy to implement: the classification step is just a loop over the classes and over the words of the document, accumulating log probabilities.
The most probable class is given by the key with the maximum value in the dictionary current_class_prob.


Next blog:

In the next blog we will look at the results of this naively implemented algorithm for the Naive Bayes Classifier and see how it performs under various conditions: we will see the influence of varying training-set sizes and whether the use of n-gram features improves the accuracy of the classifier.
