Developing a Naive Bayes Text Classifier in Java

In previous articles we discussed the theoretical background of the Naive Bayes text classifier and the importance of using feature selection techniques in text classification. In this article, we put everything together and build a simple implementation of the Naive Bayes text classification algorithm in Java. The code of the classifier is open-sourced (under the GPL v3 license) and you can download it from GitHub.

Update: The Datumbox Machine Learning Framework is now open-source and free to download. Check out the package com.datumbox.framework.machinelearning.classification to see the implementation of Naive Bayes Classifier in Java.

Naive Bayes Java Implementation

The code is written in Java and can be downloaded directly from GitHub. It is licensed under GPLv3, so feel free to use it, modify it and redistribute it freely.

The text classifier implements the Multinomial Naive Bayes model along with the Chi-square feature selection algorithm. All the theoretical details of how both techniques work are covered in previous articles, and detailed Javadoc comments describing the implementation can be found in the source code. Thus in this segment I will focus on a high-level description of the architecture of the classifier.

1. NaiveBayes Class

This is the main part of the Text Classifier. It implements methods such as train() and predict() which are responsible for training a classifier and using it for predictions. It should be noted that this class is also responsible for calling the appropriate external methods to preprocess and tokenize the document before training/prediction.

2. NaiveBayesKnowledgeBase Object

The output of training is a NaiveBayesKnowledgeBase Object which stores all the necessary information and probabilities that are used by the Naive Bayes Classifier.

3. Document Object

Both the training and the prediction texts in the implementation are internally stored as Document Objects. The Document Object stores all the tokens (words) of the document, their statistics and the target classification of the document.
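As a rough sketch of what such a container looks like (the field and method names below are illustrative, not the actual names from the source code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a Document-style object: per-token counts plus the target class.
public class DocumentSketch {
    // token -> number of occurrences within this document
    public final Map<String, Integer> tokens = new HashMap<>();

    // target classification; null for unlabeled texts awaiting prediction
    public String category;

    public void addToken(String token) {
        tokens.merge(token, 1, Integer::sum); // increment, starting from 1 on first sight
    }
}
```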

4. FeatureStats Object

The FeatureStats Object stores several statistics that are generated during the Feature Extraction phase. Such statistics are the Joint counts of Features and Class (from which the joint probabilities and likelihoods are estimated), the Class counts (from which the priors are evaluated if none are given as input) and the total number of observations used for training.
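A minimal sketch of such a statistics holder, assuming illustrative names (the real class may organize these counters differently):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of FeatureStats-style counters gathered during feature extraction.
public class FeatureStatsSketch {
    // feature -> (category -> number of documents of that category containing the feature)
    public final Map<String, Map<String, Integer>> featureCategoryJointCount = new HashMap<>();

    // category -> number of training documents (used to estimate priors if none are given)
    public final Map<String, Integer> categoryCounts = new HashMap<>();

    // total number of training observations
    public int n = 0;

    public void observeFeature(String feature, String category) {
        featureCategoryJointCount
            .computeIfAbsent(feature, k -> new HashMap<>())
            .merge(category, 1, Integer::sum);
    }

    public void observeDocument(String category) {
        categoryCounts.merge(category, 1, Integer::sum);
        n++;
    }
}
```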

5. FeatureExtraction Class

This is the class responsible for performing feature extraction. Since this class internally calculates several of the statistics that are also required by the classification algorithm at a later stage, these stats are cached and returned in a FeatureStats Object to avoid recalculating them.

6. TextTokenizer Class

This is a simple text tokenization class, responsible for preprocessing, cleaning and tokenizing the original texts and converting them into Document objects.
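The general idea of such a tokenizer can be sketched as follows (this is a minimal stand-in, not the actual TextTokenizer code): lowercase the text, strip punctuation, and split on whitespace.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Minimal preprocessing + tokenization sketch: lowercase, drop punctuation, split on whitespace.
public class TokenizerSketch {
    public static List<String> tokenize(String text) {
        String cleaned = text.toLowerCase(Locale.ROOT)
                             .replaceAll("[^\\p{L}\\p{Nd}\\s]", " ") // keep letters, digits, whitespace
                             .trim();
        if (cleaned.isEmpty()) {
            return List.of();
        }
        return Arrays.asList(cleaned.split("\\s+"));
    }
}
```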

Using the NaiveBayes Java Class

In the NaiveBayesExample class you can find examples of using the NaiveBayes class. The goal of the sample code is to train a simple Naive Bayes classifier that detects the language of a text. To train the classifier, we first provide the paths of the training datasets in a HashMap and then load their contents.

   //map of dataset files
   Map<String, URL> trainingFiles = new HashMap<>();
   trainingFiles.put("English", NaiveBayesExample.class.getResource("/datasets/training.language.en.txt"));
   trainingFiles.put("French", NaiveBayesExample.class.getResource("/datasets/"));
   trainingFiles.put("German", NaiveBayesExample.class.getResource("/datasets/"));

   //loading examples in memory
   Map<String, String[]> trainingExamples = new HashMap<>();
   for(Map.Entry<String, URL> entry : trainingFiles.entrySet()) {
      trainingExamples.put(entry.getKey(), readLines(entry.getValue()));
   }

The Naive Bayes classifier is trained by passing the data to it. Once training is completed, the NaiveBayesKnowledgeBase Object is stored for later use.

   //train classifier
   NaiveBayes nb = new NaiveBayes();
   nb.setChisquareCriticalValue(6.63); //0.01 pvalue
   nb.train(trainingExamples);

   //get trained classifier knowledge base
   NaiveBayesKnowledgeBase knowledgeBase = nb.getKnowledgeBase();

Finally, to use the classifier and predict the classes of new examples, all you need to do is initialize a new classifier by passing in the NaiveBayesKnowledgeBase Object that you acquired earlier from training. Then, by simply calling the predict() method, you get the predicted class of the document.

   //Test classifier
   nb = new NaiveBayes(knowledgeBase);
   String exampleEn = "I am English";
   String outputEn = nb.predict(exampleEn);
   System.out.format("The sentence \"%s\" was classified as \"%s\".%n", exampleEn, outputEn);

Necessary Expansions

This particular Java implementation should not be considered a complete, ready-to-use solution for sophisticated text classification problems. Here are some of the important expansions that could be made:

1. Keyword Extraction:

Even though using single keywords can be sufficient for simple problems such as Language Detection, other more complicated problems require the extraction of n-grams. Thus one can either implement a more sophisticated text extraction algorithm by updating the TextTokenizer.extractKeywords() method or use Datumbox’s Keyword Extraction API function to get all the n-grams (keyword combinations) of the document.
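For illustration, word n-grams can be produced from an already-tokenized text with a simple sliding window (a hypothetical helper, not part of the actual TextTokenizer):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of word n-gram extraction: joins every window of n consecutive tokens.
public class NGramSketch {
    public static List<String> ngrams(List<String> tokens, int n) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i + n <= tokens.size(); i++) {
            result.add(String.join(" ", tokens.subList(i, i + n)));
        }
        return result;
    }
}
```

For example, with n = 2 the tokens of "the quick brown" yield the bigrams "the quick" and "quick brown".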

2. Text Preprocessing:

Before using a classifier it is usually necessary to preprocess the document in order to remove unnecessary characters/parts. Even though the current implementation performs limited preprocessing via the TextTokenizer.preprocess() method, things become trickier when it comes to analyzing HTML pages. One can simply strip out the HTML tags and keep only the plain text of the document, or resort to more sophisticated machine learning techniques that detect the main text of the page and remove content belonging to footers, headers, menus etc. For the latter you can use Datumbox's Text Extraction API function.
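The simple "strip the tags" approach can be sketched with regular expressions (a crude illustration only; for production use, a proper HTML parser and boilerplate-removal heuristics are a better choice):

```java
// Crude HTML-to-text sketch: removes script/style blocks, then all remaining tags.
public class HtmlStripSketch {
    public static String stripTags(String html) {
        return html.replaceAll("(?is)<(script|style)[^>]*>.*?</\\1>", " ") // drop script/style content
                   .replaceAll("<[^>]+>", " ")   // drop remaining tags
                   .replaceAll("\\s+", " ")      // collapse whitespace
                   .trim();
    }
}
```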

3. Additional Naive Bayes Models:

The current classifier implements the Multinomial Naive Bayes model; nevertheless, as we discussed in a previous article about Sentiment Analysis, different classification problems require different models. In some cases a Binarized version of the algorithm is more appropriate, while in others the Bernoulli model will provide much better results. Use this implementation as a starting point and follow the instructions of the Naive Bayes Tutorial to expand the model.
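The core preprocessing difference of the Binarized variant is small: each token counts at most once per document, regardless of how many times it appears. A minimal sketch of that step (illustrative names, not from the actual source):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the Binarized Multinomial NB counting step:
// token frequencies within a document are clamped to 1 (presence/absence only).
public class BinarizeSketch {
    public static Map<String, Integer> binarizedCounts(List<String> tokens) {
        Map<String, Integer> counts = new HashMap<>();
        for (String t : tokens) {
            counts.put(t, 1); // presence only; duplicates do not increase the count
        }
        return counts;
    }
}
```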

4. Additional Feature Selection Methods:

This implementation uses the Chi-square feature selection algorithm to select the most appropriate features for the classification. As we saw in a previous article, the Chi-square feature selection method is a good technique that relies on statistics to select the appropriate features; nevertheless, it tends to give higher scores to rare features that appear in only one of the categories. Improvements can be made by removing noisy/rare features before proceeding to feature selection, or by implementing additional methods such as Mutual Information, which we discussed in the aforementioned article.
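The "remove rare features first" improvement amounts to a simple frequency cutoff before the Chi-square scores are computed. A hypothetical helper (names and threshold are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: drop features whose total count across the training corpus falls
// below a minimum threshold, before running Chi-square feature selection.
public class RareFilterSketch {
    public static Map<String, Integer> filterRare(Map<String, Integer> totalCounts, int minCount) {
        Map<String, Integer> kept = new HashMap<>();
        for (Map.Entry<String, Integer> e : totalCounts.entrySet()) {
            if (e.getValue() >= minCount) {
                kept.put(e.getKey(), e.getValue()); // keep only sufficiently frequent features
            }
        }
        return kept;
    }
}
```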

5. Performance Optimization:

In this particular implementation, readability of the code was prioritized over micro-optimizations. Despite the fact that such optimizations make the code uglier and harder to read/maintain, they are often necessary, since many loops in this algorithm are executed millions of times during training and testing. This implementation can be a great starting point for developing your own tuned version.

Almost there… Final Notes!

To get a good understanding of how this implementation works, you are strongly advised to read the two previous articles about the Naive Bayes classifier and feature selection. You will gain insight into the theoretical background of the methods, and it will make parts of the algorithm/code clearer.

We should note that, despite being easy, fast and most of the time "quite accurate", Naive Bayes is also "naive" because it makes the assumption of conditional independence of the features. Since this assumption is almost never met in text classification problems, Naive Bayes is almost never the best-performing classifier. In the Datumbox API, some expansions of the standard Naive Bayes classifier are used only for simple problems such as language detection. For more complicated text classification problems, more advanced techniques such as the Max Entropy classifier are necessary.

If you use the implementation in an interesting project drop us a line and we will feature your project on our blog. Also if you like the article please take a moment and share it on Twitter or Facebook. 🙂


My name is Vasilis Vryniotis. I'm a Data Scientist, a Software Engineer, author of the Datumbox Machine Learning Framework and a proud geek.

Latest Comments
  1. Vasilis Vryniotis

    Hi Glenn,

    Thanks for your comments. I'm not sure what you mean by semi-labeled Twitter messages. You can definitely use the Naive Bayes implementation as a basis for your code. As I point out in the article, there are some changes that you will need to make (such as the tokenization algorithm etc.). I have more tips on Sentiment Analysis projects on this blog. I will not provide you with links because in most articles I basically discuss document classification and sentiment analysis. Just check my previous posts.

    Good luck with your thesis! 🙂

  2. Vasilis Vryniotis

    Hi Neenu,

    You need to consider what your features are and which of them you will keep. Are they continuous or discrete? If they are discrete, an approach like the one I follow here would be sufficient (you can even use a large part of the code). If they are continuous, you need to either fit a distribution on them and make your calculations, or use a non-parametric method and build the histogram of your values. It heavily depends on what type of data you have.

    If I were you I would search on Google Scholar to see how researchers tried to tackle the problem, what are the state of the art methods, the pitfalls and the best practices etc. Note that if your features are highly correlated using Naive Bayes is not an appropriate technique because it assumes conditional independence.

    Good luck! 🙂

  3. Vasilis Vryniotis

    Hey Evis,

    Thanks for the comment. Sure, feature selection is an important part. You are not forced to do particularly chi-square; you can select some other test/technique though.

  4. Vasilis Vryniotis

    Hey, I advise you to use the Datumbox Machine Learning Framework, which supports several different versions of the Naive Bayes classifier and is guaranteed to be bug-free. 🙂

  5. Alex

    Hello Vasilis,

    I was wondering if there was an easy way to save off a trained classifier for reuse in Datumbox. Thanks!


    • Vasilis Vryniotis

      Hi Alex,

      Yes of course. Once you train a classifier, the results are stored automatically. You define the location on your properties file.


  6. Namrata

    I was wondering, can I use this algorithm for purpose other than text mining, like some prediction.


  7. ilias

    Hello and congrats for your work! it’s great…
    I was wondering if we could somehow save the model that we have already trained… if so, could you provide some instructions to do so?
    thank you in advance!
