Datumbox Machine Learning Framework version 0.8.0 released

Datumbox Framework v0.8.0 is out and packs several powerful features! This version brings new Preprocessing, Feature Selection and Model Selection algorithms, new powerful Storage Engines that give better control over how the Models and the Dataframes are saved/loaded, several pre-trained Machine Learning models and lots of memory & speed improvements. Download it now from GitHub or the Maven Central Repository.

New Storage Engines

One of the main targets of version 0.8.0 was to improve the storage mechanisms of the framework and make disk-based training available to all the supported algorithms. The new storage engines give better control over how and when models are persisted. One important change is that models are no longer stored automatically after the fit() method finishes; instead one needs to explicitly call the save() method and provide a name for the model. This makes it easier to discard temporary models without going through a serialization phase, and it also allows saving/loading the Dataframes:

Configuration configuration = Configuration.getConfiguration();
Dataframe data = ...; //load a dataframe here

MaximumEntropy.TrainingParameters params = new MaximumEntropy.TrainingParameters();
MaximumEntropy model = MLBuilder.create(params, configuration);
model.fit(data);
model.save("MyModel"); //save the model using the specific name
model.close();

data.save("MyData"); //save the data using a specific name
data.close();

data = Dataframe.Builder.load("MyData", configuration); //load the data
model = MLBuilder.load(MaximumEntropy.class, "MyModel", configuration); //load the model
model.predict(data);
model.delete(); //delete the model

Currently we support two storage engines: the InMemory engine, which is very fast as it loads everything in memory, and the MapDB engine, which is slower but permits disk-based training. You can control which engine is used by changing your datumbox.configuration.properties file, or you can modify the configuration objects programmatically. Each engine has its own configuration file, but again everything can be modified programmatically:

Configuration configuration = Configuration.getConfiguration(); //conf from properties file

configuration.setStorageConfiguration(new InMemoryConfiguration()); //use In-Memory engine
//configuration.setStorageConfiguration(new MapDBConfiguration()); //use MapDB engine

Please note that both engines have a directory setting which controls where the models are stored (the inMemoryConfiguration.directory and mapDBConfiguration.directory properties in the config files). Make sure you change them, or else the models will be written to the temporary folder of your system. For more information on how to structure the configuration files, have a look at the Code Example project.
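For instance, based on the property names mentioned above, the relevant entries of the two engine configuration files would look roughly like this (the paths are placeholders; point them to a directory of your choice):

# in the InMemory engine's configuration file
inMemoryConfiguration.directory=/opt/datumbox/storage

# in the MapDB engine's configuration file
mapDBConfiguration.directory=/opt/datumbox/storage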

Datumbox Zoo: Pre-trained models

With the new storage mechanism in place, it is now feasible to publicly share pre-trained models that cover the areas of Sentiment Analysis, Spam Detection, Language Detection, Topic Classification and all the other models that are available via the Datumbox API. You can now download and use all the pre-trained models in your project without calling the API and without being limited by the number of daily calls. Currently the published models are trained using the InMemory storage engine and they support only English. In future releases, I plan to provide support for more languages.
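Loading a downloaded model follows the same MLBuilder.load() pattern shown earlier. Here is a minimal sketch, assuming you placed a pre-trained TextClassifier in the directory configured for the InMemory engine; the name "SentimentAnalysis" is a hypothetical placeholder, so use the name of the model you actually downloaded:

Configuration configuration = Configuration.getConfiguration();
configuration.setStorageConfiguration(new InMemoryConfiguration()); //the published models use the InMemory engine

Dataframe data = ...; //load the records to classify here

//"SentimentAnalysis" is a placeholder name
TextClassifier classifier = MLBuilder.load(TextClassifier.class, "SentimentAnalysis", configuration);
classifier.predict(data); //score the Dataframe as usual
classifier.close();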

Code Improvements & new Algorithms

In the new framework, there are several changes to the public methods of many classes (hence it is not backwards compatible). The most notable difference is in the way models are initialized. As we saw in the earlier code example, models are no longer instantiated directly; instead the MLBuilder class is used to either create or load a model. The training parameters are provided directly to the builder and they can’t be changed with a setter.

Another improvement is in the way we perform Model Selection. Version 0.8.0 introduces the new modelselection package, which offers all the necessary tools for validating and measuring the performance of our models. The metrics subpackage provides the most important validation metrics for classification, clustering, regression and recommendation. Note that ValidationMetrics have been removed from the individual algorithms and are no longer stored together with the model. The new splitters subpackage enables splitting the original dataset using different schemes: K-fold splits are performed with the KFoldSplitter class, while partitioning the dataset into a training and a test set can be achieved with the ShuffleSplitter. Finally, to quickly validate a model, the framework offers the Validator class. Here is how one can perform K-fold cross-validation within a couple of lines of code:

int k = 5; //number of folds
ClassificationMetrics vm = new Validator<>(ClassificationMetrics.class, configuration)
    .validate(new KFoldSplitter(k).split(data), new MaximumEntropy.TrainingParameters());
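Similarly, a single train/test split can be produced with the ShuffleSplitter. The following sketch assumes that the constructor takes the proportion of records assigned to the training set and that every Split exposes getTrain() and getTest() accessors:

new ShuffleSplitter(0.8).split(data).forEachRemaining(split -> {
    Dataframe trainingData = split.getTrain(); //80% of the records
    Dataframe testData = split.getTest(); //the remaining 20%
    //train on trainingData, evaluate on testData
});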

The new Preprocessing package replaces the old Data Transformers and gives better control over how we scale and encode the data before feeding it to the machine learning algorithms. The following algorithms are supported for scaling numerical variables: MinMaxScaler, StandardScaler, MaxAbsScaler and BinaryScaler. For encoding categorical variables into booleans you can use the OneHotEncoder and CornerConstraintsEncoder methods. Here is how you can use the new algorithms:

Dataframe trainingData = ...; //load a dataframe here

//scale the numerical variables of the Dataframe in place
StandardScaler numericalScaler = MLBuilder.create(
    new StandardScaler.TrainingParameters(), 
    configuration
);
numericalScaler.fit_transform(trainingData);

//encode the categorical variables of the Dataframe into booleans
CornerConstraintsEncoder categoricalEncoder = MLBuilder.create(
    new CornerConstraintsEncoder.TrainingParameters(), 
    configuration
);
categoricalEncoder.fit_transform(trainingData);

Another important update is that the Feature Selection package has been rewritten. All feature selection algorithms now focus on specific datatypes, which makes it possible to chain different methods together. As a result, the TextClassifier and Modeler classes receive a list of feature selector parameters rather than just one, as sketched below.
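For instance, chaining two text feature selectors on a TextClassifier could look roughly like the following sketch. Note that the setter name setFeatureSelectorTrainingParametersList() is an assumption inferred from the description above, not a verified signature:

TextClassifier.TrainingParameters params = new TextClassifier.TrainingParameters();

//assumed setter; the point is that a List of selector parameters is passed
params.setFeatureSelectorTrainingParametersList(Arrays.asList(
    new ChisquareSelect.TrainingParameters(),
    new MutualInformation.TrainingParameters()
));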

Speed & Memory improvements

As mentioned earlier, all the algorithms now support disk-based training, including those that use matrices (the only exception is Support Vector Machines). The new storage engine mechanism even makes it possible to keep some algorithms or dataframes in memory while storing others on disk. Several speed improvements were introduced, primarily due to the new storage engine mechanism but also due to the tuning of individual algorithms such as those of the DPMM family.
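Since every model and Dataframe receives its own Configuration object, mixing engines is simply a matter of passing different configurations, as in this short sketch that keeps the dataframe on disk while training the model in memory:

Configuration diskConfiguration = Configuration.getConfiguration();
diskConfiguration.setStorageConfiguration(new MapDBConfiguration()); //disk-based storage

Configuration memoryConfiguration = Configuration.getConfiguration();
memoryConfiguration.setStorageConfiguration(new InMemoryConfiguration()); //in-memory storage

Dataframe data = Dataframe.Builder.load("MyData", diskConfiguration); //disk-backed dataframe
MaximumEntropy model = MLBuilder.create(new MaximumEntropy.TrainingParameters(), memoryConfiguration); //in-memory model
model.fit(data);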

Dependencies

Last but not least, the new version updates all the dependencies to their latest versions and removes some of them, such as commons-lang and lp_solve. The commons-lang library, which was used for HTML parsing, is replaced with a faster custom HTMLParser implementation. The lp_solve library is replaced with a pure Java simplex solver, which means that Datumbox no longer requires specific system libraries installed on the operating system. Moreover, lp_solve had to go because it is licensed under LGPLv2, which is not compatible with the Apache 2.0 license.

Version 0.8.0 brings several more new features and improvements to the framework. For a detailed view of the changes, please check the Changelog.


Don’t forget to clone the code of Datumbox Framework v0.8.0 from GitHub, check out the Code Examples and download the pre-trained Machine Learning models from the Datumbox Zoo. I am looking forward to your comments and suggestions.
