The new version of the Datumbox Machine Learning Framework has been released! Download it now from Github or from the Maven Central Repository.
What is new?
The main focus of version 0.6.0 is to extend the Framework to handle Large Data, improve the code architecture and the public APIs, simplify data parsing, enhance the documentation and move to a permissive license.
Let’s see in detail the changes of this version:
- Handle Large Data: Improved memory management and new persistent storage engines enable the framework to handle big datasets, several GB in size. In particular, support for the MapDB database engine allows the framework to avoid keeping all the data in memory. The default InMemory engine has been redesigned to be more efficient, while the MongoDB engine was removed due to performance issues.
- Improved and simplified Framework architecture: The level of abstraction is significantly reduced and several core components have been redesigned. In particular, the persistent storage mechanisms have been rewritten and several unnecessary features and data structures have been removed.
- New “Scikit-Learn-like” public APIs: All the public methods of the algorithms have been changed to resemble Python’s Scikit-Learn APIs (the fit/predict/transform paradigm). The new public methods are more flexible and friendlier to use.
- Simplified data parsing: The new version ships with a set of convenience methods that allow fast parsing of CSV or text files and their conversion into Dataset objects.
- Improved Documentation: All the public/protected classes and methods of the Framework are documented with Javadoc comments. Additionally, the new version provides improved JUnit tests, which serve as great examples of how to use every algorithm of the framework.
- New Apache License: The software license of the framework has changed from “GNU General Public License v3.0” to “Apache License, Version 2.0”. The new license is permissive and allows redistribution within commercial software.
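To illustrate what the fit/predict paradigm looks like in Java, here is a minimal self-contained sketch. All interface, class and method names below are hypothetical illustrations of the paradigm, not the actual Datumbox API:

```java
// Minimal sketch of a Scikit-Learn-style fit/predict paradigm in Java.
// Hypothetical names for illustration only -- not the real Datumbox API.
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Learner<X, Y> {
    void fit(List<X> data, List<Y> labels); // learn model parameters from training data
    Y predict(X record);                    // predict the label of an unseen record
}

// A trivial majority-class classifier that follows the paradigm.
class MajorityClassifier implements Learner<double[], String> {
    private String majorityLabel;

    @Override
    public void fit(List<double[]> data, List<String> labels) {
        // count each label and remember the most frequent one
        Map<String, Integer> counts = new HashMap<>();
        for (String y : labels) {
            counts.merge(y, 1, Integer::sum);
        }
        majorityLabel = Collections.max(counts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    @Override
    public String predict(double[] record) {
        return majorityLabel; // always predict the majority class
    }
}
```

The point of the paradigm is that every model, however simple or complex, exposes the same two-step workflow: first `fit` on training data, then `predict` on unseen records.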
Since a large part of the framework was rewritten to make it more efficient and easier to use, version 0.6.0 is not backwards compatible with earlier versions. Finally, the framework has moved from the Alpha to the Beta development phase and should be considered more stable.
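As a sketch of the kind of convenience parsing described above, a CSV file can be turned into numeric feature records in a few lines of standard Java. The snippet below is generic and illustrative; it is not the framework's Dataset API:

```java
// Illustrative sketch: parse comma-separated lines into numeric records.
// Generic Java, not the Datumbox Dataset API.
import java.util.ArrayList;
import java.util.List;

class CsvSketch {
    // Convert lines like "5.1,3.5,1.4" into double[] feature vectors.
    static List<double[]> parse(List<String> lines) {
        List<double[]> records = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split(",");
            double[] record = new double[fields.length];
            for (int i = 0; i < fields.length; i++) {
                record[i] = Double.parseDouble(fields[i].trim());
            }
            records.add(record);
        }
        return records;
    }
}
```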
How to use it
In a previous blog post we provided a detailed guide on how to install the Framework. This guide is still valid for the new version. Additionally, in this new version you can find several Code Examples showing how to use the models and algorithms of the Framework.
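If you use Maven, adding the library typically amounts to a single dependency entry in your pom.xml. The coordinates below are believed to match the Maven Central release; check the Github repo if in doubt:

```xml
<dependency>
    <groupId>com.datumbox</groupId>
    <artifactId>datumbox-framework</artifactId>
    <version>0.6.0</version>
</dependency>
```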
Next steps & roadmap
The development of the framework will continue and the following enhancements should be made before the release of version 1.0:
- Using the Framework from the console: Even though the main target of the framework is to assist the development of Machine Learning applications, it should be made easier for non-Java developers to use. Following an approach similar to Mahout’s, the framework should provide access to the algorithms via console commands. The interface should be simple and easy to use, and it should be easy to combine the different algorithms.
- Support Multi-threading: The framework currently uses threads only for clean-up processes and asynchronous writing to disk. Nevertheless, some of the algorithms can be parallelized, which would significantly reduce execution times. The solution should be elegant and should modify the internal logic/maths of the machine learning algorithms as little as possible.
- Reduce the use of 2d arrays & matrices: A small number of algorithms still use 2d arrays and matrices. This causes all the data to be loaded into memory, which limits the size of the datasets that can be processed. Some algorithms (such as PCA) should be reimplemented to avoid the use of matrices, while others (such as GaussianDPMM, MultinomialDPMM etc.) should use sparse matrices.
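As an illustration of the kind of parallelization meant here (a generic sketch, not Datumbox internals), an independent per-record computation can be distributed over a thread pool without touching the underlying maths:

```java
// Generic sketch: score records in parallel with a fixed thread pool.
// Illustrative only -- not actual Datumbox code.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelScorer {
    // Apply an independent scoring function to every record in parallel.
    static List<Double> scoreAll(List<double[]> records) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Double>> futures = new ArrayList<>();
        for (double[] record : records) {
            futures.add(pool.submit(() -> {
                double sum = 0.0;
                for (double v : record) {
                    sum += v; // stand-in for the real per-record computation
                }
                return sum;
            }));
        }
        List<Double> scores = new ArrayList<>();
        for (Future<Double> f : futures) {
            try {
                scores.add(f.get()); // preserves the input order of the records
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return scores;
    }
}
```

Because each record is scored independently, the maths is untouched; only the scheduling of the work changes.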
Other important tasks that should be done in the upcoming versions:
- Include new Machine Learning algorithms: The framework can be extended to support several great algorithms such as Mixture of Gaussians, Gaussian Processes, k-NN, Decision Trees, Factor Analysis, SVD, PLSI, Artificial Neural Networks etc.
- Improve Documentation, Test coverage & Code examples: Create better documentation, improve the JUnit tests, enhance the code comments and provide better examples of how to use the algorithms.
- Improve Architecture & Optimize code: Further simplify and improve the architecture of the framework, rationalize the abstractions, improve the design, and optimize speed and memory consumption.
As you can see it’s a long road and I could use some help. If you are up for the challenge, drop me a line or send your pull request on Github.
I would like to thank Eleftherios Bampaletakis for his invaluable input on improving the architecture of the Framework. I would also like to thank ej-technologies GmbH for providing me with a license for their Java Profiler. Moreover, kudos to Jan Kotek for his amazing work on the MapDB storage engine. Last but not least, my love to my girlfriend Kyriaki for putting up with me.
Don’t forget to download the code of Datumbox v0.6.0 from Github. The library is also available on the Maven Central Repository. For more information on how to use the library in your Java project, check out the following guide or read the instructions on the main page of our Github repo.
I am looking forward to your comments and recommendations. Pull requests are always welcome! 🙂
My name is Vasilis Vryniotis. I'm a Machine Learning Engineer and a Data Scientist.