An algorithm library for scalable machine learning on Hadoop
Once big data is stored on the Hadoop Distributed File System (HDFS), Mahout provides the data science tools to automatically find meaningful patterns in those big data sets. The Apache Mahout project aims to make it faster and easier to turn big data into big information.
Mahout supports four main data science use cases: collaborative filtering, clustering, classification, and frequent itemset mining.
Mahout provides an implementation of various machine learning algorithms, some in local mode and some in distributed mode (for use with Hadoop). Each algorithm in the Mahout library can be invoked using the Mahout command line.
The following is a list of algorithms for use in distributed mode (Hadoop-compatible), classified into the four categories: collaborative filtering, clustering, classification, and frequent itemset mining. Mahout also includes some machine learning algorithms that can be used locally, but those are not listed here. For a complete list of algorithms, please visit http://mahout.apache.org/users/basics/algorithms.html.
| Algorithm | Category | Description |
|---|---|---|
| Distributed Item-based Collaborative Filtering | Collaborative Filtering | Estimates a user's preference for one item by looking at his or her preferences for similar items |
| Collaborative Filtering Using a Parallel Matrix Factorization | Collaborative Filtering | Among the items that a user has not yet seen, predicts which items the user might prefer |
| Canopy Clustering | Clustering | For preprocessing data before using a K-means or hierarchical clustering algorithm |
| Dirichlet Process Clustering | Clustering | Performs Bayesian mixture modeling |
| Fuzzy K-Means | Clustering | Discovers soft clusters where a particular point can belong to more than one cluster |
| Hierarchical Clustering | Clustering | Builds a hierarchy of clusters using either an agglomerative "bottom up" or divisive "top down" approach |
| K-Means Clustering | Clustering | Aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean |
| Latent Dirichlet Allocation | Clustering | Automatically and jointly clusters words into "topics" and documents into mixtures of topics |
| Mean Shift Clustering | Clustering | For finding modes or clusters in 2-dimensional space, where the number of clusters is unknown |
| Minhash Clustering | Clustering | For quickly estimating the similarity between two data sets |
| Spectral Clustering | Clustering | Clusters points using eigenvectors of matrices derived from the data |
| Bayesian | Classification | Used to classify objects into binary categories |
| Random Forests | Classification | An ensemble learning method for classification (and regression) that operates by constructing a multitude of decision trees |
| Parallel FP Growth Algorithm | Frequent Itemset Mining | Analyzes items in a group and then identifies which items typically appear together |
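To make the item-based collaborative filtering entry above concrete, here is a minimal pure-Python sketch of the idea: score an unseen item for a user by a similarity-weighted average of that user's ratings on similar items. This is an illustration only, not Mahout's implementation; the `ratings` data and the function names (`cosine`, `predict`) are invented for the example, and Mahout's distributed version computes the item-item similarities as MapReduce jobs over HDFS data instead.

```python
from math import sqrt

# Toy user-item rating matrix: user -> {item: rating}.
ratings = {
    "alice": {"a": 5.0, "b": 3.0, "c": 4.0},
    "bob":   {"a": 4.0, "b": 3.0, "c": 5.0},
    "carol": {"a": 1.0, "b": 5.0},
}

def item_vector(item):
    """Column of the user-item matrix for one item."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(i, j):
    """Cosine similarity between two item columns, over their co-rating users."""
    vi, vj = item_vector(i), item_vector(j)
    common = set(vi) & set(vj)
    if not common:
        return 0.0
    dot = sum(vi[u] * vj[u] for u in common)
    norm = sqrt(sum(x * x for x in vi.values())) * sqrt(sum(x * x for x in vj.values()))
    return dot / norm

def predict(user, item):
    """Similarity-weighted average of the user's ratings on the other items."""
    seen = ratings[user]
    num = sum(cosine(item, j) * r for j, r in seen.items() if j != item)
    den = sum(abs(cosine(item, j)) for j in seen if j != item)
    return num / den if den else 0.0

print(round(predict("carol", "c"), 2))
```

Because carol rated item "a" low and item "b" high, her predicted rating for "c" lands between the two, pulled toward whichever item "c" is more similar to.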
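Similarly, the K-means entry can be sketched in a few lines of plain Python: alternately assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. Again this is only an illustration of the algorithm, not Mahout's code; Mahout's distributed K-means performs the same two steps as iterated MapReduce passes over vectors stored in HDFS, and the data below is invented for the example.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Plain k-means on tuples: assign points to the nearest centroid,
    then recompute each centroid as the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initial centroids: k random points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assignment step
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):  # update step
            if members:
                centroids[i] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return centroids

# Two well-separated groups of points in the plane.
points = [(0.0, 0.0), (0.2, 0.1), (9.0, 9.0), (9.1, 8.9)]
print(sorted(kmeans(points, 2)))
```

On this toy data the two centroids settle near the two groups' means; in practice K-means is sensitive to initialization, which is why the table pairs it with Canopy Clustering as a preprocessing step for choosing starting centroids.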