Apache Spark 2.0 was released yesterday by the Apache Spark community. This is a long-awaited release that delivers several key features. We are really excited about this release and sincerely thank the Apache Software Foundation and the Apache Spark community for making it possible. The most notable improvements are in the areas of APIs, performance, Structured Streaming, and SparkR. Let’s review some of these improvements:
The unification of DataFrame and Dataset is now complete: in Scala and Java, DataFrame is simply an alias for Dataset[Row], while DataFrame remains the primary interface in R and Python. Another improvement is the elimination of the need to juggle multiple contexts (SparkContext, SQLContext, HiveContext). The new SparkSession, bound to the variable ‘spark’ in the shells, is the single entry point to all of Spark’s features, and the older contexts have been deprecated.
Project Tungsten has completed another major phase, and its new whole-stage code generation delivers significant performance improvements. Parquet and ORC file processing is also faster.
The DataFrame is the preferred Spark abstraction, since it delivers both ease of use through a higher-level API and superior performance through the Catalyst optimizer. The new Structured Streaming API brings streaming under that same DataFrame API we love: a stream is treated as an unbounded table, queried with the same operations as a static DataFrame.
With Spark 2.0, SparkR delivers new algorithms such as Naive Bayes, k-means clustering, and survival regression. Machine learning persistence has also improved: save and load are now supported for all models.
There are many other significant improvements; a full list is available in the Apache Spark release notes.
At Hortonworks we have always delivered the latest Apache Spark shortly after it is released in Apache, and this time is no different. We are going to deliver Apache Spark 2.0 in the following ways:
We congratulate the Spark community on this major milestone, and we will continue to participate deeply in the community to deliver enterprise-ready Apache Spark. The best is yet to come; stay tuned.