From the Dev Team

Follow the latest developments from our technical team

This guest blog post is from Alyssa Jarrett, product marketing manager at Splice Machine. Splice Machine is a Hortonworks Certified Technology Partner and provides one of the only Hadoop RDBMSs, powering a new generation of real-time applications and operational analytics. With its recent certification on HDP, Splice Machine offers a 10x price/performance improvement over traditional relational databases.

Splice Machine, built on top of the HDFS and Apache HBase components in the Hortonworks Data Platform (HDP), is delighted to announce that it has completed the required integration testing with HDP.…

This is the first post in a series that explores recent innovations in the Hadoop ecosystem that are included in HDP 2.2. In this post, we introduce themes to set context for deeper discussion in subsequent blogs.

HDP 2.2 represents another major step forward for Enterprise Hadoop. With thousands of enhancements across all elements of the platform spanning data access to security to governance, rolling upgrades and more, HDP 2.2 makes it even easier for our customers to incorporate HDP as a core component of Modern Data Architecture (MDA).…

In our Data Science and Hadoop series on predicting airline delays, we demonstrated how to build predictive models with Apache Hadoop using existing tools. In part 1, we employed Pig and Python; part 2 explored Spark, MLlib and Scala.

Throughout the series, the thesis, themes and algorithms were the same: we wanted to dispel the misconception that data scientists, when applying predictive learning algorithms like Linear Regression, Random Forest or Neural Networks to large datasets, require dramatic changes to their tooling, that they need dedicated clusters, and that existing tools will not suffice.…
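To make that point concrete, here is a minimal sketch of the same workflow using Spark's MLlib from Java, in the spirit of the Python and Scala code the series walks through. The file path, feature layout and label encoding (1.0 for a delayed flight) are illustrative assumptions, not the series' actual dataset.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.classification.LogisticRegressionModel;
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;

public class DelayModelSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("airline-delay-sketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Hypothetical CSV layout: label first (1.0 = delayed), then numeric features.
    JavaRDD<LabeledPoint> training = sc.textFile("hdfs:///data/flights/train.csv")
        .map(line -> {
          String[] parts = line.split(",");
          double label = Double.parseDouble(parts[0]);
          double[] features = new double[parts.length - 1];
          for (int i = 1; i < parts.length; i++) {
            features[i - 1] = Double.parseDouble(parts[i]);
          }
          return new LabeledPoint(label, Vectors.dense(features));
        })
        .cache();

    // Train a simple logistic regression classifier on the cluster.
    LogisticRegressionModel model =
        LogisticRegressionWithSGD.train(training.rdd(), 100);

    // Score one made-up feature vector (must match the training feature layout).
    double prediction = model.predict(Vectors.dense(new double[] {6.0, 1.0, 350.0}));
    System.out.println("Predicted delay class: " + prediction);

    sc.stop();
  }
}
```

The same model could be swapped for Random Forest or another MLlib algorithm without changing the surrounding plumbing, which is the point the series makes.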

On December 18th, 2014, Hortonworks presented the last of 8 Discover HDP 2.2 webinars, covering Apache Ambari. Justin Sears, Jeff Sposetti and Mahadev Konar hosted this final webinar in the series.

After Justin Sears set the stage for the webinar by explaining the drivers behind Modern Data Architecture (MDA), Jeff Sposetti and Mahadev Konar introduced Apache Ambari and discussed Ambari innovations now included in HDP 2.2:

  • Configuration Enhancements, including Versioning & History
  • Ambari Administration, including Views Framework
  • Ambari Stacks, including the “Stacks Advisor”

Here is the complete recording of the webinar.

Here are the presentation slides.

Apache HBase is the online database natively integrated with Hadoop, making HBase the obvious choice for applications that rely on Hadoop’s scale and flexible data processing. With the Hortonworks Data Platform 2.2, HBase High Availability has taken a major step forward, allowing apps on HBase to deliver 99.99% uptime guarantees. This blog takes a look at how HBase High Availability has improved over the past 12 months and how it will improve even more in the future.…
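One piece of that high-availability story is timeline-consistent reads against region replicas (HBASE-10070). The sketch below is a minimal illustration, assuming a recent HBase client (the 1.0-style connection API) and a hypothetical table named events, of how an application opts into highly available reads and detects possibly stale results.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("events"))) { // hypothetical table

      Get get = new Get(Bytes.toBytes("row-42"));
      // TIMELINE consistency lets the read be served by a secondary region
      // replica if the primary is unavailable, trading strict freshness for uptime.
      get.setConsistency(Consistency.TIMELINE);

      Result result = table.get(get);
      if (result.isStale()) {
        System.out.println("Served by a secondary replica; data may lag the primary.");
      }
      System.out.println(result);
    }
  }
}
```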

The Hadoop Distributed File System (HDFS) is the reliable and scalable data storage core of the Hortonworks Data Platform (HDP). In HDP, HDFS and YARN combine to form the distributed operating system for your data platform, providing resource management for diverse workloads and scalable data storage for the next generation of analytical applications.

In this blog, we’ll describe the key concepts introduced by Heterogeneous Storage in HDFS and how they are utilized to enable key tiered storage scenarios.…
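As a rough illustration of what tiered storage looks like to an application, the sketch below assigns an existing HDFS directory to the built-in COLD storage policy so that its new blocks land on archival media. The path is a made-up example, and the available policy names and exact API entry point depend on your Hadoop release.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StoragePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    if (!(fs instanceof DistributedFileSystem)) {
      throw new IllegalStateException("Expected an HDFS file system");
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    // Hypothetical directory of aged data; under the COLD policy, new blocks
    // are placed on DataNode volumes tagged as ARCHIVE storage. Existing
    // blocks are only relocated when the HDFS mover tool is run.
    Path agedData = new Path("/data/logs/2013");
    dfs.setStoragePolicy(agedData, "COLD");

    System.out.println("Applied COLD storage policy to " + agedData);
  }
}
```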

Last year on December 11th, Hortonworks presented the sixth of 8 Discover HDP 2.2 webinars: Apache HBase with YARN & Slider for Fast NoSQL Access. Justin Sears, Carter Shanklin and Enis Soztutar hosted this 6th webinar in the series.

After Justin Sears set the stage for the webinar by explaining the drivers behind Modern Data Architecture (MDA), Carter Shanklin and Enis Soztutar introduced Apache HBase and discussed how to use it with Apache Hadoop YARN and Apache Slider for fast NoSQL access to your data.…

With YARN as its architectural center, Apache Hadoop continues to attract new engines to run within the data platform, as organizations want to efficiently store their data in a single repository and interact with it for batch, interactive and real-time streaming use cases. As more data flows into and through a Hadoop cluster to feed these engines, Apache Falcon is a crucial framework for simplifying data management and pipeline processing.

Falcon enables data architects to automate the movement and processing of datasets for ingest, pipeline, disaster recovery and data retention use cases.…

We take pride in producing valuable technical blogs and sharing them with a wider audience. Of all the blogs published in 2014 on our website, the following were the most popular:

  • Improving Spark for Data Pipelines with Native YARN Integration

    Gopal Vijayaraghavan and Oleg Zhurakousky demonstrate an improved Apache Spark that, with the help of a pluggable execution context, integrates natively with YARN for data pipelines.

  • HDP 2.2: A Major Step Forward for Enterprise Hadoop

    Tim Hall outlines six months of innovation and new features across Apache Hadoop and its related projects.

  • Apache Ranger Audit Framework

    Apache Ranger provides centralized security for the Enterprise Hadoop ecosystem, including fine-grained access control and a centralized audit mechanism, both essential for Enterprise Hadoop. This blog covers the audit framework options available with Apache Ranger release 0.4.0 in HDP 2.2 and how they can be configured.

    The audit framework can be configured to send access audit logs generated by Apache Ranger plug-ins to one or more of the following destinations:

    • RDBMS: MySQL or Oracle
    • HDFS
    • Log4j appender
    For example, the xasecure.audit.is.enabled property enables or disables audit logging in the Ranger plug-in.…
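As a rough sketch of how such a flag reads, the snippet below loads the property named above through a Hadoop-style Configuration object. The resource name ranger-audit.xml and the fallback default are placeholders, since each Ranger plug-in ships its own audit configuration file per component.

```java
import org.apache.hadoop.conf.Configuration;

public class RangerAuditConfigSketch {
  public static void main(String[] args) {
    // Hypothetical resource name; real Ranger plug-ins read their own audit
    // configuration files for HDFS, Hive, HBase, and so on.
    Configuration auditConf = new Configuration(false);
    auditConf.addResource("ranger-audit.xml");

    boolean auditEnabled = auditConf.getBoolean("xasecure.audit.is.enabled", true);
    System.out.println("Ranger plug-in audit logging enabled: " + auditEnabled);
  }
}
```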

On December 4th, Hortonworks presented the fifth of 8 Discover HDP 2.2 webinars: Apache Kafka and Apache Storm for Stream Data Processing. Taylor Goetz, Rajiv Onat, and Justin Sears hosted this 5th webinar in the series.

After Justin Sears set the stage for the webinar by explaining the drivers behind Modern Data Architecture (MDA), Rajiv Onat and Taylor Goetz introduced and discussed how to use Apache Kafka and Apache Storm for stream data processing.…

With Apache Hadoop YARN as its architectural center, Apache Hadoop continues to attract new engines to run within the data platform, as organizations want to efficiently store their data in a single repository and interact with it for batch, interactive and real-time streaming use cases. Apache Storm brings real-time data processing capabilities to help capture new business opportunities by powering low-latency dashboards, security alerts, and operational enhancements integrated with other applications running in the Hadoop cluster.…
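To give a feel for the programming model, here is a minimal sketch of a Storm topology in Java, using the backtype.storm API of the Storm 0.9.x line shipped around HDP 2.2. The spout and the word-counting bolt are illustrative placeholders; a production topology would typically read from Apache Kafka via a Kafka spout instead.

```java
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

import java.util.HashMap;
import java.util.Map;

public class StreamSketch {

  // A trivial bolt that keeps a running count per word and logs it.
  public static class CountBolt extends BaseBasicBolt {
    private final Map<String, Integer> counts = new HashMap<>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
      String word = tuple.getString(0);
      int count = counts.merge(word, 1, Integer::sum);
      System.out.println(word + " -> " + count);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      // Terminal bolt: no downstream stream declared.
    }
  }

  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();
    // Placeholder spout emitting random words; swap in a Kafka spout for real data.
    builder.setSpout("words", new TestWordSpout(), 1);
    builder.setBolt("counter", new CountBolt(), 2).shuffleGrouping("words");

    Config conf = new Config();
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("stream-sketch", conf, builder.createTopology());

    Thread.sleep(10_000);
    cluster.shutdown();
  }
}
```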

Hortonworks architects vertically integrate the projects within our Hadoop distribution with YARN and HDFS, enabling HDP to span batch, interactive and real-time workloads across both open source and other data access technologies. In HDP 2.2, we delivered work to vertically integrate Apache Storm, Apache Accumulo and Apache HBase so that all of these long-running services run in Hadoop on YARN via Apache Slider.

The Apache Slider community recently released Apache Slider 0.60.0.…

On November 13th, Hortonworks presented the fourth of 8 Discover HDP 2.2 webinars, covering HDFS data storage innovations. Rohit Bakhshi, Jitendra Pandey, and Justin Sears hosted this 4th webinar in the series.

Rohit Bakhshi and Jitendra Pandey introduced HDP and discussed how to use HDFS as a reliable, scalable, cost-efficient, and fault-tolerant distributed data storage platform for your Modern Data Architecture (MDA). They also covered new HDFS data storage innovations now included in HDP 2.2:

  • Heterogeneous storage
  • Encryption
  • Operational security enhancements

Here is the complete recording of the webinar.…

The Stinger.next initiative, with its focus on transactions, sub-second queries and SQL:2011 Analytics, evolves Apache Hive to run most of the analytical workloads that are typical within a data warehouse, but now at petabyte scale. The first phase of Stinger.next, delivered in Apache Hive 0.14 and in HDP 2.2, delivers transactions with ACID semantics, a critical step in the evolution of Hive as the de facto standard for SQL in Hadoop.…
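As a rough sketch of what those ACID semantics look like to an application, the JDBC snippet below creates a bucketed ORC table marked transactional and then updates a row. The connection URL, credentials, table name and column layout are illustrative assumptions, and the HiveServer2 instance must already be configured for transactions (DbTxnManager, compactor, and related settings).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveAcidSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Hypothetical HiveServer2 endpoint; adjust host, port and database.
    String url = "jdbc:hive2://localhost:10000/default";

    try (Connection conn = DriverManager.getConnection(url, "hive", "");
         Statement stmt = conn.createStatement()) {

      // ACID tables in Hive 0.14 must be bucketed, stored as ORC,
      // and flagged transactional.
      stmt.execute("CREATE TABLE IF NOT EXISTS customer_state (" +
          "  id INT, state STRING)" +
          " CLUSTERED BY (id) INTO 4 BUCKETS" +
          " STORED AS ORC" +
          " TBLPROPERTIES ('transactional'='true')");

      stmt.execute("INSERT INTO TABLE customer_state VALUES (1, 'CA'), (2, 'NY')");

      // UPDATE and DELETE become possible once the table is transactional.
      stmt.execute("UPDATE customer_state SET state = 'WA' WHERE id = 2");
    }
  }
}
```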