April 05, 2017

Announcing Hortonworks Data Cloud for AWS 1.14.1

We are pleased to announce the latest release of Hortonworks Data Cloud for AWS. Hortonworks Data Cloud (“HDCloud”) provides a quick and easy on-ramp for users looking to combine the agility of Amazon Web Services (“AWS”) with the data processing power of the Hortonworks Data Platform.

This new HDCloud release (version 1.14.1) further reduces the operational effort for administrators while providing powerful tools & technologies for data analysts. The new HDCloud release includes:

  • Support for Hortonworks Data Platform 2.6
  • Cluster Auto-Scaling
  • Node Auto-Repair
  • Built-In Protected Gateway

You can read all about the new features at this link, but here is a brief summary:

Support for Hortonworks Data Platform 2.6

By adding support for Hortonworks Data Platform 2.6, HDCloud puts the newest innovations in your hands quickly. This includes: Apache Hive LLAP for fast, interactive analytics; Apache Spark 2.1 and Apache Zeppelin for data science; and a Technical Preview of Druid. This is in addition to the traditional workload cluster types for ETL processing.

Cluster Auto-Scaling

As an administrator, you have to make sure the end users of the platform get fast and reliable access to data. But user workloads often change, requiring you to adjust and expand the cluster to meet new demands. After you perform a cluster expansion and the workload demand subsides, you want to be sure to “cut the cloud spend” and terminate the excess cloud resources.

With the Cluster Auto-Scaling feature, HDCloud can help you with this. You can define policies for scaling a cluster up (or down) based on workload activity. By watching resource usage, the system can add (or remove) nodes from a cluster to keep things running smoothly for end users while avoiding excess resources. Additionally, you can set up scaling boundaries so the system never drifts too far in either direction.
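The scaling behavior described above can be sketched roughly as follows. This is an illustrative model only; the function name, thresholds, and boundary parameters are assumptions for the sketch, not the actual HDCloud policy engine:

```python
def desired_node_count(current, utilization,
                       scale_up_at=0.8, scale_down_at=0.3,
                       min_nodes=3, max_nodes=10):
    """Pick a target node count from observed resource utilization.

    Scales up when utilization is high, scales down when it is low,
    and clamps the result to the configured boundaries so the cluster
    never drifts too far in either direction.
    """
    if utilization >= scale_up_at:
        target = current + 1       # workload is hot: add a node
    elif utilization <= scale_down_at:
        target = current - 1       # workload is idle: cut cloud spend
    else:
        target = current           # within the comfort band: no change
    # Scaling boundaries keep the policy from over- or under-shooting.
    return max(min_nodes, min(max_nodes, target))
```

For example, a 5-node cluster at 90% utilization would grow to 6 nodes, while a 3-node cluster at 10% utilization would stay at 3 because the lower boundary holds.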

Node Auto-Repair

Even in the cloud, infrastructure instances sometimes get into an “unstable state”. Keeping the platform running so that end users are minimally impacted is a key job for the operator. With the Node Auto-Repair feature, you can instruct the platform to watch for nodes becoming unhealthy. When that happens, the system can remove the affected node and replace it automatically.
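The repair loop amounts to probing each node and swapping out the failures. The sketch below is a simplified illustration under assumed helper functions (`is_healthy`, `provision_node`), not HDCloud's actual repair mechanism:

```python
def repair_unhealthy(nodes, is_healthy, provision_node):
    """Replace nodes that fail a health check.

    `is_healthy` probes a node; `provision_node` supplies a fresh
    replacement instance. Returns the repaired node list plus the
    list of nodes that were removed.
    """
    repaired, removed = [], []
    for node in nodes:
        if is_healthy(node):
            repaired.append(node)
        else:
            # Terminate the unstable instance and stand up a new one.
            removed.append(node)
            repaired.append(provision_node())
    return repaired, removed


# Example: node "n2" fails its health check and is replaced by "n4".
healthy = {"n1", "n3"}
fresh_ids = iter(["n4", "n5"])
nodes, removed = repair_unhealthy(
    ["n1", "n2", "n3"],
    is_healthy=lambda n: n in healthy,
    provision_node=lambda: next(fresh_ids),
)
# nodes == ["n1", "n4", "n3"], removed == ["n2"]
```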

Built-In Protected Gateway

A key security consideration for any platform starts at the network. Providing perimeter security for a cluster helps you minimize access points and the attack surface area. HDCloud now installs a Protected Gateway for each workload cluster. The Protected Gateway (powered by Apache Knox) provides a central access point to cluster resources for the end user (e.g., Hive JDBC and the Zeppelin UI). This makes the system much easier to administer and reduces the need for you to open (and manage) security groups and ports.
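For a sense of what central access looks like, a Knox-style Hive JDBC connection string routes over HTTPS through the gateway rather than directly to cluster ports. The host and `httpPath` below are placeholders; the exact path HDCloud configures may differ:

```
jdbc:hive2://<gateway-host>:8443/;ssl=true;transportMode=http;httpPath=<gateway-path>/hive
```

Because every service is reached through the one gateway endpoint, only that endpoint's security group and port need to be exposed.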

Get Started

So that’s just a taste of the new features. Please check out the following links to learn more about the product, its features, and how to get started.

Have Fun!

Product Webpage
Product Documentation
“Get Started with HDCloud” Webinar


