We are pleased to announce the latest release of Hortonworks Data Cloud for AWS. Hortonworks Data Cloud (“HDCloud”) provides a quick and easy on-ramp for users looking to combine the agility of Amazon Web Services (“AWS”) with the data processing power of the Hortonworks Data Platform.
This new HDCloud release (version 1.14.1) further reduces the operational effort for administrators while providing powerful tools and technologies for data analysts.
You can read all about the new features at this link, but here is a brief summary:
By adding support for Hortonworks Data Platform 2.6, HDCloud puts the newest innovations in your hands quickly. This includes: Apache Hive LLAP for fast, interactive analytics; Apache Spark 2.1 and Apache Zeppelin for data science; and a Technical Preview of Druid. This is in addition to the traditional workload cluster types for ETL processing.
As an administrator, you have to make sure the end users of the platform get fast and reliable access to data. But user workloads often change, requiring you to adjust and expand the cluster to meet new demands. After you expand the cluster and the workload demand subsides, you want to be sure to “cut the cloud spend” and terminate the now-idle cloud resources.
With the Cluster Auto-Scaling feature, HDCloud can help you with this. You can define policies for scaling a cluster up (or down) based on workload activity. By watching resource usage, the system can add (or remove) nodes to keep things running smoothly for end users while avoiding excess resources. Additionally, you can set scaling boundaries so the system never grows or shrinks the cluster too far in either direction.
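To make the idea concrete, here is a minimal sketch of a threshold-based scaling decision with boundaries. This is an illustration of the concept, not HDCloud's actual policy engine or API; the function name, thresholds, and step size are all assumptions for the example.

```python
# Illustrative sketch only -- not HDCloud's actual scaling policy API.
# Scale up when utilization is high, down when it is low, and always
# stay within the configured min/max boundaries.

def decide_node_count(current_nodes, utilization,
                      min_nodes, max_nodes,
                      scale_up_at=0.8, scale_down_at=0.3, step=1):
    """Return the new node count for a cluster given its current load."""
    if utilization >= scale_up_at:
        # Expand, but never past the upper boundary.
        return min(current_nodes + step, max_nodes)
    if utilization <= scale_down_at:
        # Shrink, but never below the lower boundary.
        return max(current_nodes - step, min_nodes)
    # Utilization is in the normal range: leave the cluster alone.
    return current_nodes

print(decide_node_count(4, 0.9, min_nodes=3, max_nodes=10))  # scales up to 5
print(decide_node_count(3, 0.1, min_nodes=3, max_nodes=10))  # stays at the floor, 3
```

The boundaries are what keep an aggressive policy from runaway growth (or from shrinking a cluster below what the workload can tolerate).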
Even in the cloud, infrastructure instances sometimes get into an unstable state. Keeping the platform running so that end users are minimally impacted is a key job for the operator. With the Node Auto-Repair feature, you can instruct the platform to watch for nodes becoming unhealthy. When that happens, the system can remove the affected node and replace it automatically.
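The repair loop described above can be sketched as follows. This is a hypothetical illustration of the pattern (repeated failed health checks trigger replacement), not HDCloud's actual implementation; the threshold and the health probe are assumptions.

```python
# Illustrative sketch only -- not HDCloud's actual auto-repair implementation.
# A node that fails several consecutive health checks is flagged for replacement.

UNHEALTHY_THRESHOLD = 3  # consecutive failed checks before a node is replaced

def check_cluster(nodes, failures, is_healthy):
    """Update per-node failure counts and return the nodes to replace.

    nodes      -- list of node ids
    failures   -- dict mapping node id -> consecutive failed checks (mutated)
    is_healthy -- callable(node_id) -> bool, e.g. an agent heartbeat probe
    """
    to_replace = []
    for node in nodes:
        if is_healthy(node):
            failures[node] = 0  # a healthy check resets the counter
        else:
            failures[node] = failures.get(node, 0) + 1
            if failures[node] >= UNHEALTHY_THRESHOLD:
                to_replace.append(node)
    return to_replace
```

Requiring several consecutive failures avoids replacing a node over a single transient blip, which matters in cloud environments where brief network hiccups are routine.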
A key security consideration for any platform starts at the network. Providing perimeter security for a cluster minimizes the access points and the attack surface. HDCloud now installs a Protected Gateway for each workload cluster. The Protected Gateway (powered by Apache Knox) provides a central access point to cluster resources for end users (e.g., Hive JDBC, the Zeppelin UI). This makes the system much easier to administer and reduces the need for you to open (and manage) security groups and ports.
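As one example of what this looks like for an end user, a Hive JDBC connection through a Knox-style gateway uses HTTP transport over SSL rather than a direct connection to HiveServer2. The sketch below just assembles such a URL; the hostname, port, and gateway path are placeholders, so check your own cluster's gateway details for the real values.

```python
# Hedged example: building a Hive JDBC URL routed through a Knox-style
# protected gateway. Host, port, and gateway path below are placeholders.

def knox_hive_jdbc_url(gateway_host, topology, port=8443, gateway_path="gateway"):
    """Return a Hive JDBC URL that goes through the gateway over HTTPS."""
    return (
        f"jdbc:hive2://{gateway_host}:{port}/;"
        "ssl=true;transportMode=http;"
        f"httpPath={gateway_path}/{topology}/hive"
    )

# Hypothetical host and topology name, for illustration only:
print(knox_hive_jdbc_url("my-cluster.example.com", "hdc"))
```

Because every service is reached through the one gateway endpoint, only that endpoint's port needs to be exposed in the security group, which is exactly the reduction in open ports the paragraph above describes.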
So that’s just a taste of the new features. Please check out the following links to learn more about the product, its features, and how to get started.
“Get Started with HDCloud” Webinar: https://hortonworks.com/webinar/hadoop-in-the-cloud-aws/