We are pleased to announce the latest release of Hortonworks Data Cloud for AWS. This release (version 1.11, for those who are keeping score) continues to drive toward the goal of making data processing easy and cost-effective in the cloud.
For those who aren’t familiar with Hortonworks Data Cloud for AWS (or “HDCloud” for short), this product allows you to quickly launch workload clusters (based on Apache Hadoop, Apache Hive, and Apache Spark, and powered by the Hortonworks Data Platform) that are configured and pre-tuned to run in AWS. Using HDCloud, you can launch a cluster and start performing analysis for data science & exploration, data preparation & ETL, and data analytics & reporting.
This latest release builds on the inaugural HDCloud release by adding new features that help you optimize your cloud spend and expand the kinds of workloads you can run. You can read all the details in the product documentation.
But for kicks, here are some highlights…
Compute nodes do just what the name suggests: they let you add nodes to the cluster that are dedicated to compute work. You can optionally include Compute nodes in your cluster (in addition to Worker nodes) to expand your workload processing power independent of storage.
To help you optimize your usage of Compute nodes, there is an option to include these nodes using either on-demand or spot instances. With on-demand instances, you specify the number of nodes to use in the cluster (and resize the cluster accordingly). Alternatively, you can request a number of Compute nodes using a spot pricing bid, which opportunistically adds Compute nodes to the cluster whenever AWS can provide instances at that bid price. I think you can see that Compute nodes and spot instances are a really powerful combination.
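To see why spot-priced Compute nodes can be such a cost win, here is a minimal sketch of the arithmetic. All prices and node counts below are hypothetical, chosen purely for illustration; they are not actual AWS or HDCloud rates.

```python
# Illustrative arithmetic only: the rates below are hypothetical,
# not actual AWS or HDCloud pricing.

def cluster_hourly_cost(worker_count, compute_count,
                        on_demand_rate, spot_rate, use_spot=True):
    """Estimate hourly cost for a cluster with on-demand Worker nodes
    plus Compute nodes billed at either on-demand or spot rates."""
    worker_cost = worker_count * on_demand_rate
    compute_rate = spot_rate if use_spot else on_demand_rate
    return worker_cost + compute_count * compute_rate

# Hypothetical rates: $0.40/hr on-demand, $0.12/hr spot, same instance type.
with_spot = cluster_hourly_cost(4, 6, 0.40, 0.12, use_spot=True)
without_spot = cluster_hourly_cost(4, 6, 0.40, 0.12, use_spot=False)
print(f"on-demand compute: ${without_spot:.2f}/hr, "
      f"spot compute: ${with_spot:.2f}/hr")
```

With these made-up numbers, the same 10-node cluster drops from $4.00/hr to $2.32/hr by running the six Compute nodes on spot instances; the trade-off is that spot capacity can come and go with the market price, which is exactly why it suits elastic compute work rather than storage-bearing Worker nodes.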
HDCloud exposes a good set of options for users when creating their clusters. For example, you can pick from a set of curated Cluster Type Configurations, choose Master, Worker, or Compute node instance types independently, and set up different security & networking choices.
With the introduction of Node Recipes, you can go a step further with customizing your cluster. Using Node Recipes, you can specify scripts to execute pre- and/or post-install for customizing the cluster and installing additional software that your workloads need.
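To make the idea concrete, here is a sketch of what a post-install recipe script might look like. The package names and the assumption of a yum-based node image are illustrative only, not part of the product; a recipe is simply a script you supply, so anything your workloads need can go here.

```shell
#!/bin/bash
# Hypothetical post-install Node Recipe: installs extra libraries on a
# node after cluster setup. Assumes a RHEL/CentOS-style image with yum;
# the specific packages are placeholders for whatever your workload needs.
set -euo pipefail

# OS-level dependency (illustrative).
sudo yum install -y gcc

# Python libraries the workload needs (illustrative).
sudo pip install numpy pandas

echo "Recipe finished on $(hostname)"
```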
Welp, enough talking. Please check out the following resources to learn more about the product and to get started with the latest release of HDCloud.
- AWS Marketplace Listing: https://aws.amazon.com/marketplace/pp/B01LXOQBOU
- “How To Get Started” Webinar: http://hortonworks.com/webinar/hadoop-in-the-cloud-aws/