With the introduction of Hortonworks Data Cloud (HDCloud), deploying clusters and beginning to process data has become an order of magnitude faster. When Apache Hadoop evolved from an on-premises solution to a cloud-based one, the time it took to stand up a cluster dropped from weeks to days. HDCloud delivers the same magnitude of improvement again, because all of the complex technical setup and management is handled transparently by HDCloud.
One of the first major use cases for HDCloud I became involved with was a global retailer's business intelligence team. This retailer needed to analyze terabytes of data but had no experience doing so. When they evaluated traditional solutions in the Hadoop ecosystem, they encountered a serious challenge: their IT teams had no in-house Hadoop expertise and could not start the initiative without increasing headcount or scheduling additional training. This was a non-starter: just to begin proving value, large budgets had to be set aside, and any project would be delayed by months. It was a classic chicken-and-egg situation.
So to eliminate this challenge, we decided to leverage HDCloud. I'm pleased to report that after only a few hours with our team, real data was loaded into a running cluster, ready to process with Apache Hive. One of the major time savings was that the table DDLs could be exported from the legacy SQL database straight into Hive. Being able to reuse existing SQL assets also eliminated a tremendous amount of technical risk.
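To illustrate the DDL portability described above, here is a minimal sketch. The table and column names are hypothetical, and the exact type mappings depend on the source database, but the pattern is typical: most of a legacy warehouse DDL carries over to Hive unchanged, with only minor type substitutions and a storage clause added.

```sql
-- Hypothetical legacy-warehouse table, e.g.:
--   CREATE TABLE sales (sale_id INT, store_id INT,
--                       sale_ts DATETIME, amount DECIMAL(10,2));
-- The Hive equivalent keeps the same structure; DATETIME maps
-- to TIMESTAMP, and a Hive storage format is specified:
CREATE TABLE sales (
  sale_id  INT,
  store_id INT,
  sale_ts  TIMESTAMP,
  amount   DECIMAL(10,2)
)
STORED AS ORC;
```

Because the schema translates almost verbatim, an existing DBA can validate the migrated tables against the source system without learning a new modeling approach first.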
At the end of our engagement, an experienced database administrator with no specific Hadoop skills was able to manipulate the data with ease. Overall, we found that HDCloud and this extremely agile deployment strategy let the team begin their big data journey with almost no barrier to entry. And as the team gained confidence, Hortonworks was able to help them become power users and get the best performance out of the system.
Overall, Hortonworks Data Cloud provided this retailer and me with three key advantages: ease of use, rapid deployment, and scalability. I believe this combination will change the way companies explore and process data, and in turn how they go to market, over the coming years.
If you are ready to get started with Hortonworks Data Cloud for AWS, you can start a 5-day free trial from the AWS Marketplace listing. For more information, please refer to the following links:
Product Webpage: https://hortonworks.com/products/cloud/aws
Product Documentation: http://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.11.0/index.html
"How To Get Started" Webinar: https://hortonworks.com/webinar/hadoop-in-the-cloud-aws/