
Building native ETL/ELT on Hadoop without manual coding

Recorded on October 6th, 2015

Data professionals tend to see Hadoop as an extension of the data warehouse architecture rather than a replacement; however, it can reduce the load on expensive data warehouses by moving some of the data and processing to Hadoop. The Big Data framework has been extended beyond the warehouse to support operational use cases such as 360-degree customer insight, real-time offers, monetization, and data archival. Generating value from big data requires the right tools to move and prepare data so that new insights can be discovered effectively. To operationalize those insights, new data must integrate securely with existing data, infrastructure, applications, and processes.
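As a rough illustration of the offload idea (not a pattern from the webinar itself), the PySpark sketch below runs a costly aggregation on the cluster instead of in the warehouse; the table and column names are hypothetical, and it assumes a Hive-enabled Spark session:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical offload: aggregate raw click events on Hadoop
    # rather than loading every event row into the warehouse first.
    spark = (SparkSession.builder
             .appName("warehouse-offload")
             .enableHiveSupport()
             .getOrCreate())

    clicks = spark.table("raw.click_events")  # data landed on HDFS

    daily = (clicks
             .groupBy("customer_id", F.to_date("event_ts").alias("day"))
             .agg(F.count("*").alias("clicks")))

    # Only the small, summarized result flows back to the curated layer
    # consumed by the warehouse, keeping the expensive system lean.
    daily.write.mode("overwrite").saveAsTable("curated.daily_clicks")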

In this webinar you will see how Oracle and Hortonworks have made it possible for you to accelerate your Big Data Integration without having to write MapReduce, Spark, Pig, or Oozie code. In fact, Oracle is the only vendor that can automatically generate Spark, HiveQL, and Pig transformations from a single mapping, which allows customers to focus on business value and the overall architecture rather than on multiple programming languages.
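To make the single-mapping idea concrete, the Python sketch below shows one declarative mapping emitting equivalent HiveQL and Pig. This is purely illustrative and does not reflect Oracle Data Integrator's actual code-generation templates; all table and column names are invented:

    # Hypothetical sketch: one declarative mapping, multiple target engines.
    mapping = {
        "source": "raw_orders",
        "target": "big_orders",
        "filter": "amount > 100",
        "columns": ["order_id", "customer_id", "amount"],
    }

    def to_hiveql(m):
        # Render the mapping as a HiveQL INSERT...SELECT statement.
        cols = ", ".join(m["columns"])
        return (f"INSERT OVERWRITE TABLE {m['target']}\n"
                f"SELECT {cols} FROM {m['source']} WHERE {m['filter']};")

    def to_pig(m):
        # Render the same mapping as a Pig Latin script using HCatalog I/O.
        cols = ", ".join(m["columns"])
        return (f"src = LOAD '{m['source']}' "
                f"USING org.apache.hive.hcatalog.pig.HCatLoader();\n"
                f"flt = FILTER src BY {m['filter']};\n"
                f"out = FOREACH flt GENERATE {cols};\n"
                f"STORE out INTO '{m['target']}' "
                f"USING org.apache.hive.hcatalog.pig.HCatStorer();")

    print(to_hiveql(mapping))
    print(to_pig(mapping))

The point of the sketch is the design choice: the logic lives once, in the mapping, and each engine-specific script is derived from it rather than hand-maintained.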
