The Modern Data Architecture, Applied

Use Cases for Apache Hadoop + Your Existing Technologies = Real Value in the Enterprise
Extracting insight from your machine data, your customer sentiment data, or any number of other big data scenarios demands integrating Hadoop into your data architecture so those new opportunities can be handled efficiently alongside existing workloads.
Register now to find out what it means to integrate Hadoop into your data architecture:

Upcoming Webinars

Coming Next:
Thursday, October 2, 2014

YARN Ready: Using Spark to Integrate to YARN

12:00 PM Eastern / 9:00 AM Pacific
As the ratio of memory to processing power rapidly evolves, many within the Hadoop community are gravitating toward Apache Spark for fast, in-memory data processing. With YARN, they can run Spark for machine learning and data science use cases alongside other workloads simultaneously. This is a continuation of our YARN Ready series, aimed at helping developers learn the different ways to integrate with YARN and Hadoop. Tools and applications that are YARN Ready have been verified to work within YARN. There are several ways to integrate: natively, or through Slider, Tez, Scalding, Spark, and more, with Ambari providing cluster management.
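For developers curious what running Spark on YARN looks like in practice, a submission along these lines is typical (the class name, jar, and resource settings below are illustrative placeholders, not details from the webinar):

```shell
# Illustrative sketch: submitting a Spark application to a YARN cluster.
# Resource flags tell YARN how many containers (executors) to allocate.
spark-submit \
  --master yarn-cluster \
  --num-executors 4 \
  --executor-memory 2g \
  --class com.example.MySparkApp \
  my-spark-app.jar
```

In `yarn-cluster` mode the Spark driver itself runs inside a YARN container, so the application competes for resources alongside other YARN workloads rather than requiring a dedicated Spark cluster.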
Tuesday, October 21, 2014

Supporting Financial Services with a More Flexible Approach to Big Data

2:00 PM Eastern / 11:00 AM Pacific
Financial services companies can reap tremendous benefits from Big Data, and they have moved quickly to deploy it. But these companies also place heavy demands on Big Data infrastructure for flexibility, reliability and performance. In this webinar, Hortonworks joins WANdisco to look at three examples of using Big Data to get a more comprehensive view of customer behavior and activity in the banking and insurance industries. Then we'll pull out the common threads from these examples and see how a flexible, next-generation Hadoop architecture gives you a step up on improving your business performance. Join us to learn:
  • How to leverage data from across an entire global enterprise
  • How to analyze a wide variety of structured and unstructured data to get quick, meaningful answers to critical questions
  • What industry leaders have put in place
Wednesday, October 29, 2014

Big Data Virtual Meetup Chennai

9:00 PM India Time / 8:30 AM Pacific Time / 4:30 PM Europe Time (Paris)
Big Data is moving to the next level of maturity, and it's all about the applications. Dhruv Kumar, one of the minds behind Cascading, the most widely used and deployed development framework for building Big Data applications, will discuss how Cascading can enable developers to accelerate time to market for their data applications, from development to production. In this session, Dhruv will introduce how to easily and reliably develop, test, and scale your data applications, and then deploy them on Hadoop and the Hortonworks Data Platform. He will also explain the growth behind Cascading and talk about Cascading's future with Tez.
Tuesday, November 11, 2014

Data Transformation and Acquisition Techniques to Handle Petabytes of Data

12:00 PM Eastern / 9:00 AM Pacific
Many organizations have become aware of the importance of big data technologies such as Apache Hadoop, but are struggling to determine the right architecture to integrate them with their existing analytics and data processing infrastructure. As companies implement Hadoop, they need to learn new skills and languages, which can impact developer productivity. Often they resort to hand-coded solutions, which can be brittle and can hurt both developer productivity and the efficiency of the Hadoop cluster. To truly tap into the business benefits of big data solutions, it's necessary to ensure that the business and IT have simple, tools-based methods to get data in, change and transform it, and keep it continuously synchronized with their data warehouse. In this webinar you'll learn how the Oracle and Hortonworks solution can:
  • Accelerate developer productivity
  • Optimize data transformation workloads on Hadoop
  • Lower cost of data storage and processing
  • Minimize risks in deployment of big data projects
  • Provide proven industrial scale tooling for data integration projects
In this webinar we’ll discuss how technologies from both Oracle and Hortonworks can be deployed as a big data reservoir, or data lake: an efficient, cost-effective way to handle petabyte-scale data staging, transformations, and aged-data requirements while reclaiming compute power and storage from your existing data warehouse. Presenters: Jeff Pollock, Vice President of Product Management, Oracle; Tim Hall, Vice President of Product Management, Hortonworks.

Past Webinars

Recorded on: Oct 1, 2014

The Modern Data Architecture for Risk Management with Apache Hadoop and Splunk Inc.

Presented with Splunk
More Info
Recorded on: Sep 24, 2014

Planning for the Impacts of Big Data in the Data Center with HP and Hortonworks

Presented with HP
More Info
Recorded on: Sep 22, 2014

Retail Insights: What's Possible with a Modern Data Architecture?

More Info
Recorded on: Sep 18, 2014

YARN Ready: Developing Applications on Hadoop with Scalding

Presented with Concurrent
More Info

More Webinars »

Contact Us
Hortonworks provides enterprise-grade support, services, and training. Discuss how to leverage Hadoop in your business with our sales team.
Hortonworks Data Platform
The Hortonworks Data Platform is a 100% open source distribution of Apache Hadoop that is truly enterprise-grade, having been built, tested, and hardened with enterprise rigor.
HDP 2.1 Webinar Series
Join us for a series of talks on some of the new enterprise functionality available in HDP 2.1, including data governance, security, operations, and data access:
