HDP Developer: Quick Start

Overview

This 4-day training course is designed for developers who need to create applications that analyze Big Data stored in Apache Hadoop using Apache Pig and Apache Hive, and to develop applications on Apache Spark.

Topics include: an essential understanding of HDP and its capabilities; Hadoop, YARN, HDFS, and MapReduce/Tez; data ingestion; using Pig and Hive to perform data analytics on Big Data; and an introduction to Spark Core, Spark SQL, Apache Zeppelin, and additional Spark features.

Prerequisites

Students should be familiar with programming principles and have experience in software development. Knowledge of SQL and light scripting is also helpful. No prior Hadoop knowledge is required.

Target Audience

Developers and data engineers who need to understand and develop applications on HDP.

HDP Essentials: Day 1 Morning

Part I: High-Level Overview (2.5 hrs)

Describe the Case for Hadoop
Identify the Hadoop Ecosystem via architectural categories

Part II: Deeper Look & Demos (2 hrs)

Detail the HDFS architecture
Describe data ingestion options and frameworks for batch and real-time streaming
Explain the fundamentals of parallel processing
Detail the architecture and features of YARN
Understand backup and recovery options
Describe how to secure Hadoop

Live Demonstrations

Operational overview with Ambari
Loading data into HDFS (sketched below)
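
The "Loading data into HDFS" demonstration comes down to a handful of hdfs dfs commands. Below is a minimal sketch, wrapped in Python only to keep all of the examples on this page in one language; the local file name and HDFS paths are placeholders, and it assumes an HDFS client is on the PATH (as on the HDP sandbox).

```python
import subprocess

def hdfs(*args):
    """Run an 'hdfs dfs' subcommand and fail loudly if it errors."""
    subprocess.run(["hdfs", "dfs", *args], check=True)

# Placeholder local file and HDFS directory.
hdfs("-mkdir", "-p", "/user/student/demo")               # create a target directory
hdfs("-put", "-f", "sample.txt", "/user/student/demo/")  # upload a local file
hdfs("-ls", "/user/student/demo")                        # verify the upload
# hdfs("-rm", "-r", "/user/student/demo")                # clean up when finished
```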

Dev Pig & Hive: Day 1 Afternoon through Day 2

Objectives

Use Pig to explore and transform data in HDFS
Transfer data between Hadoop and a relational database
Understand how Hive tables are defined and implemented
Use Hive to explore and analyze data sets
Explain and use the various Hive file formats
Create and populate a Hive table that uses the ORC file format (sketched below)
Use Hive to run SQL-like queries to perform data analysis
Use Hive to join datasets using a variety of techniques
Write efficient Hive queries
Explain the uses and purpose of HCatalog
Use HCatalog with Pig and Hive
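
The ORC objective above typically reduces to a CREATE TABLE ... STORED AS ORC plus an INSERT and a query. Here is a minimal sketch of that HiveQL, issued through PySpark's Hive support purely to keep the examples in one language; the riders and riders_staging tables and their columns are made up.

```python
from pyspark.sql import SparkSession

# Hive support lets spark.sql() run ordinary HiveQL against the metastore.
spark = (SparkSession.builder
         .appName("orc-table-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Define an ORC-backed table (table and column names are illustrative).
spark.sql("""
    CREATE TABLE IF NOT EXISTS riders (
        rider_id INT,
        city     STRING,
        trips    INT
    )
    STORED AS ORC
""")

# Populate it from an assumed text-format staging table.
spark.sql("INSERT INTO TABLE riders SELECT rider_id, city, trips FROM riders_staging")

# A SQL-like analysis query, as in the Hive labs.
spark.sql("""
    SELECT city, SUM(trips) AS total_trips
    FROM riders
    GROUP BY city
    ORDER BY total_trips DESC
""").show()

spark.stop()
```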

Hands-On Labs

Use HDFS commands to add/remove files and folders
Use Sqoop to transfer data between HDFS and an RDBMS (sketched below)
Explore, transform, split, and join datasets using Pig
Use Pig to transform and export a dataset for use with Hive
Use HCatLoader and HCatStorer
Use Hive to discover useful information in a dataset
Describe how Hive queries get executed as MapReduce jobs
Perform a join of two datasets with Hive
Use advanced Hive features: windowing, views, ORC files
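
The Sqoop lab above is driven by a single import command. A minimal sketch follows, again wrapped in Python for consistency; the JDBC URL, credentials file, table, and target directory are all placeholders.

```python
import subprocess

# Placeholder connection details; substitute your own database, table, and paths.
subprocess.run(
    [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/sales",
        "--username", "student",
        "--password-file", "/user/student/.db_password",  # file holding the password
        "--table", "orders",
        "--target-dir", "/user/student/orders",
        "--num-mappers", "1",
    ],
    check=True,
)
# The reverse direction uses "sqoop export" with --export-dir instead of --target-dir.
```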

Dev Spark: Day 3 through Day 4

Objectives

Describe Spark and Spark-specific use cases
Explore data interactively through the Spark shell utility
Explain the RDD concept
Understand concepts of functional programming
Use the Python or Scala Spark APIs
Create all types of RDDs: Pair, Double, and Generic
Use RDD type-specific functions (sketched below)
Explain the interaction of the components of a Spark application
Explain the creation of the DAG schedule
Build and package Spark applications
Use application configuration items
Deploy applications to the cluster using YARN
Use data caching to increase the performance of applications
Understand join techniques
Learn general application optimization guidelines/tips
Create applications using the Spark SQL library
Create/transform data using DataFrames
Read, use, and save to different Hadoop file formats
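
Several of the RDD objectives above (generic, pair, and double RDDs, type-specific functions, and caching) fit in a short PySpark sketch. The input path is a placeholder; any line-oriented text file in HDFS will do.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-objectives-sketch").getOrCreate()
sc = spark.sparkContext

# Generic RDD from a line-oriented text file (placeholder path).
lines = sc.textFile("hdfs:///user/student/demo/sample.txt")

# Pair RDD: key on the first word of each line, value = line length.
pairs = lines.map(lambda line: (line.split(" ")[0], len(line)))

# Cache before reusing the RDD in more than one action.
pairs.cache()

# Pair-RDD-specific operations.
print(pairs.countByKey())                        # action: {key: number of occurrences}
totals = pairs.reduceByKey(lambda a, b: a + b)   # transformation: summed lengths per key

# Double RDD: numeric values expose stats() and related helpers.
print(pairs.values().stats())                    # count, mean, stdev, min, max

print(totals.take(5))
spark.stop()
```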

Spark Python or Scala Hands-On Labs

Create a Spark “Hello World” word count application (sketched below)
Use advanced RDD programming to perform sort, join, pattern matching, and regex tasks
Explore partitioning and the Spark UI
Increase performance using data caching
Build/package a Spark application using Maven
Use a broadcast variable to efficiently join a small dataset to a massive dataset
Create a DataFrame and perform analysis
Load/transform/store data using Spark with Hive tables
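
The word-count and broadcast-variable labs above are small enough to sketch together in PySpark; the input path and the tiny lookup table are made up for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-broadcast-sketch").getOrCreate()
sc = spark.sparkContext

# --- "Hello World" word count (input path is a placeholder) ---
counts = (sc.textFile("hdfs:///user/student/demo/sample.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))
print(counts.takeOrdered(10, key=lambda kv: -kv[1]))   # ten most frequent words

# --- Broadcast join: ship a small lookup table to every executor ---
# A made-up small dimension table: word -> category.
lookup = sc.broadcast({"hadoop": "platform", "hive": "sql", "spark": "engine"})

# Map-side join against the broadcast dict instead of a full shuffle join.
categorized = counts.map(
    lambda kv: (kv[0], kv[1], lookup.value.get(kv[0], "other")))
print(categorized.take(5))

spark.stop()
```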