The Hortonworks Blog

This is the second of two posts examining the use of Hive for interaction with HBase tables. This is a hands-on exploration, so the first post isn’t required reading for this one. Still, it might be good context.

“Nick!” you exclaim, “that first post had too many words and I don’t care about JIRA tickets. Show me how I use this thing!”

This post is exactly that: a concrete, end-to-end example of consuming HBase over Hive.…
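
For a flavor of what that example looks like, here is a minimal sketch of mapping an existing HBase table into Hive with the HBaseStorageHandler and querying it, driven from Python via the Hive CLI. The table name, column family, and column mapping are placeholders, not the ones used in the post.

```python
import subprocess

# Placeholder names: the HBase table "pagecounts" and column family "f"
# are illustrative only, not taken from the post.
ddl = """
CREATE EXTERNAL TABLE pagecounts_hbase (rowkey STRING, pageviews STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1')
TBLPROPERTIES ('hbase.table.name' = 'pagecounts');
"""
query = "SELECT rowkey, pageviews FROM pagecounts_hbase LIMIT 10;"

# Assumes the hive CLI is on the PATH and hive-site.xml points at the
# HBase/ZooKeeper quorum; both statements run in one Hive session.
subprocess.check_call(["hive", "-e", ddl + query])
```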

The closing date for submitting Speaking Abstracts for Hadoop Summit Europe is NOVEMBER 22.  If you are interested in speaking this year, we encourage you to submit your topic today.  The Call for Abstracts will close at midnight on November 22, 2013 (the final midnight on the planet… Kiribati time?).

Some notes about Summit

We will return to Amsterdam for Hadoop Summit Europe this year (April 2-3, 2014). Also, we have restructured the content for the event to make sure we, as a community, continue to deliver high value technical content.…

Join Hortonworks and Pactera for a Webinar on Unlocking Big Data’s Potential in Financial Services Thursday, November 21st at 12:00 EST.

Have you ever had your debit or credit card declined for seemingly no reason? Turns out, the rejections are not so random. Banks are increasingly turning to analytics to predict and prevent fraud in real time. That can sometimes be an inconvenience for customers who are traveling or making large purchases, but it’s a necessary inconvenience today if banks are to reduce the billions they lose to fraud.…

This is the first of two posts examining the use of Hive for interaction with HBase tables. The second post is here.

One of the things I’m frequently asked about is how to use HBase from Apache Hive. Not just how to do it, but what works, how well it works, and how to make good use of it. I’ve done a bit of research in this area, so hopefully this will be useful to someone besides myself.…

I teach for Hortonworks, and in class just this week I was asked to provide an example of using the R statistics language with Hadoop and Hive. The good news is that it can easily be done. The even better news is that it is actually possible to use a variety of tools: Python, Ruby, shell scripts and R to perform distributed, fault-tolerant processing of your data on a Hadoop cluster.…
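
To make that concrete, here is a minimal Hadoop Streaming sketch in Python: the same script acts as mapper or reducer for a word count. The invocation in the comment is illustrative only; the streaming jar location and HDFS paths vary by installation.

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming word count. Run it roughly like this
# (jar location and paths are placeholders):
#
#   hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
#       -files wordcount.py \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -input /data/text -output /data/wordcount
import sys

def mapper():
    # Emit (word, 1) pairs, tab-separated, one per line.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    # Streaming sorts mapper output by key, so counts for a word arrive together.
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(value or 0)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    (mapper if sys.argv[1:2] == ["map"] else reducer)()
```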

This post is authored by Omkar Vinit Joshi with Vinod Kumar Vavilapalli and is the ninth post in the multi-part blog series on Apache Hadoop YARN – a general-purpose, distributed, application management framework that supersedes the classic Apache Hadoop MapReduce framework for processing data in Hadoop clusters. Other posts in this series:

Introduction

In the previous post, we explained the basic concepts of LocalResources and resource localization in YARN.…

This post is the seventh in our series on the motivations, architecture and performance gains of Apache Tez for data processing in Hadoop. The series has the following posts:

In Tez, we recently introduced support for a feature we call “Tez Sessions”.…

Using Hadoop as an enterprise data platform means integrating it well with the other technologies in the data center.

To that end, the Hortonworks Sandbox Partner Gallery showcases how our partners’ solutions integrate with Hadoop and gives you an easy way to learn how to use those solutions with the Hortonworks Data Platform via the Sandbox.

Don’t have the Sandbox? Get your free download of this single node Hadoop environment that’s delivered as a Virtual Machine that you can run on your laptop.…

When I first started to understand what YARN is, I wanted to build an application to understand its core. There was already a great example YARN application called Distributed Shell that I could use as the shell (pun intended) for my experiment. Now I just needed an existing application that would offer significant reuse value to other applications. I looked around and decided on MemcacheD.

This brief guide shows how to get MemcacheD up and running on YARN – MOYA if you will…

Prerequisites

You’re going to need a few things to get the sample application operational.…

We had a lot of fun in NYC and hope you did too. Thanks to the hundreds of you who dropped by the booth, attended dinners, parties, meetups and sessions.

As we have known for some time, Hortonworks customers are already building a modern data architecture with Hadoop as the technology of choice for handling the data they have streaming in from all directions. They care that it matches their needs, integrates with their existing infrastructure and solves real problems with flexibility.…

We’re delighted to announce that our Hadoop 2.0 Courseware is available now!

According to a 2013 Education Services Benchmark Study conducted by the Technology Services Industry Association, “the lag time between product and Instructor-Led Content release is 68 business days, or more than three months.” Not so at Hortonworks! All of our Developer Courses and Certifications are now based on Apache Hadoop 2.0 and available at the same time as the Hortonworks Data Platform 2.0.…

One of the great things about working in open source development is working with other experts around the world on big projects – and then having the results of that work in the hands of users within a short period of time.

This is why I’m really excited about the Rackspace announcement of their HDP-based Big Data offerings, both “on-prem” and in the cloud. Not just because it’s a partner of ours offering a service based on Hadoop, but because it shows how Hadoop integration with OpenStack has reached a point where it’s ready for production use.…

The Apache Knox community announced the release of the Apache Knox Gateway (Incubator) 0.3.0. We at Hortonworks are excited about this announcement.

The Apache Knox Gateway is a REST API Gateway for Hadoop with a focus on enterprise security integration.  It provides a simple and extensible model for securing access to Hadoop core and ecosystem REST APIs.

Apache Knox provides pluggable authentication against LDAP and trusted identity providers, as well as service-level authorization and more.…
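
As a rough illustration of what a REST call through the gateway looks like, the sketch below lists an HDFS directory via WebHDFS proxied by Knox. The gateway host, topology name (“sandbox”), and credentials are placeholders for whatever your Knox deployment and LDAP directory actually use.

```python
import requests

# Placeholder endpoint and credentials; substitute your own Knox gateway,
# topology, and LDAP user.
GATEWAY = "https://knox.example.com:8443/gateway/sandbox"
AUTH = ("guest", "guest-password")  # authenticated by Knox against LDAP

# List the HDFS root through the WebHDFS API, with Knox handling
# authentication and routing to the cluster.
resp = requests.get(GATEWAY + "/webhdfs/v1/?op=LISTSTATUS",
                    auth=AUTH,
                    verify=False)  # demo only: Knox often ships a self-signed cert
resp.raise_for_status()
for status in resp.json()["FileStatuses"]["FileStatus"]:
    print(status["pathSuffix"], status["type"])
```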

With the attention of the Hadoop community on Strata/Hadoop World in New York this week, it seems an appropriate time to give everyone an early update on continued community development of Apache Hive. This progress well and truly cements Hive as the standard open-source SQL solution for the Apache Hadoop ecosystem, not just for extremely large-scale batch queries but also for low-latency, human-interactive queries.

You can catch me at our session ‘Apache Hive & Stinger: Petabyte Scale SQL, IN Hadoop’ along with Owen and Alan where we’ll be happy to dive into more of the details.…

Last week we announced the availability of the Hortonworks Data Platform 2.0. Today, we’re delighted to announce the availability of the Hortonworks Sandbox 2.0.

New Features
  • Based on HDP 2.0
  • Easy enablement of Ambari and HBase
  • Updated tutorial navigation
HDP 2.0

This version of the Sandbox provides you with a complete HDP 2.0 environment: your own personal single-node Hadoop cluster where you can explore the new features and enhancements of HDP 2.0, including YARN, the improvements to Hive delivered by the Stinger Initiative, and the updates to HBase, Pig, and Ambari. In fact, our Sandbox has all of the most current releases of the various Apache projects, like Hive 0.12, HBase 0.96, and Hadoop 2.2.…
