One aspect of the community development of Apache Hadoop is the way that everyone working on Hadoop (full time, part time, vendors, users and even some researchers) collaborates in the open. This development is based on publicly accessible project tools: Apache Subversion for revision control, Apache Maven for the builds, and Jenkins for automating those builds and tests. Central to a lot of the work is the Apache JIRA server, an instance of Atlassian’s issue management tool.…
The Hortonworks Blog
Apache Hadoop has always been very fussy about Java versions. It’s a big application, running as tens of thousands of processes across thousands of machines in a single datacenter. This makes it almost inevitable that any race conditions and deadlock bugs in the code will eventually surface – be it in the JVM and its libraries, in Hadoop itself, or in one of the libraries on which it depends.
Hence the phrase “there are no corner cases in a datacenter”.…
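A toy calculation shows why scale surfaces rare bugs (the probabilities below are invented purely for illustration, not measurements): if a race condition fires with some tiny probability per process per day, the chance that at least one of tens of thousands of processes hits it is 1 − (1 − p)ⁿ.

```python
# Toy illustration: why rare bugs become near-certainties at scale.
# The probabilities here are invented for illustration only.

def prob_at_least_one(p: float, n: int) -> float:
    """Chance that at least one of n independent processes hits a
    bug that each hits with probability p (per day, say)."""
    return 1.0 - (1.0 - p) ** n

# A one-in-ten-million race on a single process: effectively never.
single = prob_at_least_one(1e-7, 1)

# The same race across 50,000 processes in a datacenter:
datacenter = prob_at_least_one(1e-7, 50_000)

print(f"one process:   {single:.7f}")
print(f"50k processes: {datacenter:.4f}")  # roughly 0.005 per day
```

At those (illustrative) numbers, a bug that a single machine would essentially never see shows up in the datacenter every few months: there really are no corner cases at that scale.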
One of the great things about working in open source development is working with other experts around the world on big projects – and then having the results of that work in the hands of users within a short period of time.
This is why I’m really excited about the Rackspace announcement of their HDP-based Big Data offerings, both “on-prem” and in the cloud. Not just because it is a partner of ours offering a service based on Hadoop, but because it shows that Hadoop’s integration with OpenStack has reached a point where it’s ready for production use.…
In the last Hoya article, we talked about its application architecture. Now let’s talk persistence. A key use case for Hoya is to support long-lived clusters that can be started and stopped on demand. This lets a user start and stop an HBase cluster whenever they want, consuming CPU and memory resources only when they actually need them. For example, a specific MR job could use a private HBase instance as part of its join operations, or as an intermediate store of results in a workflow.…
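The core of the start/stop-on-demand idea can be sketched in a few lines: persist the cluster specification when stopping, and reload it to recreate the cluster later. This is a minimal illustration of the concept only; the field names, values and paths below are invented, not Hoya’s actual persisted format.

```python
import json
from pathlib import Path

# Hypothetical sketch of the persistence idea: "freeze" a cluster by
# saving its specification, "thaw" it later by reloading that spec.
# All field names and values here are invented for illustration.

def freeze(spec: dict, path: Path) -> None:
    """Persist the cluster specification; the live containers can
    then be released, freeing their CPU and memory."""
    path.write_text(json.dumps(spec, indent=2))

def thaw(path: Path) -> dict:
    """Reload the specification so the cluster can be recreated
    with the same configuration and data directories."""
    return json.loads(path.read_text())

spec = {
    "name": "demo-hbase",
    "region_servers": 4,                              # invented field
    "data_dir": "hdfs://nn/user/alice/hbase-demo",    # invented path
}
freeze(spec, Path("cluster.json"))
restored = thaw(Path("cluster.json"))
assert restored == spec
```

The data itself stays in HDFS between runs; only the transient compute resources are given up while the cluster is stopped.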
At Hadoop Summit in June, we introduced a little project we’re working on: Hoya: HBase on YARN. Since then the code has been reworked and is now up on Github. It’s still very raw, and requires some local builds of bits of Hadoop and HBase – but it is there for the interested.
In this article we’re going to look at the architecture, and a bit of the implementation.
We’re not going to look at YARN in this article; for that we have a dedicated section of the Hortonworks site, including sample chapters of Arun Murthy’s forthcoming book.…
In the last few weeks, we have been getting together a prototype, Hoya, running HBase On YARN. This is driven by a few top level use cases that we have been trying to address. Some of them are:
- Be able to create on-demand HBase clusters easily, by and/or in apps
- With different versions of HBase potentially (for testing etc.)
- Be able to configure different HBase instances differently
- For example, different configs for read/write workload instances
- Better isolation
- Run arbitrary co-processors in user’s private cluster
- The user will own the data that the HBase daemons create
- MR jobs should find it simple to create (transient) HBase clusters
- For Map-side joins where table data is all in HBase, for example
- Elasticity of clusters for analytic / batch workload processing
- Stop / Suspend / Resume clusters as needed
- Expand / shrink clusters as needed
- Be able to utilize cluster resources better
- Run MR jobs while maintaining HBase’s low latency SLAs
Hoya is a Java tool, and is currently CLI driven.…
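The lifecycle the use cases above describe can be sketched as a toy state model: create, stop/suspend, resume, and expand/shrink ("flex"). This is an illustration of the desired behaviour only; the class and method names below are invented and are not Hoya’s actual classes or commands.

```python
# Toy model of the cluster lifecycle the use cases above describe.
# All names here are invented for illustration, not Hoya's API.

class ManagedCluster:
    def __init__(self, name: str, workers: int):
        self.name = name
        self.workers = workers
        self.running = False

    def start(self) -> None:
        self.running = True      # would request YARN containers

    def stop(self) -> None:
        self.running = False     # releases containers; data and
                                 # configuration remain in HDFS

    def flex(self, workers: int) -> None:
        """Expand or shrink the cluster while it keeps running."""
        self.workers = workers

cluster = ManagedCluster("analytics-hbase", workers=3)
cluster.start()
cluster.flex(10)   # grow for a batch workload
cluster.flex(2)    # shrink back when load drops
cluster.stop()     # suspend; resources returned to YARN
```

The point of the model is that elasticity and suspension are operations on a running service, not a reinstall: the specification and data persist across every transition.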
A recurrent question on the various Hadoop mailing lists is “why does Hadoop prefer a set of separate disks to the same set managed as a single RAID-0 disk array?”
It’s about time and snowflakes.
JBOD and the Allure of RAID-0
In Hadoop clusters, we recommend treating each disk separately, in a configuration that is known, somewhat disparagingly, as “JBOD”: Just a Bunch of Disks.
In comparison, RAID-0 (which is a bit of a misnomer, there being no redundancy) stripes data across all the disks in the array.…
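This is where the snowflakes come in: no two disks perform exactly alike, and a RAID-0 striped read is gated by the slowest disk in the array, while JBOD lets each disk stream at its own rate. A back-of-the-envelope comparison (the MB/s figures are invented for illustration) makes the difference concrete:

```python
# Why "time and snowflakes" matters: a RAID-0 striped read completes
# only when the slowest disk finishes, while JBOD streams run
# independently. The MB/s figures below are invented for illustration.

disk_speeds = [100, 100, 100, 60]   # one "snowflake" slow disk

# RAID-0: every stripe waits for the slowest disk, so the effective
# aggregate bandwidth is n * min(speeds).
raid0_bandwidth = len(disk_speeds) * min(disk_speeds)

# JBOD: Hadoop places each block on a single disk, so concurrent
# streams aggregate to the sum of the individual disk speeds.
jbod_bandwidth = sum(disk_speeds)

print(f"RAID-0 aggregate: {raid0_bandwidth} MB/s")  # 240
print(f"JBOD aggregate:   {jbod_bandwidth} MB/s")   # 360
```

One degraded disk drags the whole RAID-0 array down to its speed; in JBOD it only slows the blocks that happen to live on it.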
As part of Big Data Week, Dan Harvey of the London Hadoop User Group organised an afternoon session for the user group, which we were glad to sponsor, along with Canonical and Facegroup. I had the pleasure of presenting my view of the current and future status of Apache Hadoop to an audience that ranged from those curious about Hadoop to heavy users.
Every talk of the day was excellent, from the use cases by Datasift, Mendeley and MusicMetric, to the talk by Francine Bennett of MastodonC on the CO2 footprint of different cloud computing infrastructures, including a live dashboard on the current CO2/hour of many cloud infrastructure sites.…