With YARN as the architectural center of Apache™ Hadoop, multiple data access engines such as Apache HBase interact with data stored in the cluster. HBase is an open source NoSQL database that provides real-time read/write access to those large datasets.
Apache HBase scales linearly to handle huge datasets with billions of rows and millions of columns, and it easily combines data sources that use a wide variety of structures and schemas. Because HBase is natively integrated with Hadoop, it works seamlessly with the entire Hadoop ecosystem through YARN.
Hortonworks Focus for HBase
As HBase evolves, the community is working on continued improvements to its performance, integration options and developer accessibility.
|Focus|Planned improvements|
|---|---|
|Performance|Taking advantage of emerging technologies, such as HDFS heterogeneous storage, and making more effective use of RAM|
|Integration|Support for streaming technologies, including Apache Storm and Spark Streaming|
|Developer access|Access from a variety of development environments, including Java, .NET, and Python|
Recent Progress in HBase
Recent innovation in Apache HBase lands on Apache HBase trunk, which is used in HDP 2.2.
What HBase Does
Apache HBase provides random, real-time access to your data in Hadoop. It was created for hosting very large tables, making it a great choice for storing multi-structured or sparse data. Users can query HBase for data as of a particular point in time, making “flashback” queries possible. These characteristics make HBase a great choice for storing semi-structured data, such as log data, and then serving that data very quickly to users or applications integrated with HBase.
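The “flashback” behavior comes from HBase keeping multiple timestamped versions of each cell rather than overwriting values in place. As a minimal sketch (a toy in-memory model, not the HBase client API), a versioned cell and a point-in-time read look like this:

```python
class VersionedCell:
    """Toy model of one HBase cell: each write is kept as a separate
    timestamped version, so reads can ask for a point in time."""

    def __init__(self):
        self.versions = {}  # timestamp -> value

    def put(self, timestamp, value):
        # A new write adds a version; older versions remain readable.
        self.versions[timestamp] = value

    def get(self, as_of=None):
        """Return the newest value at or before `as_of`;
        the latest version if `as_of` is None."""
        eligible = [t for t in self.versions if as_of is None or t <= as_of]
        return self.versions[max(eligible)] if eligible else None
```

A read with `as_of` set behaves like a flashback query: it sees the table as it was at that timestamp, ignoring later writes.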
Enterprises use Apache HBase’s low-latency storage for scenarios that require real-time analysis and for serving tabular data to end-user applications. One company that provides web security services maintains a system that accepts billions of event traces and activity logs from its customers’ desktops every day. The company’s programmers tightly integrate their security solutions with HBase to ensure that the protection they provide keeps pace with real-time changes in the threat landscape.
Another company provides stock market ticker plant data that its users query more than thirty thousand times per second, with an SLA of only a few milliseconds. Apache HBase provides that super low-latency access over an enormous, rapidly changing data store.
How HBase Works
HBase scales linearly because every table has a primary key. The key space is divided into sequential blocks, each allotted to a region. RegionServers own one or more regions, so the load is spread uniformly across the cluster. If the keys within a region are accessed frequently, HBase can further subdivide the region by splitting it automatically, so manual data sharding is not necessary.
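A minimal sketch of the idea, using a toy model rather than HBase internals: a region owns a contiguous key range, and a split divides it at the median key so each half serves roughly half the rows.

```python
class Region:
    """Toy model of an HBase region: a contiguous slice of the key space."""

    def __init__(self, start_key, end_key):
        self.start_key = start_key  # inclusive
        self.end_key = end_key      # exclusive; None means "to the end"
        self.rows = {}

    def contains(self, key):
        return key >= self.start_key and (
            self.end_key is None or key < self.end_key
        )

    def split(self):
        """Divide this region at its median key; HBase performs the
        equivalent split automatically when a region grows too hot or large."""
        keys = sorted(self.rows)
        mid = keys[len(keys) // 2]
        left = Region(self.start_key, mid)
        right = Region(mid, self.end_key)
        for k, v in self.rows.items():
            (left if k < mid else right).rows[k] = v
        return left, right
```

After a split, the two child regions can be reassigned to different RegionServers, which is what spreads load without any manual sharding.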
ZooKeeper and HMaster servers make information about the cluster topology available to clients. Clients connect to these and download a list of RegionServers, the regions contained within those RegionServers and the key ranges hosted by the regions. Clients know exactly where any piece of data is in HBase and can contact the RegionServer directly without any need for a central coordinator.
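Once a client has cached the region list, routing a request is a simple sorted-key lookup. A hedged sketch (illustrative helper names, not the HBase client API):

```python
import bisect

def locate_region_server(start_keys, servers, row_key):
    """Given the sorted region start keys and the RegionServer hosting each
    region (as cached by a client after reading cluster metadata), return
    the server owning `row_key` -- no central coordinator on the read path."""
    index = bisect.bisect_right(start_keys, row_key) - 1
    return servers[index]

# Hypothetical cluster: three regions ["", "g"), ["g", "p"), ["p", end)
start_keys = ["", "g", "p"]
servers = ["rs1", "rs2", "rs3"]
```

With this cached map, `locate_region_server(start_keys, servers, "monkey")` resolves to `"rs2"`, and the client talks to that RegionServer directly.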
RegionServers use an in-memory memstore to buffer recent writes and a block cache to keep frequently accessed rows in memory. Optionally, users can move the block cache off-heap, caching gigabytes of data while minimizing pauses for garbage collection.
Apache HBase provides high availability in several ways:
- Highly available cluster topology information through production deployments with multiple HMaster and ZooKeeper instances
- Data distribution across many nodes means that loss of a single node only affects data stored on that node
- HBase HA supports read replicas, ensuring that loss of a single node does not result in loss of data availability
- The HFile format stores data directly in HDFS. HFiles can be read and written by Apache Hive, Apache Pig, MapReduce, and Apache Tez, permitting deep analytics on HBase data without data movement
Try these Tutorials
Try HBase with Sandbox
Hortonworks Sandbox is a self-contained virtual machine with HDP running alongside a set of hands-on, step-by-step Hadoop tutorials. Get Sandbox