
Apache Accumulo

A sorted, distributed key-value store with cell-based access control

Accumulo is a low-latency, large-table data storage and retrieval system with cell-level security. Accumulo is based on Google's Bigtable design and runs on YARN, the data operating system of Hadoop. YARN provides visualization and analysis applications with predictable access to data in Accumulo.

What Accumulo Does

Accumulo was originally developed at the National Security Agency before it was contributed to the Apache Software Foundation as an open-source incubation project. Reflecting its origins in the intelligence community, Accumulo provides extremely fast access to data in massive tables while also controlling access to its billions of rows and millions of columns down to the individual cell. This is known as fine-grained data access control.

Cell-level access control is important for organizations with complex policies governing who is allowed to see data. It enables different data sets, including those with sensitive elements, to be intermingled under access control policies that govern access at a fine granularity. Those with permission to see sensitive data can work alongside co-workers without those privileges, and both can access data in accordance with their permissions.

Without Accumulo, those policies are difficult to enforce systematically; Accumulo encodes the rules for each individual data cell and controls access at that fine granularity.

Here is a list of some of Apache Accumulo’s most important features:

Feature | Benefit
Table design and configuration
  • Includes cell labels for cell-level access control
  • Large rows need not fit into memory
Integrity and availability
  • Master fail-over with ZooKeeper locks
  • Write-ahead logs for recovery
  • Scalable master metadata store
  • Fault tolerant executor (FATE)
  • Relative encoding to compress similar consecutive keys
  • Speed long scans with parallel server threads
  • Cache recently scanned data
Data management
  • Group columns within a single file (locality groups; see the sketch after this list)
  • Automatic tablet splitting and rebalancing
  • Merge tablets and clone tables
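
The column-grouping feature above is exposed in Accumulo as locality groups, which can be configured through the Java client API. Below is a minimal sketch, not a definitive recipe: it assumes an already-obtained Connector named conn, a table named "webdata", and column families "content" and "metadata", all of which are illustrative placeholders.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import org.apache.accumulo.core.client.Connector;
    import org.apache.hadoop.io.Text;

    public class LocalityGroupExample {
        // Assumes "conn" was obtained elsewhere; table and column family names are placeholders.
        static void configureLocalityGroups(Connector conn) throws Exception {
            Map<String, Set<Text>> groups = new HashMap<>();

            // Keep the large "content" column family in its own set of files...
            groups.put("contentGroup", new HashSet<>(Arrays.asList(new Text("content"))));
            // ...and group the smaller "metadata" columns separately.
            groups.put("metadataGroup", new HashSet<>(Arrays.asList(new Text("metadata"))));

            // Scans that touch only one group can skip the other group's files entirely.
            conn.tableOperations().setLocalityGroups("webdata", groups);
        }
    }

Grouping columns this way pays off when typical scans read only a subset of column families.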

How Accumulo Works

Accumulo stores sorted key-value pairs. Sorting data by key allows rapid lookups of individual keys or scans over a range of keys.  Since data is retrieved by key, the keys should contain the information that will be used to do the lookup.

  • If retrieving data by a unique identifier, the identifier should be in the key.
  • If retrieving data by its intrinsic features, such as values or words, the keys should contain those features.

The values may contain anything since they are not used for retrieval.
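
As a concrete illustration of this key design, the sketch below stores records whose row is a unique identifier and then scans a range of identifiers. It is a minimal, illustrative example only: it assumes an already-obtained Connector named conn and an existing table named "records"; the row, column, and value contents are placeholders.

    import java.nio.charset.StandardCharsets;
    import java.util.Map;

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Range;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.security.Authorizations;
    import org.apache.hadoop.io.Text;

    public class KeyDesignExample {
        static void writeAndScan(Connector conn) throws Exception {
            // The lookup identifier goes into the row portion of the key.
            BatchWriter writer = conn.createBatchWriter("records", new BatchWriterConfig());
            Mutation m = new Mutation("user_00042");
            m.put(new Text("profile"), new Text("name"),
                  new Value("Ada".getBytes(StandardCharsets.UTF_8)));
            writer.addMutation(m);
            writer.close();

            // Because keys are stored sorted, a contiguous range of identifiers scans efficiently.
            Scanner scanner = conn.createScanner("records", Authorizations.EMPTY);
            scanner.setRange(new Range("user_00000", "user_00099"));
            for (Map.Entry<Key, Value> entry : scanner) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }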

The original Bigtable design has a row-and-column paradigm. Accumulo extends the column key with an additional "visibility" label that provides the fine-grained access control, as sketched below.
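
The following minimal sketch shows how the visibility label is used. It assumes the same illustrative Connector conn and table "records" as above, and that the scanning user has been granted the "analyst" authorization; all names and values are placeholders.

    import java.nio.charset.StandardCharsets;
    import java.util.Map;

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.security.Authorizations;
    import org.apache.accumulo.core.security.ColumnVisibility;
    import org.apache.hadoop.io.Text;

    public class VisibilityExample {
        static void writeAndReadWithVisibility(Connector conn) throws Exception {
            BatchWriter writer = conn.createBatchWriter("records", new BatchWriterConfig());
            Mutation m = new Mutation("user_00042");
            // This cell is visible only to users holding the "analyst" authorization.
            m.put(new Text("profile"), new Text("ssn"),
                  new ColumnVisibility("analyst"),
                  new Value("123-45-6789".getBytes(StandardCharsets.UTF_8)));
            // This cell carries no label and is visible to anyone who can read the table.
            m.put(new Text("profile"), new Text("name"),
                  new Value("Ada".getBytes(StandardCharsets.UTF_8)));
            writer.addMutation(m);
            writer.close();

            // A scan returns only cells whose labels are satisfied by the passed authorizations.
            Scanner scanner = conn.createScanner("records", new Authorizations("analyst"));
            for (Map.Entry<Key, Value> entry : scanner) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }

Visibility labels can also be boolean expressions (for example "analyst&usa"), so a single cell can require several authorizations at once.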

Accumulo is written in Java, but a Thrift proxy allows users to interact with Accumulo from C++, Python, or Ruby. A pluggable user-authentication system allows LDAP connections to Accumulo. An HDFS class loader distributes JARs from the Hadoop Distributed File System (HDFS) to multiple servers. Accumulo also has connectors to other Apache projects such as Hive and Pig.
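
For completeness, the Java sketches above assume a Connector obtained roughly as follows; the instance name, ZooKeeper address, and credentials are placeholders.

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;

    public class ConnectExample {
        static Connector connect() throws Exception {
            // Placeholder instance name and ZooKeeper quorum.
            Instance instance = new ZooKeeperInstance("accumulo", "zk1.example.com:2181");
            // Password authentication is shown here; Accumulo's authentication system is pluggable.
            return instance.getConnector("exampleUser", new PasswordToken("examplePassword"));
        }
    }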


Try out the tutorial: Analyzing Graph Data with Sqrrl and HDP

Hortonworks Focus for Accumulo

The Apache Accumulo community is working on these improvements:

  • replication of tables to other Accumulo instances
  • an improved client API
  • tracing integration with HDFS
  • improved metrics, including Ganglia integration

Recent Progress in Apache Accumulo

Version | Progress
Version 1.7.0
  • Client Authentication with Kerberos – Kerberos is the de facto means of providing strong authentication across Hadoop and related components.
  • Data-Center Replication – primarily applicable to users wishing to implement a disaster recovery strategy. Data can be automatically copied from a primary instance to one or more other Accumulo instances.
  • User-Initiated Compaction Strategies – This allows surgical compactions on a subset of tablet files. Previously, a user-initiated compaction would compact all files in a tablet.
  • API Clarification – The declared API in 1.6.x was incomplete.
  • Performance Improvements – configurable thread pool size for assignments and a group-commit threshold expressed as a factor of data size
Version 1.6.0
  • Table namespaces – allow tables to be grouped into logical collections for configuration and permission changes
  • Encryption – support for RFile and write-ahead log encryption, as well as encryption of data over the wire using SSL
  • Support for check-and-set via conditional mutations (see the sketch after this table)
Version 1.5.0
  • Fix for PermGen leak from client API – stops background threads and avoids the OutOfMemoryError “PermGen space”
  • Thrift maximum frame size – allows users to configure the maximum frame size an Accumulo server will read
  • Support for Hadoop 2 – Since Apache Accumulo 1.5.0 was released, Apache Hadoop 2.2.0 was also released as the first generally available (GA) Hadoop 2 release. Accumulo 1.5.1 functions with both Hadoop 1 and Hadoop 2.
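
The check-and-set support added in 1.6.0 is exposed through conditional mutations, which are applied only if a condition on a cell's current contents holds. The following is a minimal sketch under the same assumptions as the earlier examples: an already-obtained Connector named conn and an existing table named "records", with all row, column, and value names used purely for illustration.

    import java.nio.charset.StandardCharsets;

    import org.apache.accumulo.core.client.ConditionalWriter;
    import org.apache.accumulo.core.client.ConditionalWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.data.Condition;
    import org.apache.accumulo.core.data.ConditionalMutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.io.Text;

    public class CheckAndSetExample {
        static void conditionalUpdate(Connector conn) throws Exception {
            ConditionalWriter writer =
                conn.createConditionalWriter("records", new ConditionalWriterConfig());

            // Apply the mutation only if the "status" cell currently holds "PENDING".
            Condition condition = new Condition("profile", "status").setValue("PENDING");
            ConditionalMutation cm = new ConditionalMutation("user_00042", condition);
            cm.put(new Text("profile"), new Text("status"),
                   new Value("ACTIVE".getBytes(StandardCharsets.UTF_8)));

            ConditionalWriter.Result result = writer.write(cm);
            // ACCEPTED means the condition held and the update was applied; REJECTED means it was not.
            System.out.println("Status: " + result.getStatus());
            writer.close();
        }
    }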

