The Hortonworks Blog

Geoff Flood is president of T4G Limited and vice-chairman of the province of New Brunswick Research & Innovation Council. In this guest blog, Geoff elaborates on why “partnering with Hortonworks was simply a no-brainer for us. It’s a decision that will deliver prized and measurable value to our customers.”

Big data is more than just buzz; it’s a big deal. It’s changing everything in our lives and all around us. As president of a successful technology services firm in Canada, I knew we had to change, too, when it comes to designing, developing and implementing solutions for our customers across North America.…

Haohui Mai is a member of technical staff at Hortonworks in the HDFS group and a core Hadoop committer. In this blog, he explains how to set up HTTPS for HDFS in a Hadoop cluster.

1. Introduction

The HTTP protocol is one of the most widely used protocols on the Internet. Today, Hadoop clusters exchange internal data such as file system images, quorum journals, and user data over HTTP.…
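As a preview of the configuration involved, a sketch along these lines in hdfs-site.xml switches the HDFS web endpoints over to HTTPS. The hostnames and ports below are illustrative; the keystore and truststore for these endpoints are configured separately in ssl-server.xml (ssl.server.keystore.location, ssl.server.keystore.password, and related properties).

```xml
<!-- hdfs-site.xml: a minimal sketch; hostnames are placeholders -->
<property>
  <name>dfs.http.policy</name>
  <!-- One of HTTP_ONLY, HTTPS_ONLY, or HTTP_AND_HTTPS -->
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>nn.example.com:50470</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
</property>
```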

I can’t believe it’s been 6 months since we first announced our expanded strategic alliance with Red Hat. For those who have been following this partnership, you know our goal is simple: to help organizations adopt enterprise Apache Hadoop as part of their modern data architecture. Our expanded relationship with Red Hat is closely aligned around a strategy of innovating in the open and applying enterprise rigor to open source software, thereby de-risking it for the enterprise and enabling faster adoption of Enterprise Apache Hadoop.…

We are excited to announce that Apache Kafka 0.8.1.1 is now available as a technical preview with Hortonworks Data Platform 2.1. Kafka was originally developed at LinkedIn and incubated as an Apache project in 2011. It graduated to a top-level Apache project in October of 2012.

Many organizations already use Kafka for their data pipelines, including Hortonworks customers like Spotify and Tagged.

What is Apache Kafka?

Apache Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system.…
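To give a flavor of the publish-subscribe model, here is a minimal producer sketch against the 0.8-era Java API included in this preview; the broker address, topic name, and message are placeholders.

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SimpleKafkaProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Placeholder broker address; point this at your Kafka cluster.
    props.put("metadata.broker.list", "broker1.example.com:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1"); // wait for the leader's acknowledgment

    Producer<String, String> producer =
        new Producer<String, String>(new ProducerConfig(props));

    // Publish one message to the (illustrative) "events" topic; every
    // consumer group subscribed to the topic receives it independently.
    producer.send(new KeyedMessage<String, String>("events", "hello from HDP 2.1"));
    producer.close();
  }
}
```

That independent fan-out to multiple consumer groups is what makes Kafka a natural junction point in a data pipeline.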

Chaos Before The Storm … and a Brief History

For its name and the metaphoric image it evokes, Apache Storm lives up to its purpose and promise: to ingest, absorb, and digest an avalanche of real-time data as a stream of unbounded discrete events, with scale, speed, and success.

Before Storm, developers used a set of queues and workers to process a stream of real-time events. That is, events were placed on worker queues, and worker threads plucked events off and processed them, transforming, persisting, or forwarding them to another queue for further processing.…
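In code, that pre-Storm pattern looks roughly like the sketch below; the string events and the uppercase transform are stand-ins for real payloads and processing.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A rough sketch of the queues-and-workers pattern described above:
// worker threads pluck events from an incoming queue, transform them,
// and forward them to another queue for further processing.
public class QueueWorkerPipeline {
  public static void main(String[] args) throws InterruptedException {
    final BlockingQueue<String> incoming = new LinkedBlockingQueue<String>();
    final BlockingQueue<String> downstream = new LinkedBlockingQueue<String>();

    for (int i = 0; i < 4; i++) { // a small pool of worker threads
      Thread worker = new Thread(new Runnable() {
        public void run() {
          try {
            while (true) {
              String event = incoming.take();           // pluck an event
              String transformed = event.toUpperCase(); // stand-in transform
              downstream.put(transformed);              // forward downstream
            }
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      });
      worker.setDaemon(true);
      worker.start();
    }

    incoming.put("event-1");
    System.out.println(downstream.take()); // prints EVENT-1
  }
}
```

Every additional processing stage meant another queue and another fleet of workers to deploy, wire up, and keep alive; that is exactly the plumbing Storm was built to absorb.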

Sheetal Dolas is a Principal Architect at Hortonworks. In this blog, the first in a series on Apache Storm design patterns, he explores three options for micro-batching using Apache Storm’s core APIs.

What is Micro-batching?

Micro-batching is a technique that allows a process or task to treat a stream as a sequence of small batches or chunks of data. For incoming streams, the events can be packaged into small batches and delivered to a batch system for processing [1].

Micro-batching in Apache Storm

In Apache Storm, micro-batching in core Storm topologies makes sense for performance or for integration with external systems (like ElasticSearch, Solr, HBase or a database).…
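As a taste of what the series covers, one common shape for this is a bolt that buffers tuples and flushes them as a batch on either a size threshold or a tick tuple. The sketch below uses core Storm APIs; the batch size, tick frequency, and the bulk-write step are illustrative placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import backtype.storm.Config;
import backtype.storm.Constants;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Buffers tuples and flushes them as one batch, either when the buffer
// reaches MAX_BATCH_SIZE or when a tick tuple triggers a time-based flush.
public class MicroBatchBolt extends BaseRichBolt {
  private static final int MAX_BATCH_SIZE = 100; // illustrative threshold
  private OutputCollector collector;
  private List<Tuple> batch;

  @Override
  public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
    this.batch = new ArrayList<Tuple>();
  }

  @Override
  public void execute(Tuple tuple) {
    if (isTickTuple(tuple)) {    // time-based flush
      flush();
      return;
    }
    batch.add(tuple);
    if (batch.size() >= MAX_BATCH_SIZE) { // size-based flush
      flush();
    }
  }

  private void flush() {
    if (batch.isEmpty()) return;
    // A hypothetical bulk write stands in here for one round trip to an
    // external system such as ElasticSearch, Solr, or HBase.
    for (Tuple t : batch) {
      collector.ack(t); // ack only after the batch is safely persisted
    }
    batch.clear();
  }

  private static boolean isTickTuple(Tuple tuple) {
    return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
        && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
  }

  @Override
  public Map<String, Object> getComponentConfiguration() {
    // Ask Storm to send this bolt a tick tuple every 5 seconds.
    Config conf = new Config();
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 5);
    return conf;
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    // no downstream stream in this sketch
  }
}
```

Batching like this trades a little latency for far fewer round trips to the external system, which is usually where the throughput win comes from.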

Since our founding in mid-2011, our vision for Hadoop has been that “half the world’s data will be processed by Hadoop”. With that long-term vision in mind, we focus on the mission to establish Hadoop as the foundational technology of the modern enterprise data architecture that unlocks a whole new class of data-driven applications that weren’t previously possible.

We use what we call the “Blueprint for Enterprise Hadoop” to guide how we invest in Hadoop-related open source technologies and to enable the key integration points that matter when deploying Enterprise Hadoop within a modern data architecture, on-premises or in the cloud, in a way that lets the business and its users maximize the value of their data.…

YARN and Apache Storm: A Powerful Combination

YARN changed the game for all data access engines in Apache Hadoop. As part of Hadoop 2, YARN took the resource management capabilities that were in MapReduce and packaged them for use by new engines. Now Apache Storm is one of those data-processing engines that can run alongside many others, coordinated by YARN.

YARN’s architecture makes it much easier for users to build and run multiple applications in Hadoop, all sharing a common resource manager.…
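To make the shared resource manager concrete, a short client-side sketch using the YarnClient API lists every application registered with it, whatever engine it belongs to; the cluster configuration is assumed to be on the classpath.

```java
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListYarnApps {
  public static void main(String[] args) throws Exception {
    // Connects to the ResourceManager named in the yarn-site.xml
    // found on the classpath.
    YarnClient client = YarnClient.createYarnClient();
    client.init(new YarnConfiguration());
    client.start();

    // Every engine sharing the cluster shows up in the same list:
    // MapReduce jobs, Storm topologies, and so on.
    for (ApplicationReport app : client.getApplications()) {
      System.out.printf("%s [%s] %s%n",
          app.getName(), app.getApplicationType(), app.getYarnApplicationState());
    }
    client.stop();
  }
}
```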

The open source community, including Hortonworks, has invested heavily in building enterprise-grade security for Apache Hadoop. These efforts include Apache Knox for perimeter security, Kerberos for strong authentication, and the recently announced Apache Argus incubator, which brings a central administration framework for authorization and auditing.

Join Hortonworks and Voltage Security in a webinar on August 27 to learn more.

In multi-platform environments with data coming from many different sources, personally identifiable information, credit card numbers, and intellectual property can land in the Hadoop cluster.…

This summer, Hortonworks presented the Discover HDP 2.1 Webinar series. Our developers and product managers highlighted the latest innovations in Apache Hadoop and related Apache projects.

We’re grateful to the more than 1,000 attendees whose questions added rich interaction to the pre-planned presentations and demos.

For those of you who missed one of the 30-minute webinars (or who want to review one they joined live), you can find recordings of all sessions on our What’s New in 2.1 page.…

Zettaset is a Hortonworks partner. In this guest blog, John Armstrong, VP of Marketing at Zettaset Inc., shares Zettaset’s security features and explains why data encryption is vital for data in the Hadoop infrastructure.

Comprehensive Security Across the Hadoop Infrastructure

As big data technologies like Hadoop become widely deployed in production environments, the expectation is that they will meet enterprise requirements for data governance, operations, and security while integrating with existing data center infrastructure. …

The key to monetizing Big Data is not only the ability to capture and process information quickly but also the ability to analyze the data and derive meaningful insights. Big Data can be complex, and often the expertise of a programmer is needed to create focused and targeted queries.

0xdata, a provider of open source machine learning and predictive analytics for Big Data, helps to facilitate the manipulation and extraction of data with the use of its H2O prediction engine for statisticians. …

With the release of Apache Hadoop YARN in October of last year, more and more solution providers are moving from single-application Hadoop clusters to a versatile, integrated Hadoop 2 data platform. This allows them to host multiple applications — eliminating silos, maximizing resources and bringing true multi-workload capabilities to Hadoop. 

That is why we’re extremely excited to have Paul Kent, Vice President of Big Data at SAS, share his insights on the value of Apache Hadoop YARN and the benefits it brings to SAS and its users. …

Few industries depend as heavily on data as financial services. Insurance companies, retail and investment banks aggregate, price and distribute capital with the aim of increasing their return on assets with an acceptable level of risk.

To do that, financial decision-makers need data. Apache Hadoop helps them store new data sources, then process the larger combined dataset for batch, interactive and real-time analysis. More data and better analysis improves bottom-line results.…

The Journey

Almost two years ago to the day, the Apache Hadoop community voted to make YARN a sub-project of Apache Hadoop; the GA release followed last fall, nearly a year ago.

Since then, it has become plainly obvious that Apache Hadoop 2.x, powered by YARN as its architectural center, is the best platform for workloads such as Apache Hadoop MapReduce, Apache Pig, and Apache Hive, which were designed to process data in Apache Hadoop HDFS.…
