Apache Solr

Rapid indexing & search on Hadoop

Apache Solr is the open source platform for searching data stored in HDFS in Hadoop. Solr powers the search and navigation features of many of the world’s largest Internet sites, enabling powerful full-text search and near real-time indexing. Whether users search for tabular, text, geo-location or sensor data in Hadoop, they find it quickly with Apache Solr.

What Solr Does

Hadoop operators add documents to Apache Solr by “indexing” them via XML, JSON, CSV or binary over HTTP.
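
For illustration, the sketch below posts one JSON document to a core’s update handler over plain HTTP using Java 11’s built-in HTTP client. The “tweets” core name and the field names are hypothetical placeholders, not part of any stock configuration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class IndexDocument {
        public static void main(String[] args) throws Exception {
            // One JSON document; the "tweets" core and field names are example values.
            String json = "[{\"id\": \"doc-1\", \"title_t\": \"Hadoop log entry\", "
                        + "\"body_t\": \"Indexed into Solr over HTTP\"}]";

            // POST to the core's /update handler; commit=true makes the document searchable right away.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8983/solr/tweets/update?commit=true"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();

            // Solr replies with a small JSON status document describing the update.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }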

Users can then query that data, even at petabyte scale, via HTTP GET and receive results in XML, JSON, CSV or binary format. Apache Solr is optimized for high-volume web traffic.
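
A query is simply an HTTP GET against the core’s select handler, with the wt parameter choosing the response format. A minimal sketch, again assuming the hypothetical “tweets” core from above:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class QuerySolr {
        public static void main(String[] args) throws Exception {
            // Search the hypothetical "tweets" core and ask for JSON results (wt=xml or wt=csv also work).
            String q = URLEncoder.encode("title_t:hadoop", StandardCharsets.UTF_8);
            String url = "http://localhost:8983/solr/tweets/select?q=" + q + "&wt=json&rows=10";

            HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The body is a JSON document whose "response" section lists the matching documents.
            System.out.println(response.body());
        }
    }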

Top features include:

  • Advanced full-text search
  • Near real-time indexing
  • Standards-based open interfaces like XML, JSON and HTTP
  • Comprehensive HTML administration interfaces
  • Server statistics exposed over JMX for monitoring
  • Linearly scalable, auto index replication, auto failover and recovery
  • Flexible and adaptable, with XML configuration

Solr is highly reliable, scalable and fault tolerant. Both data analysts and developers in the open source community trust Solr’s distributed indexing, replication and load-balanced querying capabilities.

How Solr Works

Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Jetty. Solr uses the Apache Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use with many programming languages.
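
In Java, for example, those HTTP APIs are typically used through the SolrJ client library rather than by hand-building requests. A minimal sketch, assuming a recent SolrJ release and the same hypothetical “tweets” core as above, that indexes one document and then queries for it:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrInputDocument;

    public class SolrJExample {
        public static void main(String[] args) throws Exception {
            try (HttpSolrClient solr =
                    new HttpSolrClient.Builder("http://localhost:8983/solr/tweets").build()) {
                // Index a document; SolrJ sends it to Solr over HTTP on our behalf.
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "doc-2");
                doc.addField("title_t", "SolrJ indexed document");
                solr.add(doc);
                solr.commit();

                // Query and print the ids of the matching documents.
                QueryResponse response = solr.query(new SolrQuery("title_t:solrj"));
                for (SolrDocument match : response.getResults()) {
                    System.out.println(match.getFieldValue("id"));
                }
            }
        }
    }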

Solr’s powerful external configuration allows it to be tailored to almost any type of application without Java coding, and its extensive plugin architecture supports more advanced customization when required.

Apache Solr includes a deployment methodology, referred to as SolrCloud, for setting up a cluster of Solr servers that combines fault tolerance with high availability. SolrCloud provides distributed indexing and search, along with automated failover for queries if a SolrCloud server fails.

SolrCloud utilizes Apache ZooKeeper for cluster coordination and configuration.
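
From a client’s point of view, that coordination shows up mainly in how connections are made: SolrJ’s CloudSolrClient is pointed at the ZooKeeper ensemble rather than at any single Solr server, and it learns the cluster layout from ZooKeeper. A rough sketch, with placeholder ZooKeeper hosts and collection name (the exact builder API varies across SolrJ versions):

    import java.util.Arrays;
    import java.util.Optional;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;

    public class SolrCloudExample {
        public static void main(String[] args) throws Exception {
            // Connect through the ZooKeeper ensemble; cluster state and request routing come from ZooKeeper.
            try (CloudSolrClient cloud = new CloudSolrClient.Builder(
                    Arrays.asList("zk1.example.com:2181", "zk2.example.com:2181", "zk3.example.com:2181"),
                    Optional.empty()).build()) {
                cloud.setDefaultCollection("tweets");

                // Queries are load-balanced across healthy replicas and fail over automatically.
                long hits = cloud.query(new SolrQuery("*:*")).getResults().getNumFound();
                System.out.println("documents in collection: " + hits);
            }
        }
    }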

Try these Tutorials

Try Solr with Sandbox

Hortonworks Sandbox is a self-contained virtual machine with HDP running alongside a set of hands-on, step-by-step Hadoop tutorials.

