
The legacy Hortonworks Forum is now closed. A read-only version of the former site remains available. The site will be taken offline on January 31, 2016.

HBase Forum

HBase and RegionServer

  • #43654

    I have a 2-node cluster with HDP 2 installed through Ambari on Linux (RHEL 5.8).
    Server 1 runs the HBase Master and a RegionServer.
    I have DataNodes on both servers, but I did not install a RegionServer on server 2.
    Everything is working fine. The replication factor is the default of 3.
    My question is: what's the downside of not having a RegionServer on server 2?
    Can I add the component through Ambari, as I see that option? When doing so, do I need to bring down all the services first?
    Are there any specific configuration changes I need to make after adding the RegionServer, or will Ambari take care of it?

    Please advise

  • #44082
    Kenny Zhang


    The downside is that you are not fully utilizing the computing resources in your cluster for HBase, and all the regions are currently hosted on server 1.
    Yes, you can add the component through Ambari, and you don't need to bring down all the services. Ambari will populate the RegionServer settings on server 2 to match server 1.
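    For readers who prefer scripting over the Ambari UI, the same add-a-component flow can also be driven through Ambari's REST API. The sketch below builds the usual three-step sequence (register the component on the host, install it, start it); the Ambari URL, cluster name, and hostname are hypothetical placeholders, not values from this thread:

    ```python
    import json

    AMBARI = "http://ambari-server:8080/api/v1"  # hypothetical Ambari server
    CLUSTER = "mycluster"                        # hypothetical cluster name
    HOST = "server2.example.com"                 # host that should gain the RegionServer

    def add_component_requests(cluster, host, component):
        """Build the REST calls for adding a host component via Ambari.

        Returns (method, url, body) tuples in the order they would be sent:
        first register the component on the host, then install it, then
        start it.
        """
        comp_url = f"{AMBARI}/clusters/{cluster}/hosts/{host}/host_components/{component}"
        install = json.dumps({"HostRoles": {"state": "INSTALLED"}})
        start = json.dumps({"HostRoles": {"state": "STARTED"}})
        return [
            ("POST", comp_url, None),    # register HBASE_REGIONSERVER on the host
            ("PUT", comp_url, install),  # trigger the install
            ("PUT", comp_url, start),    # start the new RegionServer
        ]

    # Print the request sequence instead of sending it, since this is a sketch.
    for method, url, body in add_component_requests(CLUSTER, HOST, "HBASE_REGIONSERVER"):
        print(method, url, body or "")
    ```

    In a real script each tuple would be sent with an authenticated HTTP client (Ambari also expects an `X-Requested-By` header). After the new RegionServer starts, the HBase balancer can redistribute regions across both nodes.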



    Thanks Kenny, I added the RegionServer to server 2 through Ambari and everything is good.
    Thanks again.

The forum ‘HBase’ is closed to new topics and replies.
