HDP on Linux – Installation Forum

Ambari – Registration problem

  • #49029
    Gwenael Le Barzic

    Hello everyone!

    I’m writing because I currently have a problem installing a Hadoop cluster with Ambari.
    I would like to have the following architecture:
    – 1 master
    – 3 slaves
    All these servers are running on CentOS 6.5.

    I followed this procedure:

    The Ambari setup was done with the default values (root as the user, PostgreSQL with the standard database name, user and password).
    The setup completed successfully.

    For password-less SSH, I ran ssh-keygen on the master and put the public key on each of the three slaves.
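
    For reference, here is roughly what I ran on the master (a sketch, assuming root logins on the slaves and placeholder hostnames):

    # generate a key pair on the master (accept the defaults, empty passphrase)
    ssh-keygen -t rsa
    # append the public key to authorized_keys on each slave
    ssh-copy-id root@<SLAVE1>
    ssh-copy-id root@<SLAVE2>
    ssh-copy-id root@<SLAVE3>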

    I started Ambari server on the master.
    I connected to the user interface of Ambari.
    – At the “Select Stack” step, I chose HDP 2.0.6 and “Red Hat 6, CentOS 6, Oracle Linux 6” for the repository
    – At the “Install Options” step, I entered the 4 FQDNs of my servers and added the SSH private key of the master
    – When I click Next, the progress of the four hosts appears, with the status “installing”, then “registering”, and then “failed”

    Here is some information I found in /var/log/ambari-server/ambari-server.log (I masked the hostnames in this message). The same message appears for the three slaves.
    18:18:51,690 INFO [pool-2-thread-1] BSHostStatusCollector:62 – HostList for polling on [<MASTER>, <SLAVE1>, <SLAVE2>, <SLAVE3>]
    18:18:54,782 WARN [qtp1447972804-94] nio:651 – javax.net.ssl.SSLHandshakeException: null cert chain
    18:18:54,852 INFO [qtp1447972804-94] CertificateManager:187 – Signing of agent certificate
    18:18:54,853 INFO [qtp1447972804-94] CertificateManager:188 – Verifying passphrase
    18:18:54,857 WARN [qtp1447972804-94] ShellCommandUtil:46 – Command openssl ca -config /var/lib/ambari-server/keys/ca.config -in /var/lib/ambari-server/keys/<SLAVE3>.csr -out /var/lib/ambari-server/keys/<SLAVE3>.crt -batch -passin pass:**** -keyfile /var/lib/ambari-server/keys/ca.key -cert /var/lib/ambari-server/keys/ca.crt was finished with exit code: 1 – an error occurred parsing the command options.

    Furthermore, in the agent logs on the different slaves, I get the following type of message:
    INFO 2014-02-20 18:18:48,166 security.py:56 – Insecure connection to https://master:8441/ failed. Reconnecting using two-way SSL authentication..
    INFO 2014-02-20 18:18:48,166 security.py:184 – Agent certificate not exists, sending sign request
    ERROR 2014-02-20 18:18:48,278 security.py:220 – Certificate signing failed.
    In order to receive a new agent certificate, remove existing certificate file from keys directory. As a workaround you can turn off two-way SSL authentication in server configuration(ambari.properties)
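
    If I understand that workaround correctly, it would look roughly like this (the property name and key paths are my guess from the defaults, so please correct me if I am wrong):

    # on the master: in /etc/ambari-server/conf/ambari.properties, set
    #   security.server.two_way_ssl=false
    ambari-server restart
    # on each slave: remove the stale agent certificate so a new one is requested
    rm -f /var/lib/ambari-agent/keys/*.crt /var/lib/ambari-agent/keys/*.csr
    ambari-agent restart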

    I’m wondering where the problem comes from. Does anyone have any idea about this?

    Best regards.



  • #49041
    Jeff Sposetti

    Can you confirm which version of Ambari you are using? Try “yum info ambari-server” or “ambari-server --version”.

    The latest is Ambari 1.4.4 and ships with two-way SSL off by default, which might be causing your issue.
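
    If you want to double-check, something like this should show both the version and the current SSL setting (assuming the default config location):

    ambari-server --version
    yum info ambari-server
    grep two_way_ssl /etc/ambari-server/conf/ambari.properties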


    Gwenael Le Barzic

    Hello Jeff!

    Thank you for your answer.

    I ran ‘ambari-server --version’ and here is my version of Ambari:

    Best regards.


    Gwenael Le Barzic

    Re Jeff.

    Somehow, I have the impression that my problem came from Internet access, probably when the installer tries to fetch the packages; I think my repo file was not set up correctly. One of my colleagues started the cntlm service so that we could use the Internet connection behind our enterprise proxy server, and when we retried the installation we got past the step that had been failing.
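
    Concretely, it was roughly this (3128 is cntlm’s default listening port; ours may differ):

    # start the local cntlm proxy, which authenticates against the corporate proxy
    service cntlm start
    # point yum at it by adding this line to /etc/yum.conf on each host:
    #   proxy=http://127.0.0.1:3128
    yum clean all
    yum repolist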

    But now we have another problem at the final step:
    err: /Stage[2]/Hdp-ganglia::Server::Packages/Hdp::Package[rrdtool-python]/Hdp::Package::Process_pkg[rrdtool-python]/Package[python-rrdtool.x86_64]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install python-rrdtool.x86_64' returned 1: Error: Package: rrdtool-1.4.5-1.el6.x86_64 (HDP-UTILS-
    Requires: ruby
    You could try using --skip-broken to work around the problem
    You could try running: rpm -Va --nofiles --nodigest

    I searched the forum and found this other topic:

    On the master server, I ran the following command: rpm -qa | grep ruby
    It returned nothing.
    But that topic is pretty old (2012), so what do you think?
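
    Unless someone sees a problem with it, I plan to try something like this (assuming ruby is available from the base CentOS repo):

    rpm -qa | grep ruby       # empty output means ruby is not installed
    yum list available ruby   # see which repo, if any, provides it
    yum -y install ruby       # install it, then retry the rrdtool-python step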

    Best regards.


    Jeff Sposetti

    Let’s see what repos are available: On the Ambari Server host, run “yum repolist”. And on one of the cluster host machines, do the same.

    Also, do you see /etc/yum.repos.d/ambari.repo on one of the cluster host machines? And please provide the output of “yum info python-rrdtool” from that machine.
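
    In other words, from one of the failing cluster hosts:

    yum repolist                        # which repos are enabled on this host
    ls -l /etc/yum.repos.d/ambari.repo  # confirm Ambari pushed its repo file
    yum info python-rrdtool             # which repo and version yum would install from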

    Gwenael Le Barzic

    Re Jeff.

    I’m coming back here because we finally managed to complete the whole process.

    Here is what we did (a rough sketch of the commands follows the list):
    – We cleaned up the installation (ambari-server stop, ambari-server reset, removal of the contents of the keys folder)
    – We removed ambari-server from the master and reinstalled the latest version with yum (yum install ambari-server)
    – During the installation, we changed all the paths from /boot/efi/hadoop to /hadoop, in order to avoid a permissions problem
    – We granted the user hive the right to connect to the database from the master and the 3 slave hosts
    – We double-checked the NameNode because, strangely, it had not started properly, which made MapReduce jobs in Hive fail, so we restarted it
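
    For anyone hitting the same thing, the cleanup on the master was roughly this (the hive grant is from memory and uses placeholders, so adapt it to your database):

    ambari-server stop
    ambari-server reset
    rm -rf /var/lib/ambari-server/keys/*
    yum -y erase ambari-server
    yum -y install ambari-server
    ambari-server setup
    # then, for each host, a grant along these lines for the hive user (MySQL syntax shown):
    #   GRANT ALL ON hive.* TO 'hive'@'<HOST>' IDENTIFIED BY '<PASSWORD>';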

    I’m going to try to install Hue now!

    Best regards.


