
HDP on Linux – Installation Forum

Yet another registration failure / SSLError thread

  • #23405

    Hello everyone,

    I too am having issues with node registration using ambari:
    (5 node cluster)

    ('INFO 2013-04-26 11:13:33,922 - SSL Connect being called.. connecting to the server
    INFO 2013-04-26 11:13:33,984 - Unable to connect to: https://hadoopie.:8441/agent/v1/register/hdp2a-d3.
    Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 88, in registerWithServer
    response = self.sendRequest(self.registerUrl, data)
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 237, in sendRequest
    self.cachedconnect = security.CachedHTTPSConnection(self.config)
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 77, in __init__
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 82, in connect
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 66, in connect
    File "/usr/lib64/python2.6/", line 338, in wrap_socket
    File "/usr/lib64/python2.6/", line 120, in __init__
    File "/usr/lib64/python2.6/", line 279, in do_handshake
    SSLError: [Errno 8] _ssl.c:490: EOF occurred in violation of protocol
    ', None)

    11:17:01,192 WARN nio:651 – General SSLEngine problem
    11:17:07,232 WARN nio:651 – General SSLEngine problem

    FQDNs are being used, and /etc/hosts is identical across all nodes.
    NTP is configured, and the nodes are in sync with the ambari-server host.
    All nodes are built the same, SSH keys are set up for passphraseless logon, and pdsh is working.
    I searched the forums and did not find any obvious mistakes or solutions.

    [root@hadoopie ~]# date;pdsh -a date
    Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d3: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d2: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d1: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-n1: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-n2: Fri Apr 26 11:22:53 EDT 2013

    Running on: CentOS release 6.3 (Final) – 2.6.32-279.el6.x86_64
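    For anyone hitting a similar registration failure, the hostname/hosts-file checks described above amount to something like this (a sketch; `pdsh -a` assumes the same node list used for the date check):

```shell
# Each node should report a fully-qualified hostname that matches
# what the Ambari server resolved (note the odd trailing dot in the
# "hadoopie.:8441" URL in the log above -- worth double-checking).
hostname -f
pdsh -a 'hostname -f' | sort

# Confirm /etc/hosts really is identical everywhere
md5sum /etc/hosts
pdsh -a 'md5sum /etc/hosts' | sort
```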

    I ran ambari-server with Java SSL debugging enabled and saw this in the logs:

    qtp1302313510-33, fatal error: 46: General SSLEngine problem PKIX path validation failed: Path does not chain with any of the trust anchors
    %% Invalidated: [Session-1, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA]
    qtp1302313510-33, SEND TLSv1 ALERT: fatal, description = certificate_unknown
    qtp1302313510-33, WRITE: TLSv1 Alert, length = 2
    qtp1302313510-33, fatal: engine already closed. Rethrowing General SSLEngine problem
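    (For reference, SSL handshake tracing like the output above is typically enabled through a JVM flag; the `AMBARI_JVM_ARGS` variable and its location are an assumption based on a stock Ambari 1.x `ambari-env.sh`:)

```shell
# Sketch: add JSSE handshake tracing to the Ambari server JVM
# (variable name and file location assumed; adjust for your install)
echo 'export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djavax.net.debug=ssl,handshake"' \
  >> /var/lib/ambari-server/ambari-env.sh
ambari-server restart
```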

    I would be grateful for any assistance as I am demoing HDP.

  • Author
  • #23407
    Sasha J

    The error is:
    Unable to connect to: https://hadoopie.:8441/agent/v1/register/hdp2a-d3.
    Please make sure you have the firewall disabled.
    Are you using a supported version of Java?

    Thank you!
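    On CentOS 6, the two checks Sasha suggests look roughly like this (a sketch; service names match the stock iptables init scripts, and as the follow-up below shows, an unsupported JDK 1.7 was the culprit in this thread):

```shell
# Confirm the firewall is off on every node
service iptables status
service iptables stop && chkconfig iptables off

# Confirm which Java is on the PATH (HDP at this time expected JDK 1.6)
java -version
```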


    Hello Sasha,

    You hit the nail on the head. The Java version I was using was 1.7. I installed a supported version of the JDK and now the install is complete! Thank you, Sasha, for steering me in the right direction on this!

    Seth Lyubich

    Hi Ronald,

    Thanks for letting us know that the issue is now resolved.



    Me too! But without anything JDK-related… here is the trail. I'd like to check whether anyone else has experienced it and, if so, whether there is a solution…

    ERROR: ambari-agent start failed for unknown reason
    (‘ raise error, msg
    error: [Errno 111] Connection refused
    INFO 2013-07-17 14:23:18,498 – Registering with the server \'{“timestamp”: 1374096196277, “hostname”: “”, “responseId”: -1, “publicHostname”: “”, “hardwareProfile”: {“ipaddress_lo”: “”, “memoryfree”: 73284976, “memorytotal”: 74239180, “swapfree”: “511.99 MB”, “processorcount”: “8”, “operatingsystem”: “CentOS”, “netmask_lo”: “”, “ps”: “ps -ef”, “rubyversion”: “1.8.7”, “kernelrelease”: “2.6.32-279.el6.x86_64”, “facterversion”: “1.6.10”, “is_virtual”: false, “network_lo”: “”, “selinux”: “false”, “type”: “Rack Mount Chassis”, “rubysitedir”: “/usr/lib/ambari-agent/lib/ruby-1.8.7-p370/lib/ruby/site_ruby/1.8”, “kernelversion”: “2.6.32”, “memorysize”: 74239180, “swapsize”: “511.99 MB”, “netmask”: “”, “operatingsystemrelease”: “6.3”, “uniqueid”: “140a6265”, “kernelmajversion”: “2.6”, “macaddress”: “C8:0A:A9:88:21:76”, “boardserialnumber”: “To be filled by O.E.M.”, “uptime_seconds”: “9871”, “network_eth0”: “”, “uptime_hours”: “2”, “productname”: “CS24-TY”, “architecture”: “x86_64”, “netmask_eth0”: “”, “mounts”: [{“available”: “16757788”, “used”: “2836064”, “percent”: “15%”, “device”: “/dev/mapper/vg_hdpnn04-lv_root”, “mountpoint”: “/”, “type”: “ext4”, “size”: “20642428”}, {“available”: “37118248”, “used”: “0”, “percent”: “0%”, “device”: “tmpfs”, “mountpoint”: “/dev/shm”, “type”: “tmpfs”, “size”: “37118248”}, {“available”: “451588”, “used”: “38240”, “percent”: “8%”, “device”: “/dev/sda1”, “mountpoint”: “/boot”, “type”: “ext4”, “size”: “516040”}, {“available”: “227909136”, “used”: “191636”, “percent”: “1%”, “device”: “/dev/sdb1”, “mountpoint”: “/hdata”, “type”: “ext4”, “size”: “240307720”}, {“available”: “7454604”, “used”: “382920”, “percent”: “5%”, “device”: “/dev/mapper/vg_hdpnn04-lv_var”, “mountpoint”: “/var”, “type”: “ext4”, “size”: “8256952”}], “lsbrelease”: “:base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch”, “kernel”: “Linux”, “domain”: “”, “uptime_days”: “0”, “serialnumber”: 
“HN4YJM1”, “timezone”: “PDT”, “hardwareisa”: “x86_64”, “id”: “root”, “uptime”: “2:44 hours”, “boardproductname”: “S99”, “macaddress_eth0”: “C8:0A:A9:88:21:76”, “macaddress_eth1”: “C8:0A:A9:88:21:77”, “hostname”: “hdpsn02”, “lsbdistid”: “CentOS”, “virtual”: “physical”, “boardmanufacturer”: “Dell”, “sshdsakey”: “AAAAB3NzaC1kc3MAAACBAKB8uJiu5NU6M3CMkVMNVfy6Da0m7tg85bs2rEELbe67eLW5C29KoolU2eqNWCx2nqCTs7T0S11kWRRkAsJX/bdA1Hf4rkwgyxllU5SarEe7wKFSuHK7kxZ5YCQRvL4q83/6hK5HAw7hTy6atl/e5xhHYSRq3Vrko9rDn7uU5FD5AAAAFQC5cqcXRK3sIUvSRjVFNZRneHFuMwAAAIAHaZ7XZQh5cnx+dYwFIqMzNuRPp88XtRd5Y4UgJd5ubzMiG0xZt0WJvLSHgODV26N9TaL3XA3IXtOTSevF6pF2/pvBl5i7g8TL


    Hi Kalyan,

    What is the output of the following commands:
    java -version
    cat /etc/hosts



    As I suspected, this issue got resolved when the NTP settings were updated. The approach used: the NameNode host syncs with the public pool, the fudge stratum level is set to 10 on the NameNode, and the rest of the nodes point to the NameNode instead of the public servers. This lets all the nodes stay on course: same time!
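    The setup described would look roughly like this in /etc/ntp.conf (a sketch; hostnames are placeholders):

```shell
# --- /etc/ntp.conf on the NameNode host (sketch) ---
# server 0.pool.ntp.org          # sync with a public pool
# server 127.127.1.0             # local clock as fallback
# fudge  127.127.1.0 stratum 10  # advertise low priority for the fallback

# --- /etc/ntp.conf on every other node (sketch) ---
# server namenode.example.com    # point at the NameNode, not public pools

# Then on each node:
service ntpd restart
ntpq -p   # verify peers and offsets
```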


    Flush the /var/lib/ambari-server/keys… this is something I tried besides fixing the NTP issue! Then restarted the nodes..


    Flush… a.k.a. get rid of them. I reinstalled ambari-server (left PostgreSQL as-is)… but after reinstalling Ambari, I did an ambari-server reset. Right now I am struggling with the install process, but a few retries may let me move forward! Otherwise, I will have to fall back to pre-installing the libraries and doing a pass through the Ambari setup where Puppet will not throw up!
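    For the record, "flushing the keys" as described would be something like the following (paths assumed from a default Ambari 1.x layout; note this wipes the agent certificates, so every agent has to re-register, and `ambari-server reset` drops the server database):

```shell
# On the Ambari server: remove generated SSL material, then reset
ambari-server stop
rm -f /var/lib/ambari-server/keys/*.crt /var/lib/ambari-server/keys/*.csr
ambari-server reset      # re-initializes the server database
ambari-server start

# On each agent: clear its cached certs and re-register
ambari-agent stop
rm -f /var/lib/ambari-agent/keys/*
ambari-agent start
```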


    Finally… got it all up and running. A few other tweaks done include:
    – Zookeeper quorum checks (configuration)
    – Start ZooKeeper before HBase (ensure the ZK multi setting is true in hbase-site.xml)
    – To start HBase, did it via the backend (refer to the manual installation steps for guidance)
    – To get Hive working, had to manually install the mysql-connector-java RPM and its dependencies using the --nodeps flag; for the JDK, installed the Oracle distribution of JDK 1.6u31. For simplicity, added a script in /etc/profile.d to set JAVA_HOME and put java on the PATH.

    Will be glad to share more…!
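    The Hive and JDK tweaks from the last bullet above would look roughly like this (the RPM filename and JDK path are illustrative, not from the original post):

```shell
# Install the MySQL JDBC driver, skipping dependency checks as described
rpm -ivh --nodeps mysql-connector-java-*.noarch.rpm

# Make JAVA_HOME and java available to every login shell
cat > /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/usr/jdk64/jdk1.6.0_31
export PATH=$JAVA_HOME/bin:$PATH
EOF
```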


    Hi Kalyan,

    Glad to hear that you have your cluster working now!


The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.
