Yet another registration failure / SSLError thread


This topic contains 10 replies, has 5 voices, and was last updated by  tedr 1 year, 8 months ago.

  • Creator
  • #23405

    Hello everyone,

    I too am having issues with node registration using Ambari (5-node cluster):

    ('INFO 2013-04-26 11:13:33,922 - SSL Connect being called.. connecting to the server
    INFO 2013-04-26 11:13:33,984 - Unable to connect to: https://hadoopie.:8441/agent/v1/register/hdp2a-d3.
    Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 88, in registerWithServer
    response = self.sendRequest(self.registerUrl, data)
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 237, in sendRequest
    self.cachedconnect = security.CachedHTTPSConnection(self.config)
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 77, in __init__
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 82, in connect
    File "/usr/lib/python2.6/site-packages/ambari_agent/", line 66, in connect
    File "/usr/lib64/python2.6/", line 338, in wrap_socket
    File "/usr/lib64/python2.6/", line 120, in __init__
    File "/usr/lib64/python2.6/", line 279, in do_handshake
    SSLError: [Errno 8] _ssl.c:490: EOF occurred in violation of protocol
    ', None)

    11:17:01,192 WARN nio:651 - General SSLEngine problem
    11:17:07,232 WARN nio:651 - General SSLEngine problem

    FQDNs are being used, and /etc/hosts is identical across all nodes.
    NTP is configured and the nodes are in sync with ambari-server.
    All nodes are built the same, SSH keys are set up for passphraseless login, and pdsh is working.
    I searched the forums and did not find any obvious mistakes or solutions.

    [root@hadoopie ~]# date;pdsh -a date
    Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d3: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d2: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-d1: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-n1: Fri Apr 26 11:22:53 EDT 2013
    hdp2a-n2: Fri Apr 26 11:22:53 EDT 2013

    Running on: CentOS release 6.3 (Final) – 2.6.32-279.el6.x86_64

    I ran ambari-server with java debugging for SSL, saw this in the logs:

    qtp1302313510-33, fatal error: 46: General SSLEngine problem PKIX path validation failed: Path does not chain with any of the trust anchors
    %% Invalidated: [Session-1, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA]
    qtp1302313510-33, SEND TLSv1 ALERT: fatal, description = certificate_unknown
    qtp1302313510-33, WRITE: TLSv1 Alert, length = 2
    qtp1302313510-33, fatal: engine already closed. Rethrowing General SSLEngine problem
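
    For reference, the handshake trace above comes from the JVM's built-in SSL debugging (`-Djavax.net.debug=ssl` is the standard JSSE switch). A rough sketch of wiring it into the server; the env file path and variable name are assumptions and vary by Ambari version:

    ```shell
    # Assumption: ambari-server picks up extra JVM options from this env file.
    echo 'export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djavax.net.debug=ssl"' \
      >> /var/lib/ambari-server/ambari-env.sh
    ambari-server restart   # handshake detail then appears in the server log
    ```
    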

    I would be grateful for any assistance as I am demoing HDP.

Viewing 10 replies - 1 through 10 (of 10 total)


  • Author
  • #29641


    Hi Kalyan,

    Glad to hear that you have your cluster working now!



    finally… got it all up and running… a few other tweaks done include:
    – Zookeeper quorum checks (configuration)
    – Start ZK before HBase (ensure the ZK useMulti setting is true in hbase-site.xml)
    – To start HBase, did it via the backend (refer to the manual steps for guidance)
    – To get Hive working, had to manually install the mysql-connector-java RPM and its dependencies using the --nodeps flag; for the JDK, installed the Oracle distribution of JDK 1.6 u31. For simplicity, added a script in /etc/profile.d to set JAVA_HOME and put java on PATH.
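
    The last two tweaks might look roughly like this; the RPM file name and the JDK install path are illustrative, not taken from the thread:

    ```shell
    # Install the MySQL JDBC driver, skipping its dependency checks
    rpm -ivh --nodeps mysql-connector-java-*.noarch.rpm

    # /etc/profile.d/java.sh - make Oracle JDK 1.6u31 the default for login shells
    cat > /etc/profile.d/java.sh <<'EOF'
    export JAVA_HOME=/usr/jdk64/jdk1.6.0_31   # adjust to your install path
    export PATH=$JAVA_HOME/bin:$PATH
    EOF
    ```
    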

    Will be glad to share more…!


    flush… a.k.a. get rid of them… reinstalled ambari-server (left postgresql as-is), but after reinstalling Ambari, did an ambari-server reset. Right now struggling with the install process, but a few retries may let me go forward! Otherwise, I will have to fall back to pre-installing the libraries and skimming through the Ambari setup so that puppet will not throw up!


    flushed /var/lib/ambari-server/keys… this is something I tried besides solving the NTP issue! Then restarted the nodes.
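
    For anyone else trying this, flushing the server-side certificates could be sketched as follows; the exact file names under the keys directory vary by Ambari version, so treat this as an outline:

    ```shell
    ambari-server stop
    # Remove the generated CA material and per-host certificates so they
    # are recreated on the next start / agent registration.
    rm -f /var/lib/ambari-server/keys/*.crt \
          /var/lib/ambari-server/keys/*.key \
          /var/lib/ambari-server/keys/*.csr
    ambari-server start
    ambari-agent restart   # run on each node so the agents re-register
    ```
    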


    As I suspected, this issue got resolved when the NTP settings were updated. Approach used: the Namenode host syncs with a public domain, the fudge stratum is set to 10 on the Namenode, and the rest of the nodes point to the Namenode instead of public servers. This let all the nodes stay on course – same time!
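
    A minimal /etc/ntp.conf pair matching that approach might look like this (server names are placeholders; the "fudge level 10" above maps to the local-clock stratum):

    ```
    # On the Namenode: sync with a public pool, and fall back to the
    # local clock at stratum 10 if the pool is unreachable.
    server 0.centos.pool.ntp.org
    server 127.127.1.0            # local clock fallback
    fudge  127.127.1.0 stratum 10

    # On every other node: chase the Namenode only, e.g.
    # server hdp2a-n1
    ```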



    Hi Kalyan,

    What is the output of the following commands:
    java -version
    cat /etc/hosts



    Me too! But without anything JDK-related… here is the trail… would like to check if anyone else experienced it, and if so, any solution…

    ERROR: ambari-agent start failed for unknown reason
    (' raise error, msg
    error: [Errno 111] Connection refused
    INFO 2013-07-17 14:23:18,498 - Registering with the server '{"timestamp": 1374096196277, "hostname": "", "responseId": -1, "publicHostname": "", "hardwareProfile": {"ipaddress_lo": "", "memoryfree": 73284976, "memorytotal": 74239180, "swapfree": "511.99 MB", "processorcount": "8", "operatingsystem": "CentOS", "netmask_lo": "", "ps": "ps -ef", "rubyversion": "1.8.7", "kernelrelease": "2.6.32-279.el6.x86_64", "facterversion": "1.6.10", "is_virtual": false, "network_lo": "", "selinux": "false", "type": "Rack Mount Chassis", "rubysitedir": "/usr/lib/ambari-agent/lib/ruby-1.8.7-p370/lib/ruby/site_ruby/1.8", "kernelversion": "2.6.32", "memorysize": 74239180, "swapsize": "511.99 MB", "netmask": "", "operatingsystemrelease": "6.3", "uniqueid": "140a6265", "kernelmajversion": "2.6", "macaddress": "C8:0A:A9:88:21:76", "boardserialnumber": "To be filled by O.E.M.", "uptime_seconds": "9871", "network_eth0": "", "uptime_hours": "2", "productname": "CS24-TY", "architecture": "x86_64", "netmask_eth0": "", "mounts": [{"available": "16757788", "used": "2836064", "percent": "15%", "device": "/dev/mapper/vg_hdpnn04-lv_root", "mountpoint": "/", "type": "ext4", "size": "20642428"}, {"available": "37118248", "used": "0", "percent": "0%", "device": "tmpfs", "mountpoint": "/dev/shm", "type": "tmpfs", "size": "37118248"}, {"available": "451588", "used": "38240", "percent": "8%", "device": "/dev/sda1", "mountpoint": "/boot", "type": "ext4", "size": "516040"}, {"available": "227909136", "used": "191636", "percent": "1%", "device": "/dev/sdb1", "mountpoint": "/hdata", "type": "ext4", "size": "240307720"}, {"available": "7454604", "used": "382920", "percent": "5%", "device": "/dev/mapper/vg_hdpnn04-lv_var", "mountpoint": "/var", "type": "ext4", "size": "8256952"}], "lsbrelease": ":base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch", "kernel": "Linux", "domain": "", "uptime_days": "0", "serialnumber": "HN4YJM1", "timezone": "PDT", "hardwareisa": "x86_64", "id": "root", "uptime": "2:44 hours", "boardproductname": "S99", "macaddress_eth0": "C8:0A:A9:88:21:76", "macaddress_eth1": "C8:0A:A9:88:21:77", "hostname": "hdpsn02", "lsbdistid": "CentOS", "virtual": "physical", "boardmanufacturer": "Dell", "sshdsakey": "AAAAB3NzaC1kc3MAAACBAKB8uJiu5NU6M3CMkVMNVfy6Da0m7tg85bs2rEELbe67eLW5C29KoolU2eqNWCx2nqCTs7T0S11kWRRkAsJX/bdA1Hf4rkwgyxllU5SarEe7wKFSuHK7kxZ5YCQRvL4q83/6hK5HAw7hTy6atl/e5xhHYSRq3Vrko9rDn7uU5FD5AAAAFQC5cqcXRK3sIUvSRjVFNZRneHFuMwAAAIAHaZ7XZQh5cnx+dYwFIqMzNuRPp88XtRd5Y4UgJd5ubzMiG0xZt0WJvLSHgODV26N9TaL3XA3IXtOTSevF6pF2/pvBl5i7g8TL


    Seth Lyubich

    Hi Ronald,

    Thanks for letting us know that the issue is now resolved.



    Hello Sasha,

    You hit the nail on the head. The Java version I was using was 1.7. I installed a supported version of the JDK and now the install is complete! Thank you, Sasha, for steering me in the right direction on this!
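
    The version mismatch above is easy to trip over; a tiny helper like this (hypothetical, not from the thread) can flag it, given that HDP/Ambari releases of that era supported JDK 1.6:

    ```shell
    # check_jdk: takes the first line of `java -version` output and reports
    # whether it is the JDK 1.6 line these releases supported.
    check_jdk() {
      jdk=$(printf '%s\n' "$1" | awk -F '"' '{print $2}')
      case "$jdk" in
        1.6.*) echo "supported JDK: $jdk" ;;
        *)     echo "unsupported JDK: $jdk" ;;
      esac
    }

    check_jdk "$(java -version 2>&1 | head -n 1)"
    ```
    
    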


    Sasha J

    the error is:
    Unable to connect to: https://hadoopie.:8441/agent/v1/register/hdp2a-d3.
    Please make sure you have the firewall disabled.
    Are you using a supported version of Java?

    Thank you!
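
    On CentOS 6 the firewall check suggested above usually comes down to iptables; a typical way to stop it for the install (run on every node, and re-enable with proper rules afterwards):

    ```shell
    service iptables stop      # stop the firewall now
    chkconfig iptables off     # keep it off across reboots
    service iptables status    # verify: should report the firewall is not running
    ```
    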
