
The legacy Hortonworks Forum is now closed. You can view a read-only version of the former site by clicking here. The site will be taken offline on January 31, 2016.

HDP on Linux – Installation Forum

Trying to Uninstall and failing

  • #11294

I am trying to uninstall HMC to start afresh.
    I got an error about the puppet kick failing.
    The suggestion on the forum was to yum erase hmc and puppet, delete the puppet and hmc log files, and then reinstall hmc.
    I did all that.
    Now when I try to setup the HMC,
    I get the following error:

    Failed. Reason: Permission denied, please try again.
    Permission denied, please try again.
    Permission denied (publickey,gssapi-with-mic,password).

  • Author
  • #11296
    Sasha J

Those errors mean that you have not set up passwordless SSH connectivity correctly.
    Populate the needed files with the correct keys and it should work after that.
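For reference, a minimal sketch of re-creating passwordless SSH for a single-node setup (the hostname below is the one from this thread; adjust the user and hostname to your machine):

```shell
# Generate an RSA key pair if one does not already exist (-N "" = empty passphrase):
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q

# For a single-node install the target node is the local machine,
# so the public key goes into the local authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Verify: BatchMode makes ssh fail instead of prompting, so this only
# succeeds if the passwordless setup is correct:
ssh -o BatchMode=yes root@np3-centos5-laptop-1 true && echo "passwordless SSH OK"
```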


    First off, all this setup was done earlier. All I did was the following:
    yum erase puppet hmc
    Then, in the /var/log/ dir, I removed all the puppet and hmc logs

    And then yum install hmc

    Before starting the hmc service, I stop the iptables.

None of this should have affected my passwordless SSH setup from before. Or am I wrong in concluding this?
    Per the suggestion, I have rechecked the setup for a single node installation.

Passwordless SSH works on the single node.

I shut down the hmc service and restarted it.
    Got a similar error:

    Failed. Reason: Permission denied, please try again.
    Permission denied, please try again.
    Permission denied (publickey,password).

    Sasha J

Where do you see this error?
    Are there any logs you can share?
    Have any IP addresses or names changed since the last run?



    Where is the error:
    The error is coming on the webpage for the hmc..
    After I name the cluster and select the password file and the hostname files, and I click on Add Nodes.

    Any logs to share:
    Please let me know what logs you’d want to see.

    Any change in IP addresses or names since the last run:
    No changes.

    I even tried to setup my laptop as a single node cluster.
    I got the same error even though I started from scratch.


    In an attempt to restart from scratch, I did the following:
    yum erase hmc puppet
I also removed all the dependencies: ruby, nagios, ganglia, mysql.
    I then removed some of the directories too, specifically /var/lib/puppet/*

    I then installed everything per the HDP Installation guide.

    When I did
    service hmc start
    I get the following error:

    Starting HMC Installer
    Starting httpd: Syntax error on line 35 of /etc/httpd/conf.d/puppetmaster.conf:
    SSLCertificateFile: file ‘/var/lib/puppet/ssl/certs/np3-centos5-laptop-1.localdomain.pem’ does not exist or is empty
    Failed to start HMC

    There is nothing under the dir /var/lib/puppet/*

    How do I repopulate the directories ?

    Sasha J

    You have to reinstall HMC.
It installs puppet as a dependency, and all those directories are populated automatically during puppet's first start (which happens as part of "hmc start").
    It is a good idea to reboot your machine after removing HMC and puppet, if you can.
    You do not need to remove any other packages.

Make absolutely sure that the command "ssh np3-centos5-laptop-1" can connect to the node without prompting for a password.
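The removal/reinstall sequence above can be sketched as follows (a hedged outline, not verbatim from the install guide; the package and service names are the ones used earlier in this thread):

```shell
# Remove HMC and puppet; other packages (ruby, nagios, ganglia, mysql) can stay:
yum -y erase hmc puppet

# Optionally wipe stale puppet state; the SSL certs under /var/lib/puppet/ssl
# are regenerated automatically on puppet's first start:
rm -rf /var/lib/puppet

# (Reboot here if possible.)

# Reinstalling hmc pulls puppet back in as a dependency:
yum -y install hmc

# First start repopulates /var/lib/puppet and generates the certificates:
service hmc start
```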


    Thank you for the quick reply.
    I did as you suggested.
Still the same error, this time with more details:

    Cutting and pasting here:
    [root@np3-centos5-laptop-1]# service hmc start
    Do you agree to Oracle’s Java License at
    /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:1:in `require’: no such file to load — optparse (LoadError)
    from /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:1
    from /usr/lib/ruby/site_ruby/1.8/puppet/application/master.rb:1:in `require’
    from /usr/lib/ruby/site_ruby/1.8/puppet/application/master.rb:1
    from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:54:in `require’
    from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:54:in `require_application’
    from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:59:in `execute’
    from /usr/bin/puppet:4
    Starting HMC Installer [FAILED]
    Starting httpd: Syntax error on line 35 of /etc/httpd/conf.d/puppetmaster.conf:
    SSLCertificateFile: file ‘/var/lib/puppet/ssl/certs/np3-centos5-laptop-1.localdomain.pem’ does not exist or is empty
    Failed to start HMC


Help needed.
    I am still stuck on this issue.

    Sasha J

I hope you are using a supported OS…
    So, here are the steps I want you to perform:
    1. Wipe out your existing system.
    2. Make a CLEAN OS installation.
    3. Check all the prerequisites and make sure you meet them all (firewall, SELinux, SSH keys, etc.).
    4. Install HMC and start it.

    You should follow this document precisely:


I am using a supported OS, CentOS 5.
    I was able to complete the HMC installation after I uninstalled the cluster and reinstalled it.
    I now have a single node cluster setup.. but the HDFS is down.
    How do I start HDFS, and how do I run a Hadoop example ?

    It appears that all the components are down —
(Cutting and pasting from the HMC Monitoring page:)

    Cluster Summary
    HDFS (Down)
    NameNode Uptime
    HDFS Capacity
    DataNodes (live/dead/decom)
    Under Replicated Block Count

    Job Tracker Uptime NaNday NaNhr NaNmin
    Trackers 0 / 0
    Running & Waiting Jobs 0 & 0

    HBase (Down)
    HBase Master Uptime
    Region Servers (live/dead)
    Regions in Transition

    Sasha J

Go to the "Cluster Management" page, then to the "Manage Services" tab.
    Click the "Start" button to start a service,
    or click "Start All" to start all services.


    Thank you for the quick reply.
    All services had already been started when I installed the HMC.

    I had attended the ‘Administering Hadoop’ course 2 weeks ago.
    I am trying to run one of the benchmark examples from the course.. and I am running into problems.

    First: The HMC Cluster Management — Manage Services page, shows all the services as “Started”
    From here, could you give me a step by step procedure to run a benchmark job ?

    The HDFS is still showing as down in Monitoring webpage even now…

    Would it be possible to take this offline ?
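For the record, a hedged sketch of a benchmark-style smoke test on an HDP 1.x single-node cluster (the examples jar path may differ in your installation):

```shell
# 1. Confirm HDFS is actually up and has live DataNodes:
sudo -u hdfs hadoop dfsadmin -report

# 2. Run the bundled "pi" estimator as a simple MapReduce benchmark
#    (10 map tasks, 1000 samples each):
sudo -u hdfs hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
```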

    Sasha J

    I will send you e-mail directly, let us take this offline.
We will need to set up WebEx and I will walk you through the procedure.


    Thank you Sasha..

    A quick question:
    I got this error:
httpd: Could not reliably determine the server’s fully qualified domain name, using for ServerName.
    I had used mPc5cp1 as the FQDN for the machine.
    In the logs I notice that it looked for mpc5cp1
    How can I rectify this ?
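DNS hostnames are case-insensitive, and most tools normalize them to lowercase, which is why a host named mPc5cp1 shows up in the logs as mpc5cp1. A quick illustration, plus the usual fix (the IP address below is a made-up example):

```shell
# Hostname lookups fold case, so "mPc5cp1" and "mpc5cp1" are the same host:
echo "mPc5cp1" | tr '[:upper:]' '[:lower:]'    # prints: mpc5cp1

# The usual fix is to use the lowercase FQDN consistently, e.g.:
#   /etc/hosts:    192.168.1.10  mpc5cp1.localdomain  mpc5cp1
#   httpd.conf:    ServerName mpc5cp1.localdomain
# Then verify:
hostname -f    # should print the lowercase FQDN
```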

    Sasha J

    Let us resolve this tomorrow during WebEx.


    Sasha, Hortonworks Support,
    The Webex call was useful and I was able to resolve the issues I had thus far..

    Thank you for your support !


    I am quite sure though, that I am not done asking questions :)

The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.
