

HDP on Linux – Installation Forum

Delete Cluster

  • #15605

    I’ve installed a cluster using Ambari 1.2.1. It has not been a happy experience, since basic functions such as the NameNode fail. I’d like to start over on the install, but I can’t figure out how to delete the cluster in Ambari. Last time, I simply wiped each node, installed a new OS image, and then reinstalled Ambari. Surely there’s a better way?

  • #15642
    Sasha J

    Could you please give more detail about your negative experience?
    As for wiping out the cluster from Ambari, do the following:

    1. ambari-server stop
    2. ambari-server reset
    3. ambari-server start

    The “reset” command wipes out all cluster metadata from the internal database, so you can start from scratch.

    Also, consider upgrading to Ambari 1.2.1; it has a whole lot of bug fixes.
    Check the following document:

    When you complete this, run “ambari-server reset”, then start the server and all agents.
    When you log in to the UI, provide the list of nodes and uncheck the “SSH key” checkbox (since you already have all agents installed and running).

    Contact us back if you need more information.

    Thank you!


    Hi Sasha,

    Thanks for the information, I’ll try ambari-server reset. As I wrote, I installed 1.2.1 from scratch so I’m using that version.

    FYI, I run a couple of Hadoop clusters that I installed without Ambari. These clusters have been rock solid, exhibiting no stability problems. (They are 1.0.3 based.)

    For some reason, the scp command doesn’t like the ssh keys I use for password-less ssh access as root among the nodes. So I’ve installed the ambari-agent manually, and it runs fine.
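    Since the agents were installed by hand, each one has to be pointed at the Ambari server. A minimal sketch of that step (the helper name, ini path, and server hostname are my own illustrations, not from this thread):

    ```shell
    # Hypothetical helper: rewrite the hostname= line in ambari-agent.ini
    # so the agent reports to the given Ambari server.
    set_ambari_server() {
      local ini="$1" server="$2"
      # replace the existing hostname entry in the [server] section
      sed -i "s/^hostname=.*/hostname=${server}/" "$ini"
    }

    # Typical usage on each node (paths assumed, not confirmed here):
    #   yum install -y ambari-agent
    #   set_ambari_server /etc/ambari-agent/conf/ambari-agent.ini master1.example.com
    #   ambari-agent start
    ```
    
    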

    With an Ambari-based cluster, I find that the NameNode has problems even starting, and I simply have to fall back to the log files to understand what’s going on, since Ambari log access requires the NameNode to be running.

    At this point, I decided to reinitialize the HDFS filesystem, and in doing so found a number of security issues. I fixed those, but am now hitting the following (from the namenode.log file):

    2013-02-21 23:49:52,193 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting attribute DiagnosticOptions of threw an exception java.lang.InternalError: Unsupported VMGlobal Type 7
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(

    So, this feels like a good time to start over from scratch. :)

    Sasha J

    Yes, just use ambari-server reset and you will start from scratch in this case.
    Make sure you remove all the NameNode directories on your nodes; otherwise the NameNode may not start correctly, complaining about an “already formatted” directory.
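    That cleanup step can be sketched as a small helper. The directory to clear is whatever dfs.name.dir points at in your hdfs-site.xml; the helper name and example path below are my own, not from the thread:

    ```shell
    # Hypothetical helper: clear a NameNode metadata directory so a fresh
    # "hadoop namenode -format" does not complain about an already
    # formatted directory. Run it for every directory listed in
    # dfs.name.dir (repeat for DataNode dirs if you want a clean slate).
    wipe_nn_dir() {
      # ${1:?} aborts loudly instead of expanding to "/*" if $1 is empty
      rm -rf "${1:?usage: wipe_nn_dir <dir>}"/*
    }

    # Example (path is an assumption):
    #   wipe_nn_dir /hadoop/hdfs/namenode
    ```
    
    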

    Thank you!


    Unfortunately, the Oozie install now fails:

    err: /Stage[2]/Hdp-oozie::Service/Hdp-oozie::Service::Exec_user[cd /var/tmp/oozie && /usr/lib/oozie/bin/ -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ ]/Hdp::Exec[exec cd /var/tmp/oozie && /usr/lib/oozie/bin/ -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ ]/Exec[exec cd /var/tmp/oozie && /usr/lib/oozie/bin/ -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ ]/returns: change from notrun to 0 failed: su - oozie -c 'cd /var/tmp/oozie && /usr/lib/oozie/bin/ -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ ' returned 255 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:313

    Sasha J

    This is probably because Oozie is still running and some of its files are locked…
    ambari-server reset assumes that nothing is running on the nodes except ambari-server itself and the ambari-agents.

    Thank you!


    The ambari-server reset experience is interesting to say the least.

    After the reset, it’s possible to restart the provisioning process as expected. Then the UI warns that directories and software are already installed, and offers an option to rerun the checks. So you get the impression that you should clean up, for example, the /etc/ directories. Don’t do that: you’ll mess up the whole thing, and puppet will complain that the directories aren’t in place. Likely, ambari-server reset doesn’t clean out the puppet database?

    In the end, I simply reprovisioned the OS images on every server from scratch and then restarted the Ambari install and setup. I still can’t get scp to be happy with the ssh-keygen file, so I set up ~/.ssh/authorized_keys on each node to allow any-to-any password-less ssh operations. I find that this works better than what’s described in the Ambari Installation Guide. Also, the following sequence works best when setting up password-less ssh (the guide is a bit terse on this subject):

    chmod 755 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
    cd ~/.ssh
    chmod 700 .. (that’s two periods: the parent directory, i.e. your home directory)
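    The whole setup can be sketched end to end. This is a sketch assuming OpenSSH, run as the user Ambari will log in as; the key is appended locally for illustration, whereas in a real cluster you would append every node’s public key to every node’s authorized_keys:

    ```shell
    # Sketch of the password-less ssh setup described above (assumptions:
    # OpenSSH, RSA key, default ~/.ssh location).
    SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
    mkdir -p "$SSH_DIR"

    # Generate a key pair if this node does not have one yet
    [ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"

    # Authorize the key (locally here; distribute across nodes in practice)
    cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"

    # Permissions sshd insists on; the thread also runs "chmod 700 .."
    # from inside ~/.ssh, i.e. on the home directory itself
    chmod 700 "$SSH_DIR"
    chmod 600 "$SSH_DIR/authorized_keys"
    ```
    
    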

    In addition, you might have to deal with firewalls and proxies for yum. I do the following:

    /etc/yum.conf: proxy=
    /etc/yum.conf: http_caching=none (you might not need this one but our environment does better with it)

    I also add the following to root’s .bashrc file:

    export http_proxy=
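    Spelled out, the two yum settings above would look like this in /etc/yum.conf; note that the proxy URL here is a hypothetical placeholder, since the post leaves it blank:

    ```ini
    ; /etc/yum.conf (proxy URL is a placeholder, not from the thread)
    [main]
    proxy=http://proxy.example.com:3128
    http_caching=none
    ```

    The matching line for root’s .bashrc would then be export http_proxy=http://proxy.example.com:3128, again with your own proxy substituted.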

    Another tip is to make sure your yum configuration is clean. I do this via yum update, addressing anything it complains about before trying to install Ambari. This might be an artifact of the images used in our environment, but I’ve included these tips in case you hit similar issues.

    Anyway, I now have a full installation of Ambari with all Hadoop services provisioned. It took a good week to make that happen, partly due to our funky development environment (for example, our security policies keep root locked) and partly due to some gotchas. I still have questions about how to add a new service after an initial install with Ambari, but that might be a topic for another thread.


    Hi Gunnar,
    Appreciate the feedback. We will continue to improve the experience in future releases based on the feedback you provided. Thank you for continuing to use HDP.



    Hi Gunnar,
    One more thing, feel free to also log bugs and enhancements you come across here as well:



    curl -i -X DELETE http://localhost:8080/api/v1/users/abc --user admin:admin

    Hi, I tried to delete a cluster using REST. Although the WADL says that the DELETE method is available, it returns a 500 and doesn’t delete the cluster.


    Larry Liu

    Hi, Su

    The delete service from REST is disabled on purpose. Please try Sasha’s steps below.

    1. ambari-server stop
    2. ambari-server reset
    3. ambari-server start



    Michal Bar

    @Sasha J, many thanks; I’d been looking for this solution for 3 days…

    “As for wiping out the cluster from Ambari, do the following:

    1. ambari-server stop
    2. ambari-server reset
    3. ambari-server start

    The “reset” command wipes out all cluster metadata from the internal database, so you can start from scratch.”

The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.
