Delete Cluster

This topic contains 10 replies, has 5 voices, and was last updated by Larry Liu 1 year, 5 months ago.

  • Creator
    Topic
  • #15605

    I’ve installed a cluster using Ambari 1.2.1. It hasn’t been a happy experience, since basic components such as the NameNode fail to start. I’d like to start over on the install, but I can’t figure out how to delete the cluster in Ambari. Last time, I simply wiped each node, installing a new OS image and then Ambari. Surely there’s a better way?



  • Author
    Replies
  • #16110

    Larry Liu
    Moderator

    Hi, Su

    Deleting a cluster through the REST API is disabled on purpose. Please try Sasha’s steps below.

    1. ambari-server stop
    2. ambari-server reset
    3. ambari-server start

    Thanks

    Larry

    #16095

    m
    Member

    curl -i -X DELETE http://localhost:8080/api/v1/users/abc --user admin:admin

    Hi, I tried to delete a cluster using REST. Although the WADL says the DELETE method is available, it returns a 500 and doesn’t delete the cluster.
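
    Note that the command above targets the user endpoint; the cluster-level endpoint would presumably look like the following (MyCluster is a placeholder for the cluster name), though per the moderator reply above, deleting a cluster via REST is disabled on purpose in this release:

    curl -i -X DELETE http://localhost:8080/api/v1/clusters/MyCluster --user admin:admin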

    -Su

    #16059

    Robert
    Participant

    Hi Gunnar,
    One more thing: feel free to log any bugs and enhancement requests you come across here as well:

    https://issues.apache.org/jira/browse/AMBARI

    Regards,
    Robert

    #16058

    Robert
    Participant

    Hi Gunnar,
    Appreciate the feedback. We will keep working to improve the experience in future releases based on the feedback you provided. Thank you for continuing to use HDP.

    Regards,
    Robert

    #16056

    The ambari-server reset experience is interesting, to say the least.

    After reset, it’s possible to restart the provisioning process as expected. Then the UI warns that directories and software are already installed, and offers an option to rerun the checks. So you get the impression that you should clean up, for example, the /etc/ directories. Don’t do that: you’ll mess up the whole thing, and Puppet will complain that the directories aren’t in place. Likely, ambari-server reset doesn’t clean out the Puppet database?

    In the end, I simply reprovisioned the OS images on every server from scratch and then restarted the Ambari install and setup. I still can’t get scp to cooperate with the key generated by ssh-keygen, so I set up ~/.ssh/authorized_keys on each node to allow any-to-any password-less ssh. I find that this works better than what’s described in the Ambari Installation Guide. Also, I find that the following sequence works best when setting up password-less ssh (the guide is a bit terse on this subject):

    chmod 755 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
    cd ~/.ssh
    chmod 700 .. (that’s two periods, i.e., your home directory)
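
    For reference, the full any-to-any setup looks roughly like this (node2 is a placeholder hostname; repeat the copy step for every other node in the cluster):

    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                                  # generate a key pair with no passphrase
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys                           # authorize the key locally
    scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys node2:~/.ssh/  # push the same files to each node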

    In addition, you might have to deal with firewalls and proxies for yum. I do the following:

    /etc/yum.conf: proxy=
    /etc/yum.conf: http_caching=none (you might not need this one but our environment does better with it)

    I also add the following to root’s .bashrc file:

    export http_proxy=
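
    Concretely, with a placeholder proxy host (substitute your own host and port):

    # /etc/yum.conf
    proxy=http://proxy.example.com:3128
    http_caching=none

    # root's ~/.bashrc
    export http_proxy=http://proxy.example.com:3128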

    Another tip is to ensure that your yum configuration is clean. I do this by running yum update and addressing anything it complains about before trying to install Ambari. This might be an artifact of the images used in our environment, but I included this tip in case you hit similar issues.
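
    For example:

    yum clean all   # drop stale cached metadata
    yum repolist    # confirm every repository resolves
    yum update      # fix anything it complains about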

    Anyway, I now have a full installation of Ambari and all the Hadoop services it provides. It took a good week to make that happen, partly due to our funky development environment (for example, our security policies keep root locked) and partly due to some gotchas. I still have questions about how to add a new service after an initial install using Ambari, but that might be a topic for another thread.

    #15714

    Sasha J
    Moderator

    This is probably because Oozie is running and some files are locked…
    ambari-server reset assumes that nothing is running on the nodes except ambari-server itself and the ambari-agents.
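
    A quick sanity check before the reset (just an example, not an official procedure) is to confirm on each node that nothing else shows up:

    ps -ef | grep -iE 'oozie|hadoop' | grep -v grep   # should return nothing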

    Thank you!
    Sasha

    #15693

    Unfortunately, the oozie install now fails:

    err: /Stage[2]/Hdp-oozie::Service/Hdp-oozie::Service::Exec_user[cd /var/tmp/oozie && /usr/lib/oozie/bin/oozie-setup.sh -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ext.zip ]/Hdp::Exec[exec cd /var/tmp/oozie && /usr/lib/oozie/bin/oozie-setup.sh -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ext.zip ]/Exec[exec cd /var/tmp/oozie && /usr/lib/oozie/bin/oozie-setup.sh -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ext.zip ]/returns: change from notrun to 0 failed: su – oozie -c ‘cd /var/tmp/oozie && /usr/lib/oozie/bin/oozie-setup.sh -hadoop 0.20.200 /usr/lib/hadoop/ -extjs /usr/share/HDP-oozie/ext.zip ‘ returned 255 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:313

    #15676

    Sasha J
    Moderator

    Yes, just use ambari-server reset and you will start from scratch in this case.
    Make sure you remove all the NameNode directories on your nodes, otherwise the NN may not start correctly, complaining about an “already formatted” directory.
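
    For example, on each NameNode host (the path below is only a placeholder; use whatever dfs.name.dir points to in your configuration):

    rm -rf /hadoop/hdfs/namenode/*   # placeholder path; check dfs.name.dir first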

    Thank you!
    Sasha

    #15653

    Hi Sasha,

    Thanks for the information, I’ll try ambari-server reset. As I wrote, I installed 1.2.1 from scratch so I’m using that version.

    FYI, I run a couple of Hadoop clusters that I installed without Ambari. These clusters have been rock solid, exhibiting no stability problems. (They are 1.0.3-based.)

    For some reason, the scp command doesn’t like the ssh keys I use to give root password-less ssh access among the nodes. So I’ve installed the ambari-agent manually, which runs fine.
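
    For anyone doing the same, the manual install boils down to roughly this on each node (assuming the Ambari repository is already configured):

    yum install ambari-agent
    # edit /etc/ambari-agent/conf/ambari-agent.ini: under [server], set hostname= to the Ambari server host
    ambari-agent start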

    With an Ambari-based cluster, I find that the NameNode has problems even starting, and I have to fall back to the log files to understand what’s going on, since Ambari’s log access requires the NameNode to be running.

    At this point, I decided to reinitialize the HDFS filesystem and found a number of security issues. I fixed those, but am now hitting the following (from the namenode.log file):

    2013-02-21 23:49:52,193 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting attribute DiagnosticOptions of com.sun.management:type=HotSpotDiagnostic threw an exception
    javax.management.RuntimeErrorException: java.lang.InternalError: Unsupported VMGlobal Type 7
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:879)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:890)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:687)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:672)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:252)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:228)

    So, this feels like a good time to start over from scratch. :)

    #15642

    Sasha J
    Moderator

    Gunnar,
    could you please give more detail on your negative experience?
    As for wiping out the cluster from Ambari, do the following:

    1. ambari-server stop
    2. ambari-server reset
    3. ambari-server start

    The “reset” command wipes out all cluster metadata from the internal database, so you can start from scratch.

    Also, consider upgrading to Ambari 1.2.1; it has a whole lot of bug fixes.
    Check the following document:

    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.1/bk_using_Ambari_book/content/ambari-chap7.html

    When you complete this, run “ambari-server reset”, then start the server and all the agents.
    When you log in to the UI, provide the list of nodes and uncheck the “SSH key” checkbox (as you already have all the agents installed and running). For example:
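
    ambari-server reset
    ambari-server start
    # then, on every cluster node:
    ambari-agent start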

    Contact us back if you need more information.

    Thank you!
    Sasha
