Home Forums HDP on Linux – Installation Failing during Deploy

This topic contains 18 replies, has 2 voices, and was last updated by Dave 10 months, 1 week ago.

  • Creator
    Topic
  • #40871

    I am failing at the deploy step when trying to install Hadoop 1.3.2 using Ambari on Linux 5.8.

    warning: Scope(Hdp::Configfile[/etc/hbase/conf/hbase-env.sh]): Could not look up qualified variable '::hdp-hadoop::params::conf_dir'; class ::hdp-hadoop::params has not been evaluated

    Can anyone please help?

Viewing 18 replies - 1 through 18 (of 18 total)


  • Author
    Replies
  • #41621

    Dave
    Moderator

    Hi,

    NTP is being discussed in another thread: http://hortonworks.com/community/forums/topic/error-ntpd-not-running/

    localhost can be used in ambari-agent.ini on the host that is local to the ambari-server.

    The error at the bottom is due to /var/log/hadoop/mapred already existing on the host: if21t02ha.newyorklife.com
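As a follow-up, here is a minimal sketch for clearing that stale directory before retrying the deploy. The path comes from the Puppet error in this thread; run it on the affected host, and adjust `LOG_DIR` if your layout differs:

```shell
# Remove the leftover mapred log directory that blocks the Puppet run.
# LOG_DIR defaults to the path from the error; override it to test elsewhere.
LOG_DIR="${LOG_DIR:-/var/log/hadoop/mapred}"
if [ -d "$LOG_DIR" ]; then
    echo "Removing stale directory: $LOG_DIR"
    rm -rf "$LOG_DIR"
else
    echo "Nothing to clean: $LOG_DIR does not exist"
fi
```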

    Thanks

    Dave

    #41620

    Yes Dave, I have only one ambari-server installed, and it is on server2. I have the
    ambari-agent installed on server1 and server2, with both ambari-agent.ini files pointing to server2, where the ambari-server is.

    I will open up a new thread for this problem
    Thanks

    #41554

    Dave
    Moderator

    Hi,

    You should only have one instance of ambari-server running, and an ambari-agent (only) on every other node (including the server node).
    Please confirm the above.
    Also, did you see Jeff's comment on your other post about NTP?
    It would be useful to keep issues separate instead of having one thread with multiple issues, as the initial issue in this topic was resolved.
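One way to confirm that layout on each host is to count the running Ambari daemons. This is a hedged sketch; it assumes the standard `ambari-server` and `ambari-agent` daemon process names:

```shell
count_procs() {
    # Count running processes whose command line matches the given name.
    ps -ef | grep "$1" | grep -v grep | wc -l
}

# Expect 1 ambari-server on server2 only, and 1 ambari-agent on every node.
echo "ambari-server processes: $(count_procs ambari-server)"
echo "ambari-agent  processes: $(count_procs ambari-agent)"
```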

    Thanks

    Dave

    #41392

    Dave,

    I think I found something about the "ntpd not running" warning in the install wizard, which is very strange.
    When I give the hostname as localhost in ambari-agent.ini (under /etc/ambari-agent/conf) on server1, it does not complain.
    But this is wrong, as I need to pass the hostname of the Ambari server, which is server2.
    When I give the hostname of server2, where the ambari-server is running, it complains that ntpd is not running.
    I also tried giving the FQDN as well as just the hostname of server2 in the ini file on server1; the host registers fine, but it still complains that ntpd is not running.

    Any idea why?
    Also, so far I had given localhost as the hostname in the ini file on server2, as that is where the ambari-server is. Do I need to specifically mention its hostname as well, even though it is local to server2?
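For reference, this is roughly how the agent configuration should look on both server1 and server2, with every agent pointing at the ambari-server host. The hostname below is a placeholder, and the exact set of keys can differ between Ambari versions:

```ini
; /etc/ambari-agent/conf/ambari-agent.ini on every agent host
[server]
; FQDN of the host running ambari-server (server2 here) --
; use it even on server2's own agent rather than localhost.
hostname=server2.example.com
```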

    I am also completely lost on the error below, which I sent earlier from my previous attempt to deploy.
    Any insight on either of these issues would be really helpful.

    Duplicate definition: Hdp::Directory_recursive_create[/var/log/hadoop/mapred] is already defined in file /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/jobtracker.pp at line 95; cannot redefine at /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/service.pp:87 on node if21t02ha.newyorklife.com

    Thanks

    #41359

    The deploy failed again with the same error, even after I did all the cleanup. Not sure what this means.
    1. I am planning to uninstall the agent as well, re-install it, and try again (of course after a full cleanup).
    2. I am also planning to get rid of Java JDK 1.7 and install 1.6 before I retry.

    But any insight as to why I am getting this error?
    Duplicate definition: Hdp::Directory_recursive_create[/var/log/hadoop/mapred] is already defined in file /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/jobtracker.pp at line 95; cannot redefine at /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/service.pp:87 on node if21t02ha.newyorklife.com

    Thanks

    #41328

    Dave
    Moderator

    Hi,

    Yes, you can ignore this warning; the software will still install.

    Thanks

    Dave

    #41325

    Dave,
    I have cleaned up everything (hopefully) and rebooted the servers, and am trying to run the installer again.
    It keeps complaining that the ntpd service is not running on server2 (last time it complained about both servers; now I see it complaining only about server2).
    But I can see it running on both servers. Any reason why the installer is complaining?
    Should I ignore this warning and proceed? If not, how can I fix it?

    sudo /etc/init.d/ntpd status
    ntpd (pid 3979) is running…

    ntptime
    ntp_gettime() returns code 0 (OK)
    time d6110f84.e0582000 Tue, Oct 22 2013 10:51:48.876, (.876345),
    maximum error 136912 us, estimated error 9096 us
    ntp_adjtime() returns code 0 (OK)
    modes 0x0 (),
    offset -1621.000 us, frequency 8.769 ppm, interval 1 s,
    maximum error 136912 us, estimated error 9096 us,
    status 0x1 (PLL),
    time constant 3, precision 1.000 us, tolerance 512 ppm,

    Warning from the installer:

    The following services should be up:
    Service: ntpd (not running on 1 host)
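For completeness, a hedged sketch of making sure ntpd is up and stays up on each host, using the same RHEL/CentOS 5 style init-script commands shown above. Run it on both server1 and server2, since the wizard checks every registered host:

```shell
# Start ntpd if it is not already running, and enable it across reboots.
if [ -x /etc/init.d/ntpd ]; then
    sudo /etc/init.d/ntpd status || sudo /etc/init.d/ntpd start
    sudo chkconfig ntpd on     # persist across the reboots done during cleanup
else
    echo "ntpd init script not found on this host"
fi
```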

    #41095

    Dave
    Moderator

    Hi,

    I would just uninstall snappy too.
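A sketch of the removal plus a verification step. The package names are taken from the rpm -qa listing in this thread, and the block is guarded so it only acts where yum exists:

```shell
# Remove both snappy packages, then confirm nothing matching remains.
if command -v yum >/dev/null 2>&1; then
    sudo yum erase -y snappy snappy-devel
    rpm -qa | grep snappy || echo "snappy removed"
else
    echo "yum not available on this host"
fi
```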

    Thanks

    Dave

    #41094

    Dave,
    Do I need to remove snappy? I am planning to keep the Ambari server and agent intact, so I am not sure whether I need to remove snappy to re-deploy the cluster.

    sudo rpm -qa |grep snappy
    snappy-1.0.5-1.el5
    snappy-devel-1.0.5-1.el5
    snappy-1.0.5-1.el5
    snappy-devel-1.0.5-1.el5

    #41087

    Dave
    Moderator

    Hi,

    You can leave MySQL installed if you wish.
    If you want to remove it, then you can run 'yum erase mysql'

    Thanks

    Dave

    #40962

    Dave, I am almost done with all your instructions.
    I have only two questions: do I need to remove snappy and mysql?

    If so, I know how to remove snappy, but how can I remove mysql if I have to?
    Thanks.

    #40940

    Thanks Dave,
    I am cleaning up one by one, per your instructions.
    The HW documents ask me to remove snappy as part of the HDP uninstall.
    I am afraid to remove snappy, as I see it under ambari-agent and I don't want to remove or disturb the ambari-agent.
    Do I need to remove snappy?

    I see snappy under

    /var/lib/ambari-agent/puppet/modules/hdp/manifests/snappy

    sudo rpm -qa |grep snappy
    snappy-1.0.5-1.el5
    snappy-devel-1.0.5-1.el5
    snappy-1.0.5-1.el5
    snappy-devel-1.0.5-1.el5

    #40895

    Dave
    Moderator

    Hi,

    The Ambari server and agents should be fine, but the other packages and directories will need to be removed.

    Thanks

    Dave

    #40894

    I see Ambari in your list.
    Do I have to uninstall/remove the Ambari server and agent too?

    Because I installed the agents manually before I started this Install Wizard.
    Please let me know.

    I really appreciate help on this Dave! Thank you!!

    #40893

    Dave
    Moderator

    Hi,

    You need to remove all the directories, users, etc. which were created by the failed install.
    The steps are:
    (On each NODE)

    1. To remove all of the installed RPMs, run the following commands on each node in the cluster:
    yum erase `yum list | grep @HDP-1 | awk '{ print $1 }'`
    yum erase `yum list | grep @HDP-UTILS | awk '{ print $1 }'`
    yum erase `yum list | grep ambari | awk '{ print $1 }'`

    2. To remove all remaining configuration files, run the following command:
    rm -rf /etc/hbase /etc/templeton /etc/hive /etc/oozie /etc/nagios /etc/hadoop /etc/hcatalog /etc/sqoop /etc/ganglia /etc/zookeeper /etc/ambari*

    3. To remove all users, run the following command:
    for i in mapred hdfs hive zookeeper hbase oozie sqoop ambari-qa hadoop_deploy templeton; do userdel -rf $i; done

    4. To remove all of the remaining log locations, run the following commands:
    cd /var/log

    rm -rf hadoop hbase hive hmc nagios oozie zookeeper templeton

    5. To remove all /var/run locations, run the following commands:
    cd /var/run

    rm -rf ganglia hadoop hmc templeton zookeeper webhcat

    6. To remove all data directories used by the previous installation, run the following commands:

    rm -rf /hadoop/hdfs/*
    rm -rf /hadoop/mapred/*

    7. Reboot all nodes.
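The steps above can be collected into a single per-node sketch. It defaults to a dry run that only prints what it would do; clear DRY_RUN only on a node you actually intend to wipe. Paths and patterns are exactly those listed above:

```shell
#!/bin/sh
# Consolidated cleanup of a failed HDP install. DRY_RUN=1 (the default)
# prints each command instead of executing it.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# 1. Erase installed RPMs by repo/package pattern.
for pattern in @HDP-1 @HDP-UTILS ambari; do
    run sh -c "yum erase -y \$(yum list installed | grep '$pattern' | awk '{print \$1}')"
done

# 2. Remaining configuration files.
run rm -rf /etc/hbase /etc/templeton /etc/hive /etc/oozie /etc/nagios \
    /etc/hadoop /etc/hcatalog /etc/sqoop /etc/ganglia /etc/zookeeper /etc/ambari*

# 3. Service users (with home directories, forced).
for u in mapred hdfs hive zookeeper hbase oozie sqoop ambari-qa hadoop_deploy templeton; do
    run userdel -rf "$u"
done

# 4. Log locations.
run rm -rf /var/log/hadoop /var/log/hbase /var/log/hive /var/log/hmc \
    /var/log/nagios /var/log/oozie /var/log/zookeeper /var/log/templeton

# 5. /var/run locations.
run rm -rf /var/run/ganglia /var/run/hadoop /var/run/hmc /var/run/templeton \
    /var/run/zookeeper /var/run/webhcat

# 6. Data directories (globs need an inner shell).
run sh -c 'rm -rf /hadoop/hdfs/* /hadoop/mapred/*'

echo "cleanup commands issued; reboot this node before retrying the install"
```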

    Thanks

    Dave

    #40890

    This error is from failing to install the JobTracker on server2, where the Ambari server is:

    Duplicate definition: Hdp::Directory_recursive_create[/var/log/hadoop/mapred] is already defined in file /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/jobtracker.pp at line 95; cannot redefine at /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/service.pp:87 on node if21t02ha.newyorklife.com

    #40889

    This is the same server, Dave. As you know, I have server1 and server2.
    Server2 has Ambari installed, and I am trying to deploy the cluster using Ambari.

    The first time I encountered some issues, so I configured the base repo and re-ran the install, which asked me to uninstall the packages that were installed the first time. So I uninstalled everything and tried again; that is when I got the above error.
    I then realized that before retrying the install I should reset the ambari-server.
    So I uninstalled all 10 packages that were installed during the second attempt, reset the ambari-server, and tried again.
    I don't get the previous error anymore, but I am getting the one below. Not sure what it means.

    Duplicate definition: Hdp::Directory_recursive_create[/var/log/hadoop/mapred] is already defined in file /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/jobtracker.pp at line 95; cannot redefine at /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/service.pp:87 on node if21t02ha.newyorklife.com

    #40883

    Dave
    Moderator

    Hi,

    Is this on the same server that your other posts are about, or is this on a clean server install?

    Thanks

    Dave
