HDP on Linux – Installation: Stuck – Final stages of install

This topic contains 25 replies, has 2 voices, and was last updated by Trae Barlow 1 year, 11 months ago.

  • Creator
    Topic
  • #10871

    Trae Barlow
    Member

    So I’m at the end of the guide here…

    http://hortonworks.com/hdp11-hmc-quick-start-guide/#preparing

At the part where I supply my RSA key and hosts files.
NOTE: I am accessing the server from a Windows workstation.

I sent over my $USER/.ssh/id_rsa file and /etc/hosts file from the server that HDP is installed on. Perhaps I’m way out of line here (I suspect the hosts file must be of a specific layout), but I’m failing to find a guide/template for how the hosts file should be laid out (other than “host x.x.x.x”).

Viewing 25 replies - 1 through 25 (of 25 total)


  • Author
    Replies
  • #11083

    Trae Barlow
    Member

    Special thanks to Sasha J.

    Your support was crucial in developing this install solution.

    #11079

    Trae Barlow
    Member

Going through and looking at all of the services in the Service Manager, they are all running. Everything looks a-okay.

    #11078

    Trae Barlow
    Member

    A link to my method of install is in this thread.

    http://hortonworks.com/community/forums/topic/my-method-for-successful-complete-install/#post-11075

    I made a new thread so as to keep things clean/clear for anyone who might want to use my method of doing things.

    #11077

    Trae Barlow
    Member

    WIN!

    Your cluster is ready!
    Note: You need to restart HMC as Nagios/Ganglia are co-hosted on this server.
    Please restart HMC using “service hmc restart”.

    #11076

    Trae Barlow
    Member

It’s now starting the dashboard;
after that, only Ganglia and Nagios are left. =D

    #11073

    Trae Barlow
    Member

    yum install yum-downloadonly

yum --downloadonly install hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 hdp_mon_ganglia_addons snappy snappy-devel lzo lzo.i386 lzo-devel lzo-devel.i386 hadoop-secondarynamenode.x86_64

yum --downloadonly install hmc hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo zookeeper hbase mysql-server hive mysql-connector-java hive hcatalog oozie extjs-2.2-1 oozie-client pig sqoop mysql-connector-java templeton templeton-tar-pig templeton-tar-hive templeton hdp_mon_dashboard hdp_mon_nagios_addons nagios nagios-plugins fping net-snmp-utils ganglia-gmetad ganglia-gmond gweb hdp_mon_ganglia_addons ganglia-gmond gweb hdp_mon_ganglia_addons snappy snappy-devel

Now all HMC has to do is install the packages and dependencies, instead of downloading them during the deploy. In this manner we should avoid any dependency failures.
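For anyone trying this: a quick, illustrative way to confirm the pre-download actually landed in the yum cache before kicking off the HMC deploy (this assumes the stock CentOS 6 cache location, since --downloaddir wasn’t used):

#COUNT THE RPMS SITTING IN THE YUM CACHE AFTER THE --downloadonly RUNS
find /var/cache/yum -name '*.rpm' | wc -l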

It’s looking like it’s working: cluster and HDFS start/test passed, and it’s now doing MapreduceStart. All looks good so far. Do note that I did a FULL install with WebHDFS and LZO Compression enabled.

    #11072

    Trae Barlow
    Member

“Mon Oct 15 22:43:00 -0500 2012 /Stage[26]/Hdp-ganglia::Monitor::Config-gen/Anchor[hdp-ganglia::monitor::config-gen::end] (notice): Dependency Exec[yum install $pre_installed_pkgs] has failures: true”

Failed dependencies again. =/

Well, as I mentioned, I’m going to be trying yum --downloadonly.

    #11070

    Trae Barlow
    Member

Personally I would LOVE to see some kind of ‘loader script’ that would go through and download all the dependencies, and whenever a conflict or another package prevented one from installing, it would ask whether you’d like to uninstall that package and install the one Hortonworks needs.

Then you could enter the deploy phase nearly 100% sure it wasn’t going to ‘puppet kick fail’.

Just an idea/suggestion.
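Something along these lines is what I mean; a rough, purely illustrative sketch (the package list is a made-up subset and the log path is arbitrary; nothing like this ships with HMC):

#!/bin/bash
#HYPOTHETICAL LOADER SCRIPT (ILLUSTRATIVE ONLY): PRE-DOWNLOAD EACH
#PACKAGE AND LOG THE YUM OUTPUT SO CONFLICTS SURFACE BEFORE DEPLOY
PKGS="hadoop hadoop-lzo hive hcatalog hbase zookeeper nagios-3.2.3 ganglia-gmond-3.2.0"  #EXAMPLE SUBSET
for pkg in $PKGS; do
    echo "== $pkg ==" >> /root/predownload.log
    yum --downloadonly install -y "$pkg" >> /root/predownload.log 2>&1
done
#REVIEW ANY ERRORS/CONFLICTS BEFORE STARTING THE DEPLOY:
grep -i -B2 'error\|conflict' /root/predownload.log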

    #11069

    Trae Barlow
    Member

I say that because I perceive my problem to be what is commonly known as “dependency hell”, and it seems from explanations on here that HMC itself knows which RPMs to download to avoid the issue.

    #11068

    Trae Barlow
    Member

If this deploy doesn’t work, I have a creative solution for next time around that involves the “downloadonly” yum plugin.

The idea is that I can download all the dependencies ahead of time, so that when HMC does its deploy kick, it won’t time out, as all it has to do is install the files.

    #11067

    Trae Barlow
    Member

    Originally by Sasha J

    yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 hdp_mon_ganglia_addons snappy snappy-devel lzo lzo.i386 lzo-devel lzo-devel.i386 hadoop-secondarynamenode.x86_64

    ———————–

There is a conflict between hadoop-sbin.x86_64 and hadoop-sbin.i386.

On CentOS 6.3, considering it appears we’re talking about the 32- and 64-bit versions of the same thing, that shouldn’t be an issue(?)
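A quick, illustrative rpm check to see which arches of hadoop-sbin actually ended up installed (nothing HDP-specific here, just rpm):

#LIST INSTALLED hadoop-sbin PACKAGES WITH THEIR ARCHITECTURES
rpm -qa --qf '%{NAME}.%{ARCH}\n' | grep '^hadoop-sbin'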

    #10955

    tedr
    Member

    Thanks for letting us know that you figured this out.

    #10931

    Trae Barlow
    Member

I believe I have found my problem. The yum install command I used came from the ‘CentOS 6 Tips’ thread, and it has landed me in what is often called ‘dependency hell’. Staying in line with official support/moderators, I’m now doing a re-install and changing my yum install command to…

    yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 hdp_mon_ganglia_addons snappy snappy-devel lzo lzo.i386 lzo-devel lzo-devel.i386 hadoop-secondarynamenode.x86_64

as Sasha J mentions in this thread.

    http://hortonworks.com/community/forums/topic/adding-additional-services-after-the-cluster-has-been-setup/

Now that I think about it, I believe that is what I used in my original, successful install without the optional components.

    #10909

    Trae Barlow
    Member

Well, that RepoForge idea was a COMPLETE fail.

Anyway, back to the original plan…

“Mon Oct 15 08:04:41 -0500 2012 /Stage[13]/Hdp-ganglia::Config/Hdp-ganglia::Config::Shell_file[setupGanglia.sh]/File[/usr/libexec/hdp/ganglia/setupGanglia.sh] (notice): Dependency Package[nagios-3.2.3] has failures: true”

    #10908

    Trae Barlow
    Member

I want to be clear,
#DISCLAIMER#
This is a CentOS 6.3 install, so it is non-standard, unsupported, and probably a huge PITA.

Results may vary.

    #10907

    Trae Barlow
    Member

So my current plan is to go through the ganglia.py file and figure out which packages I need to get from which repo in order for ganglia.py to be ‘satisfied’, so that Puppet will quit timing out while re-spamming yum install commands.
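Rather than reading ganglia.py line by line, an illustrative shortcut is to ask yum which repos offer each ganglia package and at what versions:

#SHOW EVERY AVAILABLE ganglia* PACKAGE, VERSION, AND SOURCE REPO
yum --showduplicates list available 'ganglia*'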

    #10906

    Trae Barlow
    Member

    The problem I’m having now is with Ganglia.

During the deployment of Hortonworks, it tries to install Ganglia, which produces the error…

    Transaction Check Error:
    file /usr/lib64/ganglia/modcpu.so from install of ganglia-3.1.7-6.el6.x86_64 conflicts with file from package ganglia-gmond-3.2.0-99.x86_64

This is due to the HDP package ganglia-gmond being incompatible with the ganglia package in the EPEL repository.

I’m not sure how I got it to work yesterday, but I’m pretty sure it was by using the RPMforge repository. Unfortunately, I didn’t install HBase then, or I wouldn’t be having issues today.
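One standard yum-level workaround (my suggestion, not something from the HDP docs) is to keep EPEL’s ganglia build out of the transaction entirely:

#ADD TO THE [epel] SECTION OF /etc/yum.repos.d/epel.repo SO YUM NEVER
#PULLS EPEL'S ganglia PACKAGES OVER HDP'S ganglia-gmond
exclude=ganglia*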

    #10905

    Trae Barlow
    Member

    My install procedure. Am I missing anything?


    rpm -Uvh http://public-repo-1.hortonworks.com/HDP-1.1.1.16/repos/centos6/hdp-release-1.1.1.16-1.el6.noarch.rpm
    yum install epel-release

    yum install nano
    nano /etc/sysconfig/selinux
    SELINUX=disabled

    nano /etc/hosts
    127.0.0.1 FQDN
    exit
    #LOG BACK IN
    hostname -f
    #SHOULD REPORT FQDN

    yum install ntp
    ntpdate pool.ntp.org
    chkconfig ntpd on
    /etc/init.d/ntpd start

    chkconfig iptables off
    /etc/init.d/iptables stop

    yum install openssh-clients
    ssh-keygen
    ssh-copy-id -i /root/.ssh/id_rsa root@FQDN
    ssh root@FQDN
    #SHOULD LOG IN WITHOUT PASSWORD
    exit

    yum update
    shutdown -r now

    yum install php-pecl-json

    yum install hmc hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo zookeeper hbase mysql-server hive mysql-connector-java hive hcatalog oozie extjs-2.2-1 oozie-client pig sqoop mysql-connector-java templeton templeton-tar-pig templeton-tar-hive templeton hdp_mon_dashboard hdp_mon_nagios_addons nagios nagios-plugins fping net-snmp-utils ganglia-gmetad ganglia-gmond gweb hdp_mon_ganglia_addons ganglia-gmond gweb hdp_mon_ganglia_addons snappy snappy-devel

    /etc/init.d/hmc start

    #ACCEPT LICENSE / DOWNLOAD JAVA

    shutdown now

    #MAKE COPY OF VIRTUAL SERVER DISK/FILES AS A METHOD OF RESTORING SERVER TO THIS POINT
    #START SERVER

    /etc/init.d/hmc start

    #ASSOCIATE FQDN WITH SERVER IP ON WORKSTATION
    #VISIT FQDN ON WORKSTATION'S WEB BROWSER

#FILES NEEDED ON WORKSTATION ACCESSING SERVER
#USE WINSCP TO GET /root/.ssh/id_rsa
#HOSTS FILE (FILENAME: FQDN) FOR WEB INTERFACE CONTAINS ONE LINE:
FQDN

    #10878

    Trae Barlow
    Member

That said, I did use my /etc/hosts file to achieve that.

I’m curious, did the install document mean to say that I need to set up a DNS server?

    #10877

    Trae Barlow
    Member

Well, I am stumped.
It appears that Puppet was the reason for the failure:
“puppet kick failed 0 of 1 nodes”

That said,
YES,
hostname -f reported my fully qualified domain name (server.local).

    #10876

    Trae Barlow
    Member

It appears, as said in this thread…

http://hortonworks.com/community/forums/topic/puppet-kick-failed/

…that yum is still updating, which is understandable as I only have a 3 Mbit connection.

    #10875

    Trae Barlow
    Member

Looking in the log file (still being written to), it’s something to do with Puppet that’s taking so long. It’s on kick try 2/3; at about 120 s per kick try before timeout, I’m sure something will happen soon.
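If anyone wants to watch the same thing, tailing the HMC log shows the kick attempts live (the path below is an assumption from my setup and may differ by version):

#FOLLOW THE DEPLOY / PUPPET KICK PROGRESS (LOG PATH MAY VARY)
tail -f /var/log/hmc/hmc.log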

    #10874

    Trae Barlow
    Member

“Cluster installing” for about 15+ minutes now.

df -h on the server is showing 2% of a 100 GB partition being used. I’d imagine it would fill up more than that in 15 minutes.

Appears to be dead. I’m afraid to shut it down.

    #10873

    Trae Barlow
    Member

I’m now at the “deployment” phase (cluster installing for 5+ minutes).
Assuming all goes well, what libraries would I use for developing cluster/database apps in C/C++/Java?

That is, after all, the whole point of my crappy VMware server/‘cluster’. Assuming I can get to writing some basic classes/functions, I’ll be setting up some ‘job nodes’ on old hardware/laptops to start testing distributed computing.

The end idea is to get a server rack and load ’er up.

If anyone is wondering, I’m developing a server engine for a Crysis 3 mod.

    #10872

    Trae Barlow
    Member

Okay, well I’m almost there. I’ve found that the format for the “hosts” file is:
host.name

NOT
ip.addy.x.x hostname

Now it’s just whining about the public key. But almost there anyway; that (probably) isn’t something too difficult.
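So, for anyone else hitting this, an illustrative hosts file for the HMC web UI is just one FQDN per line (these names are placeholders):

master.cluster.local
node1.cluster.local
node2.cluster.local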
