Home Forums HDP on Linux – Installation ports used for puppet

This topic contains 23 replies, has 3 voices, and was last updated by Miguel Pereira 2 years, 3 months ago.

  • Creator
    Topic
  • #7701

    I am installing HDP on the Amazon cloud (EC2) and it is hanging. I was able to get past one issue by opening port 8139 which puppet seems to use. Are there any other ports that need to be opened (complete list) which could be causing this issue?

    The details in hmc.log show it is waiting for results forever without ever erroring out:

    [2012:07:27 01:26:17][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 3 nodes have reported for txn 3-2-0
    [2012:07:27 01:26:22][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 3 nodes have reported for txn 3-2-0
    [2012:07:27 01:26:27][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 3 nodes have reported for txn 3-2-0
    [2012:07:27 01:26:32][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 3 nodes have reported for txn 3-2-0
    [2012:07:27 01:26:37][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 3 nodes have reported for txn 3-2-0

    Any help is appreciated!
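For what it's worth, puppet's documented defaults in this era are 8140/tcp on the master and 8139/tcp for the agent's "puppet kick" listener; whether HMC needs others is worth verifying against your version. A minimal sketch of confirming which master port your agents are configured for, shown on a temporary copy since the real file is usually /etc/puppet/puppet.conf:

```shell
# Sketch: check which master port the puppet agent expects (default 8140).
# Working on a throwaway copy here; on a node, inspect /etc/puppet/puppet.conf.
cat > /tmp/puppet.conf <<'EOF'
[main]
masterport = 8140
EOF
grep masterport /tmp/puppet.conf
```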

Viewing 23 replies - 1 through 23 (of 23 total)


  • Author
    Replies
  • #7910

    Pre-Deploy Installations
    yum erase rrdtool
    yum install rrdtool-1.2.27-3.el5.x86_64

    http://linuxtoolkit.blogspot.com/2009/12/error-missing-dependency-librrdso264bit.html

    yum erase php*
    yum install mysql-server net-snmp-utils php-pecl-json
    Timeout / repo conflict:
    yum install -y … (the big command; make sure it doesn’t have errors)

    #7906

    Hmm, looking at this a bit more carefully this morning: mysql-server is provided by the updates repo from CentOS-Base.repo. The real issue is conflicting repos, so if you can successfully execute the install command without errors or dependency conflicts, the deployment phase will be smooth sailing.

    #7904

    vim /etc/yum.repos.d/rightscale.repo
    %s/enabled=1/enabled=0/g

    yum -y erase hmc puppet
    yum -y install hmc
    service hmc restart

    Cool, that should get you past the cluster install step :) and I am going to sleep.
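The same repo-disabling substitution can be done non-interactively with sed instead of vim; a sketch on a throwaway copy of the file (the rightscale.repo path above is from this thread, the temp path here is just for illustration):

```shell
# Sketch: disable every entry in a .repo file without opening an editor.
# Using a throwaway copy; on a node you would target /etc/yum.repos.d/rightscale.repo.
cat > /tmp/rightscale.repo <<'EOF'
[rightscale]
enabled=1
EOF
sed -i 's/enabled=1/enabled=0/g' /tmp/rightscale.repo
grep enabled /tmp/rightscale.repo
```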

    #7903

    Oh yeah, just because I installed everything doesn’t mean HDP stops executing that yum install command. I see two options: 1) disable my other repos, or 2) shut off the yum install command in HDP. I’ll try option 1.

    #7902

    Looks like both mysql50 & MySQL-server-community are provided by the rightscale repo.

    Decided to use RepoForge:

    wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.x86_64.rpm
    rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
    rpm -K rpmforge-release-0.5.2-2.el5.rf.*.rpm
    rpm -i rpmforge-release-0.5.2-2.el5.rf.*.rpm

    yum --disablerepo=rightscale install mysql-server
    yum --disablerepo=* --enablerepo=HDP-1.0.0.12 install

    already installed and latest version
    Nothing to do

    Cool, but I still get the same error. Hehe, now what?

    #7901

    “No package mysql-server available.” Hehe, so you have to get this from outside the HDP repo.

    #7900

    This yum command contains more packages than the command listed in the help files. Furthermore, executing it in the shell yields:
    Package mysql-server is obsoleted by MySQL-server-community, trying to install MySQL-server-community-5.1.55-1.rhel5.x86_64 instead

    mysql50-5.0.96-2.ius.el5.x86_64 from rightscale has depsolving problems
    --> mysql50 conflicts with MySQL-server-community

    I think this is an issue with conflicting repos, similar to a php / php54 problem I experienced earlier.
    Executing: yum --disablerepo=* --enablerepo=HDP-1.0.0.12 install -y … …
    leads to a successful install. I will let you know what happens next.
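A small helper spelling out the repo-scoping pattern used here; it echoes the command so you can inspect it before running it. The repo id HDP-1.0.0.12 comes from this thread, the function name and package arguments are placeholders:

```shell
# Sketch: build a yum invocation that consults only one repo, so conflicting
# repos (e.g. rightscale) can't substitute obsoleting packages.
repo_scoped_yum() {
  repo=$1; shift
  echo "yum --disablerepo=* --enablerepo=$repo install -y $*"
}
# Print the command for inspection rather than executing it directly.
repo_scoped_yum HDP-1.0.0.12 hadoop zookeeper
```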

    #7898

    Interestingly enough, it fails on the big yum command, which I executed before deployment…

    [root@domU-12-31-39-05-68-41 log]# cat puppet_apply.log | grep err
    Wed Aug 01 00:54:30 -0400 2012 /Stage[1]/Hdp::Pre_install_pkgs/Hdp::Exec[yum install $pre_installed_pkgs]/Exec[yum install $pre_installed_pkgs]/returns (err): change from notrun to 0 failed: yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo zookeeper hbase mysql-server hive mysql-connector-java-5.0.8-1 hive hcatalog oozie.noarch extjs-2.2-1 oozie-client.noarch pig.noarch sqoop mysql-connector-java-5.0.8-1 templeton templeton-tar-pig-0.0.1-1 templeton-tar-hive-0.0.1-1 templeton hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel returned 1 instead of one of [0] at /etc/puppet/agent/modules/hdp/manifests/init.pp:222

    #7897

    Stephen, I reproduced this issue on CentOS 5.7; going to try 5.8…

    #7884

    I’m not sure, but I opened all TCP, UDP, and ICMP. I think I will attempt 5.x again; let me know how 6.x goes.

    #7883

    I also opened up all tcp ports but that didn’t help. Are there any udp ports needed?

    #7882

    Does your security group not allow a particular port? I used 0.0.0.0/0 for my development cluster.

    #7857

    I am using a 4xlarge instance.

    Yes, I tried preinstalling the packages on the nodes and it still gets stuck in the same place. :-(

    I am now trying CentOS 6.2, but had issues with the AMI suggested earlier: it had no storage space.

    #7854

    Also, I noticed you are using EC2. What size instance are you using, and did you preinstall the packages on all of your nodes?

    #7850

    Stephen,

    I tried CentOS 5.x for quite some time with only one success. However, the process has been considerably easier with 6.2 (I think :P), following this guide from a user on another thread. If you’re curious: http://www.linuxdict.com/2012-06-auto-deploy-hadoop-cluster-with-hdp/

    Here is someone else who is trying it, and I posted a recommendation for an AMI in this thread:

    http://hortonworks.com/community/forums/topic/hdp-installation-on-amazon-ec2/

    #7848

    Thanks for the advice Miguel but I tried this and it still does not work.

    It’s stuck on the “Cluster Install” step and never gets past this point. Very frustrating.

    #7820

    Stephen, I just had this same issue yesterday; the hang occurred on the HDFS test step, and again today on the ZooKeeper step. I resolved it both times, and here is what worked for me:

    First, associate your fully qualified domain name with your IP in your /etc/hosts file, and make sure it’s the same on all your nodes, e.g.:

    10.190.111.104 ip-10-190-111-104.ec2.internal Deploy

    Secondly, and specifically for the hang issue, try uninstalling hmc (on all nodes), reinstalling it (only on your deployment node), and pre-installing all the required packages, as mentioned here:

    http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/
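A quick sanity check of the hosts-file advice above, run against a throwaway copy; the IP and FQDN are the example values from this post:

```shell
# Sketch: verify that the internal FQDN maps to the expected IP according
# to an /etc/hosts-style file (temporary copy for illustration).
cat > /tmp/hosts <<'EOF'
10.190.111.104 ip-10-190-111-104.ec2.internal Deploy
EOF
awk '$2 == "ip-10-190-111-104.ec2.internal" {print $1}' /tmp/hosts
```

On a real cluster you would run the same lookup against /etc/hosts on every node and compare the results, since the point is that all nodes must agree.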

    #7816

    Unfortunately trying CentOS 5.8 is not an option at this point. Are there specific problems with the Hortonworks installation with CentOS 5.4 that are fixed in 5.8?

    Any other ideas or advice?

    #7713

    Sasha J
    Moderator

    Hi Stephen,

    is it possible for you to try CentOS 5.8?

    Sasha

    #7712

    I am using CentOS v5.4 HVM x64
    Linux ip-10-17-132-110 2.6.18-274.12.1.el5 #1 SMP Tue Nov 29 13:37:46 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    iptables (Firewall) is stopped on all servers.

    SELinux is disabled (confirmed in /etc/selinux/config).
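The two checks reported above can be scripted for every node; a sketch parsing a temporary copy of the SELinux config (on a real node the path is /etc/selinux/config, and `service iptables status` reports the firewall state):

```shell
# Sketch: assert SELinux is disabled by parsing the config file.
# Temporary copy here; on a node, point grep at /etc/selinux/config.
cat > /tmp/selinux-config <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=disabled
EOF
grep '^SELINUX=' /tmp/selinux-config
```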

    #7710

    Sasha J
    Moderator

    Those ports should not be closed by default. Please post your Linux version info, your iptables status, and your SELinux status.

    #7706

    Yes, I’m definitely using the internal FQDNs, such as ip-xx-xx-xx-xx.ec2.internal.

    Is there a list of ports that need to be open?

    #7704

    Sasha J
    Moderator

    Stephen,

    Are you sure you used the INTERNAL FQDNs in the hosts file that you uploaded?

    -Sasha
