Home Forums HDP on Linux – Installation HMC Install Puppet Agent Ping Failed

This topic contains 40 replies, has 9 voices, and was last updated by Larry Liu 2 years ago.

  • Creator
    Topic
  • #9277

    Fadi Yousuf
    Member

    Hi all,

    I am trying to install HDP using HMC on CentOS 6.3 on 5 nodes, and I am getting the following error on 4 of the 5 nodes:
    [badHealthReason] => Puppet agent ping failed: , error=111, outputLogs=Puppet agent ping failed: [Connection refused]

    – I have configured the FQDNs of all nodes in /etc/hosts on each node
    – Linux arch is as follows: Linux namenode 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    – SELinux and iptables are disabled
    – ntpd is running

    I have 5 nodes: namenode, node1, node2, node3, node4
    I am running HMC on the namenode

    For the namenode: ping to the puppet agent succeeded.
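
    One way to narrow down the refused pings is to confirm that a puppet agent process is running on each node and that the HMC host can reach the agents. A minimal diagnostic sketch, assuming root SSH from the HMC node, nc installed, and puppet kick's default agent listen port of 8139:

    for h in node1 node2 node3 node4; do
      echo "== $h =="
      ssh root@$h 'ps -ef | grep [p]uppet'    # is an agent process up at all? (bracket keeps grep out of the output)
      nc -z -w 5 $h 8139 && echo "port 8139 open" || echo "port 8139 refused/unreachable"
    done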

    Please can you help identify the issue.

    Thanks

Viewing 30 replies - 1 through 30 (of 40 total)


  • Author
    Replies
  • #12873

    Larry Liu
    Moderator

    Hi Gurfan,

    Please follow this post to collect some diagnostic information:

    http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/

    Thanks

    Larry

    #12866

    Gurfan Khan
    Member

    Thanks Larry for your reply,

    Sorry for the confusion; I meant the hostname only. I provided only the hostname in Hostdetail.txt.

    I provided the entry returned by hostname -f:
    impetus-n164.impetus.co.in
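
    A quick sanity check that this name resolves consistently on each node (getent is standard on CentOS):

    hostname -f                     # should print the FQDN, e.g. impetus-n164.impetus.co.in
    getent hosts $(hostname -f)     # should return the node's real IP, not 127.0.0.1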

    Thanks,
    -Gurfan

    #12849

    Larry Liu
    Moderator

    Hi Gurfan,

    I noticed you said: "Currently Hostdetail.txt contains only one IP, which is the new machine to be added to the cluster."

    Can you please use the hostname in Hostdetail.txt instead?

    Please let me know if this fixes the issue.

    Thanks

    Larry

    #12848

    Gurfan Khan
    Member

    Thanks Ted for your quick reply.

    Now I am facing an issue while adding a new node from the UI (HMC dashboard). I am providing the details below:

    1) The IP of the new machine we are going to add to the cluster.
    2) The private key of the machine on which the HMC server runs.

    Currently Hostdetail.txt contains only one IP, which is the new machine to be added to the cluster.

    Problem Statement:

    After providing the details (private key, Hostdetail.txt) and clicking Add Node, we observed that it hangs for a long time.
    We even left it waiting overnight, and in the morning found it in the same state.

    Log from hmc.log

    [2012:12:12 06:39:04][INFO][UploadFiles][addNodes.php:56][]: Cluster Name: highwire Cleanup required? and type: boolean
    [2012:12:12 06:39:04][INFO][UploadFiles][addNodes.php:104][]: Doing a fresh install:

    I checked more log files (puppet_agent.log, puppet_apply.log, puppet_master.log) but could not find any new entries.

    Please guide us.

    Thanks,
    -Gurfan

    #12726

    tedr
    Member

    Hi Gurfan,

    Thanks for using HDP.

    If you are going to start and stop services manually with the commands that Sasha gave you earlier, you can make the changes in /etc/hadoop/mapred-site.xml, as sketched below.
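
    For reference, a minimal sketch of the relevant stanza: the slot counts are per-TaskTracker properties (property names from stock Hadoop 1.x; the values here are examples only). Merge them into the <configuration> block on every TaskTracker node, then restart the TaskTrackers:

    cat <<'EOF'
    <property><name>mapred.tasktracker.map.tasks.maximum</name><value>4</value></property>
    <property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>2</value></property>
    EOF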

    I hope this helps,
    Ted.

    #12725

    Gurfan Khan
    Member

    Thanks Sasha for your reply.

    We have installed HDP on two machines successfully.

    Now we want to change the number of map and reduce slots. The path in the HDP UI is:

    Cluster Management>> Manage Services>> MapReduce>> Reconfigure(Symbol) >> (Number of Map slots, Number of Reduce slots). >> Apply Changes.

    After Apply Changes it stops and starts all the services again.

    When we make the changes from the UI, it picks up the updated values.

    We want to know the path (location) of the file we could edit directly so that it picks up the updated values.

    In that case we would only have to restart the specific service.

    Thanks,
    -Gurfan

    #12607

    Sasha J
    Moderator

    James,
    thank you for trying HDP2, but this is the wrong thread for HDP2 questions.
    Please use the relevant thread.

    Thank you!
    Sasha

    #12597

    Chia-Hao Chang
    Participant

    Hi Sasha J,
    I tried to install HDP2 via HMC in pseudo-distributed mode but it always fails. I tried the following method you provided, but still in vain:
    yum -y erase hmc puppet
    yum -y install hmc
    service hmc start
    Finally, I used the check.sh mentioned in http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/ and generated the report.

    I have uploaded the following log files to the Hortonworks FTP server:
    -rw-r--r-- 1 root root 9717 2012-12-07 16:05 check.sh_James.log
    -rw-r--r-- 1 root root 452274 2012-12-07 16:04 hmc_James.log
    -rw-r----- 1 root root 14661 2012-12-07 16:03 puppet_agent_http_James.log
    -rw-r--r-- 1 root root 4182400 2012-12-07 16:03 puppet_agent_James.log
    -rw-r--r-- 1 root root 744779 2012-12-07 16:03 puppet_apply_James.log
    -rw-r--r-- 1 root root 17110 2012-12-07 16:03 puppet_master_James.log
    Could you please give me a hand?

    Best Regards.
    James Chang

    #12246

    Sasha J
    Moderator

    Gurfan,
    any service in the cluster could be started and stopped manually.
    Here is the list of commands to start the services, grouped by component:

    # HDFS
    su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

    su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"

    su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"

    # MapReduce
    su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker"

    su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start historyserver"

    su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"

    # ZooKeeper
    su - zookeeper -c 'source /etc/zookeeper/conf/zookeeper-env.sh ; /bin/env ZOOCFGDIR=/etc/zookeeper/conf ZOOCFG=zoo.cfg /usr/lib/zookeeper/bin/zkServer.sh start'

    # HBase
    su - hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master"

    su - hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver"

    # MySQL and the Hive metastore
    /etc/init.d/mysqld start
    su - hive -c 'env HADOOP_HOME=/usr nohup hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &'

    # Templeton
    su - templeton -c '/usr/sbin/templeton_server.sh start'

    # Oozie
    su - oozie -c "cd /var/log/oozie; /usr/lib/oozie/bin/oozie-start.sh"

    Hope this helps.

    Thank you!
    Sasha

    #12240

    Gurfan Khan
    Member

    Hi Ted,

    It would be great if you could give me a start on how to manually start a particular service, like Hadoop or Oozie.

    Thanks for your effort.

    Regards,
    -Gurfan

    #12219

    Gurfan Khan
    Member

    Thanks for the reply, Ted.

    Once I have gone through the suggested steps I will update you.

    Thanks
    Gurfan

    #12178

    tedr
    Member

    Gurfan,

    Yes, you are headed down the correct path. However, if you modify the config files you will not be able to use HMC to start or stop your cluster; if you do use HMC to start/stop your cluster, the changes you made to the config files will be overwritten. You can work around this by writing a script that makes these changes and restarts Oozie, as sketched below. You will need to run this script manually after HMC has done its launch.
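
    A minimal sketch of such a wrapper. The override file location is made up for illustration, and oozie-stop.sh is assumed as the counterpart of the oozie-start.sh path quoted elsewhere in this thread:

    #!/bin/sh
    # Hypothetical post-HMC fixup: re-apply local oozie-site.xml overrides and bounce Oozie.
    cp /root/oozie-site.override.xml /etc/oozie/oozie-site.xml    # override path is an assumption
    su - oozie -c '/usr/lib/oozie/bin/oozie-stop.sh'              # assumed counterpart of oozie-start.sh
    su - oozie -c 'cd /var/log/oozie; /usr/lib/oozie/bin/oozie-start.sh'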

    Ted.

    #12171

    Gurfan Khan
    Member

    Thanks for the reply, Ted.
    Happy to share that I have successfully installed HDP in my dev environment.

    We have been experimenting with HDP for the last few days. Again we want your valuable input.

    Oozie is currently using the Derby database. Our requirement is a bit different: we want to point Oozie at MySQL.

    What I am thinking is that we have to change the XML files manually (for instance oozie-site.xml) and add the MySQL connector jar into Oozie's lib directory (oozie/oozie-server/webapps/oozie/WEB-INF/lib).

    Am I thinking in the right direction, or is there a better way to do this?
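
    For reference, pointing Oozie at MySQL normally comes down to the JPAService JDBC properties plus the connector jar. A sketch using the standard Oozie property names (host and credentials below are placeholders); merge into oozie-site.xml:

    cat <<'EOF'
    <property><name>oozie.service.JPAService.jdbc.driver</name><value>com.mysql.jdbc.Driver</value></property>
    <property><name>oozie.service.JPAService.jdbc.url</name><value>jdbc:mysql://mysql-host:3306/oozie</value></property>
    <property><name>oozie.service.JPAService.jdbc.username</name><value>oozie</value></property>
    <property><name>oozie.service.JPAService.jdbc.password</name><value>oozie</value></property>
    EOF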

    Environment Detail:
    CentOS 6.2
    HDP 1.1

    Appreciate your quick reply.

    Thanks,
    -Gurfan

    #11638

    tedr
    Member

    Gurfan,

    You can manually download and install the JDKs and then tell the HMC installer where they are. But the main thing you want to do is make sure that SELinux is disabled on ALL nodes you are using.
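
    A sketch of pre-seeding the JDK so the installer skips the download, with the file name and directory taken from the failing curl command quoted in the next post; run on each node HMC installs to:

    mkdir -p /tmp/HDP-artifacts
    # place a manually downloaded copy where HMC expects it:
    cp jdk-6u31-linux-x64.bin /tmp/HDP-artifacts/
    chmod +x /tmp/HDP-artifacts/jdk-6u31-linux-x64.bin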

    Ted.

    #11634

    Gurfan Khan
    Member

    Hi Robert,

    Thanks for the reply. It seems the application is not able to download the file from the following URL:
    curl -f --retry 10 http://download.oracle.com/otn-pub/java/jdk/6u31-b03/jdk-6u31-linux-x64.bin -o /tmp/HDP-artifacts//jdk-6u31-linux-x64.bin

    Can we manually download the file jdk-6u31-linux-x64.bin, copy it into the /tmp/HDP-artifacts directory, and then restart the installation?

    Regards,
    –Gurfan

    #11630

    Robert
    Participant

    Hi Gurfan,
    I saw the following error:
    Thu Oct 25 11:59:34 +0530 2012 /Stage[10]/Hdp-mysql::Server/Hdp::Package[mysql]/Hdp::Package::Yum[mysql]/Hdp::Java::Package[mysql]/Exec[mkdir -p /usr/jdk32 ; chmod +x /tmp/HDP-artifacts//jdk-6u31-linux-i586.bin; cd /usr/jdk32 ; echo A | /tmp/HDP-artifacts//jdk-6u31-linux-i586.bin -noregister > /dev/null 2>&1 mysql]/returns (err): change from notrun to 0 failed: chmod: cannot access '/tmp/HDP-artifacts//jdk-6u31-linux-i586.bin': No such file or directory

    It seems that either the JDK is not being written to /tmp, or the file is there but its permissions are incorrect. Can you try it one more time, but also check sestatus on both machines to make sure SELinux is off, along with checking iptables? If you still get an error, I can set up a WebEx to take a closer look.
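
    The checks above, as a quick sketch to run as root on each machine:

    sestatus                      # expect "SELinux status: disabled"
    service iptables status      # expect the firewall to be stopped
    ls -l /tmp/HDP-artifacts/    # confirm the jdk .bin actually landed and is readable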

    -Robert

    #11612

    tedr
    Member

    Gurfan,

    Thanks for uploading the logs. We are looking at them and will get back to you when we have fully analysed them, but on a cursory examination it looks like the install may be failing when trying to download either Java or MySQL from the Oracle site. You can try to download and install these manually, and then run the HMC installer.

    Ted.

    #11596

    Gurfan Khan
    Member

    Hi Sasha J,

    I retried the installation as you suggested. I formatted my master and node machines and started over with the suggested steps.

    We succeeded with the Pig installation, but it failed again at the Hive/HCatalog start.

    Can you please have a look into it?

    I have uploaded the following to the Hortonworks FTP support site:
    – Hive_ErrorLogfiles.zip
    – Master puppet log files
    – Node puppet log files
    – hmc.log
    – Deployment log

    Waiting for your valuable input.

    Thanks,
    -Gurfan

    #10957

    Sasha J
    Moderator

    Gurfan,
    I suggest you wipe your systems and start from scratch.
    There have been too many attempts already, which have left the system in a total mess...
    Please, follow procedure outlined in the following post:

    http://hortonworks.com/community/forums/topic/installing-hdp-failed-with-all-kicks-failed/

    It was written by someone who did the installation from beginning to end without specific Linux knowledge.

    Thank you!
    Sasha

    #10903

    Gurfan Khan
    Member

    Hi Sasha J,

    First of all thanks for your input.

    As you suggested, I have verified using jps that all the processes below are running fine, and I have also accessed the Name Node and Job Tracker from a web browser.
    a) HDFS
    b) MapReduce
    c) HBase
    d) Zookeeper

    While installing Pig I am getting the error below. In parallel, I have uploaded the log file, named hmc.log, to the Hortonworks FTP support site:

    [2012:10:11 15:13:55][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-78-48:FAILED:Pig test:FAILED
    [2012:10:11 15:13:55][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: PIG – FAILED
    [2012:10:11 15:13:55][INFO][Service: PIG (HighWire)][Service.php:130][setState]: PIG – FAILED dryRun=
    [2012:10:11 15:13:55][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-78-48:FAILED:Pig test:FAILED
    [2012:10:11 15:13:55][INFO][Cluster:HighWire][Cluster.php:810][startService]: Starting service PIG complete. Result=-2
    [2012:10:11 15:13:55][INFO][ClusterMain:TxnId=3][ClusterMain.php:353][]: Completed action=deploy on cluster=HighWire, txn=3-0-0, result=-2, error=Service PIG is not STARTED, smoke tests failed!

    The second time, I skipped the Pig installation and started with Hive/HCatalog, but in this case I got an error on MySQL. I have checked that the MySQL rpm is installed.

    yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hive hcatalog oozie-client.noarch oozie.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0-99 ganglia-gmond-3.2.0-99 gweb-2.2.0-99 rrdtool-1.4.5 hdp_mon_ganglia_addons snappy snappy-devel zookeeper hbase mysql-connector-java-5.0.8-1 sqoop pig pig.noarch mysql-server oozie.noarch

    Waiting for your valuable input.

    If you suggest it, we can get on a web conference call.

    Thanks,
    -Gurfan

    #10825

    Robert
    Participant

    Hi Gurfan,
    Can you verify that the HMaster and HRegionServer processes are running during the HBase test? You can execute jps within the shell to verify those processes are coming up. If either of those processes is not up, the test will fail.
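
    For example (ps works regardless of which user owns the daemons; jps must be run as a user with the JDK on its PATH):

    ps -ef | egrep '[H]Master|[H]RegionServer'   # bracket trick keeps grep itself out of the output
    su - hbase -c 'jps'                          # should list HMaster / HRegionServer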

    #10811

    Gurfan Khan
    Member

    Sasha,
    The answer is "yes": I verified that and fixed it. I am creating my hostdetail.txt file on CentOS 6.2 only. As for SELinux running in enforcing mode, it is disabled; I verified that in the log file generated by running the check.sh script provided by Hortonworks.

    Previously I was getting the (bootstrap) error at the first stage, where machines are checked for proper ssh connectivity.

    Now I am getting the error during deployment. Many components (Hadoop, Zookeeper, etc.) installed successfully, and even the HBase installation succeeded, but the HBase test failed.

    I am getting the error at the Deployment (HBase Test) stage:

    Wed Oct 10 11:56:40 +0530 2012 Puppet (debug): Executing 'test -e /opt/highwire/jdk/jdk1.6.0_33/bin/java'
    Wed Oct 10 11:56:40 +0530 2012 Service[snmpd](provider=redhat) (debug): Executing '/sbin/service snmpd status'
    Wed Oct 10 11:56:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Exec[/tmp/hbaseSmoke.sh]/returns (debug): Exec try 1/3
    Wed Oct 10 11:56:40 +0530 2012 Exec[/tmp/hbaseSmoke.sh](provider=posix) (debug): Executing 'su - ambari_qa -c 'hbase --config /etc/hbase/conf/ shell /tmp/hbaseSmoke.sh''
    Wed Oct 10 11:56:40 +0530 2012 Puppet (debug): Executing 'su - ambari_qa -c 'hbase --config /etc/hbase/conf/ shell /tmp/hbaseSmoke.sh''
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Exec[/tmp/hbaseSmoke.sh]/returns (err): change from notrun to 0 failed: Command exceeded timeout at /etc/puppet/agent/modules/hdp-hbase/manifests/hbase/service_check.pp:46
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin] (warning): Skipping because of failed dependencies
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable] (warning): Skipping because of failed dependencies
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end] (warning): Skipping because of failed dependencies

    Please suggest something; in parallel, I am trying to resolve it on my side.

    Appreciate your quick response.

    Thanks,
    -Gurfan

    #10778

    Sasha J
    Moderator

    Gurfan,
    Can you clarify whether you have already made the change and are still getting the error, or whether you are still verifying the suggestions made in the referenced post?

    #10775

    Gurfan Khan
    Member

    Thanks Sasha J,

    Yes, you are right: packets are being lost while installing HMC.

    We moved ahead with the installation but got stuck again at the HBase test. Captured error from the UI deployment log:

    Wed Oct 10 11:56:40 +0530 2012 Puppet (debug): Executing 'test -e /opt/highwire/jdk/jdk1.6.0_33/bin/java'
    Wed Oct 10 11:56:40 +0530 2012 Service[snmpd](provider=redhat) (debug): Executing '/sbin/service snmpd status'
    Wed Oct 10 11:56:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Exec[/tmp/hbaseSmoke.sh]/returns (debug): Exec try 1/3
    Wed Oct 10 11:56:40 +0530 2012 Exec[/tmp/hbaseSmoke.sh](provider=posix) (debug): Executing 'su - ambari_qa -c 'hbase --config /etc/hbase/conf/ shell /tmp/hbaseSmoke.sh''
    Wed Oct 10 11:56:40 +0530 2012 Puppet (debug): Executing 'su - ambari_qa -c 'hbase --config /etc/hbase/conf/ shell /tmp/hbaseSmoke.sh''
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Exec[/tmp/hbaseSmoke.sh]/returns (err): change from notrun to 0 failed: Command exceeded timeout at /etc/puppet/agent/modules/hdp-hbase/manifests/hbase/service_check.pp:46
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin] (warning): Skipping because of failed dependencies
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable] (warning): Skipping because of failed dependencies
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end] (notice): Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    Wed Oct 10 12:01:40 +0530 2012 /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end] (warning): Skipping because of failed dependencies

    I have followed your suggestion below:

    http://hortonworks.com/community/forums/topic/hbase-test-failed-when-deployment/

    I am just going to check by increasing the Region Server heap size to 1024 MB, as sketched below.
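
    Assuming the stock HDP layout, that setting lives in hbase-env.sh; a minimal sketch:

    # /etc/hbase/conf/hbase-env.sh -- heap for the HBase daemons, in MB:
    export HBASE_HEAPSIZE=1024
    # restart the region servers afterwards for the new heap to take effect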

    Thanks.
    -Gurfan

    #10674

    Sasha J
    Moderator

    Gurfan,
    you have some kind of connectivity issue; the puppet master cannot communicate with the puppet agents:
    [badHealthReason] => Puppet agent ping failed: , error=111, outputLogs=Puppet agent ping failed: [Connection refused]
    [badHealthReason] => Puppet agent ping failed: , error=111, outputLogs=Puppet agent ping failed: [Connection refused]
    [badHealthReason] => Puppet agent ping failed: , error=111, outputLogs=Puppet agent ping failed: [Connection refused]

    On the other hand, you have SELinux running in enforcing mode on 2 out of 4 hosts.
    Also, did you create your hostdetail.txt file on Windows? If yes, then it has extra characters at the end of each line.
    Look deeper into all of this and fix these issues before trying to install again.
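
    A quick way to detect and strip Windows line endings, in case dos2unix is not installed:

    file Hostdetail.txt            # reports "CRLF line terminators" if the file was edited on Windows
    tr -d '\r' < Hostdetail.txt > Hostdetail.fixed && mv Hostdetail.fixed Hostdetail.txt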

    #10670

    Gurfan Khan
    Member

    Thanks Sasha J,

    I am giving just the hostname in Hostdetail.txt instead of the FQDN (hostname.domainname).

    But while deploying the cluster I am getting an error on HDFS start.

    Please provide some insight on it.

    I am attaching a log file named HadoopError.log.

    Thanks.
    Gurfan

    #10660

    Sasha J
    Moderator

    Gurfan,
    thank you for uploading the file for our attention!
    We will look at it at our earliest convenience and reply to you!

    #10659

    Gurfan Khan
    Member

    Hi Sasha J,

    Attaching the log file with the suggested changes applied. Please have a look.

    Appreciate your quick reply.

    Log file name: puppet_issue.log
    Uploaded on Ftp.

    Thanks,
    -Gurfan

    #10626

    Sasha J
    Moderator

    Gurfan,
    the file you uploaded is unreadable; please rename it to something shorter and re-upload it.
    As for the problem, make sure you have all the prerequisites met before starting the installation.

    #10604

    Gurfan Khan
    Member

    Hi Sasha,

    While installing the Hortonworks cluster I am getting much the same error. Below is the error:

    [2012:10:05 22:59:51][INFO][PuppetFinalize:txnId=1:subTxnId=104][finalizeNodes.php:390][]: Puppet finalize, succeeded for 1 and failed for 3 of total 4 hosts

    I tried the following suggestion, but it just did not work:

    yum -y erase hmc puppet
    yum -y install hmc
    service hmc start

    I even tried the link below:

    http://hortonworks.com/community/forums/topic/problem-on-step-add-nodes/

    I have uploaded the log file to the Hortonworks FTP site following this link (http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/). Sorry, the file name is very long:
    HWCluster HWCluster HWCluster HWCluster.impetus-d590 impetus-n067 impetus-n164 impetus-n152.192.168.160.20 192.168.160.27192.168.213.20 192.168.213.21.out

    Please provide your input on it.

    Thanks
    -Gurfan
