HDP on Linux – Installation: more "Puppet kick failed" issues

This topic contains 23 replies, has 6 voices, and was last updated by tedr 1 year, 6 months ago.

  • #6506

    Bob Smith
    Member

    I was looking at some other threads that had a similar error message, but my logs are a little different from theirs. DNS and reverse DNS are working,
    but I'm still running into issues. Here are the logs; any help getting me pointed in the right direction would be appreciated.
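For anyone else landing here: a quick sketch (my own, not from the thread) of double-checking that forward and reverse lookups really do agree for a host. It uses getent, so entries in /etc/hosts count as well as DNS:

```shell
# Check that forward and reverse lookups agree for a hostname.
# Uses getent so /etc/hosts entries are honored as well as DNS.
check_dns() {
    name=$1
    ip=$(getent hosts "$name" | awk '{print $1; exit}')
    if [ -z "$ip" ]; then
        echo "FAIL: no forward record for $name"
        return 1
    fi
    back=$(getent hosts "$ip" | awk '{print $2; exit}')
    case "$back" in
        "$name"|"$name".*) echo "OK: $name -> $ip -> $back" ;;
        *) echo "MISMATCH: $name -> $ip -> $back"; return 1 ;;
    esac
}

check_dns localhost
```

Run it against each cluster host's FQDN; any MISMATCH line is worth fixing before retrying the install.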

    [2012:06:28 22:44:18][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 23-2-0
    [2012:06:28 22:44:23][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 23-2-0
    [2012:06:28 22:44:28][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 1 out of 1 nodes have reported for txn 23-2-0
    [2012:06:28 22:44:29][INFO][PuppetInvoker][PuppetInvoker.php:216][createGenKickWaitResponse]: Response of genKickWait:
    Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => hdp01-master.west.isilon.com
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:06:28 22:44:29][INFO][Cluster:hdp][Cluster.php:662][_installAllServices]: Persisting puppet report for install HDP
    [2012:06:28 22:44:29][ERROR][Cluster:hdp][Cluster.php:677][_installAllServices]: Puppet kick failed, no successful nodes
    [2012:06:28 22:44:29][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 23-2-0:FAILED: Cluster install:FAILED
    [2012:06:28 22:44:29][INFO][Cluster:hdp][Cluster.php:1039][setState]: hdp – FAILED
    [2012:06:28 22:44:29][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HDFS – FAILED
    [2012:06:28 22:44:29][INFO][Service: HDFS (hdp)][Service.php:130][setState]: HDFS – FAILED dryRun=
    [2012:06:28 22:44:29][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: MAPREDUCE – FAILED
    [2012:06:28 22:44:30][INFO][Service: MAPREDUCE (hdp)][Service.php:130][setState]: MAPREDUCE – FAILED dryRun=
    [2012:06:28 22:44:30][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: DASHBOARD – FAILED
    [2012:06:28 22:44:30][INFO][Service: DASHBOARD (hdp)][Service.php:130][setState]: DASHBOARD – FAILED dryRun=
    [2012:06:28 22:44:30][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: GANGLIA – FAILED
    [2012:06:28 22:44:30][INFO][Service: GANGLIA (hdp)][Service.php:130][setState]: GANGLIA – FAILED dryRun=
    [2012:06:28 22:44:30][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: NAGIOS – FAILED
    [2012:06:28 22:44:30][INFO][Service: NAGIOS (hdp)][Service.php:130][setState]: NAGIOS – FAILED dryRun=
    [2012:06:28 22:44:30][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: MISCELLANEOUS – FAILED
    [2012:06:28 22:44:30][INFO][Service: MISCELLANEOUS (hdp)][Service.php:130][setState]: MISCELLANEOUS – FAILED dryRun=
    [2012:06:28 22:44:30][ERROR][Cluster:hdp][Cluster.php:74][_deployHDP]: Failed to install services.
    [2012:06:28 22:44:30][INFO][ClusterMain:TxnId=23][ClusterMain.php:332][]: Completed action=deploy on cluster=hdp, txn=23-0-0, result=-3, error=Puppet kick failed on all nodes
    [2012:06:28 22:44:30][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1340923470,"context":{"txnId":23,"isInPostProcess":true}}
    [2012:06:28 22:44:30][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1340923470,"context":{"status":false,"txnId":"23"}}
    [2012:06:28 22:44:30][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1340923470,"context":{"status":false,"txnId":"23","isInPostProcess":false,"postProcessSuccessful":true}}

Viewing 23 replies - 1 through 23 (of 23 total)


  • #13295

    tedr
    Member

    Hi Dane,

    Would you do the instructions here:

    http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/

    and we can get a better picture of what is at issue here.

    Thanks,
    Ted.

    #13282

    Dane Li
    Member

    Hi
    I got the same problem "Puppet kick failed".
    hmc.log:
    [2013:01:09 07:03:32][INFO][Cluster:test][Cluster.php:662][_installAllServices]: Persisting puppet report for install HDP
    [2013:01:09 07:03:32][ERROR][Cluster:test][Cluster.php:677][_installAllServices]: Puppet kick failed, no successful nodes
    [2013:01:09 07:03:32][INFO][OrchestratorDB][OrchestratorDB.php:616][persistTransaction]: persist: 3-2-0:FAILED: Cluster install:FAILED
    [2013:01:09 07:03:32][INFO][ClusterMain:TxnId=3][ClusterMain.php:353][]: Completed action=deploy on cluster=test, txn=3-0-0, result=-3, error=Puppet kick failed on all nodes
    [2013:01:09 07:03:33][INFO][ClusterState][clusterState.php:40][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1357715013,"context":{"txnId":3,"isInPostProcess":true}}
    [2013:01:09 07:03:33][INFO][ClusterState][clusterState.php:40][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1357715013,"context":{"status":false,"txnId":"3"}}
    [2013:01:09 07:03:33][INFO][ClusterState][clusterState.php:40][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1357715013,"context":{"status":false,"txnId":"3","isInPostProcess":false,"postProcessSuccessful":true}}

    So I ran:
    yum erase puppet ambari
    yum install ambari
    yum install ……….

    Transaction Check Error:
    file /usr/lib/hadoop/bin/task-controller conflicts between attempted installs of hadoop-sbin-1.0.3.14-1.el6.x86_64 and hadoop-sbin-1.0.3.14-1.el6.i386
    file /usr/include/hadoop/Pipes.hh conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-pipes-1.0.3.14-1.el6.i386
    file /usr/include/hadoop/SerialUtils.hh conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-pipes-1.0.3.14-1.el6.i386
    file /usr/include/hadoop/StringUtils.hh conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-pipes-1.0.3.14-1.el6.i386
    file /usr/include/hadoop/TemplateFactory.hh conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-pipes-1.0.3.14-1.el6.i386
    file /usr/lib64/libhdfs.la conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-libhdfs-1.0.3.14-1.el6.x86_64
    file /usr/lib64/libhdfs.so conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-libhdfs-1.0.3.14-1.el6.x86_64
    file /usr/lib64/libhdfs.so.0 conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-libhdfs-1.0.3.14-1.el6.x86_64
    file /usr/lib64/libhdfs.so.0.0.0 conflicts between attempted installs of hadoop-1.1.0-1.x86_64 and hadoop-libhdfs-1.0.3.14-1.el6.x86_64
    file /usr/lib64/libhadooppipes.a conflicts between attempted installs of hadoop-pipes-1.0.3.14-1.el6.x86_64 and hadoop-1.1.0-1.x86_64
    file /usr/lib64/libhadooputils.a conflicts between attempted installs of hadoop-pipes-1.0.3.14-1.el6.x86_64 and hadoop-1.1.0-1
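    The conflicts above are between x86_64 and i386 builds of the same files. One common workaround (an assumption on my part, not something confirmed later in this thread) is to exclude 32-bit packages in yum's configuration before re-running the install. Sketched here against a temporary copy of yum.conf so it is safe to run; on a real host you would edit /etc/yum.conf itself as root:

```shell
# Add an arch exclusion so yum never mixes i386/i686 builds with x86_64.
# This demo edits a temporary copy; on a real host, apply the same line
# to /etc/yum.conf before re-running the installation.
conf=$(mktemp)
cp /etc/yum.conf "$conf" 2>/dev/null || printf '[main]\n' > "$conf"
grep -q '^exclude=' "$conf" || echo 'exclude=*.i386 *.i686' >> "$conf"
grep '^exclude=' "$conf"
```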

    #6982

    Things seem to have deployed successfully after the retry. All of the deployment tasks finished; I restarted hmc and the cluster management page appeared as expected. Of course, everything is running on the single CentOS 5.8 VM.

    So for me, the two critical issues were the need to get the right mount point — do NOT accept the default — and overcoming my slow internet access.

    If you elect to deploy everything in the Hortonworks stack, the full yum pre-install command might be:

    yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo zookeeper hbase mysql-server hive mysql-connector-java-5.0.8-1 hive hcatalog oozie.noarch extjs-2.2-1 oozie-client.noarch pig.noarch sqoop mysql-connector-java-5.0.8-1 templeton templeton-tar-pig-0.0.1-1 templeton-tar-hive-0.0.1-1 templeton hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel

    I think there should be special note of these conditions on the Hortonworks download page.

    #6978

    Sasha J
    Moderator

    James,
    sometimes a timeout happens right at the breaking point: all of the packages get installed, but hmc decides that the command timed out…
    There is no reason to do a yum erase in that case; just reinstall hmc and start from the beginning, and it should work fine.

    Thank you!
    Sasha

    #6977

    OK, I did this and things got farther. BUT, I ran into the timeout error mentioned earlier in this discussion. I ran the big yum command to pre-install things but yum told me that all of these were already installed and were the latest versions!

    But I did a yum erase and install of hmc and will give it another go, to see whether the failed installation managed to grab most of what is needed, so less time will be wasted against the 5-minute limit.

    #6975

    Sasha J
    Moderator

    James,
    give 777 to /hdp; the installer will create a bunch of underlying directories with the needed ownership and permissions.

    Sasha
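    Sasha's suggestion, as a runnable sketch. The path is parameterized here; /hdp is only an example, so substitute whichever mount point you selected:

```shell
# Create the HDP data directory and open its permissions so the installer
# can create subdirectories with whatever ownership it needs.
# Override HDP_DIR if /hdp is not the mount point you selected.
HDP_DIR=${HDP_DIR:-/hdp}
mkdir -p "$HDP_DIR"
chmod 777 "$HDP_DIR"
ls -ld "$HDP_DIR"
```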

    #6972

    I think I may have run into this a week ago when I was trying a single-host install. The management console pre-selects some weird device-based access point that I presume is incorrect. If I, as root, create a directory like /hdp, what permissions do I need to give it?

    #6956

    Sasha J
    Moderator

    Wang,
    here is your problem:
    root/hadoop/hdfs/namenode]/returns (err): change from notrun to 0 failed: mkdir -p /dev/mapper/domuvg-root/hadoop/hdfs/namenode returned 1 instead of one of [0] at /etc/puppet/agent/modules/hdp/manifests/init.pp:222\””,

    You are trying to create directories under device files (/dev/mapper/xxx).
    You should use mount points instead (like / or /hdp or something similar).
    Uninstall the cluster, then install it again, and on the select-directories page deselect all of the /dev/mapper lines and put the list of valid mount points into the text box.

    Thank you!
    Sasha
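    A quick way to produce the list of valid mount points for that text box (a sketch; adjust the excluded filesystem types to taste):

```shell
# Print the mount points of real filesystems, skipping pseudo-filesystems,
# so the result is a list of directories like / or /hdp rather than
# device paths such as /dev/mapper/domuvg-root.
df -P -x tmpfs -x devtmpfs | awk 'NR > 1 { print $6 }'
```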

    #6908

    Sasha J
    Moderator

    Bob,
    many people have hit this timeout.
    The thing is, especially on AWS, that the default timeout is 5 minutes, but the "yum install" step takes longer (depending on the number of packages installed and the network speed, since all packages have to be downloaded).
    Pre-installing the packages eliminates the need to download and install them again.
    Alternatively, setting up a local repository can also give you better speed.
    Engineering is aware of this problem and will put different timeout logic in the next release.

    Thank you!
    Sasha
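    On the local-repository suggestion: once a mirror exists, pointing yum at it is just a matter of dropping a .repo file into place. A sketch with a placeholder hostname and path (written to a temp directory here so the demo is harmless; on a real host the file would go in /etc/yum.repos.d/):

```shell
# Example .repo file for a local HDP mirror. The baseurl host and path
# are placeholders; substitute your own mirror. On a real host this file
# belongs in /etc/yum.repos.d/, not a temp directory.
repodir=$(mktemp -d)
cat > "$repodir/hdp-local.repo" <<'EOF'
[HDP-local]
name=HDP local mirror (example)
baseurl=http://repo-mirror.example.com/hdp/centos5/1.x/
enabled=1
gpgcheck=0
EOF
cat "$repodir/hdp-local.repo"
```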

    #6907

    Bob Smith
    Member

    Went to lunch. Yes, everything looks like it is up and running. Thanks.
    Strange that there is a timeout issue and other people are not hitting it.

    #6906

    Sasha J
    Moderator

    Bob,
    any update?
    were you able to install it?

    Thank you!
    Sasha

    #6904

    Sasha J
    Moderator

    Bob,
    here is the problem:

    Mon Jul 09 10:06:36 -0700 2012 Exec[yum install $pre_installed_pkgs](provider=posix) (debug): Executing 'yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel'
    Mon Jul 09 10:06:36 -0700 2012 Puppet (debug): Executing 'yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel'
    Mon Jul 09 10:11:36 -0700 2012 /Stage[1]/Hdp::Pre_install_pkgs/Hdp::Exec[yum install $pre_installed_pkgs]/Exec[yum install $pre_installed_pkgs]/returns (err): change from notrun to 0 failed: Command exceeded timeout at /etc/puppet/agent/modules/hdp/manifests/init.pp:222
    Mon Jul 09 10:11:36 -0700 2012 /Stage[1]/Hdp::Pre_install_pkgs/Hdp::Exec[yum install $pre_installed_pkgs]/Anchor[hdp::exec::yum install $pre_installed_pkgs::end] (notice): Dependency Exec[yum install $pre_installed_pkgs] has failures: true
    Mon Jul 09 10:11:36 -0700 2012 /Stage[1]/Hdp::Pre_install_pkgs/Hdp::Exec[yum install $pre_installed_pkgs]/Anchor[hdp::exec::yum install $pre_installed_pkgs::end] (warning): Skipping because of failed dependencies

    This means that the "yum install …" command takes too long to complete and exceeds the default puppet timeout.
    There is no way to change the timeout in the current release; that functionality will be added in the next release.
    For now, there is a workaround:
    preinstall all of the packages from the terminal, then run the HMC installation again. It will skip the installation part (as all packages will already be installed) and go directly to the configuration part.

    so, please do the following:

    1. yum erase hmc puppet
    2. yum install hmc
    3. yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo hive hcatalog oozie-client.noarch hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel

    This command will install all needed packages.

    4. service hmc start
    5. Connect to HMC with a browser and run the "normal" installation.

    This way you should be able to complete installation successfully.

    Thank you!

    Sasha

    #6899

    Sasha J
    Moderator

    Bob,
    the log you posted does not show any installation attempt…
    Let's run the installation from scratch.

    please, do the following on the system:

    1. yum erase hmc puppet
    2. Remove all puppet and hmc log files that may remain on the system.
    3. yum install hmc (this will install and configure puppet as a dependency).
    4. service hmc start
    5. Connect with a browser and run the installation.

    When it completes (or fails), post the hmc and puppet logs.

    Thank you!
    Sasha

    #6874

    Sasha J
    Moderator

    Bob,
    we still need to see your puppet_apply.log…
    Please post it here.

    Thank you!
    Sasha

    #6867

    Bob Smith
    Member

    Umm, I did that already? Was there some information missing that would help? Anyway, here we go again.

    [root@hdp01-master yum.repos.d]# ls /etc/yum.repos.d/
    CentOS-Base.repo CentOS-Media.repo epel.repo epel-testing.repo hdp.repo mirrors-rpmforge rpmforge.repo
    [root@hdp01-master yum.repos.d]# uname -a
    Linux hdp01-master. 2.6.18-194.32.1.el5 #1 SMP Wed Jan 5 17:52:25 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    #6854

    Sasha J
    Moderator

    Bob,
    Please post your os, version, and the repositories in your /etc/yum.repos.d/

    thanks

    Sasha

    #6849

    Bob Smith
    Member

    I have log messages similar to Kenneth's.

    [root@hdp01-master yum.repos.d]# ls /etc/yum.repos.d/
    CentOS-Base.repo CentOS-Media.repo epel.repo epel-testing.repo hdp.repo mirrors-rpmforge rpmforge.repo

    [root@hdp01-master ~]# egrep -B 5 -A 5 "Failed" /var/log/puppet_apply.log
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-hcat/manifests/params.pp’ in environment production
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-mysql/manifests/init.pp’ in environment production
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-mysql/manifests/server.pp’ in environment production
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-mysql/manifests/params.pp’ in environment production
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-monitor-webserver/manifests/init.pp’ in environment production
    Thu Jun 28 15:44:55 -0700 2012 Puppet (debug): Failed to load library ‘ldap’ for feature ‘ldap’
    Thu Jun 28 15:44:55 -0700 2012 Puppet (warning): Dynamic lookup of $service_state at /etc/puppet/agent/modules/hdp/manifests/init.pp:55 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Thu Jun 28 15:44:55 -0700 2012 Puppet (warning): Dynamic lookup of $service_state at /etc/puppet/agent/modules/hdp/manifests/init.pp:59 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Thu Jun 28 15:44:55 -0700 2012 Puppet (warning): Dynamic lookup of $pre_installed_pkgs at /etc/puppet/agent/modules/hdp/manifests/init.pp:61 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Thu Jun 28 15:44:56 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-hadoop/manifests/hdfs/directory.pp’ in environment production
    Thu Jun 28 15:44:56 -0700 2012 Puppet (debug): Automatically imported hdp-hadoop::hdfs::directory from hdp-hadoop/hdfs/directory into production

    Thu Jun 28 15:44:58 -0700 2012 Puppet (warning): Dynamic lookup of $service_state at /etc/puppet/agent/modules/hdp-hadoop/manifests/service.pp:40 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Thu Jun 28 15:44:58 -0700 2012 Puppet (warning): Dynamic lookup of $wipeoff_data at /etc/puppet/agent/modules/hdp/manifests/init.pp:135 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Thu Jun 28 15:44:59 -0700 2012 Scope(Hdp::Configfile[/etc/snmp//snmpd.conf]) (debug): Retrieving template hdp/snmpd.conf.erb
    Thu Jun 28 15:44:59 -0700 2012 template[/etc/puppet/agent/modules/hdp/templates/snmpd.conf.erb] (debug): Bound template variables for /etc/puppet/agent/modules/hdp/templates/snmpd.conf.erb in 0.00 seconds
    Thu Jun 28 15:44:59 -0700 2012 template[/etc/puppet/agent/modules/hdp/templates/snmpd.conf.erb] (debug): Interpolated template /etc/puppet/agent/modules/hdp/templates/snmpd.conf.erb in 0.01 seconds
    Thu Jun 28 15:44:59 -0700 2012 Puppet (debug): Failed to load library ‘selinux’ for feature ‘selinux’
    Thu Jun 28 15:44:59 -0700 2012 Puppet::Type::Package::ProviderRpm (debug): Executing ‘/bin/rpm –version’
    Thu Jun 28 15:44:59 -0700 2012 Puppet::Type::Package::ProviderYum (debug): Executing ‘/bin/rpm –version’
    Thu Jun 28 15:44:59 -0700 2012 Puppet::Type::Package::ProviderUrpmi (debug): Executing ‘/bin/rpm -ql rpm’
    Thu Jun 28 15:44:59 -0700 2012 Puppet::Type::Package::ProviderAptrpm (debug): Executing ‘/bin/rpm -ql rpm’
    Thu Jun 28 15:45:01 -0700 2012 Puppet (debug): Adding relationship from Stage[1] to Stage[2] with ‘before’

    #6755

    Sasha J
    Moderator

    Hi Kenneth,

    Please post your os, version, and the repositories in your /etc/yum.repos.d/

    thanks

    Sasha

    #6754

    kenneth ho
    Member

    I am having a similar issue, and when looking into puppet_apply.log I see the following errors:

    Fri Jul 06 15:12:12 -0700 2012 Puppet (debug): Failed to load library ‘ldap’ for feature ‘ldap’
    Fri Jul 06 15:12:12 -0700 2012 Puppet (warning): Dynamic lookup of $service_state at /etc/puppet/agent/modules/hdp/manifests/init.pp:55 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:12 -0700 2012 Puppet (warning): Dynamic lookup of $pre_installed_pkgs at /etc/puppet/agent/modules/hdp/manifests/init.pp:57 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:12 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp/manifests/snappy/package.pp’ in environment production
    Fri Jul 06 15:12:12 -0700 2012 Puppet (debug): Automatically imported hdp::snappy::package from hdp/snappy/package into production
    Fri Jul 06 15:12:13 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-hadoop/manifests/hdfs/directory.pp’ in environment production
    Fri Jul 06 15:12:13 -0700 2012 Puppet (debug): Automatically imported hdp-hadoop::hdfs::directory from hdp-hadoop/hdfs/directory into production
    Fri Jul 06 15:12:14 -0700 2012 Puppet (debug): Failed to load library ‘selinux’ for feature ‘selinux’
    Fri Jul 06 15:12:14 -0700 2012 Puppet (warning): Dynamic lookup of $artifact_dir at /etc/puppet/agent/modules/hdp-hive/manifests/mysql-connector.pp:15 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:14 -0700 2012 Puppet (warning): Dynamic lookup of $artifact_dir at /etc/puppet/agent/modules/hdp-oozie/manifests/download-ext-zip.pp:15 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:14 -0700 2012 Puppet (warning): Dynamic lookup of $artifact_dir at /etc/puppet/agent/modules/hdp-sqoop/manifests/mysql-connector.pp:19 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:14 -0700 2012 Puppet (warning): Dynamic lookup of $artifact_dir at /etc/puppet/agent/modules/hdp-templeton/manifests/download-hive-tar.pp:16 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:14 -0700 2012 Puppet (warning): Dynamic lookup of $artifact_dir at /etc/puppet/agent/modules/hdp-templeton/manifests/download-pig-tar.pp:16 is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
    Fri Jul 06 15:12:14 -0700 2012 Puppet (debug): importing ‘/etc/puppet/agent/modules/hdp-hadoop/manifests/hdfs/copyfromlocal.pp’ in environment production

    #6588

    Sasha J
    Moderator

    HI Bob,

    were you able to get your cluster running? If not, could you post some more details about your cluster?

    Thanks,

    Sasha

    #6508

    Bob Smith
    Member

    I'm running a single CentOS 5.5 VM and trying to install just HDFS and its dependencies (no HBase, Hive, ZooKeeper, etc.). Looking through the puppet_apply.log file, nothing jumps out at me as a failure. There are lots of entries in that file, which might make it hard to post here. Is there anything in particular to look for?

    #6507

    Sasha J
    Moderator

    Could you send the file /var/log/puppet_apply.log?
    Also, could you be a bit more specific about your environment (how many nodes, real hardware/VMs, etc.)?
