Installation Problems – Puppet agent ping failed

This topic contains 1 reply, has 2 voices, and was last updated by Sasha J 2 years, 4 months ago.

  • Creator
    Topic
  • #7990

    john pillinger
    Participant

    Puppet agent ping failed:

    I have spent a couple of days on this installation now without any joy. On fresh builds I have tried to follow the installation instructions, but I keep running into the "Puppet agent ping failed" problem.

    The hosts file is the same across all 4 servers:

    10.96.100.221 st-dev-clust1.internal st-dev-clust1
    10.96.100.222 st-dev-clust2.internal st-dev-clust2
    10.96.100.223 st-dev-clust3.internal st-dev-clust3
    10.96.100.224 st-dev-clust4.internal st-dev-clust4
    127.0.0.1 st-dev-clust1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6

    [root@st-dev-clust1 ~]# hostname
    st-dev-clust1.internal
    [root@st-dev-clust1 ~]# hostname -f
    st-dev-clust1.internal
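
    A quick sanity check worth running on every node before the install (a sketch only; the file path and expected names are taken from the post above, adjust for your own cluster):

    ```shell
    # Check that the FQDN reported by the node actually appears in /etc/hosts,
    # so Puppet certificate names and host resolution agree.
    FQDN=$(hostname -f)
    if grep -q "$FQDN" /etc/hosts; then
        echo "OK: $FQDN is present in /etc/hosts"
    else
        echo "WARNING: $FQDN not found in /etc/hosts"
    fi
    ```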

    Changing the hostname in /etc/sysconfig/network got me past the first problem of the certificates under /puppet/ssl/certs being named incorrectly.

    Any thoughts would be appreciated. I have tried preinstalling the required packages, and I am not hitting a timeout.

    Logs have been uploaded to the ftp address … file renamed to talktalk.out (I also uploaded yesterday, which can be discarded).

    Regards

Viewing 1 replies (of 1 total)


  • Author
    Replies
  • #8039

    Sasha J
    Moderator

    John,
    Now you have another error.
    You did accept the default on the directory selection page, right?
    This is a known problem, posted here in the forums multiple times: HMC detects mount points incorrectly when your system uses an LVM configuration.
    As a result, when you accept the default, HMC attempts to create directories under a device file, which is impossible:
    Wed Aug 08 14:11:01 +0100 2012 /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory_recursive_create[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/returns (err): change from notrun to 0 failed: mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data returned 1 instead of one of [0] at /etc/puppet/agent/modules/hdp/manifests/init.pp:253
    Wed Aug 08 14:11:01 +0100 2012 /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory_recursive_create[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Anchor[hdp::exec::mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data::end] (notice): Dependency Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data] has failures: true
    Wed Aug 08 14:11:01 +0100 2012 /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory_recursive_create[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Anchor[hdp::exec::mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data::end] (warning): Skipping because of failed dependencies
    Wed Aug 08 14:11:01 +0100 2012 /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory_recursive_create[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/File[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data] (notice): Dependency Exec[mkdir -p /dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data] has failures: true
    Wed Aug 08 14:11:01 +0100 2012 /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory_recursive_create[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/Hdp::Directory[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data]/File[/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data] (warning): Skipping because of failed dependencies
    Please do as I told you yesterday to clean up the failed installation and restart it from the beginning.
    Uncheck the checkbox on the directory selection page and put / in the text field.
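
    To see at a glance whether you have hit this bug, you can test whether the configured data directory path falls under /dev (a rough sketch; the paths are the ones from the log above, substitute your own):

    ```shell
    # A path under /dev is a device node, not a mountable directory tree;
    # mkdir -p against it fails exactly as in the Puppet log.
    DATA_DIR="/dev/mapper/VolGroup00-LogVol00/hadoop/hdfs/data"
    case "$DATA_DIR" in
        /dev/*) echo "ERROR: $DATA_DIR is under /dev -- redo the directory selection" ;;
        *)      echo "OK: $DATA_DIR looks like a normal filesystem path" ;;
    esac
    ```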

    Also, a couple more things:
    Your hosts file contains the following (on all 4 nodes):
    10.96.100.221 st-dev-clust1.internal st-dev-clust1
    10.96.100.222 st-dev-clust2.internal st-dev-clust2
    10.96.100.223 st-dev-clust3.internal st-dev-clust3
    10.96.100.224 st-dev-clust4.internal st-dev-clust4
    127.0.0.1 st-dev-clust4 localhost.localdomain localhost

    As you can see, st-dev-clust4 is used in 2 lines. This may be confusing and is, in general, incorrect from the system's point of view. Please remove this name from the localhost line. It should look like this:
    127.0.0.1 localhost.localdomain localhost

    on all 4 nodes.
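
    One way to make that change uniformly (a sketch, demonstrated on a scratch copy so you can verify the result before touching the real file on each node):

    ```shell
    # Rewrite the 127.0.0.1 line so it carries only localhost names.
    # /tmp/hosts.demo stands in for /etc/hosts; sed -i.bak keeps a backup.
    printf '127.0.0.1 st-dev-clust4 localhost.localdomain localhost\n' > /tmp/hosts.demo
    sed -i.bak 's/^127\.0\.0\.1[[:space:]].*/127.0.0.1 localhost.localdomain localhost/' /tmp/hosts.demo
    grep '^127\.0\.0\.1' /tmp/hosts.demo
    # -> 127.0.0.1 localhost.localdomain localhost
    ```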

    And one more thing:
    If you install HBase, you should change the heap size for the RegionServer from the calculated number to 1024 MB (the RegionServer cannot start if the heap size is smaller than 1 GB).
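
    If you end up setting it by hand rather than through the UI, the heap is normally controlled from hbase-env.sh (a fragment only, and an assumption on my part that HMC has not already pinned this value elsewhere):

    ```shell
    # hbase-env.sh fragment: force a 1024 MB heap for the RegionServer.
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmx1024m"
    ```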

    Please do those steps and get back to us with the results.

    Thank you!
    Sasha
