Installing HDP 1.2

This topic contains 3 replies, has 3 voices, and was last updated by rajeev kaul 1 year, 9 months ago.

  • Creator
    Topic
  • #13687

    sean mikha
    Participant

    Hi,
    Having some trouble installing HDP 1.2 on CentOS 5 and CentOS 6.

    Everything works up until deploying through Ambari; however, after installation I get multiple failures:
    oozie check execute, hive check execute, and webhcat check execute all fail with no log information in stdout or stderr.

    HBase check execute fails as well, and it does include output (posted below; a sketch for re-running the check by hand follows the log).
    (Please note I have installed HDP 1.1.1.16 and 1.0.1.14/15 on the same Linux distro with the exact same node prep, so I'm not sure whether this is something introduced in 1.2 or whether a new prep requirement has been added.)

    notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/File[/tmp/hbaseSmoke.sh]/ensure: defined content as '{md5}a4e08d5388577f1767eb5f8ea8c4a267'
    err: /Stage[2]/Hdp-hbase::Hbase::Service_check/Exec[/tmp/hbaseSmoke.sh]/returns: change from notrun to 0 failed: Command exceeded timeout at /var/lib/ambari-agent/puppet/modules/hdp-hbase/manifests/hbase/service_check.pp:46
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin]: Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    warning: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::begin]: Skipping because of failed dependencies
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]: Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    warning: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]: Skipping because of failed dependencies
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end]: Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    warning: /Stage[2]/Hdp-hbase::Hbase::Service_check/Hdp-hadoop::Exec-hadoop[hbase::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable::end]: Skipping because of failed dependencies
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Anchor[hdp-hbase::hbase::service_check::end]: Dependency Exec[/tmp/hbaseSmoke.sh] has failures: true
    warning: /Stage[2]/Hdp-hbase::Hbase::Service_check/Anchor[hdp-hbase::hbase::service_check::end]: Skipping because of failed dependencies
    notice: Finished catalog run in 314.56 seconds
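    For reference, the two things failing above are the smoke script /tmp/hbaseSmoke.sh (killed by the Puppet timeout at service_check.pp:46) and the HDFS existence test that depends on it. Both can be re-run by hand to see whether the check is genuinely broken or just slow; this is only a sketch, using the commands and paths from the log, and should be run as whatever smoke-test user the cluster uses.

    # Re-run the HDFS existence check from the log (same command Ambari issues)
    hadoop --config /etc/hadoop/conf fs -test -e /apps/hbase/data/usertable
    echo $?   # 0 = path exists, non-zero = it does not

    # Re-run the smoke script itself and time it, to see whether it merely times out
    # (skip this if /tmp/hbaseSmoke.sh has already been cleaned up)
    time sh /tmp/hbaseSmoke.sh

    # A quick independent check that HBase itself is answering
    echo "status" | hbase shell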

  • Author
    Replies
  • #13882

    rajeev kaul
    Participant

    I tried with 3 medium-sized nodes and ran into installation failures. I then switched to a 4-node cluster (1 small node for Ambari alone, and 3 large nodes for HDP). I still ran into issues, but was able to figure out from the logs that the Postgres port 5432 was not open. Once I fixed that, I was able to start all the failing services: Hive, Oozie, the NameNode, the JobTracker, etc. Ambari is a much-improved installation program over HMC; you do not have to reinstall from scratch if you run into issues with a particular service. Kudos to Hortonworks for making this much-needed change.
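    For anyone hitting the same thing, one way to check and open the port on CentOS 5/6 looks like this; a sketch only, assuming the Ambari server host runs the bundled Postgres on its default port 5432, iptables is the firewall in use, and the hostname below is a placeholder.

    # On the Ambari/Postgres host: confirm Postgres is listening on 5432
    netstat -tlnp | grep 5432

    # From an HDP node: confirm the port is reachable (requires telnet to be installed)
    telnet ambari-server.example.com 5432

    # If iptables is blocking it, open the port and persist the rule
    iptables -I INPUT -p tcp --dport 5432 -j ACCEPT
    service iptables save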

    I now have all the services running fine, except that metrics only show up for one of the 3 hosts. The Ganglia and Nagios services seem to be running fine, so I'm not quite sure why metrics are not being reported for 2 of the 3 hosts I have configured.
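    A quick way to narrow this down is to confirm that the Ganglia monitor daemon (gmond) is actually running and reporting on every host; a sketch, assuming stock Ganglia tooling (HDP's Ganglia setup may use non-default ports, so substitute the port configured on your cluster).

    # On each HDP host: is the Ganglia monitor daemon running?
    ps -ef | grep gmond

    # On the Ganglia server host: is the collector (gmetad) running?
    ps -ef | grep gmetad

    # Ask a gmond for its XML report and list the hosts it knows about
    # (8649 is Ganglia's default gmond port)
    nc localhost 8649 | grep 'HOST NAME'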

    #13694

    tedr
    Member

    Hi Sean,

    Yup, the size of the instance makes a big difference. The small instance doesn’t have enough memory or drive space to run HDP very well.
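
    A quick way to confirm this on a node (just a sketch using standard Linux tools):

    # Memory: an m1.small has only 1.7 GB of RAM, which is very tight for HDP services
    free -m

    # Disk space on the volumes HDP will write to
    df -h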

    Thanks,
    Ted.

    #13688

    sean mikha
    Participant

    Well… it looks like I may have solved my own problem. I had a feeling a lot of the issues were around timeouts and the performance of the nodes. I had started out with 2 nodes on m1.small Amazon EC2 instances.

    I changed this to 4 nodes on m1.large instances and was able to install HDP 1.2.0 on CentOS 6.2 in about 15 minutes.

    I used the RightScale image: ami-043f9c6d
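
    For reference, launching an equivalent set of instances from the command line looks roughly like this; a sketch only, assuming the AWS CLI is configured, and the key pair and security group names are placeholders.

    # Launch 4 m1.large instances from the same AMI
    aws ec2 run-instances \
        --image-id ami-043f9c6d \
        --instance-type m1.large \
        --count 4 \
        --key-name my-hdp-key \
        --security-groups hdp-cluster-sg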

    Details:
    The cluster consists of 4 hosts
    Installed and started services successfully on 4 new hosts
    Master services installed
    NameNode installed on ip-10-85-122-86.ec2.internal
    SecondaryNameNode installed on ip-10-116-243-188.ec2.internal
    JobTracker installed on ip-10-116-243-188.ec2.internal
    Nagios Server installed on ip-10-85-122-86.ec2.internal
    Ganglia Server installed on ip-10-85-122-86.ec2.internal
    Hive Metastore installed on ip-10-116-243-188.ec2.internal
    HBase Master installed on ip-10-85-122-86.ec2.internal
    Oozie Server installed on ip-10-116-243-188.ec2.internal
    All services started
    All tests passed
    Install and start completed in 14 minutes and 47 seconds
