Services failure in Ambari web UI with HDP2.0

This topic contains 4 replies, has 2 voices, and was last updated by  Robert Molina 10 months ago.

  • #46224

    Dharanikumar Bodla
    Participant

    Hi to all,
    I am new to Ambari/Hadoop. Installation of Ambari HDP 2.0 went well on CentOS 6. In /etc/hosts I have master.hadoop at 192.168.2.120 and slave.hadoop at 192.168.2.121. The ambari-agent is installed on slave.hadoop. My only problem is with the HDFS, MAPREDUCE2, and HBASE services.
    The following services report errors:
    HBASE:
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Anchor[hdp-hbase::hbase::service_check::end]: Dependency Exec[/tmp/hbase-smoke.sh] has failures: true
    notice: Finished catalog run in 301.72 seconds
    MAPREDUCE2:
    notice: /Stage[2]/Hdp-yarn::Mapred2::Service_check/Anchor[hdp-yarn::mapred2::service_check::end]: Dependency Exec[hadoop --config /etc/hadoop/conf fs -rm -r -f /user/ambari-qa/mapredsmokeoutput /user/ambari-qa/mapredsmokeinput] has failures: true
    notice: Finished catalog run in 3.65 seconds
    HDFS:
    notice: /Stage[2]/Hdp-hbase::Hbase::Service_check/Anchor[hdp-hbase::hbase::service_check::end]: Dependency Exec[/tmp/hbase-smoke.sh] has failures: true
    notice: Finished catalog run in 301.72 seconds
    Please find the errors listed above.
    I also have a second question: how do I run a map/reduce program in the YARN environment? A detailed example would be appreciated.

    Thanks & regards,
    Dharani Kumar Bodla

Viewing 4 replies - 1 through 4 (of 4 total)


  • #50926

    Robert Molina
    Moderator

    Hi Dharani Kumar Bodla,
    The error indicates that some of your data mounts may not be accessible on your datanodes. I would log into each box and run df -h to check for space limitations or missing mounts.

    Regards,
    Robert
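
    The check above can be scripted so it is easy to repeat on every datanode. This is a minimal sketch, not an official Ambari tool; the data directory list below is an assumption -- substitute the paths configured in dfs.datanode.data.dir in your hdfs-site.xml.

    ```shell
    #!/bin/sh
    # Check each DataNode data directory for existence and free space.
    # DATA_DIRS is a placeholder -- use the dfs.datanode.data.dir paths
    # from your hdfs-site.xml.
    DATA_DIRS="/hadoop/hdfs/data"
    THRESHOLD=90   # flag any mount more than 90% full

    check_dirs() {
        status=0
        for d in $1; do
            if [ ! -d "$d" ]; then
                echo "MISSING: $d"
                status=1
                continue
            fi
            # df -P column 5 is usage like "42%"; strip the trailing %
            pct=$(df -P "$d" | awk 'NR==2 {sub(/%/,"",$5); print $5}')
            if [ "$pct" -gt "$THRESHOLD" ]; then
                echo "FULL: $d (${pct}% used)"
                status=1
            else
                echo "OK: $d (${pct}% used)"
            fi
        done
        return $status
    }

    check_dirs "$DATA_DIRS" || echo "One or more data directories need attention"
    ```

    A MISSING or FULL line for any directory would line up with the "Data inaccessible" Nagios alert in this thread.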

    #46494

    Dharanikumar Bodla
    Participant

    Hi to all,
    These are the errors shown under Nagios, even though the hosts are up and running:
    h.hadoop DATANODE::DataNode space CRITICAL 01-08-2014 19:25:17 0d 3h 13m 26s 2/2 CRITICAL: Data inaccessible, Status code = 200
    s.hadoop HDFS::Percent DataNodes with space available CRITICAL 01-08-2014 19:27:24 0d 3h 12m 11s 1/1 CRITICAL: total:<2>, affected:<1>
    What is meant by "HDFS::Percent DataNodes with space available CRITICAL: total:<2>, affected:<1>", and how do I resolve it?

    Thanks & regards,
    Dharani Kumar Bodla.

    #46440

    Dharanikumar Bodla
    Participant

    Hi Robert,
    Thanks for the information.
    I have another question: can I run multiple map/reduce jobs at a time from a single slave? If so, how do I do it?
    At present I can run jobs from two terminals, but they execute one after the other; I need the jobs to run simultaneously.

    thanks & regards,
    Dharani Kumar bodla.
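
    On the client side, two jobs can be submitted concurrently from one node by backgrounding each submission with &. This is only a sketch: the JOB1/JOB2 values are stand-ins so it runs anywhere, and in practice each would be a hadoop jar invocation (the examples jar path in the comment is an assumption; adjust it for your install). Whether the jobs actually run at the same time on the cluster then depends on the scheduler having enough capacity for both.

    ```shell
    #!/bin/sh
    # Launch two job clients in parallel from one shell using '&'.
    # JOB1/JOB2 are placeholders -- in practice each would be something like:
    #   hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 4 100
    # (jar path is an assumption; check your HDP layout).
    JOB1=${JOB1:-"sleep 1"}   # stand-in command so the sketch is runnable
    JOB2=${JOB2:-"sleep 1"}

    $JOB1 &    # first client returns control to the shell immediately
    pid1=$!
    $JOB2 &    # second client starts without waiting for the first
    pid2=$!

    wait "$pid1"; rc1=$?
    wait "$pid2"; rc2=$?
    echo "job1 exit=$rc1 job2 exit=$rc2"
    ```

    If both jobs still serialize on the cluster, the queue is likely out of containers or capacity; that is a scheduler configuration question rather than a client-side one.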

    #46365

    Robert Molina
    Moderator

    Hi Dharanikumar,
    The smoke tests that seem to be failing are for HBase and MapReduce only. The HDFS issue you referenced is actually from an HBase smoke test. Can you confirm that the HDFS services are running properly? Before any of the smoke tests for HBase and MapReduce can go through successfully, HDFS has to be up and running. To confirm that HDFS is up, visit the NameNode web UI and check that it is not in safemode and that there are live datanodes and capacity within the cluster.

    Regards,
    Robert
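
    The safemode part of this check can also be done from the command line: hdfs dfsadmin -safemode get prints a line like "Safe mode is OFF". Below is a small sketch that interprets that line; the hdfs_ready helper name is invented here, and the live command invocation is left as a comment because it needs a running cluster.

    ```shell
    #!/bin/sh
    # Interpret the output of 'hdfs dfsadmin -safemode get'.
    # On a live cluster you would capture it first:
    #   state=$(hdfs dfsadmin -safemode get)
    hdfs_ready() {
        case "$1" in
            *"Safe mode is OFF"*) echo "HDFS is out of safemode";   return 0 ;;
            *"Safe mode is ON"*)  echo "HDFS is still in safemode"; return 1 ;;
            *)                    echo "Unrecognized safemode output"; return 2 ;;
        esac
    }

    # Example with a literal string standing in for the command output:
    hdfs_ready "Safe mode is OFF"
    ```

    While HDFS is in safemode (or has no live datanodes), the HBase and MapReduce smoke tests described above will keep failing.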
