HDP Ambari dashboard showing wrong information



This topic contains 3 replies, has 3 voices, and was last updated by Rupert Bailey 1 year, 1 month ago.

  • Creator
  • #23633

    pavan tiwari

    Hi,

    I have installed HDP on an Amazon machine, and the Hadoop services run fine when started manually (./start-all.sh).

    But the dashboard shows that the NameNode is not running.

    When I start it from the dashboard I get the following error:

    /Stage[2]/Hdp-hadoop::Namenode/Hdp-hadoop::Namenode::Create_app_directories[create_app_directories]/Hdp-hadoop::Hdfs::Directory[/apps/hbase/data]/Hdp-hadoop::Exec-hadoop[fs -chown hbase /apps/hbase/data]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -chown hbase /apps/hbase/data]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -chown hbase /apps/hbase/data::end]: Skipping because of failed dependencies
    notice: /Stage[2]/Hdp-hadoop::Namenode/Hdp-hadoop::Namenode::Create_app_directories[create_app_directories]/Hdp-hadoop::Hdfs::Directory[/tmp]/Hdp-hadoop::Exec-hadoop[fs -chown hdfs /tmp]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -chown hdfs /tmp]/Exec[hadoop --config /etc/hadoop/conf fs -chown hdfs /tmp]/returns: executed successfully



  • Author
  • #48337

    Rupert Bailey

    Thanks tedr, it seems start-all.sh is legacy and must no longer be used at all. Running this script as root also changes the ownership of critical files, which prevents Ambari from ever starting certain services again. After having a “complete” install I executed start-all.sh, and several services were rendered inoperable in Ambari: namely HBase, Hive and HDFS, and I think MapReduce was broken as well.

    After comparing ownership before and after running the script, I managed to get everything but HDFS to present nicely in Ambari. HDFS starts, but very quickly its status display drops to “down”, even though it IS actually running.

    chown -R hdfs:hadoop /hadoop/hdfs/* /var/log/hadoop/hdfs
    chown -R mapred:hadoop /hadoop/mapred /hadoop/mapred/taskTracker/ambari-qa /hadoop/mapred/userlogs/* /hadoop/mapred/taskTracker/distcache /hadoop/mapred/ttprivate
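
    The ownership compare described above can be sketched as follows. This is an illustrative sketch only: the demo directory and file names are placeholders, not the real /hadoop and /var/log/hadoop trees.

```shell
# Illustrative sketch of a before/after ownership compare (demo paths,
# not the actual HDP layout).
snapshot_owners() {
  # record "owner:group path" for every entry under the given tree
  find "$1" -exec stat -c '%U:%G %n' {} \; | sort
}
demo=/tmp/hdp-owners-demo
mkdir -p "$demo"
touch "$demo/namenode.log"
snapshot_owners "$demo" > /tmp/owners-before.txt
# ... run the suspect script (start-all.sh) here ...
snapshot_owners "$demo" > /tmp/owners-after.txt
# any line in the diff is a file whose owner or group changed
diff /tmp/owners-before.txt /tmp/owners-after.txt && echo "no ownership changes"
```

    Any path that shows up in the diff is a candidate for a chown back to its service user, as in the commands above.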

    These are other things I’ve tried, but they didn’t help present HDFS as live in Ambari:
    rm -r /var/log/hadoop/root /tmp/Jetty_* /tmp/hsperfdata*
    rm -r /var/run/hadoop/root
    chgrp -R hadoop /proc
    chmod -R a+w /proc
    chmod g+w,o+w /var/run/ganglia/hdp/*
    chmod -R g+w,o+w /var/run/ganglia/hdp/*

    Is there anything else I could do to get HDFS to present in Ambari? Otherwise it seems a fresh install is required, which surely can’t be right?



    tedr

    Hi Pavan,

    Thanks for using Hortonworks Data Platform.

    Can you verify that the NameNode is indeed running by executing the following command?


    If the NameNode is running, you should see ‘NameNode’ listed.
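
    The command itself did not survive in the post above; ‘jps’ (the JDK tool that lists JVM process names such as ‘NameNode’) is the usual choice, but treat that as an assumption here. A ps-based check that needs no JDK tools:

```shell
# Assumption: the elided command was likely `jps`; this ps-based check is an
# alternative that works without the JDK tools.
# is_running NAME - succeed if a process matching NAME appears in ps output.
is_running() {
  # wrap the first character in [] so grep skips its own entry in ps output
  pat=$(printf '%s' "$1" | sed 's/^./[&]/')
  ps -ef | grep -q "$pat"
}
is_running NameNode && echo "NameNode is running" || echo "NameNode is NOT running"
```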

    On another note: it is best to avoid using the ‘start-all.sh’ script to start the Hadoop services on an HDP installation, as it will start the services as the ‘root’ user and change several directories to be owned by root. The best way to start the Hadoop services manually is with the ‘hadoop-daemon.sh’ script, like this:
    su - hadoop-user -c '/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start'
    where ‘hadoop-user’ is either ‘hdfs’ or ‘mapred’, depending on which of the Hadoop daemons you are launching: the namenode and datanodes use hdfs; the jobtracker and tasktrackers use mapred.
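
    The daemon-to-user mapping above can be written down as a tiny lookup helper; the mapping itself comes from the paragraph above, while the function is purely illustrative:

```shell
# daemon_user DAEMON - print the service user that should launch DAEMON
# (mapping per the post: namenode/datanode -> hdfs, jobtracker/tasktracker -> mapred)
daemon_user() {
  case "$1" in
    namenode|datanode)      echo hdfs ;;
    jobtracker|tasktracker) echo mapred ;;
    *) echo "unknown daemon: $1" >&2; return 1 ;;
  esac
}
# filling in the manual start line from the post, e.g. for the namenode:
# su - "$(daemon_user namenode)" -c '/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'
daemon_user namenode   # hdfs
```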

    But overall, on a system installed using Ambari it is ultimately best to start all of the services through the Ambari UI, as that will use the proper user for each service. At this point, since you did use ‘start-all.sh’, you will need to find the directories that were changed to the incorrect owner and chown them back to the correct owner.
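
    Finding what start-all.sh re-owned can be done with find. This sketch uses a demo directory and the current user so it runs anywhere; on a real node you would search /hadoop and /var/log/hadoop for files owned by root and chown them back to hdfs or mapred:

```shell
# Demo: list files under a tree owned by a given user. On a real node the
# tree would be /hadoop or /var/log/hadoop and the user would be root.
tree=/tmp/hdp-owner-demo
mkdir -p "$tree"
touch "$tree/dfs-edits.log"
find "$tree" -user "$(id -un)" -print   # on a cluster: find /hadoop -user root -print
# then, on the real system, restore ownership, e.g.:
# chown -R hdfs:hadoop /hadoop/hdfs /var/log/hadoop/hdfs
# chown -R mapred:hadoop /hadoop/mapred
```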



    pavan tiwari

    How can I fix this problem?
