HDP on Linux – Installation: Starting the services fails on HDP2

This topic contains 9 replies, has 2 voices, and was last updated by Dave 5 months, 3 weeks ago.

  • Creator
    Topic
  • #41979

    I installed HDP2 through Ambari on RHEL 5.8. The install completed with warnings while starting the services.
    Since it completed with warning messages, I clicked Next, then Complete, and went into the Dashboard.
    I tried to start HDFS manually through the web GUI, but I am getting the error below.
    Do I have to set permissions, and if so, for which id?

    notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Exec[delete_pid_before_datanode_start]/returns: executed successfully
    notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: mkdir: `/var/log/hadoop/hdfs': Permission denied
    notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: chown: cannot access `/var/log/hadoop/hdfs': Permission denied
    notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-if21t01ha.out
    notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: /usr/lib/hadoop/sbin/hadoop-daemon.sh: line 151: /var/log/hadoop/hdfs/hadoop-hdfs-datanode-if21t01ha.out: Permission denied

    Can someone please advise?
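
    (For anyone hitting the same errors: they mean the hdfs user cannot create or write to /var/log/hadoop/hdfs. One possible fix, assuming the default hdfs user and hadoop group that Ambari sets up, is something along these lines, run as root on the affected host:)

    mkdir -p /var/log/hadoop/hdfs
    chown -R hdfs:hadoop /var/log/hadoop/hdfs
    chmod 755 /var/log/hadoop/hdfs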



  • Author
    Replies
  • #42231

    Dave
    Moderator

    Hi,

    You need to run this command as hdfs:

    hadoop namenode -format
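
    For example, run from root (assuming the default hdfs service account created by Ambari), that would be along the lines of:

    su -l hdfs -c "hadoop namenode -format"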

    Thanks

    Dave

    #42197

    Dave,
    How do I format the namenode? It's not starting up.

    2013-10-28 21:48:07,837 FATAL namenode.NameNode (NameNode.java:main(1325)) - Exception in namenode join
    java.io.IOException: NameNode is not formatted.

    2013-10-28 21:48:07,378 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/openv/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 21:48:07,378 WARN common.Util (Util.java:stringAsURI(56)) - Path /var/log/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 21:48:07,378 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/teamquest/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 21:48:07,378 WARN common.Util (Util.java:stringAsURI(56)) - Path /tech/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 21:48:07,379 WARN common.Util (Util.java:stringAsURI(56)) - Path /tech/sys/hadoop/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 21:48:07,379 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/openv/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
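
    (As an aside, the "should be specified as a URI" warnings only mean that the configured namenode directories are given as plain paths; they are not what is stopping the namenode, the "NameNode is not formatted" exception is. Written as URIs in hdfs-site.xml they would look roughly like this, assuming these paths belong to dfs.namenode.name.dir:)

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/openv/hadoop/hdfs/namenode,file:///var/log/hadoop/hdfs/namenode</value>
        <!-- and likewise for the remaining namenode directories -->
    </property>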

    #42078

    I tried executing su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode" via sudo:
    sudo su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

    But it did not start the namenode.
    After the install, this is what I see on the web console under namenode:

    err: Could not apply complete catalog: Found 1 dependency cycle:
    (Hdp-hadoop::Service[namenode] => Hdp::Directory_recursive_create[/var/log/hadoop/hdfs] => Hdp::Directory[/var/log/hadoop/hdfs] => File[/var/log/hadoop/hdfs] => File[/var/log/hadoop/hdfs/namenode] => Hdp::Directory[/var/log/hadoop/hdfs/namenode] => Hdp::Directory_recursive_create[/var/log/hadoop/hdfs/namenode] => Hdp-hadoop::Namenode::Create_name_dirs[/opt/openv/hadoop/hdfs/namenode,/var/log/hadoop/hdfs/namenode,/opt/teamquest/hadoop/hdfs/namenode,/tech/hadoop/hdfs/namenode,/tech/sys/hadoop/hadoop/hdfs/namenode] => Hdp-hadoop::Service[namenode])
    Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz
    notice: Finished catalog run in 0.41 seconds

    It does not say it failed, but it did not create the namenode dirs under /opt/openv/hadoop/hdfs/namenode, /var/log/hadoop/hdfs/namenode, /opt/teamquest/hadoop/hdfs/namenode, /tech/hadoop/hdfs/namenode, /tech/sys/hadoop/hadoop/hdfs/namenode.

    Is this a major issue? Do I need to install everything again? Why is the namenode dir not created, or is it not created because I did not start the namenode?
    What does this mean?
    /opt/openv/hadoop/hdfs/namenode,/var/log/hadoop/hdfs/namenode,/opt/teamquest/hadoop/hdfs/namenode,/tech/hadoop/hdfs/namenode,/tech/sys/hadoop/hadoop/hdfs/namenode

    I am really frustrated and confused. Please advise.

    Thanks
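
    (If those directories simply do not exist yet, one way past the dependency-cycle message is to create them by hand before starting the namenode. Assuming the default hdfs user and hadoop group, that would look something like:)

    mkdir -p /opt/openv/hadoop/hdfs/namenode /var/log/hadoop/hdfs/namenode /opt/teamquest/hadoop/hdfs/namenode /tech/hadoop/hdfs/namenode /tech/sys/hadoop/hadoop/hdfs/namenode
    chown -R hdfs:hadoop /opt/openv/hadoop/hdfs /var/log/hadoop/hdfs /opt/teamquest/hadoop/hdfs /tech/hadoop/hdfs /tech/sys/hadoop/hadoop/hdfs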

    #42063

    Please see my response below as well.
    I tried to start the namenode as
    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

    It's asking for a password for hdfs. What's the default password? I don't recall setting one during installation.
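
    (Note: su prompts for the hdfs password only when run from a non-root account, and Ambari normally leaves the hdfs service account without a password. From a root shell, or via sudo, no password is asked for, e.g.:)

    sudo su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"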

    #42055

    Dave,
    Yes, you are right, the namenode is not started. My install completed with warnings and failed to start due to some permission issues on the /var/log dir.
    So I went into the dashboard and, after fixing the permission issue, I am trying to start HDFS from the web, and that's where it's failing.
    Do I need to change some configurations after the install? If so, what? Can I try to start it from the command line on the host? If so, how?

    ************************************************************/
    2013-10-28 11:31:53,405 INFO datanode.DataNode (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT]
    2013-10-28 11:31:53,534 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/openv/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 11:31:53,535 WARN common.Util (Util.java:stringAsURI(56)) - Path /var/log/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 11:31:53,535 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/teamquest/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 11:31:53,535 WARN common.Util (Util.java:stringAsURI(56)) - Path /tech/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 11:31:53,535 WARN common.Util (Util.java:stringAsURI(56)) - Path /tech/sys/hadoop/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2013-10-28 11:31:53,918 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) - loaded properties from hadoop-metrics2.properties
    2013-10-28 11:31:53,935 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(193)) - Sink ganglia started
    2013-10-28 11:31:53,972 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(344)) - Scheduled snapshot period at 10 second(s).
    2013-10-28 11:31:53,973 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - DataNode metrics system started
    2013-10-28 11:31:53,975 INFO datanode.DataNode (DataNode.java:(247)) - File descriptor passing is enabled.
    2013-10-28 11:31:53,975 INFO datanode.DataNode (DataNode.java:(258)) - Configured hostname is if21t01ha.newyorklife.com
    2013-10-28 11:31:53,994 INFO datanode.DataNode (DataNode.java:initDataXceiver(482)) - Opened streaming server at /0.0.0.0:50010
    2013-10-28 11:31:53,997 INFO datanode.DataNode (DataXceiverServe

    2013-10-28 11:31:55,559 INFO ipc.Client (Client.java:handleConnectionFailure(783)) - Retrying connect to server: if21t01ha.newyorklife.com/10.66.31.5:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)

    #42054

    Dave
    Moderator

    Hi,

    On the machine 10.66.31.5 is it listening on port 8020 ? (netstat -anp | grep 8020)
    If not, then your namenode has not started – check the log in /var/log/hadoop/hdfs and see why it is failing.
    It may be that it needs formatting.
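
    For example (the exact log file name includes the hostname):

    tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log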

    Thanks

    Dave

    #42051

    I had the firewall stopped before the install.

    [t15bw5h@if21t01ha ~]$ service iptables status
    Firewall is stopped.
    I am not sure what you mean by the server listening. Also, why is the server going out over the network to connect to itself?
    Is it failing to connect to itself?

    Please advise.

    #42026

    Dave
    Moderator

    Hi,

    I would check your firewall and also ensure the server is listening:

    Call From if21t01ha.newyorklife.com/10.66.31.5 to if21t01ha.newyorklife.com:8020 failed on connection exception: java.net.ConnectException: Connection refused;
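
    For example, on if21t01ha:

    service iptables status
    netstat -anp | grep 8020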

    Thanks

    Dave

    #42005

    I fixed this problem by opening up the log dir for my id, and the DataNode started fine, but the HDFS Check Execute failed on the NameNode server.
    So HDFS says it's down. Here is the log; any insight into this would be really helpful.

    stderr:
    None
    stdout:
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/returns: DEPRECATED: Use of this script to execute hdfs command is deprecated.
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/returns: Instead use the hdfs command for it.
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/returns:
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/returns: safemode: Call From if21t01ha.newyorklife.com/10.66.31.5 to if21t01ha.newyorklife.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    err: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/returns: change from notrun to 0 failed: hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF returned 1 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:480
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::check_safemode]/Hdp::Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF::end]: Dependency Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF] has failures: true
    notice: /Stage[2]/Hdp-hadoop::Hdfs::Service_check/Hdp-hadoop::Exec-hadoop[hdfs::service_check::create_dir]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -mkdir /tmp ; hadoop fs -chmod -R 777 /tmp]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -mkdir /tmp ; hadoop fs -chmod -R 777 /tmp::begin]: Dependency Exec[hadoop --config /etc/hadoop/conf dfsadmin -safemode get | grep OFF] has failures: true
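
    (The same check can be run by hand with the hdfs command that the deprecation notice points to, e.g. as the hdfs user:)

    su -l hdfs -c "hdfs dfsadmin -safemode get"

    As long as nothing is listening on if21t01ha:8020, this will fail with the same ConnectionRefused error, which again points back to the namenode not being up.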
