Name node is in safe mode

This topic contains 4 replies, has 2 voices, and was last updated by Jayashankar VS 10 months ago.

  • Creator
    Topic
  • #27792

    Jayashankar VS
    Participant

    When I try to copy a local file into HDFS (hadoop dfs -copyFromLocal /etc/passwd passwd), the error below is thrown.

    copyFromLocal: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create /user/hdfs/passwd. Name node is in safe mode.

    2013-06-19 22:08:22,843 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create /user/hdfs/passwd. Name node is in safe mode.
    The reported blocks is only 0 but the threshold is 1.0000 and the total blocks 87. Safe mode will be turned off automatically.
    2013-06-19 22:08:22,846 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call create(/user/hdfs/passwd, rwx——, DFSClient_NONMAPREDUCE_87529645_1, false, 3, 134217728) from 192.168.66.133:43307: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create /user/hdfs/passwd. Name node is in safe mode.
    The reported blocks is only 0 but the threshold is 1.0000 and the total blocks 87. Safe mode will be turned off automatically.
    org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create /user/hdfs/passwd. Name node is in safe mode.
    The reported blocks is only 0 but the threshold is 1.0000 and the total blocks 87. Safe mode will be turned off automatically.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1297)
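
    For reference, the NameNode's safe-mode state can be checked from the command line. These are standard Hadoop 1.x dfsadmin sub-commands, nothing specific to this cluster:

        # report whether the NameNode is currently in safe mode
        hadoop dfsadmin -safemode get

        # block until the NameNode leaves safe mode on its own
        hadoop dfsadmin -safemode wait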


  • Author
    Replies
  • #28072

    Jayashankar VS
    Participant

    Hi Ted,
    Sorry it took a while for me to respond to you; thanks so much for your help.

    Thanks,
    Jaya

    #27839

    tedr
    Moderator

    Hi Jaya,

    The NameNode always starts in safe mode, and it usually leaves it automatically once the DataNodes start sending their block reports. On a single-node system, however, the NameNode will often stay in safe mode even after the DataNode is reporting, because the report contains too many under-replicated blocks. The default replication factor is 3, but on a single-node system each block is written only once, so the blocks look under-replicated to the NameNode; once the number of such blocks crosses a certain threshold, the NameNode won't leave safe mode. This is a bug with Ambari that I'll log. In the meantime, you can work around it by reducing the replication factor to 1.
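
    A sketch of that workaround, assuming the same Hadoop 1.x CLI used earlier in this thread: set dfs.replication to 1 in hdfs-site.xml so new files are written with a single replica, then lower the replication target of the files that already exist:

        # retroactively set replication to 1 for everything in HDFS
        # (-R recurses into directories, -w waits for the change to finish)
        hadoop dfs -setrep -R -w 1 /

    Note that the NameNode has to be out of safe mode before setrep can change anything, so on a stuck cluster the manual "-safemode leave" mentioned elsewhere in this thread may be needed first.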

    Thanks,
    Ted.

    #27821

    Jayashankar VS
    Participant

    Hi Ted,
    Thanks for the response; that's exactly what I am trying to find out: why the NameNode went into safe mode. I haven't changed anything. The virtual machine was shut down previously, and the next time I started the NameNode I found it starting in safe mode (org.apache.hadoop.hdfs.StateChange: safe mode). I can force the NameNode to leave safe mode, but that's not what I am looking for.

    Will find the RCA.
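
    For that RCA, a couple of standard places to look (stock Hadoop 1.x commands, nothing specific to this setup):

        # show how many DataNodes are live and what they have reported
        hadoop dfsadmin -report

        # check the block-level health of the filesystem
        hadoop fsck /

    If -report shows no live DataNodes, the "reported blocks is only 0" message above just means the DataNode never checked in after the restart.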

    Thanks,
    Jaya

    #27798

    tedr
    Moderator

    Hi Jayashankar,

    You should try to find out why your NameNode is in safe mode, but if this is a single-node cluster you can turn off safe mode manually by executing the command “hadoop dfsadmin -safemode leave”.
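
    Concretely, with the same dfsadmin tool (Hadoop 1.x syntax):

        # force the NameNode out of safe mode
        hadoop dfsadmin -safemode leave

        # confirm that it actually left
        hadoop dfsadmin -safemode get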

    Thanks,
    Ted.
