What is the scenario you are facing? Is it a NameNode directory corruption? In the non-HA case for HDP 1.3.x, if it is a NameNode directory corruption, you can try this:
1. If you have more than one NameNode directory copy on your NameNode machine, one of them may still be valid. In that case, remove the corrupted one from the list of NameNode data directories via the dfs.name.dir property in hdfs-site.xml and start the NameNode. It should start normally.
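As a sketch of option 1, dropping a corrupted directory comes down to editing the comma-separated dfs.name.dir value in hdfs-site.xml (the paths here are hypothetical examples, not values from your cluster):

```xml
<!-- hdfs-site.xml: hypothetical paths for illustration.
     Suppose dfs.name.dir was /data/1/nn,/data/2/nn and the copy
     under /data/2/nn is the corrupted one. Remove it from the list,
     leaving only the known-good directory, then start the NameNode. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/nn</value>
</property>
```

Keep a copy of the corrupted directory somewhere safe before deleting anything, in case you need it for later forensics.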
2. If there is no valid copy of the NameNode data directory on the NameNode machine, use the Secondary NameNode's checkpoint to perform recovery.
The appropriate way to recover from the checkpoint copies is to do the following:
1. Create the dfs.name.dir directory on your new NameNode (it must be empty).
2. Make sure fs.checkpoint.dir points at your last known good checkpoint copy.
3. Start the NameNode with the -importCheckpoint option.
4. The NameNode will verify that the files in fs.checkpoint.dir are consistent and will create a new copy of the FsImage and EditLog in dfs.name.dir.
Your NameNode should start functioning again and will exit safe mode once the appropriate number of blocks has been reported. The NameNode will not alter the files in fs.checkpoint.dir.
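The checkpoint-recovery steps above can be sketched as a command sequence. This is a non-authoritative outline for Hadoop 1.x; the paths and the Secondary NameNode hostname are hypothetical, and you should substitute your own values from hdfs-site.xml:

```shell
# Hypothetical paths: /data/1/nn = dfs.name.dir, /data/checkpoint = fs.checkpoint.dir,
# snn-host = the Secondary NameNode machine. Adjust to your cluster.

# 1. Create an empty dfs.name.dir on the (new) NameNode host.
mkdir -p /data/1/nn

# 2. Make sure fs.checkpoint.dir holds the last known good checkpoint;
#    copy it from the Secondary NameNode if it lives elsewhere, e.g.:
# scp -r snn-host:/data/checkpoint /data/

# 3. Start the NameNode, importing the checkpoint (runs in the foreground
#    in Hadoop 1.x; stop it and start normally once the import succeeds).
hadoop namenode -importCheckpoint

# 4. From another terminal, watch safe mode until enough blocks are reported.
hadoop dfsadmin -safemode get
```

Note that -importCheckpoint refuses to run if dfs.name.dir is not empty, which protects any surviving metadata from being overwritten.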