Accidental start of HDFS using super user (root), now need to fix permissions

This topic contains 2 replies, has 2 voices, and was last updated by  Rupert Bailey 8 months ago.

  • Creator
    Topic
  • #46220

    Rupert Bailey
    Participant

    Guys, I made a mistake troubleshooting my system and executed
    /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode
    /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode

    as the root user. I know this is dumb, I mucked up the "sudo su" command…
    Anyway, now running this command:
    /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode
    calling the command:
    nohup nice -n 0 "/usr/lib/hadoop/libexec/.."/bin/hadoop --config /etc/hadoop/conf namenode
    calling the java class:
    org.apache.hadoop.hdfs.server.namenode.NameNode

    causes the NameNode metadata files to be created owned by the root user:
    398609 4 drwxr-xr-x 2 root root 4096 Dec 29 19:11 /hadoop/hdfs/namenode/current
    398658 4 -rw-r--r-- 1 root root 101 Dec 29 19:11 /hadoop/hdfs/namenode/current/VERSION
    398652 4 -rw-r--r-- 1 root root 8 Dec 29 19:11 /hadoop/hdfs/namenode/current/fstime
    398648 36 -rw-r--r-- 1 root root 34320 Dec 29 19:11 /hadoop/hdfs/namenode/current/fsimage
    398649 4 -rw-r--r-- 1 root root 4 Dec 29 19:11 /hadoop/hdfs/namenode/current/edits
    397450 4 drwxr-xr-x 2 root root 4096 Dec 29 18:01 /hadoop/hdfs/namesecondary/current
    399238 4 -rw-r--r-- 1 root root 101 Dec 29 18:01 /hadoop/hdfs/namesecondary/current/VERSION
    399237 4 -rw-r--r-- 1 root root 8 Dec 29 18:01 /hadoop/hdfs/namesecondary/current/fstime
    399230 36 -rw-r--r-- 1 root root 34320 Dec 29 18:01 /hadoop/hdfs/namesecondary/current/fsimage
    399236 4 -rw-r--r-- 1 root
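
    For reference, everything under those directories that is still owned by root can be listed with a quick find (paths as per my layout above):
    # enumerate files under the HDFS state directories still owned by root
    find /hadoop/hdfs -user root -ls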

    The commands are definitely being run as the "hdfs" user, and even the submitted Java arguments refer to the hdfs user:
    -Dhadoop.log.dir=/var/log/hadoop/hdfs
    -Dhadoop.log.file=hadoop-hdfs-namenode-master7.localdomain.log
    -Dhadoop.home.dir=/usr/lib/hadoop/libexec/..
    -Dhadoop.id.str=hdfs

    The mixed ownership causes permission conflicts. Rather than a complete re-install, is there a simple tweak to fix this?

    System: CentOS 6.4 with graphical desktop, HDP 1.3.3, single-node instance.

Viewing 2 replies - 1 through 2 (of 2 total)


  • Author
    Replies
  • #46234

    Rupert Bailey
    Participant

    Thanks for replying Dave,
    Unfortunately the files get clobbered back to root:root ownership afterwards by something else on the system. Even after they are adjusted with the chown commands, they get re-owned by root and the problem recurs.
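
    If lsof happens to be installed, whatever still holds the metadata files open as root can be spotted directly (file path from the listing in my first post):
    # show which process, and which user, still has the edits file open
    lsof /hadoop/hdfs/namenode/current/edits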

    Some further information: this VM was built on a Linux host, converted to an OVA file, and then moved to a Windows host on a different subnet, which meant changing the eth0 address value. That seems to indicate there are other problems, because I re-installed the VM and tried to start it, and I know I started it correctly through Ambari. So it can't just be that I started it as root.

    I will need to close this question, as I think the problem lies elsewhere and we might end up chasing the wrong path. I'll try to return with the answer later, but it seems to be a problem with moving VMs instead. HDFS was actually working, but Ambari was indicating that it wasn't, failing at starting the Secondary NameNode. Not sure if the SNN references alternative address methods (IPv6?).
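
    For anyone hitting this after moving a VM between subnets, a name-resolution sanity check is a cheap first step (master7.localdomain is the hostname taken from my log file name above):
    # confirm the FQDN and that it resolves to the address Ambari expects
    hostname -f
    getent hosts master7.localdomain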

    Thanks again for your assistance Dave.

    #46230

    Dave
    Moderator

    Hi Rupert,

    You can run:

    chown -R hdfs:hadoop /hadoop/hdfs/namenode
    chown -R hdfs:hadoop /hadoop/hdfs/namesecondary
    chown -R hdfs:hadoop /hadoop/hdfs/data   # if you have a datanode installed
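
    One caveat: if the daemons that were started as root are still running, they will keep re-creating root-owned files, so stop them first with the same script that started them:
    # stop the root-started daemons before re-running the chown commands
    /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop namenode
    /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop secondarynamenode
    Then run the chown commands and restart the services as normal (e.g. through Ambari).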

    Let me know how you get on,

    Thanks

    Dave
