Hue Error: Cannot create home directory

This topic contains 5 replies, has 2 voices, and was last updated by sean mikha 4 months, 2 weeks ago.

  • Creator
    Topic
  • #49393

    sean mikha
    Participant

    Hi,
    I just installed HDP 2.0.6 through Ambari repo version 1.4.4.23.
    I am running on CentOS 6, on a 3-node cluster of Large instances on Amazon AWS.

    I installed everything and all the Hadoop services are running. All the service checks pass separately.

    When I log in to HUE for the first time and create a user, I get a ‘cannot create home directory’ error.
    Also, when I have HUE check for configuration issues, it says that webhdfs is not working properly.


  • Author
    Replies
  • #50271

    sean mikha
    Participant

    Ok I figured it out.
    When I checked the HUE server logs I found:
    WebHdfsException: SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hdfs (error 401)

    When I looked at my core-site.xml file for HDFS, I found that I had put Hue (not lowercase hue) as the value in the following proxy user permission property:
    hadoop.proxyuser.hue.hosts
    Check your case sensitivity, people! argh :(
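
    For anyone hitting the same 401: the proxy user section of core-site.xml should reference the exact lowercase hue user. Roughly, the entries look like this (hadoop.proxyuser.hue.groups is normally set alongside hosts, and the wildcard values here are just an example; tighten them for your cluster):

    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>

    Restart HDFS after changing these so the new proxy user settings take effect.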

    #50267

    sean mikha
    Participant

    Hi Dave,
    At the beginning of my installation process, before I install Hadoop through Ambari, I edit the hosts file on every node in the cluster (namenode and datanodes) to include the following entry for each node:

    internal-ip fqdn alias

    I get the internal-ip by executing hostname -i on each node, and the fqdn by executing hostname -f on each node. The alias is h1 for the namenode, and n1, n2 for datanodes 1 and 2 respectively. This is essentially the same as what you sent me in the link.
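
    For example, the entries end up looking something like this (the addresses and FQDNs below are placeholders, not my actual values):

    10.0.0.11   ip-10-0-0-11.ec2.internal   h1
    10.0.0.12   ip-10-0-0-12.ec2.internal   n1
    10.0.0.13   ip-10-0-0-13.ec2.internal   n2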

    #49845

    Dave
    Moderator

    Hi Sean,

    Have you configured your Hosts file correctly on each of your AWS hosts?
    Check here (http://hortonworks.com/kb/ambari-on-ec2/) under “Setup Hosts”

    I have seen this before, where the hosts file was untouched and caused the routing of requests to webhdfs to fail.

    After changing the hosts file(s) you will need to restart the services.

    Thanks

    Dave

    #49471

    sean mikha
    Participant

    Yes, I used Ambari to install the cluster.
    I changed that line in hue.ini to the public DNS address of the namenode (which Hue is also installed on).

    netstat -tupln | grep 50070
    shows java is listening on that port (I assume webhdfs)
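
    One way to sanity-check WebHDFS itself is a quick curl against the REST endpoint from the Hue host (the hostname below is a placeholder; use your own namenode):

    curl -i "http://servername.domain:50070/webhdfs/v1/tmp?op=GETFILESTATUS"

    A 200 response with a JSON FileStatus back means the webhdfs_url endpoint is reachable.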

    #49396

    Dave
    Moderator

    Hi Sean,

    Did you use Ambari to install your cluster?
    What is the following set to in your hue.ini:

    webhdfs_url=http://servername.domain:50070/webhdfs/v1/

    Can you ensure that the server is listening on port 50070?

    Thanks

    Dave
