Hue Forum

Error: Cannot create home directory

  • #49393
    sean mikha
    Participant

    Hi,
    I just installed HDP 2.0.6 through Ambari (repo version 1.4.4.23).
    I am running CentOS 6 on a 3-node cluster of Large instances on Amazon AWS.

    I installed everything and all the Hadoop services are running. All the service checks pass separately.

    When I log in to Hue for the first time and create a user, I get a ‘cannot create home directory’ error.
    Also, when I have Hue check for configuration issues, it says that WebHDFS is not working properly.


  • #49396
    Dave
    Moderator

    Hi Sean,

    Did you use Ambari to install your cluster?
    What is the following set to in your hue.ini:

    webhdfs_url=http://servername.domain:50070/webhdfs/v1/

    Can you also ensure that the server is listening on port 50070?

    Thanks

    Dave
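
    For reference, webhdfs_url lives in the [hadoop] → [[hdfs_clusters]] → [[[default]]] section of hue.ini; a minimal sketch of that section (the hostname below is a placeholder, substitute your NameNode's FQDN):

    ```ini
    [hadoop]
      [[hdfs_clusters]]
        [[[default]]]
          # HDFS filesystem root and the WebHDFS REST endpoint on the NameNode.
          # "servername.domain" is a placeholder for your NameNode's FQDN.
          fs_defaultfs=hdfs://servername.domain:8020
          webhdfs_url=http://servername.domain:50070/webhdfs/v1/
    ```

    After editing hue.ini, restart the Hue service so the new URL takes effect.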

    #49471
    sean mikha
    Participant

    Yes, I used Ambari to install the cluster.
    I changed that line in hue.ini to use the public DNS address of the NameNode (which Hue is also installed on).

    netstat -tupln | grep 50070
    shows that java is listening on that port (I assume that is WebHDFS).
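
    A listening port only proves a process is bound there; you can go one step further and ask WebHDFS a real question over its REST API. A hedged sketch (NN_HOST is a placeholder defaulting to localhost, and user.name=hue assumes a hue user exists on the cluster):

    ```shell
    # Probe the WebHDFS REST endpoint directly.
    # GETHOMEDIRECTORY needs no path argument, so it is a cheap liveness check.
    NN_HOST="${NN_HOST:-localhost}"
    URL="http://${NN_HOST}:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hue"
    if curl -sf --max-time 3 "$URL"; then
      echo "webhdfs reachable on ${NN_HOST}:50070"
    else
      echo "webhdfs not reachable on ${NN_HOST}:50070"
    fi
    ```

    A healthy endpoint returns a small JSON body such as {"Path":"/user/hue"}; a connection failure or an HTTP error (like the 401 that shows up later in this thread) takes the second branch.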

    #49845
    Dave
    Moderator

    Hi Sean,

    Have you configured your hosts file correctly on each of your AWS hosts?
    Check here (http://hortonworks.com/kb/ambari-on-ec2/) under “Setup Hosts”

    I have seen this before: the hosts file was left untouched, which caused the routing of requests to WebHDFS to fail.

    After changing the hosts file(s) you will need to restart the services.

    Thanks

    Dave

    #50267
    sean mikha
    Participant

    Hi Dave,
    At the beginning of my installation process, before installing Hadoop through Ambari, I edit the hosts file on every name node and data node in the cluster to include the following line for each node in the cluster:

    internal-ip fqdn alias

    I get the internal-ip by executing hostname -i on each node, and the fqdn by executing hostname -f on each node. The alias is h1 for the namenode, and n1 and n2 for datanodes 1 and 2 respectively. This is essentially the same as what you sent me in the link.
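
    Following that format, the hosts file would contain one line per node and be identical on every machine; a sketch (all IPs and FQDNs below are made up for illustration):

    ```
    # /etc/hosts -- same three entries on every node (values are hypothetical)
    10.0.0.11   ip-10-0-0-11.ec2.internal   h1    # namenode (Hue also runs here)
    10.0.0.12   ip-10-0-0-12.ec2.internal   n1    # datanode 1
    10.0.0.13   ip-10-0-0-13.ec2.internal   n2    # datanode 2
    ```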

    #50271
    sean mikha
    Participant

    OK, I figured it out.
    When I checked the Hue server logs I found:
    WebHdfsException: SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hdfs (error 401)

    When I looked at my core-site.xml file for HDFS, I found that I had put Hue (not lowercase hue) in the proxy-user permission properties such as:
    hadoop.proxyuser.hue.hosts
    Check your case sensitivity, people! argh :(
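
    For anyone hitting the same 401: the proxy-user property names in core-site.xml must match the Hue service user exactly (lowercase hue). A sketch of the relevant entries, with wildcard values as a common permissive example rather than a production recommendation:

    ```xml
    <!-- core-site.xml: allow the "hue" user to impersonate other users.
         The property name embeds the exact (case-sensitive) service user. -->
    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value> <!-- or restrict to the Hue server's FQDN -->
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value> <!-- or restrict to specific groups -->
    </property>
    ```

    HDFS must be restarted after changing these values for the new proxy-user settings to be picked up.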
