
The legacy Hortonworks Forum is now closed. A read-only version of the former site remains available; it will be taken offline on January 31, 2016.

Hue Forum

Error: Cannot create home directory

  • #49393
    sean mikha

    I just installed HDP 2.0.6 through the Ambari repo.
    I am running CentOS 6 on a three-node cluster of large instances on Amazon AWS.

    I installed everything and all the Hadoop services are running. All the service checks pass separately.

    When I log in to Hue for the first time and create a user, I get a ‘cannot create home directory’ error.
    Also, when I have Hue check for configuration issues, it reports that WebHDFS is not working properly.

  • #49396

    Hi Sean,

    Did you use Ambari to install your cluster?
    What is the following set to in your hue.ini:


    Can you ensure that the server is listening on port 50070?
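
    The configuration snippet appears to have been lost from the archived post; given the reference to port 50070, it most likely pointed at Hue's WebHDFS URL setting. A sketch of the relevant hue.ini section, assuming default HDP ports (the hostname is a placeholder):

    ```ini
    # hue.ini -- [hadoop] > [[hdfs_clusters]] > [[[default]]] section
    # (illustrative values; replace namenode-host with your NameNode's address)
    [hadoop]
      [[hdfs_clusters]]
        [[[default]]]
          # WebHDFS endpoint Hue uses to talk to HDFS
          webhdfs_url=http://namenode-host:50070/webhdfs/v1/
    ```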



    sean mikha

    Yes, I used Ambari to install the cluster.
    I changed that line in hue.ini to the public DNS address of the NameNode (which Hue is also installed on).

    netstat -tupln | grep 50070
    shows that a Java process is listening on that port (I assume WebHDFS).


    Hi Sean,

    Have you configured the hosts file correctly on each of your AWS hosts?
    Check here (under “Setup Hosts”).

    I have seen this before, where an untouched hosts file caused the routing of requests to WebHDFS to fail.

    After changing the hosts file(s) you will need to restart the services.



    sean mikha

    Hi Dave,
    At the beginning of my installation process, before installing Hadoop through Ambari, I edit the hosts file on each NameNode and DataNode in the cluster to include the following line for every node in the cluster:

    internal-ip fqdn alias

    I get the internal IP by executing hostname -i on each node, and the FQDN by executing hostname -f on each node. The aliases are h1 for the NameNode and n1, n2 for DataNodes 1 and 2 respectively. This is essentially the same as what you sent me in the link.
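
    Assuming the naming scheme above, the resulting /etc/hosts on every node would look something like this (the IPs and FQDNs are illustrative placeholders, not Sean's actual values):

    ```text
    # /etc/hosts -- identical on every node in the cluster
    # internal-ip   fqdn                          alias
    10.0.0.11       ip-10-0-0-11.ec2.internal     h1    # NameNode (also runs Hue)
    10.0.0.12       ip-10-0-0-12.ec2.internal     n1    # DataNode 1
    10.0.0.13       ip-10-0-0-13.ec2.internal     n2    # DataNode 2
    ```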

    sean mikha

    Ok I figured it out.
    When I checked the HUE server logs I found:
    WebHdfsException: SecurityException: Failed to obtain user group information: User: hue is not allowed to impersonate hdfs (error 401)

    When I looked at the core-site.xml file for HDFS, I found that I had put Hue (and not lowercase hue) in the following values for the proxy-user permissions. Check your case sensitivity, people! Argh :(
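
    The property names were dropped from the archived post; the proxy-user entries in core-site.xml that this impersonation error usually points at look something like the following (a sketch of the standard Hadoop settings, not Sean's exact file):

    ```xml
    <!-- core-site.xml: allow the hue user to impersonate other users.
         The "hue" embedded in each property name must match the Unix user
         the Hue server runs as, and it is case-sensitive. -->
    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>
    ```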

The forum ‘Hue’ is closed to new topics and replies.
