Possible to set Hortonworks 1.3 on a Single VM?



This topic contains 5 replies, has 4 voices, and was last updated by  Seth Lyubich 1 year, 8 months ago.

  • Creator
  • #30299

    yamei gu


    I am new to Hortonworks and have followed the instructions to install v1.3. However, for experimental purposes, I only have one VM to set it up on, and every time I try to start the datanode with “/usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode”, it fails.

    Is that a limitation of v1.3? I tried v2.0 alpha before, and it was fine.


Viewing 5 replies - 1 through 5 (of 5 total)

The topic ‘Possible to set Hortonworks 1.3 on a Single VM?’ is closed to new replies.

  • Author
  • #30490

    Seth Lyubich

    Hi Yamei,

    Thanks for letting us know that you resolved your issue.



    yamei gu

    Thanks Robert.

    I did find the right log file, and it turns out the datanode didn’t start because of an old cache file: “Directory is in an inconsistent state”.

    I found the solution here: http://stackoverflow.com/questions/11021786/hdfs-data-directory-is-in-an-inconsistent-state-is-incompatible-with-others
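    The linked answer describes a namespaceID mismatch between the NameNode’s and DataNode’s VERSION files, left behind by a stale data directory. A minimal sketch of the non-destructive fix, using hypothetical temp directories in place of the real storage paths (which are site-specific, e.g. under dfs.name.dir and dfs.data.dir):

    ```shell
    # Simulate the mismatch with stand-in directories (the real ones are the
    # "current" subdirectories of the NameNode and DataNode storage paths).
    NN_DIR=$(mktemp -d)/current; DN_DIR=$(mktemp -d)/current
    mkdir -p "$NN_DIR" "$DN_DIR"
    printf 'namespaceID=123456789\ncTime=0\n' > "$NN_DIR/VERSION"
    printf 'namespaceID=987654321\ncTime=0\n' > "$DN_DIR/VERSION"   # stale value

    # Copy the NameNode's namespaceID into the DataNode's VERSION file.
    # (The destructive alternative is wiping the DataNode data directory
    # and letting the datanode re-register on next start.)
    NN_ID=$(grep '^namespaceID=' "$NN_DIR/VERSION" | cut -d= -f2)
    sed -i "s/^namespaceID=.*/namespaceID=$NN_ID/" "$DN_DIR/VERSION"

    grep '^namespaceID=' "$DN_DIR/VERSION"   # now matches the NameNode
    ```

    Stop the datanode before editing the real VERSION file, and back it up first.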



    Hi yamei,
    The datanode logs, normally located in /var/log/hadoop/hdfs/, should have some message as to why it is not starting. Can you provide the bottom of the file, where it states the datanode stopped? There are numerous reasons a datanode can fail to start; most of the time, the directories it needs to access do not have the proper permissions. Start with the log and see if it gives any hints.

    Hope that helps.
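    A quick way to pull the relevant lines from the end of the log, sketched here against a stand-in log file (the filename and contents are illustrative; a real one is named after the host, e.g. hadoop-hdfs-datanode-&lt;hostname&gt;.log):

    ```shell
    # Stand-in for a datanode log under /var/log/hadoop/hdfs/.
    LOG=$(mktemp)
    cat > "$LOG" <<'EOF'
    2013-07-01 10:00:00 INFO  datanode.DataNode: STARTUP_MSG
    2013-07-01 10:00:01 ERROR datanode.DataNode: java.io.IOException: Incompatible namespaceIDs
    2013-07-01 10:00:01 FATAL datanode.DataNode: Exception in secureMain
    EOF

    # The last ERROR/FATAL lines usually state why the datanode stopped.
    tail -n 50 "$LOG" | grep -E 'FATAL|ERROR'
    ```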


    yamei gu

    Thanks Sasha.

    What I am trying to do is follow the RPM install guide to get familiar with the architecture. However, the datanode just doesn’t start, and I am wondering how I can debug why, since there is not much information in the log:

    ulimit -a for user hdfs
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 23976
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 10240
    cpu time (seconds, -t) unlimited
    max user processes (-u) 65536
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
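    For reference, the limits above look adequate (open files is already at 32768, the value Hadoop install guides typically recommend). Had they been the problem, they would normally be raised per-user in /etc/security/limits.conf; a sketch matching the values already in effect:

    ```
    # /etc/security/limits.conf — typical entries for the hdfs user
    # (values shown match the ulimit output above; adjust as needed)
    hdfs  -  nofile  32768
    hdfs  -  nproc   65536
    ```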

    And in the NameNode webpage, I saw the cluster summary as following:
    Cluster Summary
    5 files and directories, 0 blocks = 5 total. Heap Size is 960 MB / 960 MB (100%)
    Configured Capacity: 0 KB
    DFS Used: 0 KB
    Non DFS Used: 0 KB
    DFS Remaining: 0 KB
    DFS Used%: 100 %
    DFS Remaining%: 0 %
    Live Nodes: 0
    Dead Nodes: 0
    Decommissioning Nodes: 0
    Number of Under-Replicated Blocks: 0

    Would you suggest why the datanode is not starting?



    Sasha J

    Yes, it is possible, but you should have a good amount of memory for this (at least 4 GB).
    Also, use Ambari to install the cluster; it will make all the needed configuration changes and start all the services for you.

    Thank you!
