HDFS Forum

Possible to set Hortonworks 1.3 on a Single VM?

  • #30299
    yamei gu
    Member

    Hi,

    I am new to Hortonworks and have followed the instructions to install v1.3. However, for experimental purposes I only have one VM to set it up on, and every time I try to start the datanode with "/usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode", it fails.

    Is that a limitation of v1.3? I tried v2.0 alpha before and it worked fine.

    Thanks,
    YG

  • #30328
    Sasha J
    Moderator

    Yamei,
    Yes, it is possible, but you should have a good amount of memory for it (at least 4 GB).
    Also, use Ambari to install the cluster; it will make all the needed configuration changes and start all the services for you.

    Thank you!
    Sasha

    #30395
    yamei gu
    Member

    Thanks Sasha.

    What I am trying to do is follow the RPM install guide to get familiar with the architecture. However, the datanode just doesn't start, and I am wondering how I can debug why it fails, since there is not much information in the log:

    ulimit -a for user hdfs
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 23976
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 10240
    cpu time (seconds, -t) unlimited
    max user processes (-u) 65536
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited

    And on the NameNode web page, I saw the following cluster summary:
    Cluster Summary
    5 files and directories, 0 blocks = 5 total. Heap Size is 960 MB / 960 MB (100%)
    Configured Capacity : 0 KB
    DFS Used : 0 KB
    Non DFS Used : 0 KB
    DFS Remaining : 0 KB
    DFS Used% : 100 %
    DFS Remaining% : 0 %
    Live Nodes : 0
    Dead Nodes : 0
    Decommissioning Nodes : 0
    Number of Under-Replicated Blocks : 0
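
    The same picture shows up from the command line: with no DataNode registered, the dfsadmin report lists zero capacity and zero live nodes (run as the hdfs user):

        hadoop dfsadmin -report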

    Do you have any suggestions as to why the datanode is not starting?

    Thanks,

    #30403
    Robert
    Participant

    Hi Yamei,
    The datanode logs, which are normally located in /var/log/hadoop/hdfs/, should have some message as to why it is not starting. Can you provide the bottom of the file, where it states that the datanode stopped? There are numerous reasons that can prevent the datanode from starting; the majority of the time, the file locations it needs to access do not have the proper permissions. But start with the log and see if it gives any hints.

    Hope that helps.
    Regards,
    Robert

    #30451
    yamei gu
    Member

    Thanks Robert.

    I did find the right log file, and it turns out the datanode didn't start because of an old cache file: "Directory is in an inconsistent state".

    I found the solution here: http://stackoverflow.com/questions/11021786/hdfs-data-directory-is-in-an-inconsistent-state-is-incompatible-with-others
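
    For anyone hitting the same error: the usual cause is a DataNode data directory left over from an earlier NameNode format, so the stored namespace no longer matches. On a fresh single-node install with no data worth keeping, clearing that directory and starting the daemon again is the quickest way out (example path only; use your dfs.data.dir value and run as the hdfs user):

        # DESTRUCTIVE: only do this on an empty, freshly installed cluster
        rm -rf /hadoop/hdfs/data/*
        /usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode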

    #30490
    Seth Lyubich
    Moderator

    Hi Yamei,

    Thanks for letting us know that you resolved your issue.

    Thanks,
    Seth

The topic ‘Possible to set Hortonworks 1.3 on a Single VM?’ is closed to new replies.
