Home Forums HDFS Possible to set Hortonworks 1.3 on a Single VM?


This topic contains 5 replies, has 4 voices, and was last updated by Seth Lyubich 1 year ago.

  • Creator
    Topic
  • #30299

    yamei gu
    Member

    Hi,

    I am new to Hortonworks and have followed the instructions to install v1.3. However, for experimentation purposes I only have one VM to set it up on, and every time I try to start the datanode with "/usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode", it fails.
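
    For reference, this is roughly the sequence I am running (a sketch, assuming the hdfs service user and /etc/hadoop/conf as $HADOOP_CONF_DIR, as set up by the RPM guide):

        # run as the hdfs service user created by the RPM packages
        su - hdfs
        export HADOOP_CONF_DIR=/etc/hadoop/conf

        # start the datanode daemon
        /usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode

        # check whether the datanode JVM is actually running afterwards
        jps | grep -i datanode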

    Is that a limitation of v1.3? I tried v2.0 alpha before and it worked fine.

    Thanks,
    YG


The topic ‘Possible to set Hortonworks 1.3 on a Single VM?’ is closed to new replies.

  • Author
    Replies
  • #30490

    Seth Lyubich
    Keymaster

    Hi Yamei,

    Thanks for letting us know that you resolved your issue.

    Thanks,
    Seth

    #30451

    yamei gu
    Member

    Thanks Robert.

    I did find the right log file, and it turns out the datanode didn't start because of an old cached directory left over from a previous run; the log reported "Directory is in an inconsistent state".

    I found the solution here: http://stackoverflow.com/questions/11021786/hdfs-data-directory-is-in-an-inconsistent-state-is-incompatible-with-others
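
    For anyone hitting the same error, a sketch of the usual fix (assuming a throwaway test cluster with no data worth keeping; the directory to clear is whatever dfs.data.dir points to in hdfs-site.xml, shown here as /hadoop/hdfs/data purely for illustration):

        # stop any half-started datanode instance
        /usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode

        # clear the stale datanode directory left over from the earlier install
        # (this destroys any HDFS blocks stored there, so only do it on a fresh test setup)
        rm -rf /hadoop/hdfs/data/*

        # start the datanode again so it re-registers with the namenode
        /usr/lib/hadoop/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode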

    #30403

    Robert
    Participant

    Hi yamei,
    The datanode logs, which are normally located in /var/log/hadoop/hdfs/, should have some message as to why it is not starting. Can you provide the bottom of the file where it states the datanode stopped? There are numerous reasons that can prevent the datanode from starting; most of the time the file locations it needs to access do not have the proper permissions. But start with the log and see if it gives any hints.
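
    As a quick way to pull that up (a sketch; the exact file name varies, but on an RPM install it usually follows the hadoop-hdfs-datanode-<hostname>.log pattern):

        # list the HDFS log directory to find the datanode log
        ls /var/log/hadoop/hdfs/

        # show the last lines, where the startup failure is normally recorded
        tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log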

    Hope that helps.
    Regards,
    Robert

    #30395

    yamei gu
    Member

    Thanks Sasha.

    What I am trying to do is follow the RPM install guide to get familiar with the architecture. However, the datanode just doesn't start, and I am wondering how I can debug why, since there is not much information in the log:

    ulimit -a for user hdfs
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 23976
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 10240
    cpu time (seconds, -t) unlimited
    max user processes (-u) 65536
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited

    And on the NameNode web page, I see the following cluster summary:

    Cluster Summary
    5 files and directories, 0 blocks = 5 total. Heap Size is 960 MB / 960 MB (100%)
    Configured Capacity : 0 KB
    DFS Used : 0 KB
    Non DFS Used : 0 KB
    DFS Remaining : 0 KB
    DFS Used% : 100 %
    DFS Remaining% : 0 %
    Live Nodes : 0
    Dead Nodes : 0
    Decommissioning Nodes : 0
    Number of Under-Replicated Blocks : 0

    Can you suggest why the datanode is not starting?

    Thanks,

    #30328

    Sasha J
    Moderator

    Yamei,
    Yes, it is possible, but you should have a good amount of memory for this (at least 4 GB).
    Also, use Ambari to install the cluster; it will make all the needed configuration changes and start all the services for you.
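
    A rough sketch of the Ambari route on a single VM (assuming a RHEL/CentOS box with the Ambari repository already set up as described in the HDP docs):

        # install, configure, and start the Ambari server
        yum install ambari-server
        ambari-server setup
        ambari-server start

        # then complete the single-node cluster install from the Ambari web UI (port 8080)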

    Thank you!
    Sasha
