How to easily understand Hadoop and its nodes?

This topic contains 3 replies, has 4 voices, and was last updated by Robert 1 year, 4 months ago.

  • Creator
    Topic
  • #20800

    eli vani
    Member

    Hi,
    I am doing my project in Hadoop, but I still can’t properly understand what Hadoop is and what its nodes are (NameNode, DataNode, TaskTracker, JobTracker). Could you please give me an easy explanation of these? It would be very helpful for my career. Thank you.

Viewing 3 replies - 1 through 3 (of 3 total)

The topic ‘How to easily understand Hadoop and its nodes?’ is closed to new replies.

  • Author
    Replies
  • #30080

    Robert
    Participant

    Hi Member,
    The JobTracker will assign a task to only one TaskTracker, normally one on the same node as a replica of the block it needs to work on. As for the whole MapReduce process, Hortonworks provides a developer training course that covers it. If you are interested in taking the course, here is a link:

    http://hortonworks.com/hadoop-training/hadoop-training-for-developers/
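
    To make the locality idea concrete, here is a minimal sketch (the file path is just a placeholder) that uses the Hadoop FileSystem API to ask the NameNode which hosts hold the replicas of each block of a file; the JobTracker then picks a TaskTracker on one of those hosts for each task:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockHosts {
        public static void main(String[] args) throws Exception {
            // Connect to HDFS using the configuration on the classpath
            // (core-site.xml / hdfs-site.xml).
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder path, for illustration only.
            Path file = new Path("/data/input.txt");
            FileStatus status = fs.getFileStatus(file);

            // Ask the NameNode which DataNodes hold each block of the file.
            BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

            // Each block lists every host holding a replica; a task is
            // scheduled on a TaskTracker on ONE of these hosts if possible.
            for (BlockLocation block : blocks) {
                System.out.println("Block at offset " + block.getOffset()
                    + " has replicas on: " + String.join(", ", block.getHosts()));
            }
        }
    }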

    Kind Regards,
    Robert

    #30062

    Good Boy
    Member

    Hi Ted,

    This is a really nice and easy explanation. Thanks a lot.
    I have a few doubts; could you please clear them up?
    You mentioned that “each block of data is replicated a number of times (usually 3) and these replicas are placed on different DataNodes” and that “the job tracker will find out the task trackers closest to the data blocks that we need to process our job”. Will the JobTracker send a task to all the TaskTrackers closest to all the replicas of a data block (usually 3, as you mentioned earlier), or to just one of them?

    Could you please also explain how the MapReduce process works? That would be a great help.

    Thanks in advance

    #20829

    tedr
    Member

    Hi Eli,

    Thanks for your question.

    The "easy explanation" follows:

    These are all daemon processes.

    NameNode – Stores the information about which DataNodes each piece of data is kept on. The data is broken into blocks, each block is replicated a number of times, and the copies are stored on various DataNodes throughout the cluster. There is usually only one NameNode in a cluster.

    DataNodes – the nodes on which the data is actually stored. As mentioned above, the data is split into blocks, each block is replicated a number of times (usually 3), and these replicas are placed on different DataNodes to provide a degree of fault tolerance. There can be any number of DataNodes in a cluster, from one to thousands; the number is limited only by the RAM available on the NameNode, since the NameNode keeps its working data in RAM. The DataNodes store the actual data on their hard drives.
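
    As a small illustration of the replication just described, here is a sketch (the path, buffer size, and block size are example values; 64 MB was the old default block size) that writes a file into HDFS with an explicit replication factor, after which HDFS spreads the replicas across different DataNodes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteReplicated {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical target path, for illustration only.
            Path file = new Path("/data/example.txt");

            // create(path, overwrite, bufferSize, replication, blockSize):
            // ask HDFS to keep 3 replicas of each 64 MB block.
            FSDataOutputStream out =
                fs.create(file, true, 4096, (short) 3, 64L * 1024 * 1024);
            out.writeUTF("hello hdfs");
            out.close();

            // The replication factor of an existing file can be changed later:
            fs.setReplication(file, (short) 2);
        }
    }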

    JobTracker – When a job for analyzing this data is submitted, this daemon talks to the NameNode to figure out which nodes the data is stored on, so that it can split the job into tasks and send each task to the TaskTracker closest to the actual data being processed. There is also usually only one JobTracker per cluster.

    TaskTracker – these daemons are sent individual tasks to perform by the JobTracker. They perform each task and send the results back to the JobTracker. There can, and should, be as many TaskTrackers in a cluster as there are DataNodes; a typical Hadoop cluster runs a TaskTracker on each DataNode, so that under ideal circumstances the data is processed by the node on which it resides, reducing the need to move data around the cluster.
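
    To show what such a job actually looks like, here is the classic word-count example, sketched against the Hadoop 1.x (JobTracker-era) org.apache.hadoop.mapred API. The JobTracker splits the input into map tasks (roughly one per block) and runs each on a TaskTracker near its block; the reduce tasks then combine the map outputs:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {

        // Map: runs on the TaskTracker closest to its input block and
        // emits (word, 1) for every word in its split.
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                StringTokenizer tokens = new StringTokenizer(value.toString());
                while (tokens.hasMoreTokens()) {
                    word.set(tokens.nextToken());
                    output.collect(word, ONE);
                }
            }
        }

        // Reduce: receives all the counts for one word and sums them.
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output,
                               Reporter reporter) throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);

            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);

            // Input and output paths are supplied on the command line.
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Submit to the JobTracker, which splits the job into tasks
            // and hands them to TaskTrackers near the data.
            JobClient.runJob(conf);
        }
    }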

    I hope this helps your understanding of Hadoop.
    Ted.
