[hdp 1.3] Can I install it on different drives for different nodes?


This topic contains 0 replies, has 1 voice, and was last updated by  Perry Pang 1 year, 6 months ago.

  • Creator
    Topic
  • #47507

    Perry Pang
    Participant

    Yesterday I built a 4-node HDP cluster on Windows Server 2008 R2 – 1 name node and 3 data nodes. Three of them are installed on drive D, and the remaining data node on drive C. The smoke test passed. However, when running a MapReduce job there is an error in the log file, although the job completed successfully. It looks like the MapReduce task wasn't passed to the data node with HDP installed on drive C, even though it was a live node.
    Please take a look at the log:

    Meta VERSION="1" .
    Job JOBID="job_201401232045_0010" JOBNAME="WordCountMapReduce.jar" USER="itdasadmin" SUBMIT_TIME="1390814611547" JOBCONF="hdfs://WIN-HLMNQ3E3KNO.tencent.com:8020/mapred/staging/itdasadmin/.staging/job_201401232045_0010/job.xml" VIEW_JOB="*" MODIFY_JOB="*" JOB_QUEUE="default" WORKFLOW_ID="" WORKFLOW_NAME="" WORKFLOW_NODE_NAME="" WORKFLOW_ADJACENCIES="" WORKFLOW_TAGS="" .
    Job JOBID="job_201401232045_0010" JOB_PRIORITY="NORMAL" .
    Job JOBID="job_201401232045_0010" LAUNCH_TIME="1390814615726" TOTAL_MAPS="2" TOTAL_REDUCES="1" JOB_STATUS="PREP" .
    Task TASKID="task_201401232045_0010_m_000003" TASK_TYPE="SETUP" START_TIME="1390814615826" SPLITS="" .
    MapAttempt TASK_TYPE="SETUP" TASKID="task_201401232045_0010_m_000003" TASK_ATTEMPT_ID="attempt_201401232045_0010_m_000003_0" START_TIME="1390814617811" TRACKER_NAME="tracker_OA-TEST-TAJIK.tencent.com:127.0.0.1/127.0.0.1:44037" HTTP_PORT="50060" LOCALITY="OFF_SWITCH" AVATAAR="VIRGIN" .
    MapAttempt TASK_TYPE="SETUP" TASKID="task_201401232045_0010_m_000003" TASK_ATTEMPT_ID="attempt_201401232045_0010_m_000003_0" TASK_STATUS="FAILED" FINISH_TIME="1390814622184" HOSTNAME="OA-TEST-TAJIK.tencent.com" ERROR="" .
    Task TASKID="task_201401232045_0010_r_000002" TASK_TYPE="SETUP" START_TIME="1390814620986" SPLITS="" .
    ReduceAttempt TASK_TYPE="SETUP" TASKID="task_201401232045_0010_r_000002" TASK_ATTEMPT_ID="attempt_201401232045_0010_r_000002_0" START_TIME="1390814622389" TRACKER_NAME="tracker_OA-TEST-TAJIK.tencent.com:127.0.0.1/127.0.0.1:44037" HTTP_PORT="50060" LOCALITY="OFF_SWITCH" AVATAAR="VIRGIN" .
    ReduceAttempt TASK_TYPE="SETUP" TASKID="task_201401232045_0010_r_000002" TASK_ATTEMPT_ID="attempt_201401232045_0010_r_000002_0" TASK_STATUS="FAILED" FINISH_TIME="1390814626732" HOSTNAME="OA-TEST-TAJIK.tencent.com" ERROR="java.io.FileNotFoundException: File d:/hdp/data/hdfs/tmp does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:427)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:254)
    at org.apache.hadoop.mapred.TaskRunner.createChildTmpDir(TaskRunner.java:529)
    at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:796)
    at org.apache.hadoop.mapred.Child.main(Child.java:236)
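    Editor's note: the stack trace fails in TaskRunner.createChildTmpDir, which in Hadoop 1.x resolves the task's temp directory from the mapred.child.tmp property. If that property (or a path derived from a cluster-wide setting such as hadoop.tmp.dir) is pushed to every node as an absolute D: path, task attempts scheduled on the node installed on drive C would look for a directory that does not exist there. One possible workaround, sketched under that assumption, is a per-node override in mapred-site.xml on the C-drive node. The property choice and the c:/hdp/... value below are assumptions for illustration, not taken from the poster's actual configuration:

    ```xml
    <!-- Sketch: hypothetical per-node override in mapred-site.xml on the node
         whose HDP install lives on C:. Assumes the failing d:/hdp/... path
         comes from mapred.child.tmp (Hadoop 1.x); the directory named in
         <value> must already exist on that node. -->
    <property>
      <name>mapred.child.tmp</name>
      <value>c:/hdp/data/hdfs/tmp</value>
    </property>
    ```

    The simpler alternative is to install HDP on the same drive letter on all nodes, since the HDP for Windows installer applies one set of paths cluster-wide.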

