Import data from HIVE to HBASE does not work (HDP 2.0 for Linux)

This topic contains 2 replies, has 2 voices, and was last updated by  Sergey Sukharev 8 months, 2 weeks ago.

  • #48846

    Sergey Sukharev
    Participant

    Hello all!
    Hortonworks 2.0 (HDP 2.0 for Linux) with Ambari on a single node. I use Hue to run queries. It is not the Sandbox.
    Query to insert data into the table:
    INSERT OVERWRITE TABLE hbase_job SELECT * FROM job;

    select * from job (it is a Hive table) is OK.
    select * from hbase_job (it is an HBase table) is OK.

    The structure of the tables is the same.
    The HBase table:
    CREATE TABLE hbase_job
    (
    key1 int,
    col1 string
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:c1")
    TBLPROPERTIES ("hbase.table.name" = "hbase_job");
    Table "job" was imported from MS SQL. It has two columns: an integer (int) and a string.
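
    As a sanity check (not part of the original post), one can confirm from the HBase shell that the backing table named in "hbase.table.name" exists and has the cf1 column family that "hbase.columns.mapping" refers to:

    ```shell
    # Hedged sketch: verify the HBase-side table that Hive's
    # "hbase.table.name" points at, and its cf1 column family.
    echo "describe 'hbase_job'" | hbase shell
    ```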

    The error in the log in the Hue window (it is the first error in the log):

    INFO exec.Task: Starting Job = job_1392703920316_0011, Tracking URL = http://hadoop.bipartner.ru:8088/proxy/application_1392703920316_0011/
    Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1392703920316_0011
    14/02/18 09:49:26 INFO exec.Task: Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1392703920316_0011
    Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
    14/02/18 09:49:31 INFO exec.Task: Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
    14/02/18 09:49:31 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
    2014-02-18 09:49:31,772 Stage-0 map = 0%, reduce = 0%
    14/02/18 09:49:31 INFO exec.Task: 2014-02-18 09:49:31,772 Stage-0 map = 0%, reduce = 0%
    2014-02-18 09:49:50,738 Stage-0 map = 100%, reduce = 0%
    14/02/18 09:49:50 INFO exec.Task: 2014-02-18 09:49:50,738 Stage-0 map = 100%, reduce = 0%
    Ended Job = job_1392703920316_0011 with errors
    14/02/18 09:49:51 ERROR exec.Task: Ended Job = job_1392703920316_0011 with errors
    14/02/18 09:49:51 INFO impl.YarnClientImpl: Killing application application_1392703920316_0011
    14/02/18 09:49:51 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-0 start=1392745765003 end=1392745791856 duration=26853>
    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    14/02/18 09:49:51 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    14/02/18 09:49:51 INFO ql.Driver: </PERFLOG method=Driver.execute start=1392745765002 end=1392745791856 duration=26854>
    MapReduce Jobs Launched:
    14/02/18 09:49:51 INFO ql.Driver: MapReduce Jobs Launched:
    14/02/18 09:49:51 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
    Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
    14/02/18 09:49:51 INFO ql.Driver: Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
    Total MapReduce CPU Time Spent: 0 msec
    14/02/18 09:49:51 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
    14/02/18 09:49:51 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query
    BeeswaxException(message:Driver returned: 2. Errors: OK


  • #48929

    Sergey Sukharev
    Participant

    Hello Dave!

    yarn logs -applicationID application_1392703920316_0011 >> yarnlogs.log
    It gives me the error "options parsing failed: Missing required option: applicationId".
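
    That parse error is most likely because the option name is case-sensitive: the YARN CLI expects -applicationId, not -applicationID. A corrected invocation would be:

    ```shell
    # The YARN CLI flag is case-sensitive: -applicationId, not -applicationID.
    yarn logs -applicationId application_1392703920316_0011 >> yarnlogs.log
    ```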

    But I found this error in the YARN log:

    2014-02-18 09:49:36,016 FATAL [IPC Server handler 1 on 43653] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1392703920316_0011_m_000000_0 – exited : java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.TableInputFormatBase

    Files in my hive.aux.jars.path (/usr/lib/hive/lib/auxlib):

    guava-12.0.1.jar
    hbase-it-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-testing-util-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-client-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-it-0.96.1.2.0.6.1-101-hadoop2-tests.jar
    hbase-thrift-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-common-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-prefix-tree-0.96.1.2.0.6.1-101-hadoop2.jar
    hive-contrib-0.12.0.2.0.6.1-101.jar
    hbase-common-0.96.1.2.0.6.1-101-hadoop2-tests.jar
    hbase-protocol-0.96.1.2.0.6.1-101-hadoop2.jar
    hive-hbase-handler-0.12.0.2.0.6.1-101.jar
    hbase-examples-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-server-0.96.1.2.0.6.1-101-hadoop2.jar
    htrace-core-2.01.jar
    hbase-hadoop2-compat-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-server-0.96.1.2.0.6.1-101-hadoop2-tests.jar
    zookeeper-3.4.5.2.0.6.0-101.jar
    hbase-hadoop-compat-0.96.1.2.0.6.1-101-hadoop2.jar
    hbase-shell-0.96.1.2.0.6.1-101-hadoop2.jar
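
    A common cause of this ClassNotFoundException (a hedged suggestion, not something confirmed in this thread): hive.aux.jars.path is read as a comma-separated list of jar files rather than a directory, so pointing it at /usr/lib/hive/lib/auxlib may leave the jars off the map-task classpath even though they are on the client. One workaround is to add the required jars explicitly in the Hive session so they ship with the MapReduce job; TableInputFormatBase lives in the hbase-server jar listed above:

    ```shell
    # Hedged sketch: add the HBase/handler jars to the session classpath
    # explicitly so they are distributed with the MapReduce job.
    # Jar names and versions are taken from the auxlib listing above.
    hive -e "
    ADD JAR /usr/lib/hive/lib/auxlib/hive-hbase-handler-0.12.0.2.0.6.1-101.jar;
    ADD JAR /usr/lib/hive/lib/auxlib/hbase-server-0.96.1.2.0.6.1-101-hadoop2.jar;
    ADD JAR /usr/lib/hive/lib/auxlib/hbase-client-0.96.1.2.0.6.1-101-hadoop2.jar;
    ADD JAR /usr/lib/hive/lib/auxlib/hbase-common-0.96.1.2.0.6.1-101-hadoop2.jar;
    ADD JAR /usr/lib/hive/lib/auxlib/hbase-protocol-0.96.1.2.0.6.1-101-hadoop2.jar;
    ADD JAR /usr/lib/hive/lib/auxlib/guava-12.0.1.jar;
    INSERT OVERWRITE TABLE hbase_job SELECT * FROM job;
    "
    ```

    Alternatively, some setups list each jar individually (comma-separated, with file:// URIs) in hive.aux.jars.path instead of naming the directory.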

    Best regards,
    Sergey Sukharev

    #48927

    Dave
    Moderator

    Hi Sergey,

    Can you run the command:

    yarn logs -applicationID application_1392703920316_0011 >> yarnlogs.log

    Can you post any errors you see in the yarnlogs.log file?

    Thanks

    Dave
