
Sqoop Forum


  • #42774
    Nick Martin

    When I run this:
sqoop import --hive-import --hive-table SCHEMA.TABLE --target-dir SCHEMA.TABLE --query "SELECT COLUMN_ID FROM (SELECT * FROM SCHEMA.TABLE ORDER BY COLUMN_ID ASC) WHERE ROWNUM<=10 AND $CONDITIONS" --connect --username user --password pw --hive-drop-import-delims -m 1 --hive-overwrite

    I get a Hive table with ten rows containing the numbers 1 through 10 (Double datatype in Hive). Exactly what I expect.

    When I run this:
    sqoop export -m 8 --table SCHEMA.TABLE --export-dir /apps/hive/warehouse/schema.db/table --connect --username user --password pw --verbose

    The Oracle source and target column is NUMBER.

    I get this (from the TaskTracker log):
    2013-11-04 09:30:50,427 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
    2013-11-04 09:30:50,736 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
    2013-11-04 09:30:50,739 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6a0da90c
    2013-11-04 09:30:50,871 INFO org.apache.hadoop.mapred.MapTask: Processing split: Paths:/apps/hive/warehouse/schema.db/table_info/part-m-00000:0+2
    2013-11-04 09:30:53,472 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader: Loaded native gpl library
    2013-11-04 09:30:53,473 INFO com.hadoop.compression.lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
    2013-11-04 09:30:53,476 WARN Snappy native library is available
    2013-11-04 09:30:53,476 INFO Snappy native library loaded
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception:
    at java.util.AbstractList$
    at TABLE_INFO.__loadFromFields(
    at TABLE_INFO.parse(
    at org.apache.hadoop.mapred.MapTask.runNewMapper(
    at org.apache.hadoop.mapred.Child$
    at Method)
    at org.apache.hadoop.mapred.Child.main(
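
    For what it's worth, a TextExportMapper exception raised from __loadFromFields during an export of files in the Hive warehouse is often a delimiter mismatch: Hive writes \001 (Ctrl-A) field-delimited text by default, while Sqoop's generated parser expects commas unless told otherwise. A hedged sketch of the export with the input delimiters spelled out (Sqoop 1.4.x flags; the connect string is left redacted, as in the thread):

    ```shell
    # Illustrative only -- same export as above, but telling Sqoop
    # about Hive's default \001 field delimiter and \n line delimiter.
    sqoop export -m 8 \
      --table SCHEMA.TABLE \
      --export-dir /apps/hive/warehouse/schema.db/table \
      --input-fields-terminated-by '\001' \
      --input-lines-terminated-by '\n' \
      --connect --username user --password pw --verbose
    ```

    If the table was created with explicit ROW FORMAT delimiters, substitute those instead.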

  • Author
  • #42778
    Nick Martin

    Should've mentioned: we're on Sqoop 1.4.3.

    Koelli Mungee

    Hi Nick,

    Is this issue specific to this table/data? Have you tried with different tables?


    Carter Shanklin

    Hi Nick,

    Is that unexpected? I see that ANSI DOUBLE PRECISION maps to NUMBER in Oracle's datatypes. Admittedly I'm not a big Oracle expert. What did you expect the type to be?

    Nick Martin

    Sorry for the wild goose chase. Somebody modified some DDL on the Oracle side and didn't update the docs.

    Thanks folks!

The topic ‘Oracle/Sqoop’ is closed to new replies.
