Sqoop Forum


  • #42774
    Nick Martin

    When I run this:
sqoop import --hive-import --hive-table SCHEMA.TABLE --target-dir SCHEMA.TABLE --query "SELECT COLUMN_ID FROM (SELECT * FROM SCHEMA.TABLE ORDER BY COLUMN_ID ASC) WHERE ROWNUM<=10 AND $CONDITIONS" --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/SCHEMA --username user --password pw --hive-drop-import-delims -m 1 --hive-overwrite

    I get a Hive table with ten rows containing the numbers 1 through 10 (DOUBLE datatype in Hive). Exactly what I expect.

    When I run this:
    sqoop export -m 8 --table SCHEMA.TABLE --export-dir /apps/hive/warehouse/schema.db/table --connect jdbc:oracle:thin:@xxx.xxx.xxx.com:0000/SCHEMA --username user --password pw --verbose

    The Oracle source and target column type is NUMBER.

    I get this in the TaskTracker (TT) log:
    2013-11-04 09:30:50,427 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
    2013-11-04 09:30:50,736 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
    2013-11-04 09:30:50,739 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6a0da90c
    2013-11-04 09:30:50,871 INFO org.apache.hadoop.mapred.MapTask: Processing split: Paths:/apps/hive/warehouse/schema.db/table_info/part-m-00000:0+2
    2013-11-04 09:30:53,472 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader: Loaded native gpl library
    2013-11-04 09:30:53,473 INFO com.hadoop.compression.lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
    2013-11-04 09:30:53,476 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library is available
    2013-11-04 09:30:53,476 INFO org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library loaded
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
    2013-11-04 09:30:53,490 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception:
    at java.util.AbstractList$Itr.next(AbstractList.java:350)
    at TABLE_INFO.__loadFromFields(TABLE_INFO.java:827)
    at TABLE_INFO.parse(TABLE_INFO.java:776)
    at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
    at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:363)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
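
    For context on the stack trace: an exception thrown from AbstractList$Itr.next inside the generated __loadFromFields typically means a record parsed from the export directory yielded fewer fields than the target table has columns, e.g. a delimiter mismatch or schema drift (the exception class itself is cut off in the log above). A minimal Python sketch of that failure mode; the function and sample values are illustrative, not Sqoop's actual generated code:

    ```python
    def load_from_fields(record, delimiter, expected_columns):
        """Illustrative stand-in for Sqoop's generated __loadFromFields:
        split the record, then pull one field per target-table column
        from an iterator, as the Java code does with Itr.next()."""
        fields = record.rstrip("\n").split(delimiter)
        it = iter(fields)
        # StopIteration here mirrors the NoSuchElementException that
        # AbstractList$Itr.next() throws when fields run out early.
        return [next(it) for _ in range(expected_columns)]

    # Hive warehouse files use Ctrl-A (\x01) as the default field delimiter.
    record = "1\x012.5\x01foo"

    # Correct delimiter: three fields parse cleanly.
    print(load_from_fields(record, "\x01", 3))

    # Wrong delimiter (or a column added on one side only): the whole
    # record looks like a single field, and the iterator runs dry.
    try:
        load_from_fields(record, ",", 3)
    except StopIteration:
        print("parse failed: field count mismatch")
    ```

    In this thread the root cause turned out to be a DDL change on the Oracle side, but the same symptom also appears when exporting Hive text files without telling Sqoop the delimiter (e.g. via --input-fields-terminated-by).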


  • Author
  • #42778
    Nick Martin

    Should’ve mentioned we’re on Sqoop 1.4.3

    Koelli Mungee

    Hi Nick,

    Is this issue specific to this table/data? Have you tried with different tables?


    Carter Shanklin

    Hi Nick,

    Is that unexpected? Per http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements001.htm#i54335 I see that ANSI DOUBLE PRECISION is mapped to NUMBER in Oracle datatypes. I’m not a big Oracle expert admittedly. What did you expect the type to be?

    Nick Martin

    Sorry for the wild goose chase. Somebody made a mod to some DDL on the Oracle side and didn’t update the docs.

    Thanks folks!

The topic ‘Oracle/Sqoop’ is closed to new replies.
