
Sqoop Forum

java.lang.AbstractMethodError: org.netezza.sql.NzPreparedStatament.isClosed()Z

  • #59524
    Carsten Piepel

    I am using HDP for Windows. When attempting to import data from a Netezza appliance using Sqoop, the MapReduce job fails with the Netezza JDBC error shown below:

    2014-08-29 14:24:00,467 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3d989dea
    2014-08-29 14:24:00,807 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: "SITE_ID" >= 1 AND "SITE_ID" < 100413
    2014-08-29 14:24:00,860 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Working on split: "SITE_ID" >= 1 AND "SITE_ID" < 100413
    2014-08-29 14:24:00,923 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Executing query: SELECT "VP_ID", "BR_ID", "ACCOUNT_ID", "SITE_ID", "X", "Y" FROM "BR_SITE_LOCATION_V" AS "BR_SITE_LOCATION_V" WHERE ( "SITE_ID" >= 1 ) AND ( "SITE_ID" < 100413 )
    2014-08-29 14:24:19,104 INFO [Thread-11] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
    2014-08-29 14:24:19,155 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.AbstractMethodError: org.netezza.sql.NzPreparedStatament.isClosed()Z
    at org.apache.sqoop.mapreduce.db.DBRecordReader.close(
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(
    at org.apache.hadoop.mapred.MapTask.closeQuietly(
    at org.apache.hadoop.mapred.MapTask.runNewMapper(
    at org.apache.hadoop.mapred.YarnChild$
    at Method)
    at org.apache.hadoop.mapred.YarnChild.main(

    I use the following import command:

    sqoop import --username map -P --connect jdbc:netezza://nzhost:5480/nzdb --table BR_SITE_LOCATION_V --split-by SITE_ID --target-dir /user/coz323/br-site --verbose

I’ve tried Netezza JDBC driver versions 5.0 and 7.0, but the error is the same with both. Using the --direct option does not make a difference either. I am confident Sqoop is using the correct connection manager, as I see the following lines logged to the console:

    manager.DefaultManagerFactory: Trying with scheme: jdbc:netezza:
    sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.NetezzaManager@3595b750

    Any ideas or suggestions would be highly appreciated. Thanks.
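    For context on the error itself: isClosed() was added to java.sql.Statement in JDBC 4.0 (Java 6). A driver jar compiled against the older interface still loads and runs, but there is no bytecode behind the method, so the first call fails with exactly this AbstractMethodError. A quick way to check a driver class up front is to see whether the method you want to call resolves to a concrete body. The sketch below illustrates the idea with stand-in types (Jdbc4Statement, OldDriverStatement, and NewDriverStatement are illustrative, not real JDBC or Netezza classes):

    ```java
    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;

    public class DriverMethodCheck {
        // True if calling `name` on an instance of `impl` would reach real
        // bytecode; false if only an abstract interface declaration is
        // inherited, which is the setup behind an AbstractMethodError.
        static boolean hasBody(Class<?> impl, String name) throws NoSuchMethodException {
            Method m = impl.getMethod(name);
            return !Modifier.isAbstract(m.getModifiers());
        }

        // Stand-ins for the JDBC types: Jdbc4Statement plays the role of
        // java.sql.Statement with the newer isClosed(); OldDriverStatement
        // models a driver compiled before the method existed.
        interface Jdbc4Statement { boolean isClosed(); }
        abstract static class OldDriverStatement implements Jdbc4Statement { }
        static class NewDriverStatement implements Jdbc4Statement {
            public boolean isClosed() { return false; }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(hasBody(OldDriverStatement.class, "isClosed")); // false
            System.out.println(hasBody(NewDriverStatement.class, "isClosed")); // true
            // With the Netezza jar on the classpath, the real check would be
            // hasBody(org.netezza.sql.NzPreparedStatament.class, "isClosed").
        }
    }
    ```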

  • #59539
    Venkat

    The root cause is described in SQOOP-1279. I think this HDP release does not include the right Sqoop version. Can you get a later version of HDP and try?
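    Upgrading is the right fix here, but for readers who cannot upgrade immediately: the general shape of a defensive fix for this class of failure is to guard the JDBC 4.0 call and fall back to a plain close(). This is a sketch of that pattern only; closeQuietly is an illustrative helper, not Sqoop's actual patch, and the dynamic proxy merely simulates an old driver so the fallback path can be exercised:

    ```java
    import java.lang.reflect.Proxy;
    import java.sql.Statement;

    public class JdbcCloseGuard {
        // Close a statement without trusting the driver to implement the
        // JDBC 4.0 isClosed() method.
        static void closeQuietly(Statement stmt) {
            if (stmt == null) return;
            try {
                try {
                    if (stmt.isClosed()) return;      // JDBC 4.0+ drivers
                } catch (AbstractMethodError ignored) {
                    // Pre-JDBC-4 driver: no bytecode behind isClosed().
                    // Fall through and just close().
                }
                stmt.close();
            } catch (Exception e) {
                // Swallow, as a "close quietly" helper would.
            }
        }

        // Simulate an old driver with a proxy whose isClosed() throws
        // AbstractMethodError, mirroring the NzPreparedStatament failure.
        static boolean demo() {
            final boolean[] closed = {false};
            Statement oldDriverStmt = (Statement) Proxy.newProxyInstance(
                    Statement.class.getClassLoader(),
                    new Class<?>[]{Statement.class},
                    (proxy, method, args) -> {
                        if (method.getName().equals("isClosed"))
                            throw new AbstractMethodError("org.netezza.sql.NzPreparedStatament.isClosed()Z");
                        if (method.getName().equals("close")) { closed[0] = true; }
                        return null;
                    });
            closeQuietly(oldDriverStmt);
            return closed[0];   // true: the fallback close() was reached
        }

        public static void main(String[] args) {
            System.out.println("fallback close() reached: " + demo());
        }
    }
    ```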


    Carsten Piepel

    Thanks, Venkat. I will give that a try and report back.

    Carsten Piepel

    Thanks, that did it: I installed the newer version of Hortonworks Data Platform for Windows, and the Sqoop import from Netezza is working now.

