
Hive / HCatalog Forum

using user-supplied jars

  • #28479
    Andrew Hume

    we are running a jdbc query that uses a .jar we have.
    we installed the jar file in the same path on all nodes in our cluster,
    and added a property to the hive hive-site.xml file:

    available for one and all

    we pushed this file out to all nodes, and then stopped/restarted hive.

    after all that, the query still can’t find teh jar file (or more exactly, a class
    defined in that jar file).

    what did i do wrong?
    (and is this documented anywhere?)

  • #28480
    Andrew Hume

    somehow the wretched editor corrupted the property.
    here it is again:

    available for one and all

    Andrew Hume

    bollocks! i don’t know how to escape the xml so i’ll set off the field names

    description=available for one and all
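The name and value fields above were swallowed by the editor; only the description survived. For illustration only, an auxiliary-jar property in hive-site.xml typically looks like the sketch below. The property name and path are assumptions, not recovered from the post:

```xml
<!-- Illustrative sketch only: the original property was lost to the forum
     editor. hive.aux.jars.path is the usual setting for node-local user
     jars; the path is a placeholder. -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/lib/hive-examples.jar</value>
  <description>available for one and all</description>
</property>
```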

    Seth Lyubich

    Hi Andrew,

    I think you can try a few things:

    1. From the Hive shell, add your jar with the command ‘add jar /path/to/jar’.
    2. Run the Hive command specifying the aux jar with something like the following:
    hive --auxpath /path/to/hive-examples.jar

    Hope this helps,


    Andrew Hume

    the hive query works; it’s just the JDBC query that doesn’t find it.
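One common explanation, offered here as an assumption rather than a confirmed diagnosis: with HiveServer2, jars placed on the nodes are not automatically on a JDBC session's path, but a session can pull a jar in by issuing ADD JAR before the query. A sketch (the connection URL, jar path, and UDF name are all placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class JdbcAddJar {
    // Builds the per-session ADD JAR command for a given jar path.
    static String addJarSql(String jarPath) {
        return "ADD JAR " + jarPath;
    }

    // Sketch of the per-session fix (not invoked here; needs a live HiveServer2).
    static void queryWithUserJar(String url, String jarPath) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.execute(addJarSql(jarPath));           // make the jar visible to this session
            stmt.execute("SELECT my_udf(col) FROM t");  // hypothetical UDF query
        }
    }

    public static void main(String[] args) {
        System.out.println(addJarSql("/path/to/hive-examples.jar"));
    }
}
```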

    Andrew Hume

    after putting in some debugging and being reminded of how moronic the java jar paths thing is,
    i decided to brute force it by adding
    near the top of

    problem solved.
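The "brute force" fix hinges on how Java resolves classes: a class is visible only through a classloader whose path actually contains its jar, no matter where the jar sits on disk. A small self-contained sketch (the UDF class name is hypothetical):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class JarVisibility {
    // Returns true if the given loader can resolve the named class.
    static boolean canLoad(ClassLoader loader, String className) {
        try {
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // A classloader with no jars on its path (and no parent beyond the
        // bootstrap loader) cannot see a user class, even if the jar exists
        // somewhere on disk:
        try (URLClassLoader empty = new URLClassLoader(new URL[0], null)) {
            System.out.println(canLoad(empty, "com.example.MyUdf"));  // false
        }
        // The same lookup succeeds through a loader whose path contains the
        // class -- here, a JDK class via the system classloader:
        System.out.println(canLoad(ClassLoader.getSystemClassLoader(),
                                   "java.lang.String"));              // true
    }
}
```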


    Hi Andrew,

    Thanks for letting us know that you solved the problem and how you did it.


