Hortonworks Sandbox Forum

HCatalog, Hive and Regex?

  • #43880
    Chad Weetman

    Hi there. I’m just starting to explore Hadoop/HDP and I’ve run into a wall.

    What I THINK I’m trying to do is upload a file into Hadoop, create a table using regex in HCatalog, import the file’s data into the new table and then run queries against it via Hive. Now, for all I know, none of that made any sense. But for the sake of this post, I’m going to operate on the assumption that what I’m trying to do is reasonable.

    First the good news…
    I uploaded the file with no problems using the File Browser.
    I was able to manually define a table in HCatalog using HCat and org.apache.hadoop.hive.contrib.serde2.RegexSerDe.
    I imported the file data into that table using Hive and “load data inpath into table ”
    I can see ALL the imported data via “select * from ” in Hive.
    I can also see the table with the proper columns via HCat’s Browse Data feature.
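    For anyone following along: RegexSerDe maps capture group 1 of its input.regex to the table’s first column, group 2 to the second, and so on. The same grouping can be seen outside Hive with plain sed — the sample log line, regex, and field names below are invented for illustration:

```shell
# A made-up Apache-style log line of the kind RegexSerDe is often used for:
line='127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'

# Each parenthesised capture group becomes one table column in RegexSerDe;
# sed -E makes the same group-to-field mapping visible:
echo "$line" | sed -E 's/^([^ ]+) [^ ]+ ([^ ]+) .*" ([0-9]+) ([0-9]+)$/host=\1 user=\2 status=\3 bytes=\4/'
# → host=127.0.0.1 user=frank status=200 bytes=2326
```

    If a line does not match the regex, RegexSerDe emits NULLs for that row rather than failing, which is worth knowing when a query “works” but returns empty columns.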

    Now the bad news…
    When I try to view just a single column from my table in Hive with “select from ” I get an error.
    Looking at the Task Diagnostic Log in the Job Browser, this seems to be the root cause:
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.contrib.serde2.RegexSerDe

    I found some chatter on the interwebs suggesting that I need to “add jar” the hive-contrib jar file. I’ve tried typing the following into Hive before my select command:
    add jar /usr/lib/hive/lib/hive-contrib-;
    but just get this error:
    OK FAILED: ParseException line 1:0 cannot recognize input near ‘add’ ‘jar’ ‘/’

    I then tried to use Hive’s Add File Resource controls but I can’t seem to get the path right. I click Add, select Jar as the Type and enter this as the Path:
    but all this does is produce:
    java.lang.RuntimeException: OK converting to local hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib- Failed to read external resource hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib-

    My only guess is that the hive-contrib jar lives on the local file system of the VM running the sandbox, while the Add File Resource mechanism looks for files in HDFS. But I’m not even sure that makes sense.

    I’m pretty out of my element here and could totally use any and all help at this point.



  • #43910
    Dave

    Hi Chad,

    Are you running the query from within Hue (the web UI) or directly on the command line?
    When adding the jar you need to specify a location in HDFS, which means you first need to copy it there with hadoop fs -put.
    I would suggest using /tmp (as it is world-readable).

    Let me know how you get on,
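    A concrete sketch of the suggestion above, run from a shell on the sandbox VM (the exact jar filename was lost from this thread, so <version> is a placeholder — substitute whatever ls /usr/lib/hive/lib shows):

```shell
# Copy the jar from the VM's local file system into HDFS; /tmp is world-readable:
hadoop fs -put /usr/lib/hive/lib/hive-contrib-<version>.jar /tmp/

# Confirm it landed in HDFS:
hadoop fs -ls /tmp/
```

    The resulting HDFS path (/tmp/hive-contrib-<version>.jar) is then what goes into the File Resources Path field in Hue.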



    Chad Weetman

    Hey thanks Dave! That hadoop command was the next missing link for me.

    Now I have the jar in HDFS and adding it using the GUI for adding “File Resources” (in the Hive portion of the web UI) allows my column-specific select to work. However, I still can’t get an “add jar” command to work as part of my HQL.

    add jar /tmp/lib/hive/lib/hive-contrib-;
    select from ;

    produces this error:
    OK FAILED: ParseException line 1:0 cannot recognize input near ‘add’ ‘JAR’ ‘/’

    Any thoughts on that?
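    In case others hit the same wall: the ParseException appears to come from Hue’s query editor rather than from Hive itself — the File Resources control is Hue’s stand-in for add jar. From an SSH session into the sandbox, the Hive CLI does accept the statement directly (the table and column names here are made up, and <version> again stands in for the real jar version):

```shell
hive -e '
  add jar /usr/lib/hive/lib/hive-contrib-<version>.jar;
  select mycolumn from mytable;
'
```

    Note that in the CLI, add jar takes a path on the local file system, so the copy already in /usr/lib/hive/lib works without the HDFS upload.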


