HCatalog, Hive and Regex?


This topic contains 2 replies, has 2 voices, and was last updated by Chad Weetman 1 year, 8 months ago.

  • Creator
    Topic
  • #43880

    Chad Weetman
    Member

    Hi there. I’m just starting to explore Hadoop/HDP and I’ve run into a wall.

    What I THINK I’m trying to do is upload a file into Hadoop, create a table using regex in HCatalog, import the file’s data into the new table and then run queries against it via Hive. Now, for all I know, none of that made any sense. But for the sake of this post, I’m going to operate on the assumption that what I’m trying to do is reasonable.

    First the good news…
    I uploaded the file with no problems using the File Browser.
    I was able to manually define a table in HCatalog using HCat and org.apache.hadoop.hive.contrib.serde2.RegexSerDe.
    I imported the file data into that table using Hive and “load data inpath into table ”
    I can see ALL the imported data via “select * from ” in Hive.
    I can also see the table with its proper columns via HCat’s Browse Data feature.
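
    For anyone following along, the table definition and load described above would look roughly like this. This is a hypothetical reconstruction: the actual table name, column names, file path, and regex were stripped from this post, so the ones below are placeholders.

    ```sql
    -- Placeholder names and regex; substitute your own.
    CREATE TABLE my_table (col1 STRING, col2 STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      -- One capture group per column; this example matches two
      -- whitespace-separated fields.
      "input.regex" = "(\\S+)\\s+(\\S+)"
    )
    STORED AS TEXTFILE;

    -- The file was uploaded into HDFS via the File Browser first.
    LOAD DATA INPATH '/user/hue/myfile.txt' INTO TABLE my_table;
    ```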

    Now the bad news…
    When I try to view just a single column from my table in Hive with “select from ” I get an error.
    Looking at the Task Diagnostic Log in the Job Browser, this seems to be the root cause:
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.contrib.serde2.RegexSerDe

    I found some chatter on the interwebs suggesting that I need to “add jar” the hive-contrib jar file. I’ve tried typing the following into Hive before my select command:
    add jar /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar;
    but just get this error:
    OK FAILED: ParseException line 1:0 cannot recognize input near 'add' 'jar' '/'

    I then tried to use Hive’s Add File Resource controls but I can’t seem to get the path right. I click Add, select Jar as the Type and enter this as the Path:
    /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar
    but all this does is produce:
    java.lang.RuntimeException: OK converting to local hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar Failed to read external resource hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar

    My only guess is that the hive-contrib jar lives on the actual file system of the VM running the sandbox but the Add File Resource mechanism is looking into Hadoop for files. But I’m not even sure that makes any sense.

    I’m pretty out of my element here and could totally use any and all help at this point.

    Thanks!

Viewing 2 replies - 1 through 2 (of 2 total)


  • Author
    Replies
  • #43923

    Chad Weetman
    Member

    Hey thanks Dave! That hadoop command was the next missing link for me.

    Now I have the jar in HDFS and adding it using the GUI for adding “File Resources” (in the Hive portion of the web UI) allows my column-specific select to work. However, I still can’t get an “add jar” command to work as part of my HQL.

    This:
    add jar /tmp/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar;
    select from ;

    produces this error:
    OK FAILED: ParseException line 1:0 cannot recognize input near 'add' 'JAR' '/'

    Any thoughts on that?

    Thanks,
    Chad

    #43910

    Dave
    Moderator

    Hi Chad,

    Are you running the query from within Hue (the web UI) or directly on the command line?
    When adding the jar you need to specify its location in HDFS, which means you first need to copy it there with hadoop fs -put.
    I would suggest using /tmp (as it is world-readable).
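
    Concretely, the copy step would look something like this on the sandbox VM's local shell (the jar path is taken from the post above; the /tmp target follows the suggestion here and is otherwise an assumption):

    ```shell
    # Run from the VM's shell, not inside Hive.
    # Copies the hive-contrib jar from the local filesystem into HDFS
    # so Hue's "File Resources" control (or ADD JAR) can find it.
    hadoop fs -put /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar /tmp/

    # Verify it landed in HDFS:
    hadoop fs -ls /tmp/hive-contrib-0.11.0.1.3.0.0-107.jar
    ```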

    Let me know how you get on,

    Thanks

    Dave
