HCatalog, Hive and Regex?

This topic contains 1 reply, has 1 voice, and was last updated by Chad Weetman 1 year, 2 months ago.

  • Creator
    Topic
  • #43879

    Chad Weetman
    Member

    Hi there. I’m just starting to explore Hadoop/HDP and I’ve run into a wall.

    What I THINK I’m trying to do is upload a file into Hadoop, create a table using regex in HCatalog, import the file’s data into the new table and then run queries against it via Hive. Now, for all I know, none of that made any sense. But for the sake of this post, I’m going to operate on the assumption that what I’m trying to do is reasonable.

    First the good news…
    I uploaded the file with no problems using the File Browser.
    I was able to manually define a table in HCatalog using HCat and org.apache.hadoop.hive.contrib.serde2.RegexSerDe (the rough HiveQL equivalent is sketched below).
    I imported the file data into that table using Hive and “load data inpath '<path>' into table <table>”.
    I can see ALL the imported data via “select * from <table>” in Hive.
    I can also see the whole table with the proper columns via HCat’s Browse Data feature.
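
    In case the details matter, the rough HiveQL equivalent of what I set up through HCat looks like this (the table, column, regex and file path here are just placeholders, not my real ones):

    -- placeholder table/columns/regex; my real ones differ
    CREATE TABLE my_table (col_a STRING, col_b STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES ("input.regex" = "(\\S+)\\s+(\\S+)")
    STORED AS TEXTFILE;

    -- placeholder path to the file I uploaded with the File Browser
    LOAD DATA INPATH '/user/hue/my_file.txt' INTO TABLE my_table;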

    Now the bad news…
    When I try to view just a single column from my table in Hive with “select <column> from <table>”, I get an error.
    Looking at the Task Diagnostic Log in the Job Browser, this seems to be the root cause:
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.contrib.serde2.RegexSerDe

    I found some chatter on the interwebs suggesting that I need to “add jar” the hive-contrib jar file. I’ve tried typing the following into Hive before my select command:
    add jar /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar;
    but just get this error:
    OK FAILED: ParseException line 1:0 cannot recognize input near 'add' 'jar' '/'
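
    For what it’s worth, the sequence I was expecting to work (again with placeholder column/table names, based on what I’ve read about ADD JAR in the Hive CLI) is:

    ADD JAR /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar;
    -- placeholder column/table names
    SELECT col_a FROM my_table;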

    I then tried to use Hive’s Add File Resource controls but I can’t seem to get the path right. I click Add, select Jar as the Type and enter this as the Path:
    /usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar
    but all this does is produce:
    java.lang.RuntimeException: OK converting to local hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar Failed to read external resource hdfs://sandbox:8020/usr/lib/hive/lib/hive-contrib-0.11.0.1.3.0.0-107.jar

    My only guess is that the hive-contrib jar lives on the actual file system of the VM running the sandbox but the Add File Resource mechanism is looking into Hadoop for files. But I’m not even sure that makes any sense.
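
    If that guess is right, then from the Hive command line something like the following should show the jar under the local /usr/lib/hive/lib but not under the same path in HDFS (paths are the Sandbox ones from above):

    -- local file system on the Sandbox VM
    !ls /usr/lib/hive/lib ;
    -- same path in HDFS, which is where the error message says it is looking
    dfs -ls /usr/lib/hive/lib ;

    And if the jar only shows up in the first listing, I’m guessing the Add File Resource path would need to point at a copy of the jar that actually lives inside HDFS.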

    I’m pretty out of my element here and could totally use any and all help at this point.

    Thanks!

Viewing 1 reply (of 1 total)


  • Author
    Replies
  • #43881

    Chad Weetman
    Member

    Sorry, I thought I was posting that to the Sandbox forum. All of my work has been in the Sandbox thus far so this might not be an appropriate place for this discussion.
