Hive / HCatalog Forum

Custom File format for HCatalog

  • #28836
    Subroto Sanyal
    Participant

    Hi,

    Newbie question…
    I have my own file format. The files are saved on HDFS. I would like HCatalog to make those files readable by Hive.
    Something like:

    Hive/MapReduce
    |
    HCatalog
    |
    MyFiles

    Where should I start?
    Is there a sample integration of another file format that I can use as a reference?
    Or simply: is there any documentation or sample implementation showing how to create a custom StorageHandler and how to use it?


  • #28872
    tedr
    Moderator

    Hi Subroto,

    You should look into creating a UDF for reading/interpreting your file format.

    Thanks,
    Ted.

  • #28977
    Subroto Sanyal
    Participant

    Hi Ted,

    Is there any way to develop a custom InputFormat or SerDe to achieve this?
    I think Hive achieves this using a custom InputFormat and SerDe.
    If it can be achieved with a custom InputFormat, how can we integrate that InputFormat with HCatalog?
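    Concretely, the InputFormat side would wrap record-reading logic for the custom format. Here is a minimal, self-contained sketch, assuming a hypothetical line-based format with '|'-separated fields; the Hadoop/Hive interfaces are omitted so the snippet stands alone, but a custom RecordReader's next() would call logic like this:

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.StringReader;

    // Hypothetical format: one record per line, fields separated by '|'.
    // A custom Hadoop InputFormat's RecordReader would wrap this logic,
    // returning one record per call to next().
    public class MyFormatReader {
        private final BufferedReader in;

        public MyFormatReader(BufferedReader in) {
            this.in = in;
        }

        /** Returns the fields of the next record, or null at end of input. */
        public String[] nextRecord() throws IOException {
            String line = in.readLine();
            if (line == null) {
                return null;              // end of split
            }
            return line.split("\\|", -1); // -1 keeps trailing empty fields
        }

        public static void main(String[] args) throws IOException {
            String data = "1|alice|42\n2|bob|7\n";
            MyFormatReader r =
                new MyFormatReader(new BufferedReader(new StringReader(data)));
            String[] rec;
            while ((rec = r.nextRecord()) != null) {
                System.out.println(String.join(",", rec));
            }
        }
    }
    ```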

    Cheers,
    Subroto Sanyal

  • #30845
    Akki Sharma
    Moderator

    Hi Subroto,

    The closest documentation I could find is “https://cwiki.apache.org/confluence/display/Hive/StorageHandlers”.

    You can also download the Hive code from “http://apache.mirrors.pair.com/hive/hive-0.11.0/”, look at the code in “./src/hcatalog/storage-handlers/hbase”, and use that HBase code as a template for writing your custom StorageHandler in Hive.
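    The pattern the HBase handler follows is: an implementation of Hive's HiveStorageHandler interface returns your InputFormat, OutputFormat, and SerDe classes (via getInputFormatClass(), getOutputFormatClass(), and getSerDeClass()), and Hive wires them together at query time. The SerDe's job is turning each raw record into typed columns; stripped of the Hive ObjectInspector machinery, that step looks roughly like the sketch below. The schema (int id, string name, double score) is purely hypothetical:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // What a custom SerDe's deserialize() boils down to, minus the Hive
    // ObjectInspector plumbing: map raw string fields to typed column values.
    // The column layout (id int, name string, score double) is hypothetical.
    public class MyFormatDeserializer {
        public List<Object> deserialize(String[] fields) {
            List<Object> row = new ArrayList<>();
            row.add(Integer.parseInt(fields[0]));   // column: id    (int)
            row.add(fields[1]);                     // column: name  (string)
            row.add(Double.parseDouble(fields[2])); // column: score (double)
            return row;
        }
    }
    ```

    In a real SerDe, deserialize(Writable) would return such a row and getObjectInspector() would describe its layout; the table would then be declared with CREATE TABLE … STORED BY '…' naming your handler class.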

    Best Regards,
    Akki

