Hive / HCatalog Forum

Protocol used to connect to Hive/HCatalog

  • #50636
    Gwenael Le Barzic
    Participant

    Hello there.

    I am creating this topic because I would like to know which protocol is used by the ODBC driver for Hive. Is it just HTTP, or something else?
    http://s3.amazonaws.com/public-repo-1.hortonworks.com/index.html#/HDP/hive-odbc

    I read somewhere that it is possible to execute HQL in the following ways:
    – directly on the command line in the Hadoop cluster (on each node with at least the Hive client installed?)
    – by using JDBC (Java Database Connectivity)
    – by using ODBC (Open Database Connectivity)
    – by using the Hive Thrift client

    Which protocol does each of these ways use?

    I hope my questions make sense.

    Best regards.

    Gwenael Le Barzic


  • #50731
    Thejas Nair
    Moderator

    In the case of JDBC/ODBC, the driver talks to HiveServer2 (or HiveServer1 in older versions) using the Thrift binary API (the Thrift protocol over TCP). There is also an option to use Thrift over HTTP, and its support has been greatly improved in the upcoming Hive 0.13 release. I believe HTTP mode has been supported since Hive 0.12.

    In the case of the Hive command line, there is no RPC to HiveServer2. It runs the Hive parser/optimizer/scheduler etc. within the same JVM as the Hive command line itself.
    Hope this answers your question.
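    To make the two transport modes concrete, here is a minimal sketch of the two JDBC connection URL shapes a client would use, assuming the commonly documented defaults (binary Thrift on port 10000; Thrift-over-HTTP on port 10001 with `httpPath=cliservice`). The hostname and database name are placeholders:

    ```python
    def hive_jdbc_url(host, database="default", http_mode=False):
        """Build a HiveServer2 JDBC URL for binary or HTTP transport.

        Ports and parameter names follow the commonly documented
        defaults; adjust them to match your cluster's configuration.
        """
        if http_mode:
            # Thrift RPC calls wrapped in HTTP requests (Hive 0.12+)
            return (f"jdbc:hive2://{host}:10001/{database}"
                    ";transportMode=http;httpPath=cliservice")
        # Default: raw Thrift binary protocol over a TCP socket
        return f"jdbc:hive2://{host}:10000/{database}"

    print(hive_jdbc_url("node1.example.com"))
    print(hive_jdbc_url("node1.example.com", http_mode=True))
    ```

    In both cases the payload is Thrift; only the carrier differs (a plain TCP socket versus HTTP), which is why the HTTP mode is useful for passing through proxies and gateways.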

