Hive / HCatalog Forum

Not able to access Hive Tables through JDBC connection.

  • #50000
    Siva Prakash
    Participant

    Hi, I am using Hortonworks Sandbox 2.0. I tried the following program in the Eclipse IDE but I am not able to access the Hive tables; I got the errors below. What do I have to do?
    I also started the server with: hive --service hiveserver
    Still I am not able to connect.
    ——————–
    import java.sql.SQLException;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.DriverManager;

    public class HiveJdbcClient {
        private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

        /**
         * @param args
         * @throws SQLException
         */
        public static void main(String[] args) throws SQLException {
            try {
                Class.forName(driverName);
            } catch (ClassNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
                System.exit(1);
            }
            Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = con.createStatement();
            String tableName = "testHiveDriverTable";
            stmt.executeQuery("drop table " + tableName);
            ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");

            // show tables
            String sql = "show tables '" + tableName + "'";
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            if (res.next()) {
                System.out.println(res.getString(1));
            }

            // describe table
            sql = "describe " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }

            // load data into table
            // NOTE: filepath has to be local to the hive server
            // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
            String filepath = "/tmp/a.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);

            // select * query
            sql = "select * from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
            }

            // regular hive query
            sql = "select count(1) from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1));
            }
        }
    }
    —————-
    GOT ERROR
    —–
    Exception in thread "main" java.sql.SQLException: Could not establish connection to 172.31.153.71:10000/default: java.net.ConnectException: Connection refused: connect
    at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:117)
    at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:106)
    at java.sql.DriverManager.getConnection(DriverManager.java:582)
    at java.sql.DriverManager.getConnection(DriverManager.java:185)
    at com.coe.convert.hive.temp.htw.HiveJdbcClient.main(HiveJdbcClient.java:28)
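
    A note on the stack trace: "Connection refused" means nothing was listening on 172.31.153.71:10000 when the program ran, so the failure happens before any Hive code is reached. The program above uses the old HiveServer1 driver (org.apache.hadoop.hive.jdbc.HiveDriver with a jdbc:hive:// URL); on Sandbox 2.0 the service that normally listens on port 10000 is HiveServer2, which is reached through the HiveServer2 driver class and a jdbc:hive2:// URL instead. Below is a minimal connectivity sketch, assuming HiveServer2 is running in the Sandbox, port 10000 is reachable from the machine running Eclipse, and the user name "hive" with an empty password is accepted (all of these are assumptions, adjust for your setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveServer2JdbcCheck {
        public static void main(String[] args) throws Exception {
            // HiveServer2 driver class from the hive-jdbc jar
            // (org.apache.hive, not org.apache.hadoop.hive)
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // jdbc:hive2:// is the HiveServer2 URL scheme;
            // "hive" / "" are assumed Sandbox credentials
            Connection con = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "hive", "");
            Statement stmt = con.createStatement();
            // a simple query just to confirm the connection works
            ResultSet res = stmt.executeQuery("show tables");
            while (res.next()) {
                System.out.println(res.getString(1));
            }
            con.close();
        }
    }

    If you specifically want the old HiveServer started with hive --service hiveserver, confirm that it is still running and that port 10000 on the Sandbox VM is reachable from the machine running Eclipse before retrying the original program.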
