
Hive / HCatalog Forum

Column length using VARCHAR(n)

  • #43967
    ckran
    Participant

    In Hive 0.12.0 it’s now possible to create tables with columns declared as VARCHAR(n), where n indicates the maximum length of the column. However, when using the JDBC driver, the getColumns call shows zero length for such columns, and in beeline the !columns command doesn’t show a value for COLUMN_SIZE. Is that something that can be addressed?

    The Jira that added this functionality, https://issues.apache.org/jira/browse/HIVE-5209, implies that it should work, given the comment “Support returning varchar length in result set metadata.”
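
    For reference, a minimal JDBC sketch of the check described above. The connection URL, database, table t and VARCHAR column name are placeholders, and the Hive JDBC driver jar is assumed to be on the classpath; adjust for your own environment.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class VarcharColumnSize {
        public static void main(String[] args) throws Exception {
            // Driver class used for jdbc:hive2:// URLs.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Placeholder connection details; adjust host, port and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "", "")) {
                DatabaseMetaData meta = conn.getMetaData();
                // getColumns() reports the declared length in its COLUMN_SIZE column;
                // for a VARCHAR(n) column this currently comes back as 0.
                try (ResultSet cols = meta.getColumns(null, "default", "t", "name")) {
                    while (cols.next()) {
                        System.out.println(cols.getString("COLUMN_NAME") + " "
                                + cols.getString("TYPE_NAME") + " COLUMN_SIZE="
                                + cols.getInt("COLUMN_SIZE"));
                    }
                }
            }
        }
    }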

  • #43973
    Jason Dere
    Moderator

    Thanks for pointing that out. I had updated the ResultSetMetadata (returned during queries) but had neglected to update the getColumns() call. I’ve opened https://issues.apache.org/jira/browse/HIVE-5847.
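
    For contrast with the getColumns() sketch above, a minimal sketch of the query-side path that was already updated, using the same placeholder connection details and hypothetical table t / column name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class VarcharQueryMetadata {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name FROM t LIMIT 1")) {
                // ResultSetMetaData comes from an executed query; HIVE-5209 made
                // getPrecision() return the declared VARCHAR length here, while
                // DatabaseMetaData.getColumns() is the separate path HIVE-5847 covers.
                ResultSetMetaData rsmd = rs.getMetaData();
                System.out.println(rsmd.getColumnTypeName(1) + " precision="
                        + rsmd.getPrecision(1));
            }
        }
    }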

    #44064
    ckran
    Participant

    Thanks, and I see that there is a patch already. That was fast! It looks to me like I should be able to apply it with:

    # check out Hive trunk and put its bin directory on the PATH
    svn co http://svn.apache.org/repos/asf/hive/trunk hive
    cd hive
    export HIVE_HOME=$(pwd)
    export PATH=$HIVE_HOME/bin:$PATH

    # download and apply the patch attached to HIVE-5847
    wget https://issues.apache.org/jira/secure/attachment/12614544/HIVE-5847.1.patch
    patch -p0 < HIVE-5847.1.patch

    # build
    ant clean package

    And indeed the patch applies successfully. But the build fails because files are missing from the project root: build.xml, build-offline.xml, build-common.xml and build.properties, as well as the ivy and ant directories; build.xml is also missing from these and other subdirectories.

