Hive / HCatalog Forum

BeeswaxException when creating table from Twitter API

  • #44961
    Wenzhao Li

    I followed Tutorial 13 on Windows 7.
    When I went to step 4 to browse the data in tweets_raw, I received the following:
    Could not read table
    BeeswaxException(handle=QueryHandle(log_context='ae18ae74-518f-400b-b4b0-d399ed78e194', id='ae18ae74-518f-400b-b4b0-d399ed78e194'), log_context='ae18ae74-518f-400b-b4b0-d399ed78e194', SQLState=' ', _message=None, errorCode=0)

    And when I clicked on "tweets_raw", I received:
    Error getting table description

    Traceback (most recent call last):
      File "/usr/lib/hue/apps/hcatalog/src/hcatalog/", line 145, in describe_table
        table_desc_extended = HCatClient(request.user.username).describe_table_extended(table, db=database)
      File "/usr/lib/hue/apps/hcatalog/src/hcatalog/", line 143, in describe_table_extended
        raise Exception(error)
    Exception: Could not get table description (extended): {"errorDetail":"java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe does not exist)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(
        at org.apache.hadoop.hive.ql.metadata.Table.getCols(
        at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(
        at org.apache.hadoop.hive.ql.exec.DDLTask.showTableStatus(
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(
        at
    Other tables could not be created either.
    What can I do to create these tables?
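
    The "SerDe does not exist" part of that trace generally means the table's metadata names a SerDe class that Hive cannot load, i.e. the tutorial's JSON SerDe jar is not on the classpath of the session reading the table. A minimal session-level check from the Hive CLI, assuming the jar was copied to /root and the tutorial's tables use the openx JsonSerDe class shipped in that jar (adjust the path to your setup):

    -- Register the SerDe jar for this Hive session (the path is an assumption):
    ADD JAR /root/json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar;

    -- Once the class can be loaded, the table definition and data become readable:
    DESCRIBE tweets_raw;
    SELECT * FROM tweets_raw LIMIT 10;

    If DESCRIBE works after the ADD JAR, the table itself is fine and only the jar registration was missing for that session.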


  • #45048
    Rahul Dhond

    Hi Wenzhao (or anyone else on this forum who can help),
    I am stuck at step 4 too. I am getting a JSON SerDe error of 'file not found'. Maybe you have done that part correctly, so I want to ask you this: where is the json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar file supposed to go? Do you keep it in /root? I believe it goes in $HIVE_HOME/lib. I have tried both, but it didn't work. Could you please let me know?

    By the way, I am using Sandbox 1.3.

    thanks & regards


    Hi Rahul,

    It seems they missed one point in the tutorial.
    Please also copy the "json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar" file to the root folder where you copied the hiveddl.sql file.

    After you have copied the jar file to the root folder, run the following command on the VM console:

    hive -f hiveddl.sql

    Rajesh K Singh
    Aditi Technologies Pvt Limited.
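
    The reason the jar has to sit next to hiveddl.sql is most likely that the script registers the SerDe by a relative file name, and Hive resolves that path against the directory from which hive -f hiveddl.sql is started. A rough sketch of the relevant pieces, using an illustrative table rather than the tutorial's exact DDL (the column list and location below are made up; the SerDe class name is the one that ships in that jar):

    -- Resolved relative to the current working directory, which is why the jar
    -- must be in the same folder you run hive -f from:
    ADD JAR json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar;

    -- Illustrative external table that depends on the SerDe class from that jar:
    CREATE EXTERNAL TABLE IF NOT EXISTS tweets_raw_example (
      id          BIGINT,
      created_at  STRING,
      tweet_text  STRING
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    LOCATION '/user/hue/tweets_raw_example';

    Pointing the ADD JAR at an absolute path (for example /root/json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar) works just as well if you prefer to keep the jar somewhere else.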

