Hortonworks Sandbox Forum

webloganalytics query not working

  • #31253

I get an 'Internal error processing query' message every time I try to run this query. The Omniture query worked fine, but this one does not.

    Please Help!


  • #31262
    Sasha J
    Moderator

    More details needed.

    #31270

I am attempting to complete Tutorial 6: Loading Data into the Hortonworks Sandbox. The final step is to run the following query in Hive:
create table webloganalytics as
select
to_date(o.ts) logdate,
o.url,
o.ip,
o.city,
upper(o.state) state,
o.country,
p.category,
CAST(datediff(
from_unixtime( unix_timestamp() ),
from_unixtime( unix_timestamp(u.birth_dt, 'dd-MMM-yy'))) / 365 AS INT) age,
u.gender_cd gender
from
omniture o
inner join products p on o.url = p.url
left outer join users u on o.swid = concat('{', u.swid, '}')
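
(For reference, the age column in that query is the trickiest part. Below is a minimal, stand-alone sketch of just that expression, using a hypothetical literal birth date '01-Jan-90' in the same 'dd-MMM-yy' format; on older Hive versions that do not allow SELECT without FROM, run it against an existing table.)

select CAST(datediff(
from_unixtime( unix_timestamp() ),
from_unixtime( unix_timestamp('01-Jan-90', 'dd-MMM-yy') )
) / 365 AS INT) age;
-- unix_timestamp(string, pattern) parses the birth date into seconds since the epoch,
-- from_unixtime formats it back into a 'yyyy-MM-dd HH:mm:ss' string, datediff returns
-- the difference in whole days against the current timestamp, and dividing by 365
-- approximates the age in years before the CAST truncates it to an integer.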

When I copy and paste that code into the editor as instructed, I get an error page stating 'Internal error processing query' with a BACK button. This query is supposed to create a table that I can then analyze in Excel during Tutorial 9.

    Please let me know if this is still not clear.
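
(One way to narrow down an 'Internal error processing query' failure is to check whether the SELECT itself runs before asking Hive to create the table from it. A hedged sketch, using only the tables and columns from the query above:)

-- Run the select on its own with a small limit; if this also fails, the problem is
-- in the query or the source tables rather than in the create table step.
select to_date(o.ts) logdate, o.url, p.category, u.gender_cd gender
from omniture o
inner join products p on o.url = p.url
left outer join users u on o.swid = concat('{', u.swid, '}')
limit 10;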

    #31273

    I now get the following error:
    Your query has the following error(s):

OK FAILED: SemanticException [Error 10002]: Line 31:53 Invalid column reference 'url'

    click the Error Log tab below for details

There is clearly something wrong with this query from the tutorial, but I do not know what. Can someone please help me?
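
(An 'Invalid column reference' error like this usually means one of the joined tables does not have a column the query expects. A quick, hedged check, assuming the tutorial tables were created in the default database:)

-- List the columns of each source table and compare them with what the query uses:
-- o.ts, o.url, o.ip, o.city, o.state, o.country, o.swid from omniture,
-- p.url and p.category from products, u.swid, u.birth_dt, u.gender_cd from users.
describe omniture;
describe products;
describe users;
-- If, for example, products was created without a url column, the join condition
-- "on o.url = p.url" is what raises the invalid column reference.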

    #31280

Issue resolved; please close this thread.

    #37808
    Jaymin Shah
    Member

I had a similar issue.

I had tried to delete the omniture view, but the system failed to drop it with an error code 500, so I restarted the VM.

After restarting the VM, everything started working.

Hope this helps someone.
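
(For anyone who ends up in the same state, the Hive statements for clearing out the leftover objects before retrying look roughly like this; a sketch that uses only the names mentioned in this thread:)

-- IF EXISTS makes these safe to run even if the objects are already gone.
drop table if exists webloganalytics;
drop view if exists omniture;
-- Dropping omniture only makes sense if you plan to recreate it from the earlier
-- tutorial steps before re-running the webloganalytics query.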

    #49144
    Mayank Rathi
    Participant

I am getting the same error while working on Tutorial 106 – Loading Data into the Hortonworks Sandbox.

Here is the query that I am trying to run:

    create table webloganalytics as
    select
    to_date(o.ts) logdate,
    o.url,
    o.ip,
    o.city,
    upper(o.state) state,
    o.country,
    p.category,
    CAST(datediff(
    from_unixtime( unix_timestamp() ),
from_unixtime( unix_timestamp(u.birth_dt, 'dd-MMM-yy'))) / 365 AS INT) age,
    u.gender_cd gender
    from
    omniture o
    inner join products p on o.url = p.url
left outer join users u on o.swid = concat('{', u.swid, '}')

Here is the error I am getting:

    Error occurred executing hive query: Unknown exception.
    View Logs

This is what the logs say:

[23/Feb/2014 07:54:46 +0000] access WARNING 10.0.2.2 hue – "GET /logs HTTP/1.0"

    [23/Feb/2014 07:54:23 +0000] middleware INFO Processing exception: Error occurred executing hive query: Unknown exception.: Traceback (most recent call last):
File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/hue/apps/beeswax/src/beeswax/views.py", line 554, in execute_query
return execute_directly(request, query, query_server, design, on_success_url=on_success_url, download=download)
File "/usr/lib/hue/apps/beeswax/src/beeswax/views.py", line 1242, in execute_directly
raise PopupException(_('Error occurred executing hive query: ' + error_message))
    PopupException: Error occurred executing hive query: Unknown exception.

[23/Feb/2014 07:54:23 +0000] thrift_util DEBUG Thrift call <class 'beeswaxd.BeeswaxService.Client'>.get_log returned in 1ms: "14/02/23 07:54:23 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive\n14/02/23 07:54:23 INFO ql.Driver: <PERFLOG method=Driver.run>\n14/02/23 07:54:23 INFO ql.Driver: <PERFLOG method=TimeToSubmit>\n14/02/23 07:54:23 INFO ql.Driver: <PERFLOG method=compile>\n14/02/23 07:54:23 INFO ql.Driver: <PERFLOG method=parse>\n14/02/23 07:54:23 INFO parse.ParseDriver: Parsing command: use default\n14/02/23 07:54:23 INFO parse.ParseDriver: Parse Completed\n14/02/23 07:54:23 INFO ql.Driver: </PERFLOG method=parse start=1393170863507 end=1393170863508 duration=1>\n14/02/23 07:54:23 INFO ql.Driver: <PERFLOG method=semanticAnalyze>\n14/02/23 07:54:23 INFO ql.Driver: Semantic Analysis Completed\n14/02/23 07:54:23 INFO ql.Driver: </PERFLOG method=semanticAnalyze start=1393170863508 end=1393170863509 duration=1>\n14/02/23 07:54:23 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)\n14/02/…

[23/Feb/2014 07:54:23 +0000] thrift_util DEBUG Thrift call: <class 'beeswaxd.BeeswaxService.Client'>.get_log(args=('973c0d4e-7aa3-43a4-9272-6ca087074
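
(When Hue only reports 'Unknown exception' like this, a first sanity check is to confirm that all three source tables exist and are visible to the query; a minimal sketch, assuming the default database that the log shows being selected:)

-- Confirm the tables the query reads from are present before re-running the CTAS:
use default;
show tables;
-- omniture, products and users should all appear; if one is missing, repeat the
-- earlier tutorial steps that load and define it, then run the
-- create table webloganalytics statement again.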

