Sqoop Forum

Trouble with Avro source in Flume

  • #33349
    K S
    Participant

    Hi guys, I'm new to Hadoop and I am experimenting with the Sandbox. I was able to insert data into HDFS, but it is in serialized form because I used the Avro source. How do I deserialize the data for further processing with Pig/Hive?

    Also, is there a decent guide on the differences between the various sources, such as Thrift, Avro, NetCat and Syslog, and their usage?


  • #33701
    Yi Zhang
    Moderator

    Hi K S,

    Hive has the AvroSerde to process Avro data. Have you tried it?
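
    For example, a minimal sketch of an external Hive table over Avro files written by Flume (the table name, HDFS location and schema URL are hypothetical, so adjust them to your setup):

    ```sql
    -- External table backed by Avro files; Hive deserializes them via the AvroSerDe
    CREATE EXTERNAL TABLE flume_events
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    LOCATION '/flume/events'
    -- Point at the Avro schema the Flume sink wrote with (path is illustrative)
    TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/event.avsc');
    ```

    Once the table is defined, ordinary `SELECT` queries read the Avro records as rows.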

    Thanks,
    Yi

    #33791
    K S
    Participant

    Thanks for the reply, Yi. I will take a look at AvroSerde. But isn't it true that Hive is only for structured data? What about unstructured data that could be processed through Pig?

    #49334
    Robert Molina
    Moderator

    Hi KS,
    Here is a link for pig that might help:
    https://cwiki.apache.org/confluence/display/PIG/AvroStorage
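
    As a rough sketch of what that looks like (the jar paths and input directory are hypothetical, and AvroStorage also needs the Avro and json-simple jars on the classpath):

    ```pig
    -- Register the Piggybank jar that ships AvroStorage (path is illustrative)
    REGISTER piggybank.jar;

    -- Load the Avro files Flume wrote; the schema is read from the files themselves
    events = LOAD '/flume/events'
             USING org.apache.pig.piggybank.storage.avro.AvroStorage();

    DUMP events;
    ```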

    Also, regarding the different sources, you can refer to:
    https://flume.apache.org/FlumeUserGuide.html
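
    The short version is that each source type differs mainly in the wire format it accepts: Avro and Thrift sources receive RPC-framed events (typically from another Flume agent or SDK client), while NetCat and Syslog sources read plain text lines. A minimal agent configuration with an Avro source might look like this (all names and the port are illustrative, not required values):

    ```properties
    # One agent with an Avro source feeding a memory channel
    agent.sources = avroSrc
    agent.channels = memCh

    # Avro source: listens for Avro-RPC events, e.g. from another agent's Avro sink
    agent.sources.avroSrc.type = avro
    agent.sources.avroSrc.bind = 0.0.0.0
    agent.sources.avroSrc.port = 41414
    agent.sources.avroSrc.channels = memCh

    # By contrast, a netcat source would read newline-separated text:
    # agent.sources.ncSrc.type = netcat
    # and a syslog source would parse syslog-formatted messages:
    # agent.sources.slSrc.type = syslogtcp

    agent.channels.memCh.type = memory
    ```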

    Hope that helps.
    Regards,
    Robert

