Queries slower with more mappers


This topic contains 1 reply, has 2 voices, and was last updated by Carter Shanklin 1 year, 6 months ago.

  • Creator
    Topic
  • #47956

    Bill Smith
    Participant

    Our platform has a 40GB raw data file that was compressed with LZO (12GB compressed) to reduce network IO between S3 and the cluster.
    Without an index the LZO file is unsplittable, resulting in 1 map task and poor cluster utilisation.
    After indexing the file so that it is splittable, the Hive query produces 120 map tasks.
    However, with the 120 tasks distributed over a small 4-node cluster it takes longer to process the data than when the file wasn’t splittable and a single node did all the processing (1h20mins vs 17mins). This was with a fairly simple select ... from ... where query, with no distinct, group by or order by.
    I’d like to utilise all nodes in the cluster to reduce query time. What’s the best way to have the data crunched in parallel but with fewer mappers?
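
    For concreteness, something along these lines is what I’m after; I’m not sure these are the right knobs, and the split sizes and table/column names below are only illustrative (this assumes the MapReduce engine and Hive’s default combine input format):

        -- Hive session settings; property names vary slightly across Hadoop/Hive versions
        SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
        -- a larger max split size lets several LZO-indexed blocks be combined into one map task
        SET mapred.max.split.size=1073741824;         -- ~1GB per split (illustrative)
        SET mapred.min.split.size.per.node=536870912; -- ~512MB (illustrative)

        -- hypothetical table and columns standing in for our 40GB file
        SELECT col_a, col_b
        FROM raw_events
        WHERE col_c = 'some_value';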



  • Author
    Replies
  • #47987

    Carter Shanklin
    Participant

    Bill,

    As a first step, I would probably try it against uncompressed text as a reference point.
    As a second step, I would try it against a splittable compression format like bzip2, or a read-optimized format like ORCFile (best performance here, plus high compression). We have heard from some users that ORCFile works well on S3, FWIW.
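
    As a rough sketch of the ORCFile route (table and column names below are made up, swap in your own schema):

        -- one-off conversion: write an ORC-backed copy of the existing table
        CREATE TABLE raw_events_orc STORED AS ORC
        AS SELECT * FROM raw_events;

        -- subsequent queries hit the ORC copy, which is splittable and read-optimized
        SELECT col_a, col_b
        FROM raw_events_orc
        WHERE col_c = 'some_value';

    The conversion is one full pass over the data, but after that Hive can prune columns and skip stripes it doesn’t need.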

    I’m not sure what you mean by indexing. Indexing in Hive doesn’t necessarily work the way it works in other systems, so this could explain the strange behavior you saw.

    (P.S. c24? Apologies if that makes no sense)
