
Pig Forum

How to limit the number of concurrent jobs that a Pig script starts

  • #46181

    Hi,

    I am trying to merge a few files, remove duplicates, and store the result using the following macro:

    DEFINE mergeDateDimension(validDataSet, dimensionFieldName, previousDimensionFile) RETURNS merged {
    dates = FOREACH $validDataSet GENERATE $dimensionFieldName;
    oldDimensions = LOAD '$previousDimensionFile' USING PigStorage('|') AS (
    id:LONG,
    monthName:CHARARRAY,
    monthId:INT,
    year:INT,
    fiscalYear:INT,
    originalDate:CHARARRAY);
    oldOriginalDates = FOREACH oldDimensions GENERATE originalDate;
    allDates = UNION dates, oldOriginalDates;
    uniqueDates = DISTINCT allDates;
    $merged = FOREACH uniqueDates GENERATE toDateDimension($0);
    };

    I call this macro four times in my script:

    billDateDim = mergeDateDimension(validData, BillDate, '$atbPrevOutputBase/dimensions/$billDateDimensionName');
    STORE billDateDim INTO '$atbOutputBase/dimensions/$billDateDimensionName';

    admissionDateDim = mergeDateDimension(validData, AdmissionDate, '$atbPrevOutputBase/dimensions/$admissionDateDimensionName');
    STORE admissionDateDim INTO '$atbOutputBase/dimensions/$admissionDateDimensionName';

    dischDateDim = mergeDateDimension(validData, DischargeDate, '$atbPrevOutputBase/dimensions/$dischargeDateDimensionName');
    STORE dischDateDim INTO '$atbOutputBase/dimensions/$dischargeDateDimensionName';

    arPostDateDim = mergeDateDimension(validData, PeriodDate, '$atbPrevOutputBase/dimensions/$arPostDateDimensionName');
    STORE arPostDateDim INTO '$atbOutputBase/dimensions/$arPostDateDimensionName';

    When I run the script in the Sandbox, it starts four parallel map-reduce jobs, and they get stuck.
    But if I remove the last two lines and run the script, everything works fine (i.e. the remaining three jobs complete successfully).

    So I am wondering: is it possible to limit the number of concurrent jobs (not map/reduce tasks)?
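
    A related knob, assuming default settings: Pig's multiquery optimization is what batches every STORE statement in a script into one submission, which is why all four jobs start together. It can also be turned off entirely when the script is launched, in which case each STORE runs as its own job, one after another:

    # Sketch, assuming a standard Pig installation; the script name is a placeholder.
    # -no_multiquery (short form: -M) turns multiquery optimization off,
    # so each STORE is planned and executed as a separate, sequential job.
    pig -no_multiquery myscript.pig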

  • #46780
    Jianyong Dai
    Moderator

    You can put the “exec” keyword into the Pig script to manually create an execution boundary:

    billDateDim = mergeDateDimension(validData, BillDate, '$atbPrevOutputBase/dimensions/$billDateDimensionName');
    STORE billDateDim INTO '$atbOutputBase/dimensions/$billDateDimensionName';

    admissionDateDim = mergeDateDimension(validData, AdmissionDate, '$atbPrevOutputBase/dimensions/$admissionDateDimensionName');
    STORE admissionDateDim INTO '$atbOutputBase/dimensions/$admissionDateDimensionName';

    exec

    dischDateDim = mergeDateDimension(validData, DischargeDate, '$atbPrevOutputBase/dimensions/$dischargeDateDimensionName');
    STORE dischDateDim INTO '$atbOutputBase/dimensions/$dischargeDateDimensionName';

    arPostDateDim = mergeDateDimension(validData, PeriodDate, '$atbPrevOutputBase/dimensions/$arPostDateDimensionName');
    STORE arPostDateDim INTO '$atbOutputBase/dimensions/$arPostDateDimensionName';

    exec
    ……
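
    With the boundaries placed this way, Pig runs everything up to the first exec as one batch, waits for those jobs to finish, and only then starts the next batch, so at most two of the four jobs run concurrently at any point. More generally, the STORE statements between two exec boundaries determine how many independent jobs Pig may run at once.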

