HDP on Windows – Installation Forum

Memory Limit reached


  • #36737
    Alex Martinez
    Participant

    Below is the text I am referencing (although it's for the Windows deployment, I am assuming the variables are the same for the Linux deployment):

    Known Issues for Hive
    • MapReduce task from Hive dynamic partitioning query is killed.
    Problem: When using the Hive script to create and populate the partitioned table
    dynamically, the following error is reported in the TaskTracker log file:
    TaskTree [pid=30275,tipID=attempt_201305041854_0350_m_000000_0] is running beyond memory-limits. Current usage : 1619562496bytes. Limit : 1610612736bytes. Killing task.
    TaskTree [pid=30275,tipID=attempt_201305041854_0350_m_000000_0] is running beyond memory-limits. Current usage : 1619562496bytes. Limit : 1610612736bytes. Killing task.
    Dump of the process-tree for attempt_201305041854_0350_m_000000_0 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 30275 20786 30275 30275 (java) 2179 476 1619562496 190241 /usr/jdk64/jdk1.6.0_31/jre/bin/java …
    Workaround: Disable all the memory settings by setting the value of the following
    properties to -1 in the mapred-site.xml file on the JobTracker and TaskTracker host
    machines in your cluster:
    mapred.cluster.map.memory.mb = -1
    mapred.cluster.reduce.memory.mb = -1
    mapred.job.map.memory.mb = -1
    mapred.job.reduce.memory.mb = -1
    mapred.cluster.max.map.memory.mb = -1
    mapred.cluster.max.reduce.memory.mb = -1
    To change these values using the UI, use the instructions provided here to update these
    properties.
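
    For reference, here is a sketch of how those entries would look in mapred-site.xml on the
    JobTracker and TaskTracker hosts. The property names are taken from the excerpt above; the
    surrounding <configuration>/<property> layout is just the standard Hadoop config file structure:

    <configuration>
      <property>
        <name>mapred.cluster.map.memory.mb</name>
        <value>-1</value>
      </property>
      <property>
        <name>mapred.cluster.reduce.memory.mb</name>
        <value>-1</value>
      </property>
      <property>
        <name>mapred.job.map.memory.mb</name>
        <value>-1</value>
      </property>
      <property>
        <name>mapred.job.reduce.memory.mb</name>
        <value>-1</value>
      </property>
      <property>
        <name>mapred.cluster.max.map.memory.mb</name>
        <value>-1</value>
      </property>
      <property>
        <name>mapred.cluster.max.reduce.memory.mb</name>
        <value>-1</value>
      </property>
    </configuration>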

    #38972
    Seth Lyubich
    Moderator

    Hi Alex,

    To change these settings on HDP for Windows, you will need to restart the JobTracker. The Web UI that the documentation refers to is Ambari, which currently runs on Linux only.

    For the Linux distribution you will also need to bounce the JobTracker. If your cluster is running Ambari, you can bounce the MapReduce service from there.
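
    If you are on a manual (non-Ambari) Linux install, a typical way to bounce the JobTracker and
    TaskTrackers is with the Hadoop 1.x daemon script. The paths below assume a default HDP 1.x
    layout (/usr/lib/hadoop and /etc/hadoop/conf), so adjust them for your environment:

    # On the JobTracker host
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop jobtracker"
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker"

    # On each TaskTracker host
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop tasktracker"
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"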

    Also, here are release notes for Linux:
    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.2/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.3.2_5_hive.html

    Thanks for bringing this up and let me know if this is helpful.

    Seth

