
HDP on Windows – Installation Forum

Memory Limit reached

  • Author
  • #36737
    Alex Martinez

    Below is the text I am referencing (although it's for the Windows deployment, I am assuming the variables would be the same for the Linux deployment):

    Known Issues for Hive
    • Mapreduce task from Hive dynamic partitioning query is killed.
    Problem: When using the Hive script to create and populate the partitioned table
    dynamically, the following error is reported in the TaskTracker log file:
    TaskTree [pid=30275,tipID=attempt_201305041854_0350_m_000000_0]
    is running beyond memory-limits. Current usage : 1619562496bytes.
    Limit : 1610612736bytes. Killing task. TaskTree [pid=30275,tipID=
    attempt_201305041854_0350_m_000000_0] is running beyond memory-limits.
    Current usage : 1619562496bytes. Limit : 1610612736bytes. Killing task.
    Dump of the process-tree for attempt_201305041854_0350_m_000000_0 : |-
    30275 (java) 2179 476 1619562496 190241 /usr/jdk64/jdk1.6.0_31/jre/bin/
    java …
    Workaround: Disable all the memory settings by setting the value of the following
    properties to -1 in the mapred-site.xml file on the JobTracker and TaskTracker host
    machines in your cluster:
    mapred.cluster.map.memory.mb = -1
    mapred.cluster.reduce.memory.mb = -1
    mapred.job.map.memory.mb = -1
    mapred.job.reduce.memory.mb = -1
    mapred.cluster.max.map.memory.mb = -1
    mapred.cluster.max.reduce.memory.mb = -1
    To change these values using the UI, use the instructions provided here to update these properties.
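    As a sketch of the workaround above, the mapred-site.xml fragment on each JobTracker and TaskTracker host might look like the following (the property names are the standard Hadoop 1.x / MRv1 memory-monitoring settings; verify them against your own cluster's configuration before applying):

```xml
<!-- mapred-site.xml fragment: disable TaskTracker memory-limit
     enforcement. For these MRv1 settings, -1 means "no limit". -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>-1</value>
</property>
<property>
  <name>mapred.cluster.reduce.memory.mb</name>
  <value>-1</value>
</property>
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>-1</value>
</property>
<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>-1</value>
</property>
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>-1</value>
</property>
<property>
  <name>mapred.cluster.max.reduce.memory.mb</name>
  <value>-1</value>
</property>
```

    Note that this disables memory-limit enforcement entirely, so a runaway task will no longer be killed; the change only takes effect after the JobTracker and TaskTrackers are restarted.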

    Seth Lyubich

    Hi Alex,

    To change these settings on HDP for Windows, you will need to restart the JobTracker. The Web UI that the documentation refers to is Ambari, which currently runs on Linux only.

    For the Linux distribution, you will need to bounce the JobTracker. If your cluster is running Ambari, you can bounce the MapReduce services.
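    On a Linux HDP 1.x node without Ambari, bouncing the daemons typically looks like the following (a sketch of operational commands; the install path and the service account, assumed here to be `mapred`, may differ on your installation):

```shell
# On the JobTracker host: restart the JobTracker so the new
# mapred-site.xml memory settings are picked up.
su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh stop jobtracker"
su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh start jobtracker"

# On each TaskTracker host: restart the TaskTracker as well,
# since the limits are enforced on the TaskTracker side.
su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh stop tasktracker"
su - mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh start tasktracker"
```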

    Also, here are release notes for Linux:

    Thanks for bringing this up, and let me know if this is helpful.


The forum ‘HDP on Windows – Installation’ is closed to new topics and replies.
