HDP on Windows – Installation: Memory Limit reached

This topic contains 2 replies, has 2 voices, and was last updated by Seth Lyubich 1 year, 2 months ago.

  • Topic #36672

    Alex Martinez
    Participant

    Hello,
    In your documentation – http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-Win-1.3.0/bk_releasenotes_HDP-Win/bk_releasenotes_HDP-Win-20130813.pdf
    – section 1.5.2 references a link ("click here") that should redirect to the steps for making the necessary modifications to mapred-site.xml via the Ambari GUI, but the link does not work. Can you please point me to the new location? (I want to make sure I understand which services have to be brought down or restarted for the change to take effect without impacting the integrity of the cluster.) I am currently on 1.3.2.

    Regards,
    Alex


  • Reply #38972

    Seth Lyubich
    Keymaster

    Hi Alex,

    To change these settings on HDP for Windows, you will need to restart the JobTracker. The Web UI link the documentation refers to is for Ambari, which currently runs on Linux only.

    On the Linux distribution you will likewise need to bounce the JobTracker; if your cluster is running Ambari, you can bounce the MapReduce service from there instead.
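
    For reference, the restart might look something like the sketch below. The service and daemon names are assumptions based on a stock HDP 1.x layout, so verify them against your own installation.

    On Windows (HDP for Windows runs the Hadoop daemons as Windows services; the service name "jobtracker" is my assumption, so check with "sc query"):

        sc stop jobtracker
        sc start jobtracker

    On Linux, using the standard Hadoop 1.x daemon script as the mapred user:

        su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop jobtracker"
        su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker"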

    Also, here are the release notes for Linux:

    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.2/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.3.2_5_hive.html

    Thanks for bringing this up and let me know if this is helpful.

    Seth

  • Reply #36737

    Alex Martinez
    Participant

    Below is the text I am referencing (although it is for the Windows deployment, I am assuming the variables would be the same for the Linux deployment):

    Known Issues for Hive
    • Mapreduce task from Hive dynamic partitioning query is killed.
    Problem: When using the Hive script to create and populate the partitioned table dynamically, the following error is reported in the TaskTracker log file:

        TaskTree [pid=30275,tipID=attempt_201305041854_0350_m_000000_0] is running beyond memory-limits. Current usage : 1619562496bytes. Limit : 1610612736bytes. Killing task.
        Dump of the process-tree for attempt_201305041854_0350_m_000000_0 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 30275 20786 30275 30275 (java) 2179 476 1619562496 190241 /usr/jdk64/jdk1.6.0_31/jre/bin/java …
    Workaround: Disable all the memory settings by setting the value of each of the following properties to -1 in the mapred-site.xml file on the JobTracker and TaskTracker host machines in your cluster:
    mapred.cluster.map.memory.mb = -1
    mapred.cluster.reduce.memory.mb = -1
    mapred.job.map.memory.mb = -1
    mapred.job.reduce.memory.mb = -1
    mapred.cluster.max.map.memory.mb = -1
    mapred.cluster.max.reduce.memory.mb = -1
    To change these values using the UI, use the instructions provided here to update these properties.
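
    For completeness, my reading is that each of those settings goes into mapred-site.xml as a standard Hadoop property block, something like this (my sketch, not from the docs):

        <!-- In mapred-site.xml on the JobTracker and TaskTracker hosts; -->
        <!-- one property block per setting listed above: -->
        <property>
          <name>mapred.cluster.map.memory.mb</name>
          <value>-1</value>
        </property>
        <property>
          <name>mapred.cluster.reduce.memory.mb</name>
          <value>-1</value>
        </property>
        <!-- ...and likewise for the remaining four properties -->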
