Hortonworks Sandbox Forum

Can't change Sandbox Memory

  • #50041
    David Stevenson


    I’m running the sandbox v2.0 in Virtualbox using the .ova file imported to Virtualbox, with no modifications.

    When I try to run more complex queries using Hive, I get an out-of-memory error in the reduce phase. I have tried changing several of the relevant settings in mapred-site.xml, but this does not resolve the problem; the error message stays the same, saying something like “Container xxxxxx has run out of physical memory, used 579MB out of 512MB”, despite my having increased the reduce memory in a couple of places in mapred-site.xml to 2048. In other words, it looks as though changing mapred-site.xml in /etc/hadoop/conf and rebooting doesn’t actually change these settings for me.

    So three questions:
    1. Is there more than one place where mapred-site.xml is located?
    2. If so, which one should I modify?
    3. Are there any other reasons why the sandbox won’t pick up new settings when I reboot the VM?
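
    In case it helps anyone reproduce question 1: a quick way to list every copy of mapred-site.xml on the VM is a find from the root (a sketch; run it as root so find can descend into every directory):

    ```shell
    # List every copy of mapred-site.xml on the sandbox VM.
    # On HDP sandboxes there are typically several: the one under
    # /etc/hadoop/conf plus copies in process/package directories.
    find / -name 'mapred-site.xml' 2>/dev/null
    ```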

    If there isn’t an obvious solution, I’ll post the mapred-site.xml file and the actual error trace for comment.

    Any help would be much appreciated,
    Many thanks,


  • Author
  • #50042

    Hi Stevod,

    The memory settings for the sandbox are controlled through Ambari.
    You must make the changes in Ambari and then restart the affected services. Also make sure you have allocated enough memory to the virtual machine itself in VirtualBox (it is 4GB as standard).
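
    If the VM itself needs more RAM, one way to raise it from the host is VBoxManage (a sketch; the VM name below is an assumption — use whatever name the .ova imported under):

    ```shell
    # Give the sandbox VM more RAM from the host machine.
    # The VM must be powered off first. "Hortonworks Sandbox 2.0" is an
    # assumed name - check the actual name with: VBoxManage list vms
    VBoxManage modifyvm "Hortonworks Sandbox 2.0" --memory 4096
    ```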
    You must run the following commands:
    ambari-server start
    ambari-agent start

    Log in using admin/admin at

    From here you can change the memory settings for the various services. Note that Ambari will overwrite any changes you make to the XML files directly, which is why editing /etc/hadoop/conf by hand has no lasting effect.
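
    For reference, the reduce-side settings in question correspond to properties like these in the generated mapred-site.xml (values illustrative; set them through Ambari’s MapReduce service configuration screen, not by editing the file):

    ```xml
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>
    <property>
      <!-- heap for the reduce JVM; conventionally around 80% of the container size -->
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx1638m</value>
    </property>
    ```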



    David Stevenson

    Perfect, thank you.

The topic ‘Can't change Sandbox Memory’ is closed to new replies.
