
Hortonworks Sandbox Forum

How to get started with Command line

  • #34190
    Roshan Amasa

    Hello all,

    I just started with the Sandbox and I have a couple of queries. I don’t know if this is the right place to ask, but the questions seem simple: can I run MapReduce jobs via the command line?

    Following some of the blogs, I log in as user root with password hadoop and the login succeeds. But when I then use the list command, ls, it shows I have only an ambari file, which is strange.

    Can you help me get to the home folder (and other folders) from the shell? I did try the cd command, but no luck.


    Roshan A

  • Author
  • #34247

    I’m not familiar enough with MapReduce to tell you exactly how to run a job via the command line. The short answer, though, is yes: you should be able to do everything via the command line, including MapReduce. There’s an MR tutorial and a Hadoop commands reference that should help you get started. If anyone tells you different, they probably know more than me; listen to them 😛
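    To give you a concrete starting point: Hadoop ships with an examples jar you can run straight from the shell. This is a minimal sketch, not a tested recipe — the jar path below is an assumption (it varies by HDP version), and the input/output directory names are just placeholders:

    ```shell
    # Assumed location of the stock examples jar on an HDP sandbox; if it is
    # not there, 'find / -name "hadoop-mapreduce-examples*.jar"' will locate it.
    JAR=/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar

    if command -v hadoop >/dev/null 2>&1; then
        # Put a small input file into HDFS and run the built-in WordCount job.
        hadoop fs -mkdir -p input
        echo "hello sandbox hello" | hadoop fs -put -f - input/sample.txt
        hadoop jar "$JAR" wordcount input output
        hadoop fs -cat output/part-r-00000    # one word and its count per line
    else
        echo "hadoop command not found - run this inside the sandbox"
    fi
    ```
    
    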

    As for the other questions, most of them are generic *nix command questions. Remember, the Sandbox is just a VM image with everything preinstalled on a CentOS base. Any command that works on CentOS (or Red Hat/RHEL, or Fedora) should work on the sandbox. Generally, googling something like “how to change directories in Red Hat” will bring up a helpful page.

    For your specific questions, the commands are (everything inside the ‘ ’, but not including the quotes themselves):
    ‘cd ~’ will bring you back to your home folder, which should be /root.
    / is the root of the Unix file system, kind of like C:\ on Windows, so /root would be roughly the equivalent of C:\Users\root.
    ‘ls’ only lists the contents of the directory you are in; since you’re in your home folder, you only see the ambari file. You can also give it a path, like ‘ls /var’, to see all files and folders in the /var directory, and ‘ls -al’ gives you more detail (including hidden files) if you ever need it. Or you can just change to a different directory and run ‘ls’ there.

    So if you want to navigate to another folder (change directory), use something like ‘cd /etc/flume/conf’ and you’ll go to the flume config directory.

    Also, most commands have help options, though the exact flag varies, in my experience anyway. Generally, something like ‘yum help’ will bring up the help text for yum, and ‘man yum’ brings up the manual page for yum. If you get it wrong, like ‘yum -?’, it’ll generally tell you there’s no such option as -? and give you a list of possible commands.
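    To make that pattern concrete, here’s the same idea demonstrated with ls instead of yum, only so the lines below actually run anywhere (yum may not be installed outside the sandbox):

    ```shell
    # GNU tools also accept a long --help flag; 'yum help' and 'man yum'
    # follow the same pattern on the sandbox.
    ls --help | head -n 3               # first few lines of the built-in help text
    # A bad option makes most tools print a usage hint rather than fail silently:
    ls --no-such-option 2>&1 | head -n 2
    ```
    
    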

