HDP installation on Amazon Ec2


This topic contains 41 replies, has 4 voices, and was last updated by  Sasha J 2 years, 11 months ago.

Viewing 11 replies - 31 through 41 (of 41 total)


#7800

What worked for me was the root key generated on my HMC deployment node with ssh-keygen.
You can perform a get operation after connecting to the node with sftp.

    blue@Stallion:~$ sftp -i blue.pem root@ec2-50-17-141-9.compute-1.amazonaws.com
    Connected to ec2-50-17-141-9.compute-1.amazonaws.com.
    sftp> get .ssh/id_rsa
    Fetching /root/.ssh/id_rsa to id_rsa
    /root/.ssh/id_rsa 100% 1675 1.6KB/s 00:00

I have recently tried providing the Amazon key, in my case blue.pem. Since Amazon sets things up so that you need this private key to access your nodes, it should work. I'll get back to you on whether it's successful or not…
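If you pull the root key down this way, it's worth tightening its permissions before pointing HMC (or ssh) at it. A quick sketch; the /tmp path is a stand-in, not from the original post:

```shell
# Stand-in for the id_rsa fetched with sftp above; /tmp path is illustrative.
touch /tmp/id_rsa
chmod 600 /tmp/id_rsa        # ssh refuses keys readable by group/others
stat -c '%a' /tmp/id_rsa     # prints 600 on GNU/Linux
```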


    kalyan reddy

Very vital information for beginners.
Sorry to raise the doubt again…
"You should have a separate file with your root private key" — is this file the .ppk file that was created while creating the Amazon instance, or the public/private keypair, i.e. id_rsa and id_rsa.pub?
If it is neither of those, where can I download it from using sftp?
I need to select the SSH Private Key File for root.



Also, see http://hortonworks.com/community/forums/topic/common-issues/
and make sure your EC2 security groups allow access on the ports you need.
Since I don't care about security on my development cluster, I opened everything.
But this is obviously not good for production clusters. If anyone is good with security / networking, it would be nice of you to post a minimal set of rules.
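As a rough sketch of the "open everything on a dev cluster" approach using the current AWS CLI — the group name hdp-dev, the wide-open CIDR, and the example ports are assumptions, not from the post:

```shell
# Dev-only: allow all TCP from anywhere into the (hypothetical) hdp-dev group.
aws ec2 authorize-security-group-ingress \
    --group-name hdp-dev \
    --protocol tcp --port 0-65535 --cidr 0.0.0.0/0
# A production cluster should instead open only the ports it needs,
# e.g. 22 (ssh), 80 (HMC GUI), 8020 (HDFS NameNode RPC), 50070 (NameNode UI).
```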


You should have a separate file with your root private key. If you are accessing the HMC GUI from your local machine, you can get both files with sftp. If you are doing a multi-node install, just add the FQDN of each node to your hosts file and make sure each node has the same /etc/hosts file. If you run into issues, there is a good chance someone has already hit the same one, so check the forums.
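The shared /etc/hosts advice above can be sketched like this. The IPs and ec2.internal names are invented for illustration, and the block is written to /tmp so the sketch doesn't need root:

```shell
# The same host block every node in the cluster should share.
cat > /tmp/hosts.cluster <<'EOF'
10.190.111.104  ip-10-190-111-104.ec2.internal
10.190.111.105  ip-10-190-111-105.ec2.internal
10.190.111.106  ip-10-190-111-106.ec2.internal
EOF
# On each node (as root): cat /tmp/hosts.cluster >> /etc/hosts
# then verify the node agrees with it: hostname -f; hostname -i
grep -c 'ec2.internal' /tmp/hosts.cluster    # one line per node
```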


    kalyan reddy

    Hi Miguel,
Many thanks for the information.
Finally, I am seeing the HMC GUI.
As part of the basic cluster setup, I created the hosts file with a single FQDN (internal DNS).
What file should I use for the SSH private key file?
Does it differ between a single-node and a multi-node cluster?
Also, are there any other issues I need to take care of?
Please let me know.



1) Yes, e.g. ip-10-190-111-104.ec2.internal, then Deploy.
( check that hostname -i, hostname -f, and the name agree )

2) I used X11 forwarding for 2 months… it's murder. Then I found out each EC2 instance has a public DNS, so you can just point your local browser at it.
    ex. http://ec2-67-202-36-162.compute-1.amazonaws.com/hmc/html

3) Puppet is heavily used to install all the HMC components; honestly, you shouldn't have to know much about it. Do not be misled by "puppet kick failed" and the like. If you have an error in your deploy log, you can get to the root cause with Ctrl+F ("err"); you can also take a look at the logs in /var/log/hmc… One common issue is a timeout, in which case you can follow this: http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/
    ( this link also shows you how to uninstall and reinstall, as the default method usually results in failures )
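The Ctrl+F ("err") trick translates directly to grep on the command line. A sketch against a fabricated log file — the real logs live under /var/log/hmc, per the post, but the file below is invented for illustration:

```shell
# Fabricated deploy-log excerpt for illustration only.
mkdir -p /tmp/hmc-logs
cat > /tmp/hmc-logs/deploy.log <<'EOF'
notice: Finished catalog run
err: Could not request certificate: execution expired
notice: puppet kick succeeded
EOF
# Jump straight to the real errors instead of the noisy "puppet kick" lines:
grep -n 'err' /tmp/hmc-logs/deploy.log
```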

    4) I used HMC, although the other installer is supposed to work with RHEL / CentOS 6.x

    Good luck


    kalyan reddy

    Hi Miguel
I am almost at a successful install after following this doc: http://www.linuxdict.com/2012-06-auto-deploy-hadoop-cluster-with-hdp/
Since I have a single-node cluster:
1) Do I need to change the hosts file to the FQDN? (Right now it is localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6.)
2) Do I have to have a browser on my EC2 instance, or is there a way I can reach HMC from Windows?
If EC2 requires the browser, please let me know the process.
3) I am following your posts, but I am not clear on the Puppet concept…
4) Which installation is recommended: 1) HMC or 2) gsInstaller?

    Thanks in advance.


    After you enable this repo:
    rpm -Uvh http://public-repo-1.hortonworks.com/HDP-

and install HMC with yum, you should be able to start HMC by executing: service hmc start.
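Put together, the sequence above looks roughly like this. The repo RPM URL is truncated in the post, so the path below is a placeholder, not a real location:

```shell
# <HDP-repo-rpm> stands in for the truncated repo RPM path quoted above.
rpm -Uvh http://public-repo-1.hortonworks.com/<HDP-repo-rpm>
yum install -y hmc
service hmc start     # then browse to http://<node-public-dns>/hmc/html
```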


    Kalyan, RHEL 6.x / CentOS 6.x are not officially supported yet.

I followed this guide and successfully deployed HDP on CentOS 6.2. I used this RightScale community AMI: ami-cf18b6a6


    Tip, try the basic services first ( hdfs / mapreduce / ganglia / nagios )

    If you hit the Nagios libperl.so issue I had:


    kalyan reddy

    Hi Miguel,
Thanks for the reply.
I changed the instance type to xLarge, and the OS is Red Hat Enterprise Linux 6.3.

And I still have the same issue.
Am I missing any steps? I followed the document provided by Hortonworks.

My plan is: if it works well for a single-node cluster, then I can go for a multi-node cluster.
If possible, please shed some light.


Use an xLarge instance for a single-node install. Micro just doesn't have enough RAM.
    The spot instances are really cheap.

    What operating system are you using?

    [root@domU-12-31-39-05-51-51 ~]# uname -a
    Linux domU-12-31-39-05-51-51 2.6.32-220.23.1.el6.centos.plus.x86_64 #1 SMP Tue Jun 19 04:14:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
