Home Forums HDP on Linux – Installation HDP installation on Amazon Ec2

This topic contains 41 replies, has 4 voices, and was last updated by Sasha J 1 year, 12 months ago.

Viewing 30 replies - 1 through 30 (of 41 total)


    #9056

    Sasha J
    Moderator

    Sean,
    good to know, thank you for the update!

    Sasha

    #9055

    sean mikha
    Participant

    Hi,
    I wanted to give an update that I was able to fully install HDP on a single-node Amazon EC2 instance.

    I used RightScale CentOS 5.8, AMI ami-4c62c025.

    I had to do two specific things above and beyond the HDP installation documentation:

    1) I had to remove existing software on the instance; otherwise the cluster would fail on the first step of cluster installation.
    # yum erase -y ruby* rrdtool*

    2) During the cluster installation through the web browser, the HBase test would fail with a 'create user' error in the deploy logs.
    I re-ran through a fresh instance and this time manually changed the HBase master / region server heap size to 1024MB, after seeing an issue with the HBase region server getting created in the logs in the /var/log/hbase directory.
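
    (For anyone hitting the same thing, a quick way to spot it is to grep the region server log directly; this is just a sketch, and the exact log file name depends on the user and hostname:)
    # ls /var/log/hbase/
    # grep -i error /var/log/hbase/hbase-hbase-regionserver-*.log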

    Everything installed and worked. Good luck, all.

    #8510

    sean mikha
    Participant

    Yes, I used `hostname -f` for the private DNS and `hostname -i` for the IP address.

    I added the following line to my hosts file:
    10.XX.45.XX ip-10-XX-45-XX.ec2.internal

    For the hostdetail.txt file I uploaded through the web browser, I used ip-10-XX-45-XX.ec2.internal.

    The only time the public DNS is used is when I log in to the EC2 instance from my laptop/SSH client, or when using the http:// address for the browser deployment.

    #8509

    Sasha J
    Moderator

    Did you ensure that you used the internal EC2 names in your hosts files?

    The external names are not resolvable from inside the EC2 network.

    #8508

    sean mikha
    Participant

    Looking further, I searched for the specific error in the log files:
    /var/log/puppet/masterhttp.log was the only file there and didn't have much.
    /var/log/hmc/hmc.log had the following error lines:

    [2012:08:20 22:49:32][ERROR][Options][configUtils.php:549][validateConfigsFromUser]: Got error when validating configs
    [2012:08:20 22:49:32][ERROR][Options][configureServices.php:50][]: Failed to validate configs from user (validate only), error=Some configuration parameters need your attention before you can proceed.

    and further below:

    [2012:08:20 22:50:05][INFO][ClusterState][clusterState.php:40][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1345503005,"context":{"txnId":3}}
    [2012:08:20 22:50:05][INFO][ClusterMain:TxnId=3][ClusterMain.php:322][]: Taking action=deploy on cluster=seanc, txn=3-0-0
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=HDFS, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=MAPREDUCE, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=ZOOKEEPER, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=HBASE, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=PIG, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=SQOOP, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=OOZIE, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=HIVE, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=TEMPLETON, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=DASHBOARD, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=GANGLIA, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=NAGIOS, state=
    [2012:08:20 22:50:05][ERROR][OrchestratorDB][OrchestratorDB.php:127][getClusterServices]: Found service with invalid state, service=MISCELLANEOUS, state=
    [2012:08:20 22:50:05][INFO][Cluster:seanc][Cluster.php:70][_deployHDP]: Deploying HDP with 13 services…. DryRun=1
    [2012:08:20 22:50:05][INFO][Cluster:seanc][Cluster.php:605][_installAllServices]: Installing HDP with 13 services… DryRun=1

    #8507

    sean mikha
    Participant

    @Sasha, I forgot to mention: I did change hdp.repo.bak back to hdp.repo after re-installing rrdtool.
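
    (The rename itself, for reference; a sketch that assumes the repo file lives in the usual /etc/yum.repos.d location:)
    # mv /etc/yum.repos.d/hdp.repo.bak /etc/yum.repos.d/hdp.repo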

    #8506

    sean mikha
    Participant

    @Sasha, thanks for the quick response. I again started with a clean image and followed your directions above. The web page for HMC did work this time; however, when trying to deploy my cluster I ran into what I believe to be a puppet kick issue. I found the sticky note on pre-deploy and attempted that method, found another small bug, fixed that, and re-deployed, and again hit the same puppet kick failure, which may be a new issue not identified yet. I've broken down a detailed set of steps and thoughts for what I did; maybe you can catch a mistake in my thought process or setup? I VERY MUCH appreciate you looking at this, thanks for taking the time.

    SETUP:
    [AMAZON EC2, AMI: ami-53b9603a , INSTANCE SIZE: LARGE , SECURITY-GROUPS=everything open]

    (setup hosts)
    #vi /etc/hosts
    [added #hostname -i #hostname -f]

    (setup password-less SSH)
    [upload id_rsa/private-amazon-key]
    #chmod 700 /root/.ssh
    #chmod 640 /root/.ssh/authorized_keys
    #chmod 600 /root/.ssh/id_rsa
    #ssh localhost
    #ssh private-dns
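
    (If ssh still prompts for a password, Miguel's tip elsewhere in this thread is to append your public key to authorized_keys; a sketch, assuming the key pair sits in /root/.ssh:)
    #cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys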

    (start ntp service)
    #service ntp status (service does not exist)
    #yum install -y ntp
    #service ntpd start
    #chkconfig ntpd on
    #service ntpd status

    (stop iptables/firewall service)
    #/etc/init.d/iptables stop
    #chkconfig iptables off
    #service iptables status

    (disable selinux)
    #setenforce 0
    #sestatus
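
    (setenforce 0 only disables SELinux until the next reboot; to make it persist, a sketch assuming the standard config file location:)
    #sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config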

    (check dns / reverse dns)
    #host
    #host (#ifconfig)
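
    (Spelled out, the same check used elsewhere in this thread:)
    #host `hostname -f`
    #host `hostname -i`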

    (check existing S/W)
    #rpm -qa | grep -ie ruby -ie puppet -ie passenger -ie nagios -ie mysql -ie ganglia
    (shows ruby , mysql -addressed in bugfix below)

    #rpm -qa | grep -ie yum -ie rpm -ie scp -ie curl -ie wget -ie pdsh
    (shows scp + pdsh missing)
    (test: #which scp [+check])
    (test: #which pdsh [-missing])

    BUG FIX:
    ===================================================================
    (per Sasha on the HDP forum: the AMI has wrong packaging)

    #yum erase ruby php MySQL-server-community rrdtool

    (comment out Rightscale-EPEL from CentOS-base.repo)
    #cat /etc/yum.repos.d/Cent* | grep -i right
    (does not exist in the CentOS repo files)
    (instead renamed the rightscale.repo file to rightscale.repo.bak)

    #yum clean all
    #yum install net-snmp net-snmp-utils ruby rrdtool
    ===================================================================

    (download + install hmc)

    #rpm -Uvh http://public-repo-1.hortonworks.com/HDP-1.0.1.14/repos/centos5/hdp-release-1.0.1.14-1.el5.noarch.rpm
    #yum install pdsh
    #yum install epel-release
    #yum install php-pecl-json
    #yum install hmc

    #service hmc start

    Deploy browser (http://ec2-public-dns/hmc/html/index.php)
    cluster name: seanc
    upload private key + hostdetail (single node/ ec2-private-dns/fqdn)
    select all services
    default all services -single node
    disk mount point: /hdp1/1 (on terminal: #mkdir /hdp1 /hdp1/1 #chmod 777 /hdp1 /hdp1/1)
    enter only pw’s and email into details
    use all defaults
    click deploy

    ERRORS:
    ===============================================================================================
    "2": {
    "nodeReport": {
    "PUPPET_KICK_FAILED": [],
    "PUPPET_OPERATION_FAILED": [
    "ip-10-191-45-210.ec2.internal"
    ],
    "PUPPET_OPERATION_TIMEDOUT": [],
    "PUPPET_OPERATION_SUCCEEDED": []
    ===============================================================================================

    Possible Bug Fix: Puppet Kick / Pre-deploy (http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/)

    #yum erase hmc puppet
    #yum install hmc
    #yum install -y hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo zookeeper zookeeper hbase hbase hbase mysql-server hive mysql-connector-java hive hcatalog oozie.noarch extjs-2.2-1 oozie-client.noarch pig.noarch sqoop mysql-connector-java templeton templeton-tar-pig-0.0.1.14-1 templeton-tar-hive-0.0.1.14-1 templeton hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons snappy snappy-devel lzo lzo lzo-devel lzo-devel

    ERRORS:
    ===============================================================================================
    ruby-RRDtool-0.6.0-6.el5.x86_64 from installed has depsolving problems
    --> Missing Dependency: librrd.so.2()(64bit) is needed by package ruby-RRDtool-0.6.0-6.el5.x86_64 (installed)
    Error: Missing Dependency: librrd.so.2()(64bit) is needed by package ruby-RRDtool-0.6.0-6.el5.x86_64 (installed)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
    package-cleanup --dupes
    rpm -Va --nofiles --nodigest
    ===============================================================================================

    (tried to fix)
    #yum remove rrdtool
    #mv hdp.repo hdp.repo.bak
    #yum install rrdtool
    #service hmc start
    #mkdir /hdp2 /hdp2/2
    #chmod 777 /hdp2 /hdp2/2

    (re-ran through deployment above)

    ERRORS:
    ===============================================================================================
    "2": {
    "nodeReport": {
    "PUPPET_KICK_FAILED": [],
    "PUPPET_OPERATION_FAILED": [
    "ip-10-191-45-210.ec2.internal"
    ],
    "PUPPET_OPERATION_TIMEDOUT": [],
    "PUPPET_OPERATION_SUCCEEDED": []
    ===============================================================================================

    #8504

    Sasha J
    Moderator

    @Sean

    Could you start with a clean image and do the following:

    > ensure SELinux is DISABLED
    > service iptables stop
    > yum erase ruby php MySQL-server-community rrdtool
    > comment out Rightscale-EPEL from CentOS-base.repo
    > yum clean
    > yum install net-snmp net-snmp-utils ruby rrdtool
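
    (A non-interactive way to do the "comment out" step; just a sketch, and the repo file name follows Sean's post above, so it may differ per image:)
    > sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/rightscale.repo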

    Sasha

    #8503

    sean mikha
    Participant

    @Sasha,
    Yup, starting from a clean image. I still have the same issue, and still have some odd things happen. When I start the hmc service I get some odd output about the gemspec configuration being off; it looks like a version/dependency issue.

    Also, as a side note: I added the HDP repo before I uninstalled/reinstalled rrdtool and got some dependency issues when yum tried to get it from the HDP repo. So I went back to the CentOS repo and it seemed to install OK.

    #8497

    Sasha J
    Moderator

    Hi Sean,

    Your target may have reached some inconsistent state after your previous install attempts.
    Are you starting with a clean image?

    Sasha

    #8496

    sean mikha
    Participant

    @Sasha,
    I tried re-installing rrdtool, then restarted hmc; still the same issue: httpd serves the standard Apache welcome page but will not display /hmc/html or /hmc/html/index.php.

    Output before:
    # yum -d 0 -e 0 -y install ganglia-gmetad-3.2.0
    ruby-RRDtool-0.6.0-6.el5.x86_64 from installed has depsolving problems
    --> Missing Dependency: librrd.so.2()(64bit) is needed by package ruby-RRDtool-0.6.0-6.el5.x86_64 (installed)
    Error: Missing Dependency: librrd.so.2()(64bit) is needed by package ruby-RRDtool-0.6.0-6.el5.x86_64 (installed)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
    package-cleanup --dupes
    rpm -Va --nofiles --nodigest

    The output disappeared after re-installing rrdtool:
    # yum -d 0 -e 0 -y install ganglia-gmetad-3.2.0
    #

    #8493

    sean mikha
    Participant

    Thanks. Is there a default AMI you would recommend for use with HDP 1.0.1.14 and the HDP documentation?

    #8492

    Sasha J
    Moderator

    That AMI might have some packaging issues.

    try:
    $ yum -d 0 -e 0 -y install ganglia-gmetad-3.2.0

    if this shows:
    install of rrdtool-1.4.5-1.el5.x86_64 conflicts with file from package
    rrdtool-1.2.27-3.el5.i386

    then:
    yum erase rrdtool
    yum -y install rrdtool

    -Sasha

    #8491

    sean mikha
    Participant

    Hi Sasha,
    I tried using ami-53b9603a, large, CentOS 5.8 (is that the one you are referring to?)

    setup hosts: vi /etc/hosts #added hostname -i \t hostname -f
    existing S/W: not touching existing installs because it is the Hortonworks AMI (spot check shows ruby, mysql)
    ssh no-pw: upload id_rsa, set permissions, check (ssh localhost, ssh private-dns)
    start ntp: service ntpd start, chkconfig ntpd on
    check dns: host `hostname -f`, host `hostname -i`
    create hostdetail: use private dns/fqdn
    download install hmc (centos 5):
    rpm -Uvh http://public-repo-1.hortonworks.com/HDP-1.0.1.14/repos/centos5/hdp-release-1.0.1.14-1.el5.noarch.rpm
    yum install epel-release
    yum install php-pecl-json
    yum install hmc
    service hmc start (y,y)
    received warning: WARNING: Invalid .gemspec format in '/usr/lib/ruby/gems/1.8/specifications/gherkin-2.2.4.gemspec'
    stop iptables: /etc/init.d/iptables stop, chkconfig iptables off

    Now I tried navigating to http://public-ec2-dns-instance/hmc/html/index.php and it didn't work.
    However, http://public-ec2-dns-instance/ does work (httpd is up and running correctly with iptables off).

    I'm not sure if you have seen this issue before, or whether I am using the correct AMI. I think the issue might be around the pre-installed S/W, but I'm not sure. Did I miss a step?
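
    (A couple of generic things worth checking for this symptom; just a sketch, with the stock CentOS httpd log path assumed:)
    service hmc status
    tail -n 50 /var/log/httpd/error_log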

    #8489

    Sasha J
    Moderator

    Hi Kalyan,

    You will have to enable the optional repos on RHEL.

    If you don't have an active RHEL subscription you may want to:

    1) use the AMI image we have pre-made, which is ready to run

    2) use CentOS 5.x / 6.x

    -Sasha

    #8485

    sean mikha
    Participant

    Hi Miguel,
    I tried installing an HMC cluster according to:

    http://www.linuxdict.com/2012-06-auto-deploy-hadoop-cluster-with-hdp/

    using the community AMI you mentioned, ami-cf18b6a6, with Large instances

    I followed every step mentioned, as well as some other good-to-knows:
    potential bug (nagios): http://hortonworks.com/community/forums/topic/nagios/
    common issues (HDP): http://hortonworks.com/community/forums/topic/common-issues/

    However, I can't get HMC to deploy to the nodes in either single-node or multi-node mode. The error I received seems to be related to Puppet, so I tried:

    http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/

    That didn't work. I ran the check script and received the following errors (also uploaded to the FTP site):

    =========== =========== ===========
    HMC failures
    [2012:08:20 15:15:29][INFO][PuppetFinalize:txnId=1:subTxnId=104][finalizeNodes.php:156][sign_and_verify_agent]: Puppet cert sign status, totalHosts=1, succeededHostsCount=0, failedHostsCount=1
    [2012:08:20 15:15:34][INFO][PuppetFinalize:txnId=1:subTxnId=104][finalizeNodes.php:235][sign_and_verify_agent]: Puppet agent ping status, totalHosts=1, succeededHostsCount=1, failedHostsCount=0
    [2012:08:20 15:15:34][INFO][PuppetFinalize:txnId=1:subTxnId=104][finalizeNodes.php:369][]: Puppet finalize, succeeded for 1 and failed for 0 of total 1 hosts
    [failed] => Array
    [2012:08:20 15:24:55][ERROR][Cluster:seanc][Cluster.php:677][_installAllServices]: Puppet kick failed, no successful nodes
    [2012:08:20 15:24:55][INFO][ClusterMain:TxnId=3][ClusterMain.php:332][]: Completed action=deploy on cluster=seanc, txn=3-0-0, result=-3, error=Puppet kick failed on all nodes
    [2012:08:20 15:24:56][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1345476296,"context":{"status":false,"txnId":"3"}}
    [2012:08:20 15:24:56][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1345476296,"context":{"status":false,"txnId":"3","isInPostProcess":false,"postProcessSuccessful":true}}

    =========== =========== ===========

    Appreciate any insights you can provide, thanks.

    #7959

    Kalyan,

    I believe the command is: hadoop dfs -ls /
    Note the "/".
    Also, the default Hadoop superuser is hdfs, and only that user has write privileges by default.
    You can give other users write privileges by adding them to the hdfs group:
    usermod -a -G hdfs myfavoritehadoopusername
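
    (A further sketch, not from the original post: the bare "hadoop dfs -ls" fails because root has no home directory in HDFS yet, so creating one as the hdfs user would also fix it. The paths below are assumptions.)
    su - hdfs -c "hadoop dfs -mkdir /user/root"
    su - hdfs -c "hadoop dfs -chown root /user/root"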

    ~regards

    #7943

    kalyan reddy
    Member

    Hi Miguel,
    When I try to run some shell commands I face the following issue:
    [root@domU-12-31-39-04-4C-5E bin]# hadoop dfs -ls
    ls: Cannot access .: No such file or directory.
    [root@domU-12-31-39-04-4C-5E bin]# pwd
    /usr/lib/hadoop/bin

    Since I deployed HDP with HMC as the root user, I should be able to run the shell commands from the root user, right?
    Please correct me if I'm wrong: we don't have to start all the services like name node, secondary name node, job tracker, task tracker, etc. ourselves, right?
    Please let me know what I am missing.
    Thanks

    #7938

    Kalyan, Awesome :)
    For Hive, I would wait for an official HMC release from Hortonworks that supports RHEL 6.x.
    MapReduce tutorials:

    http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html

    http://code.google.com/edu/parallel/mapreduce-tutorial.html

    Cheers

    #7937

    kalyan reddy
    Member

    Hi Miguel,
    Yeah, I am able to set up the cluster. :)
    Thanks so much for taking the time to help me out.
    I am just wondering if there are any docs available with some sample examples, some MR stuff, etc.; I want to enhance my knowledge of this.
    Also, the next step would be adding the Hive service.
    Again, I am very thankful to you.

    #7936

    Kalyan, the log indicated the nagios-php-pecl-json issue.

    I understand you removed it from the appropriate file, but it gets put back there when you run:
    yum -y erase hmc puppet
    yum install hmc

    So run the above, edit the file again, and you should be good.
    When you did the initial install of hmc it prompted you to download the Java JDKs; you can leave those 2 fields blank, and HMC will auto-check for the JDKs in the appropriate directories.

    Cheers,
    Miguel

    #7935

    kalyan reddy
    Member

    Hi Miguel,
    Good day.
    Please find the log information below and advise.

    DeployLog

    "\"Thu Aug 02 02:55:37 -0400 2012 /Stage[12]/Hdp-nagios::Server::Packages/Hdp-nagios::Server::Package[nagios-php-pecl-json]/Hdp::Package[nagios-php-pecl-json]/Hdp::Package::Yum[nagios-php-pecl-json]/Package[php-pecl-json.x86_64]/ensure (err): change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install php-pecl-json.x86_64' returned 1: Error: Nothing to do\"",

    Puppet log

    [root@domU-12-31-39-04-4C-5E ~]# cat /var/log/puppet_apply.log | grep err
    Thu Aug 02 02:55:37 -0400 2012 /Stage[12]/Hdp-nagios::Server::Packages/Hdp-nagios::Server::Package[nagios-php-pecl-json]/Hdp::Package[nagios-php-pecl-json]/Hdp::Package::Yum[nagios-php-pecl-json]/Package[php-pecl-json.x86_64]/ensure (err): change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install php-pecl-json.x86_64' returned 1: Error: Nothing to do
    [root@domU-12-31-39-04-4C-5E ~]#

    HMC log

    [2012:08:02 06:56:04][INFO][ClusterMain:TxnId=3][ClusterMain.php:332][]: Completed action=deploy on cluster=kalyantest1, txn=3-0-0, result=-3, error=Puppet kick failed on all nodes

    My hosts file looks like the following:
    10.240.83.172 domU-12-31-39-04-4C-5E.compute-1.internal hortondeply
    and I also removed the ,'nagios-php-pecl-json' entry.

    Also, while deploying, in the custom config I left the following blank:
    Java 32 bit Home
    Java 64 bit Home (as per the Hortonworks manual: "Use this to specify your Java home if you wish to have HDP use an existing JDK that is not set up by HMC. You must have installed this separately and exported it. This value must be the same for all hosts in the cluster. If you use this option, HMC does no further checks to make sure that your Java has been properly set up.")

    Many thanks

    #7932

    Kalyan,

    Rest assured, you're not the only one who has had trouble with the install.
    You can uninstall/reinstall by executing:
    yum -y erase hmc puppet
    yum install hmc

    Look at your logs, both /var/log/puppet_apply.log and /var/log/hmc/hmc.log.
    They are big, so try this:
    cat /var/log/puppet_apply.log | grep '(err)'
    Without knowing what your error is we can't help you.

    The php-pecl-json package is included in PHP 5.3, so you need to remove it from the Nagios package dependencies; if you don't, you will get an error during deployment.

    vim /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp
    %s/,’nagios-php-pecl-json’//g

    You just have to remove the ,'nagios-php-pecl-json' entry
    and save the file.
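
    (An equivalent non-interactive form of the same edit, if you prefer sed over opening vim; just a sketch:)
    sed -i "s/,'nagios-php-pecl-json'//g" /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp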

    If you run into an issue, please search the forums, as it has probably been resolved already.

    Cheers,
    Miguel

    #7930

    kalyan reddy
    Member

    Hi Miguel,
    Sorry to come back again with questions. :(
    I was able to hack the redhat-release file and the OS issue is solved.
    After configuring everything, I was unable to start the cluster successfully, and when I tried to uninstall the cluster that also failed.
    I am looking into the JSON logs; is there any specific issue I must look into?
    Kindly share your past experience if you have faced a similar kind of problem.

    By the way, in the document I did not perform this step:
    # update puppet for nagios: remove php-pecl-json, because php 5.3 already includes it by default.
    line 11+ /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp

    I am not clear on the above. Does this have any significance? Please let me know if it's required.
    Many thanks

    #7894

    Kalyan,
    I managed to deploy HDP on RHEL 6.3 with HMC. It’s pretty much the same process.
    Just disable the Hive service and you should be fine.

    Good luck,
    Miguel

    #7844

    Kalyan,
    I know why you can't find that AMI. :) I remember having this same issue before; it is available under a different region. In your AWS management console (EC2) there is a drop-down menu on the top left named "Region:". You will find the AMI I suggested under the US East (Virginia) region.

    Otherwise try to find a CentOS 6.x image that allows you to run large instances.

    Cheers,
    Miguel

    #7827

    Kalyan,
    Does RHEL not have an /etc/redhat-release file you can modify?

    Also, ami-cf18b6a6 is definitely available. Make sure you copy-paste it into the search box and that you don't have leading spaces.

    As mentioned above, you simply modify the redhat-release file in your /etc directory and change the version. This hack was provided by Edy in this thread:

    http://hortonworks.com/community/forums/topic/tips-for-guys-who-want-try-hdp/
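
    (A minimal sketch of that hack, mirroring the "change 6.x to 5.8" tip given elsewhere in this thread; back up the original file first:)
    cp /etc/redhat-release /etc/redhat-release.orig
    sed -i 's/6\.[0-9]/5.8/' /etc/redhat-release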

    #7824

    kalyan reddy
    Member

    Miguel,
    Well, I appreciate you taking the time.

    I am wondering, is there a way to do the version hack for Red Hat, like you did for CentOS?

    Actually, I used Red Hat, so I did not try CentOS.
    So now I would use CentOS, but in the community AMIs we don't have ami-cf18b6a6; there is only one (ami-00934969-ebs
    250188540659/CentOS 6.2 (Bare)).
    Would it be fine to go with this? Please confirm.
    And while using this I need to do the version hack! What is the best way to do this?

    Thanks

    #7818

    Kalyan,
    I haven't tried this on RHEL. Did you try the version hack? vim /etc/redhat-release (change 6.x to 5.8)
    Several people have confirmed success on CentOS 6.2 with this modification.
    Also, for your id_rsa, did you execute: cat .ssh/id_rsa.pub >> .ssh/authorized_keys ? This adds your public key to the authorized keys, allowing you to ssh in.

    Also, as you said, I confirmed a full HDP deployment on 3 nodes using the amazon private key.
    If you wish to use CentOS I recommend looking up ami-cf18b6a6 under community amis.

    #7813

    kalyan reddy
    Member

    Hi Miguel,
    id_rsa is not working for me, but the Amazon key is working fine. However, I am getting some other error:
    Finding reachable nodes -- Successful with no errors.
    Obtaining information about reachable nodes -- Successful with no errors.
    Verifying and updating node information -- Failed. Reason: Unsupported OS

    Since we don't have Red Hat 5.x in Amazon, I am going for 6.x. Should I use CentOS 6.x, or the community edition Amazon AMI?

    Thanks
