Securing Cluster after HMC Installation

This topic contains 9 replies, has 4 voices, and was last updated by  Rishab Malhotra 1 year, 7 months ago.

  • Creator
    Topic
  • #13077

    I stopped all services, generated all keytabs, and pushed them to all the nodes. I modified all XML file templates (*.erb files) on the master to ensure each node uses the appropriate principals and keytab files.

    I am unable to get my datanodes to start, and encounter the following error “cannot start secure cluster without privileged resources”:

    ************************************************************/
    2012-12-21 08:44:35,288 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2012-12-21 08:44:35,329 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
    2012-12-21 08:44:35,541 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2012-12-21 08:44:35,546 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2012-12-21 08:44:35,546 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
    2012-12-21 08:44:35,742 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
    2012-12-21 08:44:36,031 INFO org.apache.hadoop.security.UserGroupInformation: Asked the TGT renewer thread to terminate
    2012-12-21 08:44:36,542 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user dn/slave1.hadoop.local@HADOOP.LOCAL using keytab file /etc/security/keytabs/dn.service.keytab
    2012-12-21 08:44:36,543 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:329)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:304)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1600)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1557)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1683)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1700)

    2012-12-21 08:44:36,551 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at slave1.hadoop.local/172.16.0.25
    ************************************************************/

    Does anyone know how to resolve this problem? I know the service is trying to bind to ports below 1024; I tried leaving them at the regular 50000+ ports instead, but I still encounter the same problem.
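    For what it's worth, the usual cause of “Cannot start secure cluster without privileged resources” in this generation of Hadoop is that the DataNode's data-transfer and HTTP ports are not below 1024. A sketch of the relevant hdfs-site.xml properties (the port values 1004 and 1022 are the conventional choices, not values taken from this thread):

    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:1004</value>
    </property>
    <property>
      <name>dfs.datanode.http.address</name>
      <value>0.0.0.0:1022</value>
    </property>

    Binding below 1024 also means the DataNode must be launched as root (via jsvc), which is the other half of the same error.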

Viewing 9 replies - 1 through 9 (of 9 total)


  • Author
    Replies
  • #13116

    tedr
    Member

    Hi Rishab,

    It looks from your reply that you have mixed the HMC installer and the gsInstaller. If so, this is a configuration we really don’t support. If you need a secure cluster, I recommend that you re-install with gsInstaller. You can download the gsInstaller scripts here:

    http://public-repo-1.hortonworks.com/HDP-1.1.1.16/HDP-gsInstaller-1.1.16-1.tar.gz

    Thanks for continuing to use HDP.
    Ted.

    #13112

    Seth – I used the gsInstaller setupKerberos.sh script to generate and push the appropriate keytabs to the /etc/security/keytabs directory.

    I modified the core-site.xml.erb and hdfs-site.xml.erb files on the master by hand. My namenode will start up, but none of the datanodes will start due to the error in the initial post.
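
    For anyone comparing notes, the security-related properties those templates need to end up with look roughly like this (the principal and keytab names below follow the layout shown in this thread; treat them as assumptions, not verified settings):

    core-site.xml:
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>

    hdfs-site.xml (DataNode side):
    <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>dn/_HOST@HADOOP.LOCAL</value>
    </property>
    <property>
      <name>dfs.datanode.keytab.file</name>
      <value>/etc/security/keytabs/dn.service.keytab</value>
    </property>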

    #13111

    Seth Lyubich
    Keymaster

    Hi Rishab,

    Can you please clarify your issue? Are you trying to secure your HMC-installed cluster using the setupKerberos.sh script from gsInstaller? I just want to make sure that we completely understand.

    Thanks,
    Seth

    #13109

    Larry – Here are the keytabs on each host:

    Master Node:
    [root@master ~]# ls /etc/security/keytabs/
    dn.service.keytab jt.service.keytab rs.service.keytab
    hive.service.keytab nn.service.keytab spnego.service.keytab
    hm.service.keytab oozie.service.keytab tt.service.keytab

    Slave1 Node:
    [root@slave1 ~]# ls /etc/security/keytabs/
    dn.service.keytab nn.service.keytab spnego.service.keytab
    hive.service.keytab oozie.service.keytab tt.service.keytab
    jt.service.keytab rs.service.keytab

    Slave2 Node:
    [root@slave2 ~]# ls /etc/security/keytabs/
    dn.service.keytab rs.service.keytab tt.service.keytab
    hive.service.keytab spnego.service.keytab

    I used the setupKerberos.sh script from the gsInstaller package to set up my Kerberos server and generate keytabs. When I try to set up the keytabs manually myself, I still encounter the same issue.

    #13095

    Larry Liu
    Moderator

    Hi, Rishab,

    Can you please list the /etc/security/keytabs on all hosts?

    Please refer to the following documentation to verify that Kerberos is set up correctly.

    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Deploying_Secure_Hadoop_Cluster/Deploying_Secure_Hadoop_Clusters.htm

    Thanks

    Larry

    #13088

    Larry – the firewall (iptables) is disabled (both IPv4 and IPv6) on all nodes. SELinux is disabled as well. All hosts also resolve forward and reverse DNS for all other nodes.

    #13087

    Larry Liu
    Moderator

    Hi, Rishab

    From the namenode log file, I found the following:

    Connection from 172.16.0.25:55032 for protocol org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol is unauthorized for user nn/slave1.hadoop.local@HADOOP.LOCAL

    It looks like your NameNode is not set up to allow access from the DataNode. Can you please confirm?

    Thanks

    Larry

    #13086

    I turned off all services using the HMC interface. I am only trying to start HDFS back up.

    #1 – I started HDFS using the HMC Manage Cluster page.
    #2 – For HDFS, I edited the core-site.xml.erb and hdfs-site.xml.erb files in the puppet/master/templates directory.

    I am uploading “Securing Cluster after HMC Insallation.rar”, which contains all logs, to your FTP site.

    Rishab
    rmalhotra@aptmail.com

    #13083

    Larry Liu
    Moderator

    Hi, Rishab

    I have a few questions for you.

    1. How did you start datanode? From HMC UI or command line?
    2. What XML file templates did you edit?

    Can you please provide the system logs (/var/log) and relevant kerberos logs?

    One thing you can try is to start the datanode from the command line as root.

    Thanks

    Larry
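
    To expand on the root suggestion: a secure DataNode start in this Hadoop generation goes through jsvc and drops privileges to the HDFS service user afterwards. A minimal sketch, assuming the stock HDP 1.x script location and an hdfs service user (both are assumptions, not details from this thread):

    # in hadoop-env.sh
    export HADOOP_SECURE_DN_USER=hdfs

    # then, as root on the datanode host:
    /usr/lib/hadoop/bin/hadoop-daemon.sh start datanode

    If this works from the command line but not from HMC, that points at HMC starting the daemon as a non-root user.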
