HDP on Windows – Installation: Error message installing single node HDP on Windows Server 2012

This topic contains 6 replies, has 4 voices, and was last updated by  Geoffrey Malafsky 4 months, 1 week ago.

  • Creator
    Topic
  • #48632

    Remco Nicolai
    Participant

    HADOOP: Giving user/group "WIN-2O5K3MRCNAV\hadoop" full permissions to "c:\hdpdata\hdfs"
    HADOOP: icacls "c:\hdpdata\hdfs" /grant WIN-2O5K3MRCNAV\hadoop:(OI)(CI)F
    processed file: c:\hdpdata\hdfs
    Successfully processed 1 files; Failed processing 0 files
    HADOOP: Adding slaves list WIN-2O5K3MRCNAV to c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop\slaves
    HADOOP: Removing dfs.namenode.name.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.
    HADOOP: Removing dfs.datanode.data.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.
    HADOOP: Removing dfs.namenode.checkpoint.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.
    HADOOP: Removing dfs.namenode.checkpoint.edits.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.
    HADOOP: Removing mapreduce.cluster.local.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.
    HADOOP: Formatting Namenode
    HADOOP: HADOOP_HOME set to "c:\hdp\hadoop-2.2.0.2.0.6.0-0009"
    HADOOP: c:\hdp\hadoop-2.2.0.2.0.6.0-0009\bin\hdfs.cmd namenode -format
    HADOOP-CMD FAILURE: 14/02/12 13:09:53 INFO namenode.NameNode: STARTUP_MSG:
    HADOOP-CMD FAILURE: /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = WIN-2O5K3MRCNAV/192.168.182.128
    STARTUP_MSG: args = [-format]
    STARTUP_MSG: version = 2.2.0.2.0.6.0-0009
    STARTUP_MSG: classpath = c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\activation-1.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\asm-3.2.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\avro-1.7.4.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-cli-1.2.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-codec-1.4.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-collections-3.2.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-compress-1.4.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-configuration-1.6.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-digester-1.8.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-el-1.0.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-httpclient-3.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-io-2.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-lang-2.5.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-lang3-3.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-logging-1.1.1.jar;c:\hdp\hadoop-2.2.0.2.0.6.0-0009\share\hadoop\common\lib\commons-math-2.1.jar;c:\hdp\hadoop-2.2.0


  • Author
    Replies
  • #52215

    Geoffrey Malafsky
    Participant

    I had checked clusterproperties.txt and it included everything set in the MSI settings window, but nothing about dfs.namenode.name.dir or the other *.dir properties. With the single-node installation it rolls back on failure and deletes the partially installed files. I did get the installation to complete by using the multi-node option and supplying the same settings (one machine name) everywhere, as with the single-node option. In this case, hdfs-site.xml does have correct settings for the *.dir properties (e.g. dfs.namenode.name.dir is file:///c:/hdpdata/hdfs/nn), and after installation cluster.properties is the same as clusterproperties.txt. However, a separate problem was that the HWI service would not start until I edited hive-site.xml and replaced the war-file value lib\hive-hwi-@hive.version@.war with lib\hive-hwi-0.13.0.2.1.1.0-1621.jar. I have not yet run the smoke tests or checked whether things are really working, beyond reaching the management web pages for YARN, HBase and the NameNode.
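    For reference, a rough sketch of the relevant entries after those changes (only the nn path and the war-file value are the ones quoted above; the hive.hwi.war.file property name is the standard one, and the exact layout of your files may differ):

    <!-- hdfs-site.xml (excerpt): NameNode metadata directory -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///c:/hdpdata/hdfs/nn</value>
    </property>

    <!-- hive-site.xml (excerpt): the HWI war-file value I replaced -->
    <property>
      <name>hive.hwi.war.file</name>
      <value>lib\hive-hwi-0.13.0.2.1.1.0-1621.jar</value>
    </property>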

    In addition, the hive log contained error messages about DERBY that may still be a problem, beginning with the following warnings:

    2014-04-23 13:05:52,525 WARN conf.HiveConf (HiveConf.java:initialize(1409)) – DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead —> to error
    2014-04-23 13:06:12,062 WARN DataNucleus.Datastore (Log4JLogger.java:warn(96)) - Error initialising derby schema : FUNCTION 'NUCLEUS_ASCII' already exists. java.sql.SQLTransactionRollbackException: FUNCTION 'NUCLEUS_ASCII' already exists. at org.apache.derby.client.am.SQLExceptionFactory40.getSQLException(Unknown Source)
    and including specific errors creating tables:

    2014-04-23 13:06:33,947 ERROR DataNucleus.Datastore (Log4JLogger.java:error(115)) - Error thrown executing CREATE TABLE DBS
    (
    DB_ID BIGINT NOT NULL,
    "DESC" VARCHAR(4000),
    DB_LOCATION_URI VARCHAR(4000) NOT NULL,
    "NAME" VARCHAR(128),
    OWNER_NAME VARCHAR(128),
    OWNER_TYPE VARCHAR(10)
    ) : A lock could not be obtained due to a deadlock, cycle of locks and waiters is:
    Lock : ROW, SYSTABLES, (1,3)
    Waiting XID : {272, X} , APP, CREATE TABLE DBS
    (
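    If the Derby errors turn out to matter, the place I would look first is the metastore connection setting in hive-site.xml. A sketch, assuming the standard embedded-Derby property and an illustrative path (I have not verified this resolves the "already exists" messages):

    <!-- hive-site.xml (excerpt): where the embedded Derby metastore lives -->
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <!-- the databaseName path below is illustrative, not the actual install location -->
      <value>jdbc:derby:;databaseName=c:\hdpdata\hive\metastore_db;create=true</value>
    </property>

    The "FUNCTION 'NUCLEUS_ASCII' already exists" warnings suggest the schema was partially created on an earlier attempt, so pointing at a fresh databaseName (or removing the stale directory while the Hive services are stopped) would be one way to retest.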

    #52206

    Dave
    Moderator

    Hi,

    Can you check hdfs-site.xml and also the cluster.properties file for dfs.namenode.name.dir, dfs.datanode.data.dir, etc.?

    What are these set to?

    Thanks

    Dave

    #52188

    Geoffrey Malafsky
    Participant

    I have the same error. I did change Java to use a JDK instead of a JRE to fix a different error (Oozie cannot execute jar xxx), but that change did not help the current problem. The setup is done from the settings window in the HW MSI, so no other directories are passed. I am using the latest Windows package; the version is 2.4.0.2.1.1.0-1621.
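    For what it is worth, the Java change was just a matter of pointing JAVA_HOME at a JDK install rather than a JRE before re-running the installer; a minimal sketch (the path is an example, not my actual install directory):

    rem Point JAVA_HOME at a full JDK (example: installed at a path without spaces)
    setx /M JAVA_HOME "C:\java\jdk1.7.0_51"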

    #49412

    Dave
    Moderator

    Hi Remco,

    What directories did you configure for dfs.namenode.name.dir and dfs.datanode.data.dir? The rd command is being passed a blank directory, which is why you are seeing the "syntax of the command is incorrect" messages.
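    To illustrate what is happening (a sketch; the second path is just an example of a populated value):

    rem With the property empty, the installer effectively runs this, which cmd rejects:
    rd /s /q ""
    rem -> The syntax of the command is incorrect.

    rem With dfs.namenode.name.dir populated, it would run something like:
    rd /s /q "c:\hdpdata\hdfs\nn"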

    Thanks

    Dave

    #48924

    Remco Nicolai
    Participant

    It seems to me it fails at:

    HADOOP: Adding slaves list WIN-2O5K3MRCNAV to c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop\slaves
    HADOOP: Removing dfs.namenode.name.dir ->
    HADOOP: rd /s /q ""
    HADOOP-CMD FAILURE: The syntax of the command is incorrect.

    But is it possible for me to send you the complete log file?

    #48859

    Seth Lyubich
    Keymaster

    Hi Remco,

    Can you please let us know at which point the installation failed? Do you have an issue starting the NameNode? If so, can you please check the NameNode logs and see if you can find more details?

    Thanks,
    Seth
