Once I'd added proxy settings to /etc/init.d/hmc, cluster preparation and the cluster install completed successfully. It then fails on starting the HDFS service.
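For reference, the proxy settings I mean are environment-variable exports near the top of the init script. A minimal sketch (the proxy host and port here are placeholders, not my actual values):

```shell
# Hypothetical proxy exports added near the top of /etc/init.d/hmc;
# proxy.example.com:3128 is a placeholder for the real proxy.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# Don't proxy local traffic
export no_proxy=localhost,127.0.0.1
```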
Looking at the deployment logs, I find that when HDFS starts the secondary namenode (su - hdfs -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode') it attempts to create /hdfs and fails (mkdir: cannot create directory `/hdfs': Permission denied).
Checking /etc/hadoop/conf/hdfs-site.xml, I find that many of the property values have not been set, including dfs.name.dir and other directory properties, which I guess is why it's trying to create directories in the root.
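For comparison, a correctly generated hdfs-site.xml would carry explicit values for these properties; the paths below are illustrative placeholders, not what HMC would necessarily have chosen:

```xml
<!-- Illustrative values only; HMC should have filled these in -->
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>fs.checkpoint.dir</name>
  <value>/hadoop/hdfs/namesecondary</value>
</property>
```

With values like these missing or empty, it's plausible the daemon falls back to resolving a relative or truncated path against /, which would explain the attempt to create /hdfs.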
I've not worked out how to take this one further yet, so I'd really welcome any suggestions. I have the deploy log JSON saved if that's of any use.
Update 11:44 – I have entries like this:
Wed Oct 31 10:36:33 +0000 2012 /Stage/Manifestloader/Exec[puppet_apply]/returns (notice): Wed Oct 31 10:15:06 +0000 2012 Scope(Hdp2::Configfile[/etc/hbase/conf//hbase-site.xml]) (warning): Could not look up qualified variable '::hdp-hbase::params::hbase_hdfs_root_dir'; class ::hdp-hbase::params has not been evaluated
in the puppet_agent.log. At first glance, these entries seem to match up with the missing values in the config files.
/etc/hadoop/conf is a symlink into /etc/alternatives, which in turn points back to conf.empty. I'm not sure whether that's how it should have been left.
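To see where the chain actually ends up, readlink -f collapses all the hops in one step (the paths below assume the stock HMC/alternatives layout on my box):

```shell
# Inspect each hop of the symlink, then resolve the whole chain
ls -l /etc/hadoop/conf
readlink -f /etc/hadoop/conf
```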