Hi Sudhakar, after making the changes to the slaves file, did you restart HDFS and MapReduce? Thanks, Ted.
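For reference, a restart on HDP 1.x for Windows is typically done with the bundled service control scripts rather than the Linux `start-dfs.sh`/`start-mapred.sh` scripts. A rough sketch, assuming the default install path and script names (adjust both to your installation):

```shell
:: Restart the Hadoop services on this node so the edited conf\slaves is picked up.
:: Paths and script names are assumptions based on a typical HDP for Windows layout.
cd /d C:\hdp\hadoop\bin
stop_local_hdp_services.cmd
start_local_hdp_services.cmd

:: Afterwards, verify that the datanodes have registered with the NameNode:
hadoop dfsadmin -report
```

Note that in Hadoop 1.x the slaves file only controls which hosts the cluster start scripts reach out to; each slave's own DataNode service must also be running and able to reach the NameNode address in its configuration.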
I installed HDP for Windows on a total of 4 nodes successfully. However, when I go to the NameNode web interface to check DFS health, it reports the following:
6 files and directories, 0 blocks = 6 total. Heap Size is 41.38 MB / 3.87 GB (1%)
Configured Capacity : 0 KB
DFS Used : 0 KB
Non DFS Used : 0 KB
DFS Remaining : 0 KB
DFS Used% : 100 %
DFS Remaining% : 0 %
Live Nodes : 0
Dead Nodes : 0
Decommissioning Nodes : 0
Number of Under-Replicated Blocks : 0
There are no datanodes in the cluster.
I get the same report when I run ‘hadoop dfsadmin -report’ at the command prompt.
On another note, the slaves and masters files in the conf directory each have only one entry, ‘localhost’. I tried changing the slaves file to include the hostnames of the slaves and restarted the services, but still no luck.
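For anyone following along, the slaves file is a plain text file listing one worker hostname per line. A minimal sketch, with hypothetical hostnames:

```
# conf/slaves — one slave hostname per line (hostnames below are placeholders)
slave1.example.com
slave2.example.com
slave3.example.com
```

The masters file works the same way but lists the secondary NameNode host. Hostnames here must resolve from the master node (via DNS or the hosts file), or the start scripts and DataNode registration will silently fail.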
Quite stuck here. Appreciate any help I can get. Thanks.