I am running the Hortonworks Sandbox 1.3 on Hyper-V.
Normally, when I run a MapReduce job over 1 million records in a single file, everything works fine. But when I tried 60 million records split across several input files, the job started, mapped a few percent, and then the sandbox began showing this error over and over:
INFO ipc.Client: Retrying connect to server: sandbox/192.168.11.5:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)
Since then I have tried restarting the Hadoop services and restarting the whole sandbox, but it always goes one of two ways: either it reacts to my commands very, very slowly, or it fails to complete the MapReduce job without writing anything to the logs. Hadoop starts the job, begins mapping, and hangs at 0% with nothing in the logs and no _temporary directory in the output folder.
It just keeps repeating this "I can't connect to myself" error, even for plain dfs commands. When that happens, I have to restart the NameNode.
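To confirm it really is the NameNode RPC port refusing connections (and not just a slow job), I check whether anything is listening on sandbox:8020, the address from the error above. This is a minimal sketch using a hypothetical helper function and bash's /dev/tcp redirection; the hostname and port are taken from the retry message, everything else is my own assumption:

```shell
#!/usr/bin/env bash
# check_port: hypothetical helper; reports whether a TCP port accepts connections.
# Uses bash's /dev/tcp pseudo-device, so it needs bash, not plain sh.
check_port() {
  local host=$1 port=$2
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    exec 3>&-          # close the test connection
    echo "open"
  else
    echo "closed"
  fi
}

# 8020 is the NameNode RPC port from the retry error; "sandbox" is the
# hostname the client is trying to reach (192.168.11.5 in my setup).
check_port sandbox 8020
```

When this prints "closed" while the NameNode process is supposedly running, that matches the "retrying connect to server" behavior I see from the dfs commands.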
What could be causing this, and why? I was using the Hortonworks Hadoop Sandbox 1.3 out of the box, and working with small amounts of data was fine. The only thing I changed was the amount of input data.
Sorry for my English.