Web App mapreduce job failing

This topic contains 2 replies, has 2 voices, and was last updated by tedr 1 year, 1 month ago.

  • Creator
    Topic
  • #29963

    John Brinnand
    Participant

    I have a MapReduce job which I am executing from a web application. The job starts up, but it fails with the following well-known error:

    java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
    Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    …………

    I have followed the general advice for fixing this problem by setting my map output key and value classes to Text and IntWritable, but it has no effect. I keep getting the same error. Here is the code.

    public class WordCountGamma extends Configured implements Tool {
        public int run(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "wordcount");
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);

            job.setInputFormatClass(TextInputFormat.class);
            job.setOutputFormatClass(TextOutputFormat.class);

            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.setInputPaths(job, new org.apache.hadoop.fs.Path(args[0]));
            FileOutputFormat.setOutputPath(job, new org.apache.hadoop.fs.Path(args[1]));

            job.waitForCompletion(true);
            return 0;
        }
    }

    public class WordCountMapper extends Mapper {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value,
                OutputCollector output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public class WordCountReducer extends Reducer {
        public void reduce(Text key, Iterator values,
                OutputCollector output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    I am using hadoop-core-1.2.0 and running on a Hortonworks VM. What am I missing?

    Thanks,

    John

Viewing 2 replies - 1 through 2 (of 2 total)

The topic ‘Web App mapreduce job failing’ is closed to new replies.

  • Author
    Replies
  • #29968

    tedr
    Moderator

    Hi John,

    Thanks for letting us know that you found and fixed the issue.

    Ted.

    #29964

    John Brinnand
    Participant

    Okay, folks: I found the error. It was a library incompatibility. This issue is resolved.
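
    For anyone who lands here with the same stack trace, a likely specific cause, and one consistent with the posted code, is mixing Hadoop's old org.apache.hadoop.mapred API (OutputCollector, Reporter) with the new org.apache.hadoop.mapreduce API. The posted WordCountMapper extends the new-API Mapper without type parameters and declares map() with the old-API signature, so that method never overrides Mapper.map(). The framework therefore runs the default identity mapper, which passes the TextInputFormat key (a LongWritable byte offset) straight through as the map output key, and no setMapOutputKeyClass() call can change that. Below is a minimal sketch of the same two classes written consistently against the new API (each public class would go in its own source file); class names mirror the original post:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        // @Override is the safety net: if the signature did not match
        // Mapper.map(), this would fail to compile instead of silently
        // falling back to the identity mapper.
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    With these in place, the driver above should work as posted. When submitting from a web application it is also worth calling job.setJarByClass(WordCountGamma.class) in the driver so Hadoop can locate the jar that contains the mapper and reducer classes.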
