
HDFS Forum

mapreduce job not running on hadoop

  • #10395

    I ran a simple WordCount job on MapReduce to view the workflow in Oozie, but it is failing with the error message below:

    2012-09-27 20:43:51,155 INFO [AsyncDispatcher event handler] Diagnostics report from attempt_1348722981947_0034_m_000000_0: Error: Type mismatch in key from map: expected, received
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(
    at org.wordcount.WordCount$
    at org.wordcount.WordCount$
    at org.apache.hadoop.mapred.MapTask.runOldMapper(
    at org.apache.hadoop.mapred.YarnChild$
    at Method)
    at org.apache.hadoop.mapred.YarnChild.main(

    Please help.

    Below is the logic I have in my class:

    package org.wordcount;

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;

    public class WordCount {

        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                String line = value.toString();
                StringTokenizer tokenizer = new StringTokenizer(line);
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, one);
                }
            }
        }

        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);

            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }
  • Author
  • #10397
    Sasha J


    The stack trace shows that somewhere in your code you are passing a Text variable where the system expects a LongWritable. It looks like the map method in Map is getting a Text value for the key instead of a LongWritable.
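    This kind of mismatch slips through because the old raw-typed mapred API only checks the emitted key's class at runtime, inside the output collector. The following is an illustrative stand-in sketch in plain Java, not Hadoop code: the Collector class and TypeMismatchSketch are hypothetical, with String standing in for Text and Long for LongWritable, to show how a key of the wrong class surfaces as a runtime "Type mismatch in key from map" error rather than a compile error.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class TypeMismatchSketch {
        // Hypothetical collector that, like Hadoop's MapOutputBuffer, compares the
        // runtime class of each emitted key against the class the job was
        // configured to expect.
        static class Collector {
            private final Class<?> expectedKeyClass;
            private final List<Object> keys = new ArrayList<>();

            Collector(Class<?> expectedKeyClass) {
                this.expectedKeyClass = expectedKeyClass;
            }

            void collect(Object key, Object value) {
                if (!expectedKeyClass.isInstance(key)) {
                    // Mirrors the shape of the error in the stack trace above.
                    throw new RuntimeException("Type mismatch in key from map: expected "
                            + expectedKeyClass.getName()
                            + ", received " + key.getClass().getName());
                }
                keys.add(key);
            }
        }

        public static void main(String[] args) {
            // "Job" configured to expect String keys (standing in for Text).
            Collector output = new Collector(String.class);
            output.collect("word", 1);       // matching key type: accepted
            try {
                output.collect(42L, 1);      // wrong key type: fails only at runtime
            } catch (RuntimeException e) {
                System.out.println(e.getMessage());
            }
        }
    }
    ```

    Declaring the generic parameters (e.g. Mapper<LongWritable, Text, Text, IntWritable> and OutputCollector<Text, IntWritable>) moves this check to compile time, which is why adding them to the raw-typed code above is worth doing regardless of the root cause.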



    Hi, thanks for the quick response. I need to pass a Text variable as per my requirement. The program runs when I execute it as a normal Java program, but it does not work on MapReduce.

    Please help.

    Sasha J

    Then it looks like you need to change:

    public void map(LongWritable key, Text value, OutputCollector output, Reporter reporter) throws IOException {

    to:

    public void map(Text key, IntWritable value, OutputCollector output, Reporter reporter) throws IOException {


    Thanks for the quick response, but the same program runs as a standalone Java program and only fails when I run it as a MapReduce job on the cluster. Do we need any extra settings on the cluster to run this?

    Sasha J

    I am somewhat curious about what you mean by “standalone java program.”
    You shouldn’t need any special configurations on the cluster.
    The only other thing I can think of is whether the Hadoop version running on the cluster supports MapReduce v2. Are you submitting your job to a cluster that was installed with HMC? Currently Hortonworks does not support MapReduce v2, as it is still in alpha.

The topic ‘mapreduce job not running on hadoop’ is closed to new replies.
