MapReduce Forum

Failed to run MapReduce job

  • #47285
    Gabriel Ciuloaica
    Participant

    Hi,

    I have installed Hortonworks on AWS using Ambari, with only the default configuration. I’m trying to run a MapReduce job that needs to connect to a Kafka broker on a different box. From the same machine I can successfully connect to and consume messages from the Kafka broker, but when I run similar code from Hadoop, I get the following exception:

    14/01/22 19:10:30 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1390409931641_0005
    14/01/22 19:10:30 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.net.ConnectException: Connection refused
    Exception in thread "main" java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect(Native Method)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
    at kafka.network.BlockingChannel.connect(Unknown Source)
    at kafka.consumer.SimpleConsumer.connect(Unknown Source)
    at kafka.consumer.SimpleConsumer.getOrMakeConnection(Unknown Source)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(Unknown Source)
    at kafka.consumer.SimpleConsumer.send(Unknown Source)
    at kafka.javaapi.consumer.SimpleConsumer.send(Unknown Source)
    at com.devsprint.kafka.hadoop.mr.KafkaInputFormat.getSplits(KafkaInputFormat.java:45)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at com.devsprint.kafka.hadoop.mr.HadoopJob.run(HadoopJob.java:102)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at com.devsprint.kafka.hadoop.mr.HadoopJob.main(HadoopJob.java:107)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

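    Based on the stack trace, getSplits() fails while opening a SimpleConsumer connection to the broker during job submission, i.e. on the client host, before any tasks run. A minimal sketch of that kind of topic-metadata call follows; the broker host, port, and topic name are placeholders, not the actual job configuration:

    import java.util.Collections;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class BrokerMetadataProbe {
        public static void main(String[] args) {
            // Placeholder broker coordinates; the real values would come from the job configuration.
            String brokerHost = "kafka-broker.example.com";
            int brokerPort = 9092;

            // Same client-side connection path the stack trace shows failing:
            // SimpleConsumer.send -> getOrMakeConnection -> BlockingChannel.connect
            SimpleConsumer consumer =
                    new SimpleConsumer(brokerHost, brokerPort, 10000, 64 * 1024, "split-probe");
            try {
                TopicMetadataRequest request =
                        new TopicMetadataRequest(Collections.singletonList("my-topic")); // placeholder topic
                TopicMetadataResponse response = consumer.send(request);
                // getSplits() would typically build one input split per partition from this metadata.
                System.out.println("Partitions found: "
                        + response.topicsMetadata().get(0).partitionsMetadata().size());
            } finally {
                consumer.close();
            }
        }
    }
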
    Any clues?
