
HDP on Linux – Installation Forum

HDP2.0: Datanode does not work, DataXceiver error processing unknown operation

  • #30875


    I am evaluating HDP 2.0 on CentOS 6.3 running on Windows Azure, but I cannot get the datanode working on my single-node cluster installation.

    The datanode log file contains the following entry:

    2013-08-06 08:16:24,813 ERROR datanode.DataNode – hdp201:50010:DataXceiver error processing unknown operation src: / dest: /
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(

    I have opened the following ports in the Windows Azure Firewall: 50010,50020,50070,50075,50090,8020,8010. The CentOS machine does not have a firewall (iptables is disabled).

    Datanode registration did not work at first, but it did after I added “ hdp201” to the /etc/hosts file, and the datanode is now reported as running.

    However, when I try to run the following commands, they do not work:

    su $HDFS_USER

    hadoop fs -ls
    ls: `.': No such file or directory

    hadoop fs -mkdir /user/hdfs
    mkdir: `/user/hdfs': No such file or directory

    Note that the environment variables, like HDFS_USER, are correctly set.
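The `ls: `.'` error above is expected on a fresh install: `hadoop fs -ls` with no path argument lists the current user's HDFS home directory (/user/&lt;username&gt;), which does not exist until someone creates it. A minimal sketch of fixing this, assuming the HDFS superuser account is named `hdfs`:

```shell
# Switch to the HDFS superuser (assumed to be "hdfs" here).
su - hdfs

# In Hadoop 2.x, -p creates missing parent directories,
# so /user is created together with /user/hdfs.
hadoop fs -mkdir -p /user/hdfs

# With the home directory in place, a bare -ls no longer errors out.
hadoop fs -ls
```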

    Thank you in advance for your help!

  • Author
  • #30923
    Sasha J

    Try this command:
    hadoop fs -ls /

    It will show you which folders actually exist in your HDFS.
    Most likely, your /user folder does not exist.

    Also, make sure that your datanode process is actually running.

    And as a last recommendation: forget about Azure, just use the Sandbox (downloadable from the Hortonworks web site) at your pleasure on your local computer.

    Thank you!

    PS: this is actually the wrong thread for questions on HDP 2.0…
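To check whether the datanode process is actually up, a quick sketch (the log path is an assumption; HDP installs typically log under /var/log/hadoop/hdfs):

```shell
# jps ships with the JDK and lists running JVMs by main class name;
# a healthy datanode shows up as "DataNode".
jps | grep DataNode

# Or inspect the tail of the datanode log for startup errors
# (path and filename pattern are assumptions; adjust to your install).
tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
```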


    Hi Sasha,

    Thank you for your reply! I have tried “hadoop fs -ls /” and it returns nothing (at least no error), which should be fine for an empty HDFS. However, the HDP 2.0 documentation says that I should run “hadoop fs -mkdir /user/hdfs” right after formatting HDFS. Maybe the beta documentation is misleading in that respect (see

    I am using the manual installation because the Sandbox is not available as a Hyper-V image, which could be uploaded to Azure. I want to use Azure because it allows me to set up 10 or more virtual machines.

    Please note that the datanode log still shows “hdp201:50010:DataXceiver error processing unknown operation src: / dest: /”, although the directory “/user” could be created with “hadoop fs -mkdir /user”. You are right: running “hadoop fs -mkdir /user/hdfs” requires “/user” to exist first.

    Thank you!


    Sasha J

    I see your point…
    Please look at the datanode logs on both mentioned machines: src: / dest: /
    Also, take a look at the namenode log.

    People usually do not use Azure for such things; they use VMware virtualization, VirtualBox, or EC2.

    Thank you!
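Since the src/dest addresses are blanked in the quoted log line, another way to see who is talking to the DataXceiver port is to watch connections to 50010 on the datanode host. A sketch, assuming the `ss` utility (from iproute2) is available:

```shell
# List established TCP connections on the datanode transfer port (50010).
# An unexpected peer here (for example, a cloud health-check probe that
# speaks plain TCP rather than the data transfer protocol) would explain
# "DataXceiver error processing unknown operation" entries in the log.
ss -tn state established '( sport = :50010 )'
```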

The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.
