It was 10 years ago today (Feb 2) that my first patch (https://issues.apache.org/jira/browse/NUTCH-197) went into the code that two days later became Hadoop (https://issues.apache.org/jira/browse/HADOOP-1).
I had been working on Yahoo Search's WebMap, the back end that analyzed the web for the search engine. We had been building a C++ implementation of GFS and MapReduce, but after hiring Doug Cutting we decided it would be easier to get Yahoo's permission to contribute to code that was already open source than to open source our C++ project.
Last week, I did some software archaeology and checked out the code that Doug Cutting, Mike Cafarella, and I (via my small patch!) wrote. I'd like to encourage you to check it out to see how far Hadoop has come over the years. To make it easy, I created a Docker image at https://github.com/hortonworks/hadoop0 that lets you play with that early version of Nutch DFS and MapReduce. I also backported the WordCount example that I wrote for Hadoop so that it runs against Nutch and included it in the Docker container.
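For readers who have never seen it, here is a minimal sketch of WordCount written against the classic org.apache.hadoop.mapred API. Treat it as illustrative rather than the exact backported code in the Docker image; the Nutch-era version lives in different packages (org.apache.nutch.mapred) with slightly different interfaces.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

  // Mapper: emit (word, 1) for every token in the input line.
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, one);
      }
    }
  }

  // Reducer (also used as combiner): sum the counts for each word.
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}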
Some fun points to notice:
There is no tracking of users or creation/submission times.
The JobTracker has a Web UI, but it is really primitive. The NameNode doesn’t have a UI at all.
The MapReduce job and task names are all randomly generated.
There isn’t a Secondary NameNode, so you need to restart your NameNode every couple of days to compact the edit log.
The reduce tasks poll each TaskTracker at random, asking whether it has each specific map output.
Rather than submitting jobs programmatically, the developer was expected to write an XML file that described the job (see the sketch after this list).
There is no support for retrying failed MapReduce tasks; any failed task kills the entire job.
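Here is a hypothetical sketch of what such an XML job descriptor might have looked like. The property names below (mapred.input.dir, mapred.mapper.class, and so on) are the keys that later Hadoop releases used in their configuration files; I am assuming the Nutch-era format was similar, so treat the exact names and paths as illustrative.

<?xml version="1.0"?>
<!-- Hypothetical descriptor for the WordCount job above.
     Property names follow later Hadoop configuration keys and are
     assumptions about the Nutch-era format. -->
<configuration>
  <property>
    <name>mapred.job.name</name>
    <value>wordcount</value>
  </property>
  <property>
    <name>mapred.input.dir</name>
    <value>/user/owen/books</value>
  </property>
  <property>
    <name>mapred.output.dir</name>
    <value>/user/owen/counts</value>
  </property>
  <property>
    <name>mapred.mapper.class</name>
    <value>WordCount$Map</value>
  </property>
  <property>
    <name>mapred.reducer.class</name>
    <value>WordCount$Reduce</value>
  </property>
</configuration>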
It has been an amazing 10-year journey, taking Hadoop from a small, unknown project to the world's big data platform. Another measure of how far we've come: NDFS had 5kloc and MapReduce had 6kloc back in February 2006. Compare that to the 300kloc added to the Hadoop project in 2015 alone. (http://ajisakaa.blogspot.com/2016/01/the-activities-of-apache-hadoop.html?m=1)
In honor of Hadoop's 10th birthday, be sure to attend one of our two 10 Year Anniversary Parties – click here.