Back in late June, when Hortonworks was officially announced at Hadoop Summit, we explained that our strategy would focus on accelerating the development and adoption of Apache Hadoop. We made bold statements about the opportunity for Apache Hadoop to become the de facto platform for big data. We even predicted that half of the world's data would be processed by Apache Hadoop within five years.
We also talked about how, for all of that to happen, we needed to address the technical and knowledge gaps that exist. We needed to invest heavily in engineering to make Hadoop easier to install, manage, and use for enterprises, and more open and extensible for a growing ecosystem of technology and service providers.
Today we are making a series of announcements that mark an important first step in delivering on those promises: