Deploying a lock-in free data platform is critical for an enterprise. By this, we mean using non-proprietary code and implementing interoperability to eliminate the risk of depending on a single vendor for your current or future needs.
Over two-thirds of respondents to our survey agreed that maintaining freedom of choice was a key criterion when selecting the Hortonworks Data Platform. (Source: TechValidate TVID 4A8-731-250.)
They didn’t want to be limited to what one vendor could offer – they wanted platform portability, industry-wide standards, and choice of third-party application support from a broader ecosystem.
The following shows how Hortonworks provides a lock-in free data platform.
Using Non-Proprietary Code
Not every open source data platform is free of vendor lock-in. In fact, many open source data platforms that came to market early (sometimes called “hybrid-sourced”) bring lock-in risk in an unexpected way. Any platform with proprietary inclusions subjects the customer to limitations imposed by the vendor that owns that code, and the more add-ons and applications that leverage the proprietary code, the greater the lock-in risk. Hortonworks, by contrast, uses no proprietary code at all. Because we develop 100% of our software in the Apache Hadoop community, there are no proprietary extensions*, and this approach eliminates lock-in risk for our customers.
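To illustrate what “no proprietary extensions” can mean in practice, an application built on such a platform can declare its build dependencies directly against the stock Apache Hadoop artifacts published to Maven Central, with no vendor-specific repositories or forked libraries. The fragment below is only a sketch; the version number is illustrative, not a statement of what any particular release ships:

```xml
<!-- Illustrative Maven dependency on the stock Apache Hadoop client artifact.
     Because the platform carries no proprietary extensions, code compiled
     against this artifact does not need a vendor repository or a forked
     dependency, and it remains portable across compliant distributions. -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version> <!-- example version only -->
</dependency>
```

By comparison, a platform with proprietary inclusions would typically require a vendor-hosted repository and vendor-suffixed artifact versions, which is exactly where the lock-in creeps in.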
Implementing Interoperability
Industry standards are essential for ecosystem interoperability. They are vital for the adoption of technology because they allow customers to port their code to other systems and use various Hadoop applications together. With many Hadoop distributions and data-rich applications available, however, a lack of standardization creates interoperability and portability challenges.
The Apache community has a clear process for managing the Hadoop ecosystem roadmap, but nothing requires vendors to comply with that roadmap or with the community release process. Each vendor can create its own adaptation of the open source code, or can create proprietary software by going off the trunk. Our strategy is to stay 100% compliant with the community release process and to always stay on the Apache trunk.
One way companies like Hortonworks are working to address the issue of standardization is through the Open Data Platform Initiative (ODPi)**. ODPi currently has 27 participating data analytics companies, including vendors such as Hortonworks, EMC, SAS, Pivotal, and IBM. These companies have pledged to use the same versions of Apache Hadoop and Apache Ambari as the common core of their platforms. As a result, all ODPi platforms will be interoperable, and any data analytics solution can be ported to any ODPi-compliant Hadoop distribution.
Technology Co-Development and Ecosystem Partnerships
Forming deep technology partnerships with ecosystem vendors and enterprises is important to delivering the most solution choices to our customers. We have various programs*** that allow us to forge these deeper partnerships.
Lock-in is a real risk, which is why enterprises often look for data platform providers that offer genuinely open software with no proprietary inclusions, interoperability, industry-wide standardization, technology co-development, and ecosystem partnerships. Hortonworks offers all of these, putting as many choices as possible in the customer’s hands.
*Learn more about what 100% community development is, how it works, and what it means to stay on the trunk in part 2 of our blog series on Insights from Our Customer Experience Survey.
**You can also visit www.odpi.org to learn more about ODPi.
***Please visit our partner program website to learn more about Hortonworks ecosystem partnerships.