The Hortonworks Blog

Posts categorized by: Apache Hadoop

Hadoop jobs have grown 200,000%. No, that’s not a typo. According to Indeed.com, Hadoop is one of the top 10 job trends right now.

When you look at LinkedIn, the number of profiles mentioning SQL is declining (about -4%), while the number of profiles mentioning Hadoop is up 37%. Hadoop is becoming a clear resume differentiator. Updating and maintaining technical skills has always been part of the job and is part of ensuring a long and healthy career.…

Whether they are just beginning or well underway with Big Data initiatives, organizations need data protection to mitigate the risk of breach, assure global regulatory compliance, and deliver the performance and scale to adapt to the fast-changing ecosystem of Apache Hadoop tools and technology.

Business insights from big data analytics promise major benefits to enterprises, but launching these initiatives also presents potential risks. New architectures, including Hadoop, can aggregate different types of data in structured, semi-structured and unstructured forms, perform parallel computations on large datasets, and continuously feed the data lake that enables data scientists to see patterns and trends.…

Airline pricing has always been a mystery to me, a combination of art and science that allows the airline to make as much money as possible on each flight while giving the customer the options and flexibility they want. Under the covers, I know the airlines use complex models to determine how many seats have been sold and how much they can get for the remaining seats. I didn’t realize just how complex those models are, or, more importantly, how large an opportunity the travel industry has to become more customer-centric while staying competitive by harnessing the data now available to it.…

By now, you’re probably well aware of what Hadoop does:  low-cost processing of huge amounts of data. But more importantly, what can Hadoop do for you?

We work with customers across many industries, each with its own specific data challenges, but in talking with so many of them we are also able to see patterns emerge around certain types of data and the value they can bring to a business.

We love to share these kinds of insights, so we built a series of video tutorials covering some of those scenarios:

A more detailed discussion of these types of data is available in our ‘Business Value of Hadoop’ whitepaper.…

We are excited to announce today that Hortonworks is bringing Windows-based Hadoop Operational Management functionality via Management Packs for System Center. These management packs will enable users to deploy, manage and monitor Hortonworks Data Platform (HDP) for both Windows and Linux deployments. The new management packs for System Center will provide management and monitoring of Hadoop from a single System Center Operations Manager console, enabling customers to streamline operations and ensure quality of service levels.…

Four years ago, Arun Murthy filed a JIRA ticket (MAPREDUCE-279) that outlined a re-architecture of the original MapReduce. In the ticket, he described a set of capabilities that would allow processes to share resources more effectively and an architecture that would let Hadoop extend beyond batch data processing.

It turned out that this ticket was prescient of true enterprise requirements for Hadoop. As enterprise adoption accelerated, it became even clearer that supporting multiple processing models, moving beyond batch, was critical for Hadoop to broaden its applicability for mainstream usage in the modern enterprise architecture.…

This post is from Steve Loughran, Devaraj Das & Eric Baldeschwieler.

Over the last few weeks, we have been putting together a prototype, Hoya, which runs HBase on YARN. This work is driven by a few top-level use cases that we have been trying to address. Some of them are:

  • Be able to create on-demand HBase clusters easily, by and/or in apps
    • Potentially with different versions of HBase (for testing, etc.)
  • Be able to configure different HBase instances differently
    • For example, different configs for read/write workload instances
  • Better isolation
    • Run arbitrary co-processors in a user’s private cluster
    • The user will own the data that the HBase daemons create
  • MR jobs should find it simple to create (transient) HBase clusters
    • For Map-side joins where table data is all in HBase, for example
  • Elasticity of clusters for analytic / batch workload processing
    • Stop / Suspend / Resume clusters as needed
    • Expand / shrink clusters as needed
  • Be able to utilize cluster resources better
    • Run MR jobs while maintaining HBase’s low latency SLAs

Hoya is a Java tool and is currently CLI-driven.…

In case you haven’t heard, Hadoop 2.0 is on the way! There are loads more new features than I can begin to enumerate, including lots of interesting enhancements to HDFS for online applications like HBase. One of the most anticipated new features is YARN, an entirely new way to think about deploying applications across your Hadoop cluster. It’s easy to think of YARN as the infrastructure necessary to turn Hadoop into a cloud-like runtime for deploying and scaling data-centric applications.…
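To make that idea concrete, here is a minimal, hypothetical sketch of what submitting an application to YARN through its Java client API can look like in Hadoop 2. The ApplicationMaster class name, application name, queue and resource sizes below are placeholders I have chosen for illustration, not details from the post:

import java.util.Collections;

import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class YarnSubmitSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the ResourceManager using the cluster configuration on the classpath.
    YarnConfiguration conf = new YarnConfiguration();
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();

    // Ask the ResourceManager for a new application id.
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
    appContext.setApplicationName("my-yarn-app");
    appContext.setQueue("default");

    // Describe how to launch the ApplicationMaster container.
    // com.example.MyApplicationMaster is a placeholder for your own AM class.
    ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
    amContainer.setCommands(Collections.singletonList(
        "$JAVA_HOME/bin/java com.example.MyApplicationMaster"
            + " 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout"
            + " 2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
    appContext.setAMContainerSpec(amContainer);

    // Resources requested for the ApplicationMaster itself.
    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(512);
    capability.setVirtualCores(1);
    appContext.setResource(capability);

    // Submit; YARN schedules the AM, which in turn negotiates containers for the app.
    ApplicationId appId = yarnClient.submitApplication(appContext);
    System.out.println("Submitted application " + appId);
  }
}

The pattern mirrors the DistributedShell example that ships with Hadoop 2: the client only asks for the ApplicationMaster, and the ApplicationMaster then requests whatever containers the application needs.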

Today Concurrent announced that we have certified the Hortonworks Data Platform against the Cascading application framework. As Hadoop adoption continues to grow, more organizations are looking to take advantage of new data types and build new applications for the enterprise. By combining our enterprise-grade data platform and its unparalleled and growing ecosystem with the power, maturity and broad platform support of Concurrent’s Cascading application framework, we have now closed the modeling, development and production loop for all data-oriented applications.…

Over the past year, customers have told us they want to store all their data in one place and interact with it in multiple ways… they want to use Hadoop, but in order to do so, it needs to extend beyond batch. Among other things, it also needs to be interactive and real-time.

This is the entire principle behind YARN, which Arun Murthy and the team at Hortonworks, together with others in the community, have been working on for more than 5 years! …

There are plenty of server and storage options for the wave of data that is being collected and analyzed.  New platforms such as Apache™ Hadoop® provide the opportunity to make all the new data types being collected useful.  However, like any other platform, performance varies depending on the underlying servers being used.  There is great promise in what Hadoop can deliver in terms of business value, and the ecosystem is continuously growing with companies making strides to make Hadoop easier to deploy and manage.…

This week we’re at the Red Hat Summit along with many others enjoying the great discussions within the community. As part of the summit, we are delighted to announce extended collaboration with Red Hat to continue to advance open source big data community projects.

Some details on the three areas of collaboration forming the announcement:

  • Enhancing Apache Ambari to support the management of Hadoop-compatible file systems, such as GlusterFS. With this integration, users will be able to provision, deploy, monitor and manage alternative file systems with Ambari, further cementing Ambari’s position as the standard for Hadoop management.

Successful social advertising campaigns today take a special blend of data intelligence and automation, enabling businesses to link fluctuations in media and tactics to sales and revenues. Those with better data than their competitors will be positioned to outperform their peers tactically and, if they use that data effectively, strategically. At one of the fastest-growing Advertising Technology startups, harnessing Big Data made big sense in a highly competitive business environment.

The Advertising Technology startup sells Social Ad Campaign management software and wanted its in-house engineering team to focus on its core product and to outsource certain areas of its non-core technology needs.…

Talend Open Studio for Big Data provides an intuitive set of tools that make dealing with data in the Hadoop world (and Hortonworks Data Platform in particular) a lot easier. We often use the tools to speed delivery of a proof of concept or to operationalize the movement of data from sources like web logs and machine sensors into HDFS. It is simple to use and typically takes only minutes to perform something that once took hours in a script.…
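For comparison, the kind of hand-coded load step that such tooling wraps can be sketched with the HDFS FileSystem API. The NameNode address and the source and target paths below are made-up examples, not values from the post:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebLogLoader {
  public static void main(String[] args) throws Exception {
    // Hypothetical NameNode URI; point this at your own cluster.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

    // Hypothetical source and target paths for a web server access log.
    Path localLogs = new Path("/var/log/httpd/access_log");
    Path target = new Path("/data/weblogs/access_log");

    // Copy the local log file into HDFS; scheduling, error handling and
    // transformation are what the graphical ETL tooling adds on top.
    fs.copyFromLocalFile(localLogs, target);
    fs.close();
  }
}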

The Hadoop goodness just keeps on flowing as we’ve delivered new releases and new content in the past 10 days. Let’s recap.

HDP 1.3 Release. This milestone release takes advantage of improved performance in Hive 0.11 along with delivery on a series of enterprise requirements including NFS access to HDFS, improved MTTR for HBase, business continuity through HDFS and HBase snapshots, optimized connectors to Oracle and Netezza and the latest release of Ambari for management and operations.…
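As a rough illustration of the HBase snapshot capability mentioned above, a snapshot can be taken from the Java admin API; the table and snapshot names here are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseSnapshotSketch {
  public static void main(String[] args) throws Exception {
    // Reads hbase-site.xml from the classpath to locate the cluster.
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Take a point-in-time snapshot of a (hypothetical) table for backup.
    admin.snapshot("weblogs-snapshot-20130701", "weblogs");

    // The snapshot can later be restored in place or cloned to a new table:
    // admin.restoreSnapshot("weblogs-snapshot-20130701");
    // admin.cloneSnapshot("weblogs-snapshot-20130701", "weblogs_restored");

    admin.close();
  }
}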

