June 07, 2017

Walgreens Exercises Hadoop to Bring Healthy Data to the Enterprise

The decision-making process for a customer buying products in retail can range from days to seconds. These spontaneous buying patterns create a business challenge: retailers must address their data needs just as quickly, or customers will go elsewhere. When you add a full pharmacy to the needs of a retailer, the data, and the problems associated with that data, increase exponentially.

For Walgreens, the business is not just about selling merchandise; the company is also committed to the health of its customers. Walgreens is one of the largest drugstore chains in the U.S., with more than 8,000 stores across all 50 states. Its mission is to be America’s most-loved pharmacy-led health, well-being and beauty retailer. In just a few days, Walgreens will be at this year’s San Jose DataWorks Summit in full force. Gowri Selka, Head of Data Analytics and Corporate Technology, will deliver a keynote. Omkar Patel, Director of Data and Analytics, was just announced as a 2017 Data Hero Finalist. And Gupta Narayanam, Hadoop Architect, and Abey Koshy, IT Manager, will address the need for a retailer like Walgreens to have an enterprise data hub in their breakout session.

At 5:00pm on Tuesday, June 13th, Gupta and Abey will present:

Hadoop Journey at Walgreens

For a brief snapshot of what you have to look forward to in one week, read the abstract below:

Prior to 2014, Walgreens had traditional enterprise data warehouse systems that had reached their capacity limits. Over the last three years we have evolved, learned lessons, and experienced successes and failures. Our initial adoption of Hadoop came from the need to run complex analytics that simply did not scale on MPP RDBMS. Our business data demands were rapidly increasing, and the 8-to-12-week extract, transform, and load turnaround cycles were not an acceptable delivery timeframe in the retail space. A self-service model where data lands on a distributed platform, schema is applied where necessary, and processing happens at scale was a necessary paradigm for enabling business value. Our journey started with a single use case and has now evolved into an enterprise data hub. We will discuss the following points:

- Evolution of our infrastructure profile, streamlining the hardware provisioning cycle, and our hybrid deployment model (on-premise and cloud).
- Operations: how SmartSense has helped us proactively tune our cluster, and which operational tests we use for benchmarking the cluster.
- Monitoring: how we monitor and the tools required for enterprise-grade monitoring.
- Security and governance: how we progressed from non-compliance to enterprise grade using Ranger, Knox, Kerberos, HP Voltage, encryption at rest, and many other services.
- Third-party integration with HDP: what we learned and how we overcame the challenges.
- Lastly, our disaster recovery strategy: what is driving the need for DR and the key capabilities required.
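The abstract's self-service, schema-on-read model is the heart of the data hub idea: raw files land on the cluster untouched, and each consumer applies a schema only when reading. The talk does not include code, but a minimal sketch of that pattern, assuming Spark on HDP with hypothetical HDFS paths and column names, might look like this:

```python
# Minimal schema-on-read sketch with PySpark (hypothetical paths and column names).
# Raw data lands in HDFS as-is; a schema is applied only when the data is read,
# instead of waiting on a multi-week ETL cycle to load it into a warehouse.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# Schema defined by the consumer at read time, not enforced at load time.
sales_schema = StructType([
    StructField("store_id", StringType()),
    StructField("sku", StringType()),
    StructField("sale_date", DateType()),
    StructField("amount", DoubleType()),
])

# Files were simply copied into the landing zone; nothing was transformed up front.
raw_sales = (spark.read
    .option("header", "true")
    .option("dateFormat", "yyyy-MM-dd")
    .schema(sales_schema)
    .csv("hdfs:///landing/pos/sales/"))   # hypothetical landing-zone path

# Process at scale: e.g., daily revenue per store, computed directly over the raw files.
daily_revenue = raw_sales.groupBy("store_id", "sale_date").sum("amount")
daily_revenue.write.mode("overwrite").parquet("hdfs:///curated/sales/daily_revenue/")
```

Because the schema lives with the job rather than the load process, a new use case can start querying landed data immediately, which is what collapses the 8-to-12-week turnaround the abstract describes.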

There is still time to register for the San Jose DataWorks Summit. Hearing from some amazing Data Heroes and attending informative breakout sessions will drive opportunities for your own business. Register today!
