May 02, 2017

Symantec: Real-time and Batch Processing for Security Telemetry

With the San Jose DataWorks Summit (June 13-15) just two months away, we're busy finalizing an impressive lineup of speakers and business use cases. This year our Enterprise Adoption Track will feature Vivek Madani, Sr. Principal Software Engineer, and Srinivas Vippagunta, Sr. Director of Engineering, at Symantec.

Symantec helps consumers and organizations secure and manage their information-driven world. The Symantec Cloud Platform team turned to Hortonworks and Hortonworks Data Platform (HDP®) to speed up its ingestion and processing of 500,000 security log messages per second (40 billion messages per day). Using HDP in the cloud, the team reduced its average time to analysis from four hours to two seconds.

Join Vivek and Srinivas on Wednesday, June 14, at 3pm as they present:

Real-time and Batch Processing for Security Telemetry

Abstract:
At Symantec, we collect staggering quantities of security telemetry data from our 175MM endpoints and over 57MM attack sensors: more than 25 TB every day. We take you through our cloud journey and describe how we handle security telemetry and data ingestion into our data lake for both streaming and batch use cases. Kafka, Storm, and Trident form the backbone of our streaming data platform, which can scale beyond 2MM events/sec. Our batch data platform follows a tiered storage model for cost efficiency: we store our data in the ORC file format, using HDFS for our hot data and S3 for our warm data. We also share our learnings on Hive with tiered storage and our hits and misses with Hive transactions. We secure our stack using a combination of Kafka SSL and ACLs, AWS security groups, Ranger, and Knox. Lastly, we will talk about our underlying infrastructure deployment on AWS and how we effectively use Cloudbreak along with our home-grown tools to manage our data lake.
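To make the streaming half of this concrete, here is a minimal sketch of a Trident topology that consumes raw telemetry from Kafka, in the spirit of the Kafka/Storm/Trident backbone the abstract describes. It assumes the storm-kafka-client opaque Trident spout from Storm 1.x; the broker address, topic name, parallelism settings, and the ParseTelemetry function are illustrative placeholders, not Symantec's actual code.

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.kafka.spout.trident.KafkaTridentSpoutOpaque;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TelemetryTopology {

    /** Hypothetical parse step: turns a raw Kafka record value into a typed event. */
    public static class ParseTelemetry extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            String raw = tuple.getStringByField("value");
            // Real parsing, validation, and enrichment would happen here.
            collector.emit(new Values(raw.trim()));
        }
    }

    public static void main(String[] args) throws Exception {
        // Opaque transactional spout: lets Trident keep exactly-once state
        // updates even when Kafka batches are replayed after a failure.
        KafkaTridentSpoutOpaque<String, String> spout = new KafkaTridentSpoutOpaque<>(
                KafkaSpoutConfig.builder("broker1:9092", "security-telemetry").build());

        TridentTopology topology = new TridentTopology();
        // Downstream of the parse step, a real pipeline would add aggregations
        // and state writes into the data lake.
        topology.newStream("telemetry", spout)
                .parallelismHint(16) // scale with the topic's partition count
                .each(new Fields("value"), new ParseTelemetry(), new Fields("event"));

        Config conf = new Config();
        conf.setNumWorkers(4);
        StormSubmitter.submitTopology("telemetry-ingest", conf, topology.build());
    }
}
```

The opaque spout is the piece that lets a pipeline like this claim exactly-once semantics at high event rates: Trident tracks per-batch metadata in its state so that replayed Kafka batches do not double-count.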

Speakers:
Vivek Madani is a Sr. Principal Software Engineer in the Cloud Platform Engineering group at Symantec. He is a big data enthusiast, working mainly on Storm, Kafka, Hadoop, and other big data technologies. As part of the Cloud Platform Engineering group, he helps product teams architect and develop big data applications. He currently focuses on building Symantec's security data lake on AWS, which ingests and processes up to 25 TB of security telemetry events a day.

Srinivas Vippagunta is a Sr. Director of Engineering at Symantec. He is responsible for the big data infrastructure and data engineering functions supporting enterprise security products, along with security threat and response teams. Prior to joining Symantec, Srinivas held senior roles in the data and analytics space at hyper-growth companies like Dropbox, OpenX, and Groupon, enabling product innovation and driving monetization. At these companies, he had the opportunity to set analytics strategy from the ground up, architect critical data systems for massive scale, build high-performing teams, and evangelize analytics and data science within the organization. Before that, Srinivas held various roles at Yahoo!. He holds a Bachelor's degree in Mechanical Engineering and a Master's in Management Information Systems.


Be sure to register for the DataWorks Summit to catch this presentation and many others!

