Real-Time Big Data Enabled Cybersecurity Analytics
Apache Metron is a big data cybersecurity application framework that enables a single view of diverse, streaming security data at scale to aid security operations centers in rapidly detecting and responding to threats.
Apache Metron is a streaming analytics application that makes it faster and easier for security operations personnel to do their jobs. It is a next-generation SOC (security operations center) data analytics and response application that integrates a variety of open source big data technologies into a centralized tool for security monitoring and analysis.
It provides the ability to ingest, process, and store diverse data feeds at scale, including security data feeds, logs, and network metadata. On top of that, it offers log aggregation, full packet capture indexing and storage, advanced behavioral analytics, and data enrichment, while applying the most current threat-intelligence information to security telemetry within a single platform.
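The enrichment step described above can be illustrated with a minimal sketch: a raw telemetry event is parsed, its IP addresses are checked against a threat-intelligence feed, and matching indicators are attached before the event is stored. The feed format, field names, and `enrich` function here are illustrative assumptions, not Metron's actual API.

```python
# Hypothetical sketch of a threat-intel enrichment step applied to one
# streaming telemetry event. Field names and feed format are assumptions.
import json

# Toy threat-intelligence feed: known-bad IPs mapped to an indicator label.
THREAT_INTEL = {
    "203.0.113.7": "known-c2-server",
}

def enrich(event_json: str, intel: dict) -> dict:
    """Parse one raw telemetry event and attach any threat-intel hits."""
    event = json.loads(event_json)
    hits = [intel[ip] for ip in (event.get("src_ip"), event.get("dst_ip"))
            if ip in intel]
    event["threat_intel_hits"] = hits
    event["is_alert"] = bool(hits)  # flag the event for the SOC if anything matched
    return event

raw = '{"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 4096}'
print(enrich(raw, THREAT_INTEL))
```

In a real deployment this lookup runs continuously against high-rate streams and far richer feeds (geolocation, DNS, asset inventory), but the shape of the operation is the same: parse, look up, tag, forward.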
Apache Metron consists of four key capabilities:

1. A mechanism to capture, store, and normalize any type of security telemetry at extremely high rates.
2. Real-time processing and application of enrichments, such as threat intelligence, geolocation, and DNS information, to the telemetry being collected.
3. Efficient information storage for long-term retention and analysis.
4. An interface that gives a security investigator a centralized view of data and alerts passed through the system.
Metron provides the ability to ingest, correlate and store massive amounts of operational and cyber data in a single platform to identify and triage anomalies, benefiting all SOC personnel.
| Role | Benefit | What Metron Provides |
| --- | --- | --- |
| CIO/CISO | Single view of risk; improved risk mitigation; proactive risk strategies | A single lens through which all enterprise data can be correlated, including security, network, and telemetry data, as well as business sources such as HR and finance |
| Security Engineer | Security processes and tools with a maintainable lifecycle | An integrated solution that improves efficiency by combining multiple point tools into a single one |
| Security Architect | An architecture that enables security by preventing threats | An integrated cybersecurity architecture |
| SOC Analyst | Increased proficiency and efficacy | Saves months of time typically spent looking at hundreds of thousands of alerts created by noisy rules and signatures |
| SOC Investigator | Removes many of the steps a traditional SOC environment requires to investigate complex attacks such as APTs | Enriches and correlates enterprise data sources to produce real-time cybersecurity events and alerts; automatically finds and correlates relevant data and can identify and act upon the unknown; detects attackers who, after gaining internal user credentials, would otherwise appear as normal users on the network |
| SOC Manager | Easier assignment of Metron cases to analysts; verification of “completed” Metron cases | Automatically creates incidents and cases through integration with workflow and management systems |
| Forensic Investigator | Reduces the time lag of current big data ingest solutions, transforming detection and response to cyber-attacks from eight months to days, or even minutes | “Just-in-time evidence collection” transforms and transports data in real time at massive scale, before cybersecurity data is lost or becomes irrelevant |
| Security Platform Engineer | Streamlined operations and efficient maintenance of cybersecurity tools | A single platform to manage and operate the ingestion, processing, and interaction of cyber-related data across enterprise locations and critical assets |
| Security Data Scientist | An easier way to search, hunt, and perform data-science lifecycle activities | Analytics exposed via a model-as-a-service architecture considerably simplify feature engineering, perhaps the most complex aspect of analytics |
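The model-as-a-service idea mentioned for the security data scientist can be sketched simply: a trained analytic is wrapped behind a single scoring entry point that the platform calls per event. The feature name and the trivial z-score model below are assumptions made for illustration, not Metron's actual analytics.

```python
# Illustrative "model as a service" sketch: a baseline model exposed through
# one score() entry point. Feature names and the model are assumptions.
from statistics import mean, stdev

class AnomalyModel:
    """Scores events by how far a numeric feature sits from its baseline."""
    def __init__(self, baseline):
        self.mu = mean(baseline)
        self.sigma = stdev(baseline)

    def score(self, event: dict) -> float:
        # Feature engineering is reduced to a single lookup here; a real
        # deployment would derive many features per event.
        value = event["bytes_out"]
        return abs(value - self.mu) / self.sigma

# Baseline of normal outbound byte counts observed for a host.
model = AnomalyModel([1000, 1100, 900, 1050, 950])
print(model.score({"bytes_out": 50000}))  # far from baseline: likely exfiltration
```

Because the model sits behind one callable interface, the data scientist can swap in a better analytic without changing how the platform invokes it.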
Introduction Hadoop has always been associated with Big Data, yet the perception is that it is only suitable for high-latency, high-throughput queries. Thanks to contributions from the community, you can now use Hadoop interactively for data exploration and visualization. In this tutorial you’ll learn how to analyze large datasets using Apache Hive LLAP on Amazon Web Services […]
A very common request from many customers is the ability to index text in image files; for example, text in scanned PNG files. In this tutorial we are going to walk through how to do this with Solr. Prerequisites Download the Hortonworks Sandbox and complete the Learning the Ropes of the HDP Sandbox tutorial. Step-by-step guide […]
Introduction In this tutorial, you will learn about the different features available in the HDF (Hortonworks DataFlow) sandbox. HDF was built to make processing data-in-motion easier while also directing data from source to destination. You will learn about quick links for accessing these tools so that when you […]
Introduction JReport is an embedded BI reporting tool that can easily extract and visualize data from the Hortonworks Data Platform 2.3 using the Apache Hive JDBC driver. You can then create reports, dashboards, and data analyses, which can be embedded into your own applications. In this tutorial we are going to walk through the following steps to […]
The Hortonworks Sandbox is delivered as a Docker container with the most common ports already opened and forwarded for you. If you would like to open even more ports, check out this tutorial.
Introduction R is a popular tool for statistics and data analysis. It has rich visualization capabilities and a large collection of libraries developed and maintained by the R developer community. One drawback of R is that it is designed to operate on in-memory data, which makes it unsuitable for large datasets. Spark is […]
Apache Zeppelin on HDP 2.4.2 Author: Vinay Shukla In March 2016 we delivered the second technical preview of Apache Zeppelin, on HDP 2.4. Since then, we and the Zeppelin community have continued to add new features to Zeppelin. These features are now available in the final technical preview of Apache Zeppelin. This technical preview works with […]
Welcome to the Hortonworks Sandbox! Look at the attached sections for sandbox documentation.
Apache, Hadoop, Falcon, Atlas, Tez, Sqoop, Flume, Kafka, Pig, Hive, HBase, Accumulo, Storm, Solr, Spark, Ranger, Knox, Ambari, ZooKeeper, Oozie, Phoenix, NiFi, NiFi Registry, HAWQ, Zeppelin, Slider, Mahout, MapReduce, HDFS, YARN, Metron and the Hadoop elephant and Apache project logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States or other countries.