March 02, 2016

Cybersecurity: Conceptual architecture for analytic response

Welcome back to my blogging adventure.  If you’ve been reading my cybersecurity series (“echo: hello world”, “Cybersecurity: the end of rules are nigh”, and “Cybersecurity: why context matters and how do we find it”), you know just how much time I’ve spent explaining why an integrated cybersecurity analytic solution should focus on delivering value and making life easier for the folks doing incident response.  As I look across the landscape of security analytic offerings, I see walled gardens of proprietary models and pretty dashboards. Yes, walled gardens are pretty, well-maintained places to visit; however, we can’t live there because they don’t meet our needs.  Our offices and living rooms are cluttered and organized around how we live, not around some pretty picture in an interior design magazine. I believe a real cybersecurity solution should reflect our workspaces: functional and configurable to how we want to work, not some engineer’s idea of what’s best for us.

Conceptual Architecture

Today, we will walk through a high-level conceptual architecture for a practical cybersecurity analytic framework that works for us by adapting to how we do business.  Before we dive in, let’s take the 100,000-foot view of what the conceptual architecture looks like.

[Figure: high-level conceptual architecture for the cybersecurity analytic framework]

Data Flow

The critical path in the architecture is the red arrow in the middle.  We need to take raw sensor data and reliably generate an automated response.  Like our messy living room or office, it is the output of the work, not the pretty picture, that provides value.  If the analytic models and response rules can make the call for a response, then no pretty dashboard is required.  Why build in a big red button for the SOC analyst to click if an invisible response is faster?
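
To make the critical path concrete, here is a minimal sketch of raw sensor data flowing straight through a model score and a policy threshold into an automated action, with no dashboard in the loop. All of the names here (score_event, block_host, the failed_logins field) are hypothetical stand-ins, not part of any specific product:

```python
# A minimal sketch of the critical path: raw sensor event -> analytic model
# -> response rule -> automated action, with no dashboard in the loop.
# All names here are hypothetical.

def score_event(event: dict) -> float:
    """Stand-in for an analytic model; returns a risk score in [0, 1]."""
    return 0.9 if event.get("failed_logins", 0) > 100 else 0.1

def block_host(host: str) -> None:
    """Stand-in for an automated response action (e.g., a firewall call)."""
    print(f"blocking {host}")

def handle(event: dict, threshold: float = 0.8) -> None:
    # The invisible "big red button": respond automatically when the
    # model score clears the policy threshold.
    if score_event(event) >= threshold:
        block_host(event["source_host"])

handle({"source_host": "10.0.0.42", "failed_logins": 250})
```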


Sensors

The sensors component is the ingestion point for all machine data in the company and acts as the interface to the data flow. The critical path starts here. Automation and remote management of these sensors allow for efficient operation and a flexible response mid-incident if greater data volume or fidelity is required. I foresee a shift from niche security products towards sensors embedded in our application architecture; as monolithic applications transform into as-a-service, cloud-enabled components, our security controls must transform along with them.
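
As a hedged sketch of what an application-embedded, remotely manageable sensor might look like, consider the toy class below; the Sensor class, its fidelity levels, and its emit format are illustrative assumptions:

```python
# A sketch of an application-embedded sensor whose verbosity can be
# raised remotely mid-incident. The class and its fields are illustrative
# assumptions, not a reference to any specific product.

import json
import time

class Sensor:
    def __init__(self, name: str, fidelity: str = "summary"):
        self.name = name
        self.fidelity = fidelity  # remotely adjustable: "summary" or "full"

    def set_fidelity(self, level: str) -> None:
        """Remote management hook: raise data volume/fidelity on demand."""
        self.fidelity = level

    def emit(self, payload: dict) -> str:
        record = {"sensor": self.name, "ts": time.time()}
        record.update(payload if self.fidelity == "full"
                      else {"summary": payload.get("summary")})
        return json.dumps(record)

s = Sensor("auth-service")
print(s.emit({"summary": "login burst", "raw": ["..."]}))
s.set_fidelity("full")   # mid-incident: start collecting everything
print(s.emit({"summary": "login burst", "raw": ["..."]}))
```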

Automated Response

This is where the system provides maximum value. Whether the response event is triggered by the analytical models and rules or by manual review through the workflow and user interface, automating the response activity is part of the critical path. The automated response component provides the automation interface to the rest of the company’s assets for command and control. Again, I foresee a shift from specialized security products towards automated response components embedded in our application architecture.  These embedded sensors and response components will give a new, truer meaning to data-centric security in the internet of anything.
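
One way to picture that command-and-control interface is a registry of named response actions that the rules engine and a manual workflow step invoke through the same entry point. The action names and signatures below are assumptions for illustration:

```python
# A sketch of an automated-response interface: a registry of named actions
# that either the rules engine or a manual workflow step can invoke.
# Action names and signatures are hypothetical.

from typing import Callable, Dict

RESPONSE_ACTIONS: Dict[str, Callable[[dict], None]] = {}

def response_action(name: str):
    """Register a callable as an invokable response action."""
    def register(fn: Callable[[dict], None]):
        RESPONSE_ACTIONS[name] = fn
        return fn
    return register

@response_action("isolate_host")
def isolate_host(ctx: dict) -> None:
    print(f"isolating {ctx['host']} from the network")

@response_action("revoke_session")
def revoke_session(ctx: dict) -> None:
    print(f"revoking session {ctx['session_id']}")

def respond(action: str, ctx: dict) -> None:
    RESPONSE_ACTIONS[action](ctx)   # same entry point for rules and humans

respond("isolate_host", {"host": "10.0.0.42"})
```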


Data Lake

Storing data in the data lake for historical analytic replay as new knowledge becomes available is a key advantage of this approach. This data is also available for training models on normal and abnormal behavior and for simulating new automated response capabilities.  Being able to demonstrate, with actual data, that a new automated blocking capability would not have hurt business operations when replayed over the last three years of collected data is essential to gaining approval for implementation.
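
A minimal sketch of that replay idea, assuming a simple event schema and a hypothetical would_block rule, might look like this:

```python
# A sketch of historical replay: run a candidate blocking rule over
# archived events and count how often it would have hit legitimate
# traffic. The event fields and would_block predicate are hypothetical.

def would_block(event: dict) -> bool:
    """Candidate automated-blocking rule under evaluation."""
    return event["failed_logins"] > 50

historical_events = [  # stand-in for years of data-lake records
    {"failed_logins": 3,   "was_malicious": False},
    {"failed_logins": 120, "was_malicious": True},
    {"failed_logins": 60,  "was_malicious": False},  # legitimate burst
]

false_positives = sum(
    1 for e in historical_events
    if would_block(e) and not e["was_malicious"]
)
print(f"would have wrongly blocked {false_positives} legitimate events")
```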


Analytical Models

Both historical data from the data lake and live data streaming through the analytical models serve to:

  • Generate a baseline understanding of normal and abnormal activity
  • Create the full-picture context of what is happening on the applications, networks, and systems
  • Enrich and correlate information into full-context events for either automated response or manual review (see the sketch after this list).
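
Here is a minimal sketch of the baselining and enrichment steps, assuming nothing more than per-host event counts; a production model would use far richer features:

```python
# A minimal sketch of baselining plus enrichment, assuming simple
# per-host event counts; a real model would be far richer.

from statistics import mean, stdev

history = {"web01": [12, 15, 11, 14, 13]}  # past hourly event counts

def is_abnormal(host: str, count: int, k: float = 3.0) -> bool:
    """Flag counts more than k standard deviations above baseline."""
    xs = history[host]
    return count > mean(xs) + k * stdev(xs)

def enrich(event: dict) -> dict:
    """Attach baseline context so the rules engine sees a full picture."""
    event["abnormal"] = is_abnormal(event["host"], event["count"])
    return event

print(enrich({"host": "web01", "count": 90}))
```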


Rules

After the analytic models have transformed the raw data flowing through the system into enriched data elements that are both descriptive and predictive in nature, the rules engine applies the company’s prescriptive rules, or policy, for how those events should be handled.  This is critically important in allowing an organization to apply its own risk tolerance to the response process.
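
As a sketch of such a prescriptive layer, the organization’s risk tolerance can be encoded as declarative policy over enriched events; the rule structure and field names here are illustrative assumptions:

```python
# A sketch of a prescriptive rules layer: the organization encodes its
# own risk tolerance as declarative policy over enriched events.
# Rule structure, field names, and actions are hypothetical.

POLICY = [
    # (condition over enriched event, action) in priority order
    (lambda e: e["abnormal"] and e["asset_tier"] == "crown-jewel", "isolate_host"),
    (lambda e: e["abnormal"],                                      "open_ticket"),
]

def decide(event: dict) -> str:
    for condition, action in POLICY:
        if condition(event):
            return action
    return "log_only"

print(decide({"abnormal": True, "asset_tier": "crown-jewel"}))  # isolate_host
print(decide({"abnormal": True, "asset_tier": "standard"}))     # open_ticket
```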


Workflow

Workflow is similar to rules in that it lets the company configure the solution to meet its needs. Workflow allows the company to configure the incident response steps and automated response in a manner that enables the business, instead of the business bending around the solution.  This multi-user/multi-tenant workflow engine allows cross-organization response to be configured. In addition, by being part of the analytic solution, key performance and risk metrics can be collected to measure the health of the process, support security analyst performance review and on-the-job training, and make the work visible in a manner that shows its value to the organization as a whole.
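
As a sketch of what a configurable workflow with built-in metrics could look like (the step names and timing approach are assumptions, not a prescribed design):

```python
# A sketch of a configurable workflow: the company declares its own
# response steps, and the engine records timing metrics as it runs them.
# Step names and the timing approach are illustrative.

import time

WORKFLOW = ["triage", "contain", "notify_legal", "close"]  # company-defined

def run_workflow(incident_id: str) -> dict:
    metrics = {}
    for step in WORKFLOW:
        start = time.monotonic()
        print(f"[{incident_id}] executing step: {step}")  # real work here
        metrics[step] = time.monotonic() - start  # per-step health metric
    return metrics

print(run_workflow("INC-1042"))
```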


Dashboards

This layer provides the visual elements that render the data.  By refactoring these dashboard elements away from the user interface, we let each user create their own interface experience while providing a consistent visualization of the data across displays for efficient cognitive uptake of information.
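
One hedged way to realize that decoupling is to register each dashboard element as a small named renderer that any user’s layout can compose; every name below is hypothetical:

```python
# A sketch of dashboard elements refactored out of the UI: each element
# is a small renderer registered by name, so any user's layout can
# compose the same consistent visualizations. Names are hypothetical.

from typing import Callable, Dict

DASHBOARD_ELEMENTS: Dict[str, Callable[[dict], str]] = {}

def dashboard_element(name: str):
    def register(render: Callable[[dict], str]):
        DASHBOARD_ELEMENTS[name] = render
        return render
    return register

@dashboard_element("alert_count")
def alert_count(data: dict) -> str:
    return f"Open alerts: {data['open_alerts']}"

# Two different user layouts can reuse the same element, so the data is
# visualized identically wherever it appears.
layout_soc = ["alert_count"]
for name in layout_soc:
    print(DASHBOARD_ELEMENTS[name]({"open_alerts": 7}))
```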

User Interface

It is important that the user interface elements are decoupled from the rest of the solution stack. If we are going to hit the goal of a single-pane-of-glass view of the analytic response process, we need a user interface that adapts to the user’s needs and changing roles, provides fine-grained security for multi-user/multi-tenant access, and offers a pluggable design that allows both workflow steps and dashboard elements to be combined for the most efficient response.  This solution is open and ever-changing, so it is critical that the user interface can plug in and organize user interface elements from other areas instead of creating them; otherwise, every weekly change requires reprogramming the user interface for the new workflow or data elements.  I foresee a future in which a vibrant community of public and proprietary analytical components plugs into an open framework, each component providing analytics, data visualizations, and workflow elements that the company can configure to work together to solve its problems.

Next steps

In the articles that follow, we will go into greater detail on the design of each conceptual component and begin shifting from the conceptual architecture to guidance on the technical design.  I hope you join me next time as we start with “Cybersecurity: all about sensor networks”, shifting from point solutions to an integrated analytic solution.
