Monday, March 21, 2016

Microservices approach to processing system logs into real time actionable events

With the growth in scale of IT services, software, and infrastructure across industries, small to large and local to global, real-time monitoring and notification of critical system and security logs has become increasingly important. These logs are not just logs; they are events that a business needs to listen to and act on in real time to stay agile and be prepared for the challenges of the marketplace.

Because logs contain vast amounts of data, and it is logistically impossible to inspect every log manually on every network or system, logging is an important but often neglected tool for preempting a crisis. Although there are applications that consolidate logs into a central place, what is needed is the ability to automatically unlock pertinent events for real-time monitoring and notification and make them actionable. Making these system events actionable as they happen depends on your ability to put the right data, at the right time, in front of the right person or system. This is critical because we live in a world that is increasingly event driven, thanks to the convergence of systems, devices, people, and processes through advances in internet, mobile, and communications technologies.

But how is this possible without some level of component redesign, or perhaps a ground-up redesign, of large distributed or legacy systems? A Microservices based solution comes to the rescue…

A Microservices based solution
With the rapid increase in cloud hosted services and SaaS products over the last few years, there has been a paradigm shift in the application landscape described most clearly by two words: diverse and distributed. In other words, your data is distributed across diverse, specialized SaaS applications running on various cloud hosted services. Getting a view of this distributed data across cloud services requires an integration architecture that legacy SOA (Service Oriented Architecture) cannot provide. Although the core tenets of SOA are still valid and widely applicable, it has to evolve to support the world of Cloud, SaaS, PaaS and IoT.

The good news is that we have such an evolution of SOA, fit for today's world, in the "Microservices" architecture. In a nutshell, Microservices are independent, atomic and portable services that can be deployed anywhere (on premise, on cloud, and on any system or operating system) with no deployment dependencies. These Microservices do one well-defined task very well and can be chained together using a Message Oriented Middleware (MOM) like RoboMQ to create complex business processes. Microservices architecture thus provides a Lego approach to building applications that are auto-scalable, future proof, elastic and expandable. You can follow our other blog on this topic for the architectural and philosophical detail: Lego approach of building applications.
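To make the idea of chaining concrete, here is a minimal sketch of one service publishing a log event that another service can consume over AMQP, one of the protocols RoboMQ supports, using the Python pika client. The broker host, virtual host, credentials, and exchange name are placeholders for illustration, not actual RoboMQ values.

```python
# A minimal sketch of chaining two microservices over an AMQP broker.
# Host, vhost, credentials, and exchange/routing-key names are placeholders.
import json
import pika

params = pika.ConnectionParameters(
    host="broker.example.com",
    virtual_host="tenant-vhost",
    credentials=pika.PlainCredentials("user", "password"))

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.exchange_declare(exchange="log-events", exchange_type="topic", durable=True)

# Service A publishes an event; Service B, bound to the same exchange with a
# matching routing key, picks it up. The two services stay independently
# deployable and share only the message contract.
event = {"source": "web-01", "severity": "error", "message": "disk space below 5%"}
channel.basic_publish(exchange="log-events",
                      routing_key="syslog.error",
                      body=json.dumps(event),
                      properties=pika.BasicProperties(delivery_mode=2))  # persistent
connection.close()
```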

A typical flow of log generation, acquisition, and subsequent preemptive action is illustrated in Figure 1 below. This, at a high level, is a workflow that can be applied to any IT or technology system, monitoring process, or business application where automated events can be triggered based on logs generated from any system and in any format.


Figure 1: Event processing using Microservices


Following the picture from left to right, system events are available as log files, database records, or simply data streams from applications, systems, or devices. These events are captured or acquired by the middleware messaging platform. The events are subsequently processed by a chain of Microservices designed for the organization's specific business needs. During this processing the log events may be reformatted, evaluated against thresholds or known alert conditions, or fed to a machine learning system to learn and identify threats and issues. At the end of the processing, an action is taken, such as making a phone call, sending an SMS or email, feeding the event to a real-time analytics engine, or creating a case or ticket in Salesforce, ServiceNow, or Jira.

For the purpose of this article, we will be using the Microservices platform provided by RoboMQ. All the tasks referenced above are performed using our connectors and adapters to various enterprise systems and almost all protocols through the "ThingsConnect" suite of adapters and connectors. The Microservices style of development needs three core components, all provided by RoboMQ. Let me introduce them here since they will be referred to in the following sections:
  1. API Gateway - RoboMQ provides an enhanced multi-protocol API Gateway through its ThingsConnect suite of adapters and connectors, so that you can process events from any system in any protocol or format.
  2. Messaging Fabric - At the core, RoboMQ is a Message Oriented Middleware (MOM) that provides a truly distributed and federated messaging layer, also referred to as a "Hybrid Messaging cloud".
  3. Microservice framework - We provide a Microservices framework and have built our platform from the ground up using it.

In this use case, all the components are built as Microservices and the messaging fabric provides the chaining of services over a robust and scalable infrastructure that supports guaranteed and reliable delivery of the log information. The Microservices layer consists of well-defined atomic tasks or components that can be assembled to create an event-driven workflow that automatically generates events from any source file or database and outputs to any destination system, including the next Microservice in the chain of processing. The following sequence describes the event processing scenario as shown in Figure 2 below:
  1. The first step of event processing is a "Listener" Microservice that captures and listens for log events from files, server logs, or API calls. It can additionally filter events based on criticality (i.e., errors, warnings, information). When processing bulk logs, as in server log files, the Listener emits each log event as an individual message. This allows on-demand scaling and parallel processing of the log events through the processing streams.
  2. The next step in the chain is an "Executor" Microservice, which takes the captured event and, based on a rule set, determines what action or alert should be generated and which system should receive it.
  3. The final step in the process is an "Adapter" Microservice, which extracts the event content and converts it to the format and protocol of the receiving system. This could be a translation to a REST call for ServiceNow or Salesforce, or an SMTP call for an email or SMS. The Adapter takes care of this protocol translation and any needed data transformation. There are wide possibilities here using the RoboMQ ThingsConnect suite of adapters and connectors, which supports Any-to-Any integration. (A minimal sketch of this chain follows the list.)
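As a rough illustration of the Listener, Executor, and Adapter chain described above, here is a minimal Python sketch using the pika AMQP client. The queue names, the rule set, the log file path, and the final adapter action are assumptions made for illustration; a real deployment would use the RoboMQ ThingsConnect adapters rather than hand-written code.

```python
# Illustrative Listener -> Executor -> Adapter chain over an AMQP broker.
# Broker host, queue names, rules, and file path are placeholders.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.com"))   # placeholder broker
channel = connection.channel()
for q in ("raw-log-events", "actionable-events"):
    channel.queue_declare(queue=q, durable=True)

# Listener: emit each log line as an individual message so events can be
# processed in parallel downstream.
def listener(path="/var/log/syslog"):
    with open(path) as f:
        for line in f:
            channel.basic_publish(exchange="", routing_key="raw-log-events",
                                  body=json.dumps({"line": line.rstrip()}))

# Executor: apply a simple rule set and forward only actionable events.
RULES = {"error": "create_ticket", "critical": "send_sms"}

def executor(ch, method, properties, body):
    event = json.loads(body)
    for keyword, action in RULES.items():
        if keyword in event["line"].lower():
            event["action"] = action
            ch.basic_publish(exchange="", routing_key="actionable-events",
                             body=json.dumps(event))
            break
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Adapter: translate the event into the receiving system's protocol
# (REST, SMTP, SMS, and so on); a print stands in for that call here.
def adapter(ch, method, properties, body):
    print("would call REST/SMTP/SMS gateway for:", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="raw-log-events", on_message_callback=executor)
channel.basic_consume(queue="actionable-events", on_message_callback=adapter)
# listener()                  # publish the file's events...
# channel.start_consuming()   # ...then run the chain
```

Each stage knows only the queue it reads from and the queue it writes to, which is what allows the stages to be developed, deployed, and scaled independently.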


Figure 2: Log processing through chaining of Microservices


You can run one or more instances of each of the above three Microservices to scale through parallel processing. The good news is that you do not need any code or configuration change; just add more Microservice instances to the mix and they automatically load-balance the workload when using RoboMQ.
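At the messaging level, this load balancing amounts to the competing-consumers pattern: the same consumer script is simply started several times against one queue, and the broker delivers each message to exactly one instance. A minimal sketch follows; the broker host and queue name are placeholders.

```python
# Run this same consumer script N times for N-way parallelism; no code or
# configuration change is needed. Host and queue name are placeholders.
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.com"))
channel = connection.channel()
channel.queue_declare(queue="raw-log-events", durable=True)
channel.basic_qos(prefetch_count=1)   # fair dispatch across instances
channel.basic_consume(
    queue="raw-log-events",
    on_message_callback=lambda ch, m, p, body: ch.basic_ack(m.delivery_tag))
channel.start_consuming()
```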

Putting it all together … 
As an example, let us take the typical Linux system logs that we are all familiar with. The picture below shows a scenario where the system logs contain all kinds of information. Some of it, shown in red, is critical and needs immediate action; some entries are simply warnings that may be benign but may point to a low-priority corrective action needed over the long run.


Figure 3: Processing System logs as complex event processing stream
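For illustration, the kind of severity triage shown in Figure 3 might look like the following inside a Listener. The keyword patterns and the sample log line are made up for this example and are not taken from an actual RoboMQ component.

```python
# Illustrative classification of raw syslog lines into critical vs. benign
# events; the patterns and sample line are assumptions for this example.
import re

CRITICAL = re.compile(r"\b(error|fail(ed|ure)?|critical|panic|denied)\b", re.IGNORECASE)
WARNING  = re.compile(r"\b(warn(ing)?|deprecated|retry(ing)?)\b", re.IGNORECASE)

def severity(line):
    if CRITICAL.search(line):
        return "critical"   # needs immediate action (the red entries in Figure 3)
    if WARNING.search(line):
        return "warning"    # benign, but worth a low-priority follow-up
    return "info"

sample = "Mar 21 10:15:02 web-01 kernel: EXT4-fs error (device sda1): out of space"
print(severity(sample))     # -> critical
```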


When processed through the scheme suggested in Figure 2, these critical events are captured in real time and trigger an immediate action: escalating the incident, notifying a desired recipient, or logging it to a tracking or case management tool. Various options are possible and can be chosen depending on the business need. You could, for example (a sketch of one such action follows the list):
  • Create a case in Salesforce to engage technical support and triage teams to react to an issue or an incident.
  • Generate an SMS alert for specified system administrators to act upon immediately.
  • Log the alert as a record in any relational or big data database for historical analysis or to provide an audit trail.
  • Process the event through a machine learning system and then take one of the above actions based on the recommendation from machine learning.
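As one illustrative example of the first option, an Adapter might create an incident or case through a REST call along these lines. The endpoint URL, credentials, and payload fields are placeholders and are not meant to reflect the actual Salesforce or ServiceNow APIs.

```python
# Hedged sketch of an Adapter action: create an incident via a REST call.
# URL, credentials, and payload fields are placeholders for illustration.
import requests

def create_incident(event):
    resp = requests.post(
        "https://example.service-now.com/api/now/table/incident",  # placeholder URL
        auth=("integration-user", "password"),
        json={"short_description": event["line"], "urgency": "1"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```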

This use case is a simple but good example of the Complex Event Processing (CEP) that is possible using RoboMQ. To make building such workflows easier, we have created a library of fully dockerized connectors, adapters, and utility components that provide Microservices building blocks which can be chained together. These Microservices can be deployed on the cloud, on premise in your data center, or on container management platforms like Google Kubernetes, AWS container engine, or IBM Bluemix, with the core infrastructure provided by RoboMQ.


If you would like more information on Microservices or RoboMQ in general, please check out our website or send an email to info@robomq.io.
