
Pipeline Module: Event to Metric

4 MIN READ
Kai Alvason

5.22.24

Kai is a Senior Technical Editor at Mezmo.

At the most abstract level, a data pipeline is a series of steps for processing data, where the type of data being processed determines the types and order of the steps. In other words, a data pipeline is an algorithm, and standard data types can be processed in a standard way, just as solving an algebra problem follows a standard order of operations. 

For telemetry data, there are standard log types, such as Apache logs, that always contain the same data patterns and can therefore be processed in standard ways. At Mezmo, we refer to these standard processing units as modules (or, in less serious moments, Pipettes). A module can consist of one or more Processors, depending on its functionality. 
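To make the idea of a "standard data pattern" concrete, here is a minimal sketch (not Mezmo's implementation; the function name and regex are illustrative assumptions) of parsing an Apache access-log line in its Common Log Format into a structured event:

```python
import re

# Apache Common Log Format: host ident user [timestamp] "request" status bytes
COMMON_LOG = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_apache_line(line: str):
    """Parse one Apache access-log line into a dict of named fields,
    or return None if the line does not match the standard pattern."""
    match = COMMON_LOG.match(line)
    return match.groupdict() if match else None

line = '127.0.0.1 - - [22/May/2024:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1043'
event = parse_apache_line(line)
print(event["status"])  # "200"
```

Because the pattern is fixed, every downstream step can rely on the same named fields, which is what makes a reusable processing module possible.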

For example, Apache logs contain a large volume of HTTP response messages, such as Status 200, that can amount to more than 50% of the total data volume. Passing full versions of these events to a monitoring or analysis tool is wasteful in terms of cost and inefficient in terms of helping SREs derive real information from the telemetry data. What’s informative isn’t that these responses are being generated, but whether there is a sudden decrease in them relative to other system metrics, which could indicate an incident or another negative system condition. 
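As a rough sketch of that reasoning (the function names, threshold, and event shape here are illustrative assumptions, not Mezmo's detection logic), the useful signal is the rate of 200s relative to a recent baseline rather than the events themselves:

```python
def status_200_ratio(events):
    """Fraction of events in a window that are HTTP 200 responses."""
    statuses = [e["status"] for e in events]
    return statuses.count("200") / len(statuses) if statuses else 0.0

def drop_detected(current_ratio, baseline_ratio, threshold=0.5):
    """True when the current 200 ratio falls below threshold * baseline,
    i.e. a sudden decrease that may indicate an incident."""
    return current_ratio < baseline_ratio * threshold

# Baseline window: 8 of 10 events are 200s.
events = [{"status": "200"}] * 8 + [{"status": "500"}] * 2
baseline = status_200_ratio(events)  # 0.8
print(drop_detected(0.3, baseline))  # True: 0.3 < 0.8 * 0.5
```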

For this reason, we always recommend including an Event to Metric module in the architecture of a telemetry pipeline, both to convert voluminous standard messages into meaningful information and to significantly reduce the costs associated with your analysis and observability tools. As shown in this screenshot, the Event to Metric module includes a Route Processor that matches and routes the data to the Event to Metric Processor. The resulting metrics are then sent to an Aggregate Processor, which produces a count of the events that can be viewed as a gauge on the systems dashboard of the observability tool.
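The three-stage flow can be sketched as plain functions (a simplified model under assumed event and metric shapes, not the actual Processor configuration):

```python
def route(events, predicate):
    """Route Processor: pass along only the events matching the rule."""
    return [e for e in events if predicate(e)]

def event_to_metric(event):
    """Event to Metric Processor: reduce a full event to a small metric sample."""
    return {"name": "http_status_200", "value": 1, "tags": {"status": event["status"]}}

def aggregate(metrics):
    """Aggregate Processor: sum the samples into a single gauge value."""
    return sum(m["value"] for m in metrics)

events = [{"status": "200"}, {"status": "404"}, {"status": "200"}]
matched = route(events, lambda e: e["status"] == "200")
gauge = aggregate(event_to_metric(e) for e in matched)
print(gauge)  # 2
```

The point of the chain is that only the small aggregated gauge, not the full event payloads, reaches the observability tool.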

Overview of the architecture for a Pipeline that converts Status 200 events to metrics
From left to right: (1) Apache Logs Source, (2) Route Processor, (3) Event to Metric Processor, (4) Aggregate Processor, (5) Observability Tool

This version of the Event to Metric module represents a basic Monitoring state. In the coming weeks and months, we will be releasing features that enable alerting based on the event data, as well as the ability to switch to an Incident state that bypasses the Event to Metric module and sends full-fidelity versions of the event data to the observability and analysis tool. 
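Conceptually, the planned state switch could look something like this hypothetical sketch (the state names, event shape, and bypass logic are assumptions, not the released feature):

```python
def process(events, state):
    """Hypothetical pipeline driver: in a Monitoring state, convert Status 200
    events to a single count metric; in an Incident state, bypass conversion
    and forward the full-fidelity events for detailed analysis."""
    if state == "incident":
        return events  # full fidelity, no reduction
    count = sum(1 for e in events if e["status"] == "200")
    return [{"metric": "http_200_count", "value": count}]

events = [{"status": "200"}, {"status": "404"}, {"status": "200"}]
print(process(events, "monitoring"))  # [{'metric': 'http_200_count', 'value': 2}]
print(process(events, "incident"))    # the original three events, unchanged
```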

If you would like to learn more about the configuration of the Event to Metric module, or see an interactive version that shows how the data is transformed at each step, check out the topic Pipeline Module: Event to Metric in our Practitioner’s Guide to Data Optimization. You can also reach out to our Technical Services team for an analysis of the standard patterns in your data and recommendations on how to optimize it for your operational needs. 
