See how you can save 70% of the cost by reducing log volume and staying compliant.

Site Reliability Engineers

Access the right data, in the right format, in the right location, at the right time, for the right price. Intelligently optimize data flowing to high-cost destinations with the Mezmo Observability Pipeline.

Keep Services Humming

Telemetry data is your lifeblood, and managing it better is key. Yet SRE teams cite ballooning data volumes and rising costs among their biggest issues. We power SRE team workflows to help understand, optimize, and respond to their dynamic telemetry data needs. Routing data in the right format to the right teams reduces toil, improves collaboration, and, most importantly, reduces resolution times.

Accelerate Resolution Time
Meet Your SLOs

Improve observability by boosting the "signal to noise ratio" of your telemetry data and directing it to the appropriate team. Aligning data formats across platforms enhances collaboration and root cause identification, while data enrichment deepens problem comprehension. With a collaborative team armed with superior data, issue resolution becomes more effective.

Benefit

01

Get Deeper Insights

You ensure great digital experiences 24/7. But the data you need to meet your SLOs often hides in terabytes of telemetry. Mezmo can parse, enrich, and transform that data in motion to help you monitor SLO adherence and make the decisions that keep your customers happy and subscribed. See how Service Level Indicator telemetry data can be optimized with a Mezmo Telemetry Pipeline for use with Nobl9 service level management software.
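To make the SLO idea concrete, here is a minimal, illustrative sketch of computing an availability SLI and the remaining error budget from parsed log events. The field name `status`, the "good event" rule (HTTP status below 500), and the 99.9% target are assumptions for illustration, not Mezmo or Nobl9 specifics.

```python
# Illustrative sketch only: an availability SLI and remaining error budget
# computed from parsed log events. Field names and the SLO target are
# assumptions, not taken from Mezmo's or Nobl9's actual products.

def availability_sli(events):
    """Fraction of events considered 'good' (here: HTTP status < 500)."""
    total = len(events)
    good = sum(1 for e in events if e["status"] < 500)
    return good / total if total else 1.0

def error_budget_remaining(sli, slo=0.999):
    """Share of the error budget still unspent (negative = budget blown)."""
    allowed = 1.0 - slo
    spent = 1.0 - sli
    return (allowed - spent) / allowed

events = [{"status": 200}] * 9990 + [{"status": 503}] * 10
sli = availability_sli(events)        # 0.999: exactly at the SLO target
budget = error_budget_remaining(sli)  # 0.0: error budget fully spent
```

A pipeline that parses and enriches events in motion can feed exactly this kind of calculation continuously, instead of waiting for batch analysis in a downstream platform.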

Benefit

02

Eliminate Toil

You have more important things to do than calculating compute or storage requirements or performing open-source code maintenance. Mezmo gets you the observability answers you need without the toil. Automatically scale the infrastructure and data retention as your requirements grow. Stay in control of your consumption, with no hidden surprises.

Benefit

03

WHAT IS MEZMO TELEMETRY PIPELINE?

MEZMO TELEMETRY PIPELINE

Mezmo helps you confidently harness value from your telemetry data. Using an Understand, Optimize, and Respond approach, Mezmo Flow leverages AI capabilities to analyze telemetry data sources, identify noisy log patterns, and create a data-optimizing pipeline in a few clicks, routing data to any observability platform to dramatically cut log volumes and improve data quality. When an incident hits, get an in-stream alert or react automatically using Mezmo’s responsive pipelines in incident mode, delivering the telemetry data you need to accelerate time to resolution.

Control Data

Control data volume and costs in as little as 15 minutes with Mezmo Flow. Mezmo Flow helps you identify unstructured telemetry data, remove low-value and repetitive data, and apply sampling to reduce chatter. Employ intelligent routing rules to send certain data types to low-cost storage.

  • Filter: Use the Filter Processor to drop events that may not be meaningful or to reduce the total amount of data forwarded to a subsequent processor or destination.
  • Reduce: Combine multiple log input events into a single event, based on specified criteria, over a specified window of time.
  • Sample: Send only the events required to understand the data.
  • Dedupe: Reduce “chatter” in logs. The Dedupe Processor emits only the first matching record from each set of records being compared; overlap of data across fields is key to making it work effectively.
  • Route: Intelligently route data to any observability, analytics, or visualization platform.

Transform Data

Increase your data value and quality by transforming and enriching data. Reformat data as needed for compatibility with various end destinations. Scrub sensitive data, or encrypt it to maintain compliance standards.

  • Parse: Use the available parsing options to perform operations such as converting strings to integers or parsing timestamps.
  • Aggregate Metrics: Metric data can have more data points than needed to understand the behavior of a system. Remove excess metrics to reduce storage without sacrificing value.
  • Encrypt: Use the Encrypt Processor when sending sensitive log data to storage, for example, when retaining log data containing account names and passwords.
  • Event to Metric: Create a new metric within the pipeline from existing events and log messages.
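As a rough illustration of the "Event to Metric" idea, the sketch below derives a counter metric from log events in the stream. The metric name, tag field, and output shape are assumptions for the example, not Mezmo's actual processor output.

```python
# Illustrative "event to metric" sketch: count log events per tag value
# and emit one metric point per value. Names and output shape are
# assumptions, not Mezmo's real processor format.
from collections import Counter

def events_to_metric(events, name="http_requests_total", tag_field="status"):
    """Aggregate events into counter metric points, one per tag value."""
    counts = Counter(str(e.get(tag_field)) for e in events)
    return [
        {"name": name, "tags": {tag_field: value}, "value": count}
        for value, count in sorted(counts.items())
    ]

metrics = events_to_metric([{"status": 200}, {"status": 200}, {"status": 500}])
# e.g. a point with tags {"status": "200"} and value 2, and one with
# tags {"status": "500"} and value 1
```

Converting high-volume events to compact metrics like this is one way a pipeline preserves signal while shedding raw log volume.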