REDUCE DATA VOLUMES. IMPROVE DATA QUALITY.
WHAT IS MEZMO TELEMETRY PIPELINE
Mezmo helps you confidently harness value from your telemetry data. Using its Understand, Optimize, and Respond approach, Mezmo Flow leverages AI to analyze telemetry data sources, identify noisy log patterns, and create a data-optimizing pipeline in a few simple clicks. That pipeline routes data to any observability platform, dramatically cutting log volumes and improving data quality. When you have an incident, get an in-stream alert or react automatically using Mezmo’s responsive pipelines in incident mode, which deliver the telemetry data you need to accelerate time to resolution.
Control Data
Control data volume and costs in as little as 15 minutes using Mezmo Flow. Mezmo Flow helps you identify unstructured telemetry data, remove low-value and repetitive data, and apply sampling to reduce chatter. Employ intelligent routing rules to send certain data types to low-cost storage.
- Filter: Use the Filter Processor to drop events that may not be meaningful or to reduce the total amount of data forwarded to a subsequent processor or destination.
- Reduce: Combine multiple log input events into a single event, based on specified criteria, over a specified window of time.
- Sample: Send only the events required to understand the data.
- Dedupe: Reduce “chatter” in logs. The Dedupe Processor works best when data overlaps across fields; it emits the first matching record from the set of records being compared.
- Route: Intelligently route data to any observability, analytics, or visualization platform.
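The volume-reduction processors above can be illustrated with a minimal sketch. The function names, event shapes, and field names here are hypothetical stand-ins for illustration, not Mezmo's actual configuration or API.

```python
def filter_events(events, drop_if):
    """Filter: drop events that match a predicate (e.g., low-value debug noise)."""
    return [e for e in events if not drop_if(e)]

def sample_events(events, rate):
    """Sample: keep roughly 1 in `rate` events."""
    return [e for i, e in enumerate(events) if i % rate == 0]

def dedupe_events(events, fields):
    """Dedupe: emit only the first record for each combination of `fields`."""
    seen, out = set(), []
    for e in events:
        key = tuple(e.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

# Hypothetical log events for demonstration.
logs = [
    {"level": "debug", "msg": "heartbeat"},
    {"level": "info",  "msg": "user login"},
    {"level": "info",  "msg": "user login"},
    {"level": "error", "msg": "db timeout"},
]

logs = filter_events(logs, drop_if=lambda e: e["level"] == "debug")
logs = dedupe_events(logs, fields=["level", "msg"])
# 4 events in, 2 remain: one "user login" and one "db timeout"
```

Chaining processors like this, with routing rules at the end to direct the surviving events to the right destination, is the core idea behind cutting volume before data reaches a high-cost platform.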
Transform Data
Increase your data value and quality by transforming and enriching data. Reformat data as needed for compatibility with various end destinations. Scrub sensitive data, or encrypt it to maintain compliance standards.
- Parse: Use a variety of parsing options to perform operations such as converting strings to integers or parsing timestamps.
- Aggregate Metrics: Metric data can have more data points than needed to understand the behavior of a system. Remove excess metrics to reduce storage without sacrificing value.
- Encrypt: Use the Encrypt Processor when sending sensitive log data to storage, for example, when retaining log data containing account names and passwords.
- Event to Metric: Create a new metric within the pipeline from existing events and log messages.
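A brief sketch of the Parse and Event to Metric ideas above: parse raw log lines into typed fields, then summarize a batch of parsed events as one metric point. The log format, field names, and functions here are assumptions for illustration, not Mezmo's schema.

```python
from datetime import datetime, timezone

def parse_line(line):
    """Parse: convert a raw 'timestamp level duration' line into typed fields."""
    ts, level, duration = line.split()
    return {
        "timestamp": datetime.fromisoformat(ts).replace(tzinfo=timezone.utc),
        "level": level,
        "duration_ms": int(duration),  # string -> integer
    }

def events_to_metric(events, name):
    """Event to Metric: condense parsed events into a single metric point."""
    values = [e["duration_ms"] for e in events]
    return {"name": name, "count": len(values), "avg": sum(values) / len(values)}

lines = [
    "2024-05-01T12:00:00 info 120",
    "2024-05-01T12:00:01 info 80",
]
metric = events_to_metric([parse_line(l) for l in lines], "request_duration_ms")
# metric == {"name": "request_duration_ms", "count": 2, "avg": 100.0}
```

Emitting one summary metric instead of forwarding every raw event is also how metric aggregation trims excess data points without sacrificing visibility.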
Deliver Insights
With Mezmo you can extract insights before the data reaches high-cost destinations. You can monitor the health of your data pipeline and run various tests before you deploy your solution.
- Monitor the health of pipelines with OOB dashboards.
- Derive metric data from logs by parsing the log data to extract specific information.
- Count specific events within the log data and use that count to create a metric.
- Use simulation to test your pipelines before you deploy.
- Send to Mezmo Log Management for analysis and insights.
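Counting specific events and turning the count into a metric, as described above, can be sketched as follows. The log shape and metric name are hypothetical examples, not Mezmo's implementation.

```python
# Hypothetical parsed log events (e.g., HTTP access logs).
logs = [
    {"status": 200},
    {"status": 500},
    {"status": 200},
    {"status": 503},
]

# Count error responses in-stream and emit the count as a metric point,
# so only one small number needs to reach the analytics destination.
error_count = sum(1 for e in logs if e["status"] >= 500)
metric = {"name": "http_errors_total", "value": error_count}
# metric["value"] == 2
```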
UNDERSTAND, OPTIMIZE, AND RESPOND
Step-Up Your Observability Game!
01
Filter for Efficiency
Efficiently filter events, eliminating duplicates, extraneous data, and non-insightful information. Use drop, sample, or suppress filters for optimal results.
02
Store Cost-Effectively
Send your "just in case" data to low-cost object storage for future retrieval. Low-cost pipelines and efficient retrieval are better than expensive analytics platforms.
03
Improve Data Structure
Optimize events by trimming unnecessary content or fields. Remove meaningless values and reformat to more efficient data structures such as JSON.
04
Find Critical Metrics
Condense logs into summary metrics, aggregating useful statistics. Process logs into numbers to surface previously unseen values or counts. Enhance efficiency and scalability while gaining better insight.
05
Make It Easy
Get the observability answers you need without the toil. Automatically scale the infrastructure and data retention as your requirements grow.
MEZMO PIPELINE INTEGRATIONS
01
SOURCES
Ingest log data into the pipeline from several popular sources, including:
+ Data from object contents in AWS S3 via SQS
+ Events from Azure Event Hub using Azure Kafka Endpoint
+ Events from Kafka-compatible brokers
+ Data from Fluent (Fluentd, Fluent Bit)
+ Datadog Agent
+ Logstash data
+ Detailed data from Mezmo Agent (Otel)
+ Splunk HEC
+ Data via OpenTelemetry OTLP
+ Metrics from Prometheus Remote-Write
02
DESTINATIONS
Send to existing Observability, Analytics, Security, or Long-Term Storage tools such as:
+ Cloud-based storage: AWS S3, GCP Cloud Storage, or Azure Blob Storage
+ Datadog logs + metrics
+ Elasticsearch
+ Kafka
+ Grafana Loki Logs
+ Sumo Logic
+ Mezmo Log Management
+ New Relic Logs + Metrics
+ Prometheus Remote-Write Metrics
+ Splunk HEC Connector
+ Apache Pulsar
+ HTTP Endpoint
+ Honeycomb
+ Amazon SQS/S3