Optimize Your Datadog Costs with Mezmo Telemetry Pipeline

Overview


Step Up Your Observability Game!
01
Filter for Efficiency
Filter events efficiently, eliminating duplicates, extraneous data, and information that yields no insight; apply drop, sample, or suppress filters for the best results.
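As a rough illustration of what drop, sample, and suppress filters do, here is a minimal Python sketch. The function names, event fields, and thresholds are hypothetical, not Mezmo's API; in a Mezmo pipeline these would be configured as processors rather than written as code.

```python
import random

def drop(event):
    """Drop filter: discard events that carry no analytic value (hypothetical rule)."""
    return None if event.get("level") == "DEBUG" else event

def sample(event, rate=0.1):
    """Sample filter: keep roughly `rate` of a high-volume, low-value event class."""
    if event.get("path") == "/healthz":
        return event if random.random() < rate else None
    return event

_seen = set()

def suppress(event):
    """Suppress filter: forward only the first occurrence of a repeated message."""
    key = (event.get("service"), event.get("message"))
    if key in _seen:
        return None
    _seen.add(key)
    return event

def pipeline(events):
    """Run each event through the filters; yield only what survives."""
    for event in events:
        for processor in (drop, sample, suppress):
            event = processor(event)
            if event is None:
                break
        else:
            yield event
```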
02
Store Cost-Effectively
Send your "just in case" data to low-cost object storage for future retrieval. A low-cost pipeline with efficient retrieval beats paying analytics-platform prices to store data you rarely query.
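For example, a route that archives low-priority events to object storage instead of an analytics backend might look like the sketch below, assuming AWS S3 via boto3. The bucket name, routing rule, and key layout are hypothetical, and in practice this routing is configured in the pipeline rather than hand-coded.

```python
import gzip
import json
import time

import boto3

s3 = boto3.client("s3")
BUCKET = "my-telemetry-archive"  # hypothetical bucket name

def is_just_in_case(event):
    # Hypothetical rule: anything below WARN is archived, not analyzed.
    return event.get("level") in ("DEBUG", "INFO")

def archive_batch(events):
    """Compress a batch of events and write it to low-cost object storage."""
    body = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))
    key = f"logs/{time.strftime('%Y/%m/%d')}/{int(time.time())}.ndjson.gz"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)

def route(events, send_to_analytics):
    """Split a batch: archive the cheap-to-keep events, analyze the rest."""
    archive = [e for e in events if is_just_in_case(e)]
    keep = [e for e in events if not is_just_in_case(e)]
    if archive:
        archive_batch(archive)
    for event in keep:
        send_to_analytics(event)
```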
03
Improve Data Structure
Optimize events by trimming unnecessary content and fields. Remove meaningless values and reformat events into more efficient structures such as compact JSON.
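A rough sketch of that kind of event slimming, assuming a verbose record and a hypothetical allow-list of fields worth keeping:

```python
import json

KEEP_FIELDS = {"timestamp", "level", "service", "message", "trace_id"}  # hypothetical allow-list
EMPTY_VALUES = {None, "", "-", "null", "N/A"}

def slim(event: dict) -> str:
    """Drop unneeded fields and meaningless values, then emit compact JSON."""
    trimmed = {
        k: v
        for k, v in event.items()
        if k in KEEP_FIELDS and v not in EMPTY_VALUES
    }
    # Compact separators avoid padding bytes on every event sent downstream.
    return json.dumps(trimmed, separators=(",", ":"))

raw = {
    "timestamp": "2024-05-01T12:00:00Z",
    "level": "ERROR",
    "service": "checkout",
    "message": "payment declined",
    "trace_id": "abc123",
    "host_uptime_banner": "-",                                  # meaningless value, dropped
    "kubernetes_annotations": {"pod": "checkout-7f9", "chart": "checkout-1.2.3"},  # bulky field, dropped
}
print(slim(raw))
```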
04
Find Critical Metrics
Condense logs into summary metrics by aggregating useful statistics. Converting log events into numbers surfaces counts and values you could not see before, and improves efficiency and scalability while delivering better insight.
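As an illustration, the aggregation step might boil a window of log events down to a few counters and a latency summary. The field names below are hypothetical, and status codes are assumed to be integers:

```python
from collections import Counter

def logs_to_metrics(events, window="1m"):
    """Condense a window of log events into summary metrics."""
    status_counts = Counter(e["status"] for e in events if "status" in e)
    latencies = sorted(e["duration_ms"] for e in events if "duration_ms" in e)

    def pct(p):
        # Nearest-rank percentile over the observed latencies.
        return latencies[int(p * (len(latencies) - 1))] if latencies else None

    return {
        "window": window,
        "requests_total": len(events),
        "errors_total": sum(c for status, c in status_counts.items() if status >= 500),
        "latency_ms_p50": pct(0.50),
        "latency_ms_p95": pct(0.95),
    }
```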
05
Make It Easy
Get the observability answers you need without the toil. Automatically scale the infrastructure and data retention as your requirements grow.
Mezmo pipeline integrations

01
SOURCES
Sources let the pipeline ingest log and event data from several popular producers (a minimal client sketch follows this list):
+ Data from object contents in AWS S3 via SQS
+ Events from Azure Event Hub using Azure Kafka Endpoint
+ Events from Kafka-compatible brokers
+ Data from Fluent (Fluentd, Fluent Bit)
+ Datadog Agent
+ Logstash data
+ Detailed data from Mezmo Agent (OTel)
+ Splunk HEC
+ Data via OpenTelemetry OTLP
+ Metrics from Prometheus Remote-Write
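Most of these sources push data over HTTP or a streaming protocol. As a minimal, hedged sketch, a generic client sending a batch of newline-delimited JSON events to a pipeline's HTTP source could look like this; the endpoint URL, auth header, and ingestion key are placeholders, not a documented Mezmo address.

```python
import json

import requests

PIPELINE_URL = "https://pipeline.example.com/v1/ingest"  # placeholder endpoint
API_KEY = "YOUR_INGESTION_KEY"                           # placeholder credential

def ship(events):
    """Send a batch of events as newline-delimited JSON over HTTP."""
    body = "\n".join(json.dumps(e) for e in events)
    resp = requests.post(
        PIPELINE_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/x-ndjson",
            "Authorization": f"Bearer {API_KEY}",
        },
        timeout=10,
    )
    resp.raise_for_status()

ship([{"level": "INFO", "service": "checkout", "message": "order placed"}])
```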
02
DESTINATIONS
Send to existing Observability, Analytics, Security, or Long-Term Storage tools (a rough forwarding sketch follows this list), such as:
+ Cloud-based storage: AWS S3, GCP Cloud Storage, or Azure Blob Storage
+ Datadog logs + metrics
+ Elasticsearch
+ Kafka
+ Grafana Loki Logs
+ Sumo Logic
+ Mezmo Log Management
+ New Relic Logs + Metrics
+ Prometheus Remote-Write Metrics
+ Splunk HEC Connector
+ Apache Pulsar
+ HTTP Endpoint
+ Honeycomb
+ Amazon SQS/S3
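Since the goal here is trimming Datadog spend, one natural destination pattern is forwarding only the condensed summary metrics (rather than raw logs) to Datadog. The sketch below uses Datadog's v1 metrics series endpoint with the `requests` library; the metric names and tags are illustrative, and in a Mezmo pipeline this forwarding would be handled by the built-in Datadog metrics destination.

```python
import time

import requests

DD_API_KEY = "YOUR_DATADOG_API_KEY"  # placeholder credential
DD_METRICS_URL = "https://api.datadoghq.com/api/v1/series"

def send_summary_to_datadog(summary):
    """Forward condensed summary metrics (not raw logs) to Datadog."""
    now = int(time.time())
    series = [
        {
            "metric": "pipeline.requests_total",   # illustrative metric name
            "points": [[now, summary["requests_total"]]],
            "type": "count",
            "tags": ["source:mezmo-pipeline"],
        },
        {
            "metric": "pipeline.errors_total",     # illustrative metric name
            "points": [[now, summary["errors_total"]]],
            "type": "count",
            "tags": ["source:mezmo-pipeline"],
        },
    ]
    resp = requests.post(
        DD_METRICS_URL,
        json={"series": series},
        headers={"DD-API-KEY": DD_API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
```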