Container Logging & DevOps: the Future of Kubernetes Integration

4 MIN READ

    I hosted a webinar covering why logging is important, how to choose a logging provider, and our experience setting up logging on Kubernetes containers: the Kubernetes logging framework and the logging best practices we've implemented internally and with the customers we support who run Kubernetes in production.

    LogDNA's ultimate goal is to give developers a tool to access logs quickly: get in, get your logs and get back to work. Our UI, UX, speed of search and simplicity of integration are all built to solve this one problem. Our Kubernetes container logging integration is a prime example: with just two kubectl commands, your container logs are sent to LogDNA. There's no configuration, no messy sidecar container consuming additional resources, and no dependency on fluentd.

    Kubernetes Logging Best Practices

    1. Log to stdout and separate errors to stderr: while this is standard practice when moving to a containerized environment, many apps still log to files. Write to the standard streams instead, and the Kubernetes logging framework will automatically pick those logs up and stream them wherever they need to go (see the first example after this list).
    2. Separate dev and prod into multiple clusters: this helps you avoid accidentally deleting a critical production pod. You can then use the kubectl config use-context command to switch between the two clusters easily (see the example after this list).
    3. Move variables into environment-specific ConfigMaps: instead of maintaining multiple YAML files of dev and production environment variables, create one ConfigMap of key/value pairs per environment, keeping the staging and production values separate (see the example after this list).
    4. Avoid using sidecars for logging: generally speaking, a pod has one container. A logging sidecar is a second container in the same pod that captures the output of the first, so it consumes unnecessary resources at the pod level. There are scenarios where you can't avoid one, such as when you don't control the application and it writes its logs to files, or when you want to hide output from the Kubernetes logging framework.
    5. Test locally and ephemerally prior to deployment: Minikube installs on your own laptop and, while it doesn't mimic everything Kubernetes does, it gives you a good idea of how your workloads will run in production (see the example after this list).
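
    For practice 1, here is a minimal sketch: anything a container writes to stdout or stderr is picked up by the Kubernetes logging framework, so a trivial pod is enough to see it. The pod name, image and messages are placeholders, not anything specific to our setup.

        apiVersion: v1
        kind: Pod
        metadata:
          name: stdout-demo                # hypothetical pod name
        spec:
          containers:
          - name: demo
            image: busybox
            # write normal output to stdout and errors to stderr
            command: ["sh", "-c", "echo 'hello from stdout'; echo 'an error' >&2; sleep 3600"]

    Once it's running, kubectl logs stdout-demo returns exactly what the container wrote to those streams.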
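
    For practice 2, switching clusters is a single command, assuming you already have a kubeconfig context for each cluster (the context names here are made up):

        # list the contexts (clusters) kubectl knows about
        kubectl config get-contexts

        # point kubectl at the dev cluster
        kubectl config use-context dev-cluster

        # switch to production only when you intend to
        kubectl config use-context prod-cluster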
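
    For practice 3, a rough sketch of an environment-specific ConfigMap; the names and keys are illustrative only:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: app-config-prod            # one ConfigMap per environment, e.g. app-config-staging
        data:
          LOG_LEVEL: "info"
          API_URL: "https://api.example.com"

    Your Deployment can then pull the whole map in as environment variables:

        containers:
        - name: app
          image: my-app:latest             # placeholder image
          envFrom:
          - configMapRef:
              name: app-config-prod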
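
    For practice 5, a quick local test loop with Minikube might look like this (the manifest path is a placeholder):

        # spin up a single-node cluster on your laptop
        minikube start

        # deploy the same manifests you plan to ship
        kubectl apply -f k8s/deployment.yaml

        # tear it all down when you're done
        minikube delete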

    Default Kubernetes Logging Framework

    [Image: the default Kubernetes logging framework]

    The logs are ephemeral: when containers write logs to stdout, those logs are automatically dumped to files under /var/log on the node. When a pod is evicted, crashes, is deleted or is scheduled onto a different node, the logs from its containers are gone. This is different from logging on traditional servers or virtual machines: when your app dies on a virtual machine, you don't lose its logs until you delete the machine. Kubernetes cleans up after itself, and the logs do not persist. Understanding the ephemeral nature of default logging on Kubernetes is important because it points to the need for a centralized log management solution.
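
    A quick way to see this in practice (the pod name is hypothetical, and /var/log/containers is the typical default location on the node):

        # read what a container wrote to stdout/stderr
        kubectl logs my-app-pod

        # logs from the previous, crashed instance of the container
        kubectl logs my-app-pod --previous

        # on the node itself, the log files live under /var/log
        ls /var/log/containers/

    Once the pod is evicted or deleted, those files are cleaned up and the logs are gone with them.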

    Logging Solutions for Kubernetes

    [Image: Fluentd vs. LogDNA]

    Kubernetes logging with Fluentd

    When we first started using Kubernetes, we, like many others, started with Fluentd. You can find many setup guides, even if some of them run to 30+ steps. It is easy to get started, and there are plenty of example configs online. Fluentd works well at low volume; the challenge comes at higher volume. Scaling becomes difficult, mainly the effort of scaling ElasticSearch: learning how to properly architect the shards and indices and becoming an expert in ElasticSearch operations.
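
    The usual starting point is running fluentd as a DaemonSet that tails the container log files on each node and ships them to ElasticSearch. One common route is the community fluentd-kubernetes-daemonset manifests; the exact file name below is illustrative, so check that repository for the current one:

        kubectl apply -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml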

    Kubernetes logging with LogDNA

    Similar to the fluentd agent, you can install the LogDNA agent to read and stream the log lines from /var/log. Since the agent streams the logs to LogDNA, it doesn't take additional resources in your pods and nodes, so you can focus on scaling your product instead of spending your time scaling logging infrastructure. The agent runs per node instead of per pod, which is much more efficient.

    We have kept our Kubernetes integration simple: all you need to do is install our agent using two kubectl commands (see the sketch below). We'll automatically extract and index the Kubernetes metadata (pod name, container name, namespace, node) and auto-parse all the common log types, without the need to identify them manually as you would with fluentd. You can also have your own custom log lines parsed and indexed by us using our custom log parsing interface.

    Try our Kubernetes integration today with a 14-day free trial. We look forward to hearing your feedback.
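
    For reference, the two-command install follows this pattern: create a secret that holds your ingestion key, then create the agent DaemonSet from our published manifest. The exact secret name and manifest URL below are a sketch, so check the LogDNA documentation for the current commands:

        # store your LogDNA ingestion key as a secret
        kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=<YOUR-INGESTION-KEY>

        # deploy the agent as a DaemonSet (one agent per node, not per pod)
        kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml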
