How to ship log files from my Spring Boot pod running on EKS to ELK - kubernetes

I am running my pods on an EKS cluster. The containers run a Spring Boot application. How do I ship the log files from the Spring Boot application to Elasticsearch on AWS using Filebeat? I cannot find any article on this.

If your application logs everything to stdout, the idea is to install Filebeat as a DaemonSet on the EKS cluster, i.e. one Filebeat pod on every node.
Then configure Filebeat to mount the node's Docker containers folder, so it can read the log files and send the data to Elasticsearch (or Logstash).
Elastic docs: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html
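As a rough sketch (the Elasticsearch host below is a placeholder, not something from the question), the filebeat.yml in that manifest essentially boils down to a container input reading the node's log files, Kubernetes metadata enrichment, and an Elasticsearch output; the DaemonSet also has to mount /var/log/containers and /var/lib/docker/containers from the host:

    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # Placeholder endpoint; on AWS, use your Elasticsearch domain endpoint here
    output.elasticsearch:
      hosts: ["https://my-es-domain.us-east-1.es.amazonaws.com:443"]

If the Spring Boot app logs to a file inside the container instead of stdout, the simplest fix is to switch it to a console appender so its output ends up in those files.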

Related

Kubernetes Logging Using Volume Mount

I created a Python application that writes logs to a file. I deployed this application in Kubernetes with replica sets and mounted the logs directory to shared NFS storage. Is this a good approach for collecting logs from Kubernetes?
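For context, a hypothetical sketch of the setup described above (all names, the image, and the NFS server are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: python-app                          # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: python-app
      template:
        metadata:
          labels:
            app: python-app
        spec:
          containers:
          - name: app
            image: my-registry/python-app:latest   # placeholder image
            volumeMounts:
            - name: logs
              mountPath: /var/log/app              # where the app writes its log file
          volumes:
          - name: logs
            nfs:
              server: nfs.example.com              # placeholder NFS server
              path: /exports/app-logs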

How to let Fluentd collect logs from a container outside of the k8s cluster?

I have an EFK (ElasticSearch, Fluentd, Kibana) being deployed in a Kubernetes cluster. I can get the logs from pods in the cluster.
However, I have a container that is outside of the cluster (on a different server, running with Docker), and I want to use Fluentd to collect the logs of this container.
I know the easiest way is to deploy this container inside the current Kubernetes cluster, but due to some design considerations I have to keep this container outside of the Kubernetes cluster.
Is there any way to let the current Fluentd collect logs from a container that is outside of the Kubernetes cluster? Is there any setting I have to change in Fluentd?
Thanks.
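One common pattern for the out-of-cluster container (not stated in the answer below, and assuming you expose the in-cluster Fluentd on port 24224, e.g. via a NodePort or LoadBalancer Service) is to point the external container's Docker log driver at that endpoint; the address below is a placeholder:

    # On the external Docker host: ship this container's stdout/stderr to Fluentd
    docker run --log-driver=fluentd \
      --log-opt fluentd-address=fluentd.example.com:24224 \
      --log-opt tag=external.myapp \
      my-image

    # In the Fluentd config: accept forwarded events
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>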
In Kubernetes, containerized applications that log to stdout and stderr have their log streams captured and redirected to JSON files on the nodes. The Fluentd pod tails these log files, filters log events, transforms the log data, and ships it off to the Elasticsearch cluster.
This solves the log collection problem for Docker containers inside the cluster: deploy Fluentd as a DaemonSet in the k8s cluster.
In addition to container logs, the Fluentd agent will tail Kubernetes system component logs like the kubelet, kube-proxy, and Docker logs. To see a full list of sources tailed by the Fluentd logging agent, consult the kubernetes.conf file used to configure the logging agent.
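As an illustration, the tail source in such a kubernetes.conf is roughly the following (the path is the standard kubelet/Docker log location; the tag is conventional):

    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>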
Follow this doc for more information.

How to see the application logs in a Pod

We are moving towards microservices and using K8S for cluster orchestration. We are building our infrastructure using Dynatrace and a Prometheus server for metrics collection, but they are not yet in good shape.
Our Java application on one of the pods is not working, and I want to see the application logs.
How do I access these logs?
Assuming the application logs to stdout/stderr: kubectl logs -n <namespace-name> <pod-name>.
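A few useful variants (namespace, pod, and container names are placeholders):

    kubectl logs -n my-namespace my-pod                  # single-container pod
    kubectl logs -n my-namespace my-pod -c my-container  # pick a specific container in the pod
    kubectl logs -n my-namespace my-pod -f               # follow (stream) the logs
    kubectl logs -n my-namespace my-pod --previous       # previous (crashed) container instance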

Custom Fluentd Logging

I am migrating from running my containers on a Docker Swarm cluster to Kubernetes running on Google Container Engine. When running on Docker Swarm, I had configured the Docker Engine's logging driver (https://docs.docker.com/engine/admin/logging/overview/) to forward logs in the Fluentd format to a Fluentd container running on the Docker Swarm node. A custom config then forwarded the Docker logs to both an Elasticsearch cluster (running Kibana) and an AWS S3 bucket. How do I port this over to my Kubernetes nodes?
I read that I can run my Fluentd container on each node using a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), but I cannot find any documentation on configuring the Docker Engine log driver to forward the Docker logs to the Fluentd container, or on formatting the logs in the format that I need.
We used a slightly different solution: we run Fluentd as a DaemonSet, but Docker writes logs to the journal and Fluentd accesses them with the systemd plugin: https://github.com/reevoo/fluent-plugin-systemd . We also use the fabric8 Kubernetes metadata plugin: https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
Another approach is to use a tail source with /var/log/containers/*.log as the path. Look at kubernetes_metadata_filter; there are some examples there.
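To also cover the original requirement of shipping to both Elasticsearch and S3, here is a rough sketch built on top of such a tail source (tagged kubernetes.*), using the kubernetes_metadata filter plus a copy output; the host, bucket, and region are placeholders, and the S3 store needs credentials via an instance profile or aws_key_id/aws_sec_key:

    <filter kubernetes.**>
      @type kubernetes_metadata        # fluent-plugin-kubernetes_metadata_filter
    </filter>

    <match kubernetes.**>
      @type copy
      <store>
        @type elasticsearch            # fluent-plugin-elasticsearch
        host elasticsearch.logging.svc # placeholder Elasticsearch host
        port 9200
        logstash_format true
      </store>
      <store>
        @type s3                       # fluent-plugin-s3
        s3_bucket my-log-bucket        # placeholder bucket
        s3_region us-east-1            # placeholder region
        path k8s-logs/
      </store>
    </match>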

Change fluentd config in GKE to move from Stackdriver to ELK

When running a cluster on GKE, the VM image used to build the cluster comes with a fluentd-gcp.yaml file in
/etc/kubernetes/manifests
Consequently, this launches one fluentd pod per node on the cluster.
This fluentd pod collects all container logs and forwards them to Stackdriver based on this configuration.
Now I'd like to use the ELK version instead.
How can I do that in GKE?
You first need to disable the built-in cluster logging (gcloud container clusters create --no-enable-cloud-logging ...) in your cluster. Then you can run the fluentd image of your choice on all nodes using a DaemonSet.
There isn't a way to change the logging configuration on a running cluster, so unfortunately you'll need to create a new cluster without the GCP fluentd logger running.
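A rough sequence (cluster name and zone are placeholders, and fluentd-elasticsearch-daemonset.yaml stands in for whichever fluentd DaemonSet manifest you choose):

    # Create the new cluster with the built-in Stackdriver logging agent disabled
    gcloud container clusters create my-elk-cluster \
      --zone us-central1-a \
      --no-enable-cloud-logging

    # Run your own fluentd (configured for Elasticsearch) on every node
    kubectl apply -f fluentd-elasticsearch-daemonset.yaml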