Custom Fluentd Logging - Kubernetes

I am migrating from running my containers on a Docker Swarm cluster to Kubernetes running on Google Container Engine. When running on Docker Swarm, I had configured the Docker Engine's logging driver (https://docs.docker.com/engine/admin/logging/overview/) to forward logs in the Fluentd format to a Fluentd container running on the Docker Swarm node, with a custom config that would then forward the Docker logs to both an Elasticsearch cluster (running Kibana) and an AWS S3 bucket. How do I port this over to my Kubernetes nodes?
I read that I can run my Fluentd container on each node using a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), but I cannot find any documentation on configuring the Docker Engine log driver to forward the Docker logs to the Fluentd container, let alone on formatting the logs the way I need.

We used a slightly different solution: we run Fluentd as a DaemonSet, but Docker writes its logs to the systemd journal and Fluentd reads them with the systemd plugin (https://github.com/reevoo/fluent-plugin-systemd). We also use the fabric8 Kubernetes metadata plugin: https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
Another approach is to use type tail with /var/log/containers/*.log as the path. Look in kubernetes_metadata_filter; there are some examples there.
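Both answers come down to a Fluentd source block plus the metadata filter. Below is a minimal sketch of such a config, wrapped in a ConfigMap so a DaemonSet can mount it; the ConfigMap name, tag, and pos_file path are placeholders, and option names for the systemd source vary between versions of fluent-plugin-systemd.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config              # hypothetical name; mount into the Fluentd DaemonSet pods
  namespace: kube-system
data:
  fluent.conf: |
    # Option A: tail the per-container log files on the node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json                  # these files are JSON written by Docker's json-file driver
      </parse>
    </source>
    # Option B: read Docker engine output from the systemd journal instead
    # (requires fluent-plugin-systemd; uncomment to use)
    #<source>
    #  @type systemd
    #  path /var/log/journal
    #  matches [{ "_SYSTEMD_UNIT": "docker.service" }]
    #  tag docker.journal
    #</source>
    # Enrich events with pod, namespace, and label metadata
    <filter kubernetes.**>
      @type kubernetes_metadata     # from fluent-plugin-kubernetes_metadata_filter
    </filter>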

Related

How to ship log files from my Spring Boot pod running on EKS to ELK

I am running my pods on an EKS cluster. The containers are based on a Spring Boot application. How do I ship the log files from the Spring Boot application to Elasticsearch on AWS using Filebeat? I cannot find any article on this.
If your application logs everything to stdout, the idea is to install Filebeat as a DaemonSet on the EKS cluster, so there is one pod on every node.
Then configure Filebeat to mount the Docker log folder, so it can read the logs and send the data to Elasticsearch (or Logstash).
Elastic docs: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html
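A trimmed sketch of the DaemonSet from the Elastic reference manifest linked above; the image tag and the Elasticsearch endpoint are placeholders, and the full manifest additionally sets up RBAC and a filebeat.yml ConfigMap that consumes the ELASTICSEARCH_HOST variable.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.0        # pick the tag matching your stack
        env:
        - name: ELASTICSEARCH_HOST
          value: "https://my-domain.es.amazonaws.com"         # placeholder endpoint
        volumeMounts:
        - name: varlibdockercontainers                        # where the container log files live
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers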

How to let Fluentd collect logs from a container outside of the k8s cluster?

I have an EFK stack (Elasticsearch, Fluentd, Kibana) deployed in a Kubernetes cluster. I can get the logs from pods in the cluster.
However, I have a container that is outside of the cluster (on a different server, running under plain Docker), and I want to use Fluentd to collect the logs of this container.
I know the easiest way is to deploy this container inside the current Kubernetes cluster, but due to some design considerations I have to keep it outside.
Is there any way to let the current Fluentd collect logs from this container outside of the Kubernetes cluster? Is there any setting I have to change in Fluentd?
Thanks.
In Kubernetes, containerized applications that log to stdout and stderr have their log streams captured and redirected to JSON files on the nodes. The Fluentd pod will tail these log files, filter log events, transform the log data, and ship it off to the Elasticsearch cluster.
This solves the log collection problem for Docker containers inside the cluster: we deploy Fluentd as a DaemonSet inside the k8s cluster.
In addition to container logs, the Fluentd agent will tail Kubernetes system component logs such as the kubelet, kube-proxy, and Docker logs. To see a full list of sources tailed by the Fluentd logging agent, consult the kubernetes.conf file used to configure the logging agent.
Follow this doc for more information.
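For reference, what makes the node's log files visible to the Fluentd pod is just a pair of hostPath mounts in the DaemonSet; a minimal sketch is below (the image, names, and Elasticsearch address are examples, not the exact manifest from the doc).

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST                   # used by this image's default config
          value: "elasticsearch.logging.svc.cluster.local"  # placeholder service address
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog                                      # kubelet/kube-proxy logs and container log symlinks
          mountPath: /var/log
        - name: varlibdockercontainers                      # the JSON files the symlinks point to
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers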

kube-proxy was not found in my Rancher cluster

My Rancher cluster has been set up for around 3 weeks, and everything works fine. But there is one problem while installing MetalLB: I found there is no kube-proxy in my cluster, not even a kube-proxy pod on any node, so I could not follow the installation guide to set up the kube-proxy config map.
For me, it is really strange to have a cluster without kube-proxy.
My setup for the Rancher cluster is below:
Cluster Provider: RKE
Provisioning: Use existing nodes and create a cluster using RKE
Network Plugin : canal
Maybe there is something I misunderstand, since I can reach NodePort and ClusterIP services correctly.
Finally, I found my kube-proxy. It is a process on the host, not a Docker container.
In Rancher, we should edit cluster.yml to put extra args for kube-proxy. Rancher will apply them to every node of the cluster automatically.
root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16
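For reference, a sketch of the cluster.yml section that produces the flags visible in the process listing above; RKE passes extra_args straight through to the kube-proxy command line, and the strict-ARP setting is what MetalLB's layer 2 mode asks for.

services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc
      ipvs-strict-arp: "true"    # required by MetalLB in L2 mode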

How to use the kubectl command with Kubernetes set up by Rancher running in Docker?

I built a Rancher server using Docker on server 1.
I created and added a Kubernetes cluster on server 2, and I wanted to access Kubernetes with the kubectl command locally on server 2, but a localhost:8080 error is displayed.
How can I use the kubectl command locally against the Kubernetes cluster configured by the Docker-based Rancher?
I fixed that issue by modifying the kubeconfig file.
The kubeconfig contents can be found by logging into the Rancher UI.
The file to be modified is ~/.kube/config.
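A minimal sketch of what ~/.kube/config should end up containing; the server URL and token are placeholders for the values shown in Rancher's Kubeconfig File view, and the cluster/user/context names are arbitrary.

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<rancher-host>/k8s/clusters/<cluster-id>   # placeholder from Rancher
users:
- name: my-user
  user:
    token: <bearer-token>                                      # placeholder from Rancher
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context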

Change fluentd config in GKE to move from Stackdriver to ELK

When running a cluster on GKE, the VM image used to build the cluster comes with a fluentd-gcp.yaml file in
/etc/kubernetes/manifests
Consequently, this launches one fluentd pod per node on the cluster.
This fluentd pod collects all container logs and forwards them to Stackdriver based on this configuration.
Now I'd like to use the ELK version instead.
How can I do that in GKE?
You first need to disable the built-in cluster logging (gcloud container clusters create --no-enable-cloud-logging ...) in your cluster. Then you can run the fluentd image of your choice on all nodes using a DaemonSet.
There isn't a way to change the logging configuration on a running cluster, so unfortunately you'll need to create a new cluster without the GCP fluentd logger running.
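Once the new cluster is up, pointing the DaemonSet's Fluentd at your ELK stack is just an output block; below is a minimal sketch using fluent-plugin-elasticsearch, with a placeholder host and a hypothetical ConfigMap name.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-elasticsearch-config   # hypothetical name; mount into the DaemonSet
  namespace: kube-system
data:
  fluent.conf: |
    # Send all events to Elasticsearch (requires fluent-plugin-elasticsearch)
    <match **>
      @type elasticsearch
      host elasticsearch.example.com   # placeholder for your ELK endpoint
      port 9200
      logstash_format true             # daily logstash-* indices, Kibana-friendly
    </match>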