Change logging default of Google Container Engine - Kubernetes

On Google Container Engine, my cluster ships container stdout and stderr to Google Cloud Logging.
Is there any way that I can change the logging output to be consumed by a syslog server or an external entity?

Google Container Engine gives you two choices for logging: Google Cloud Logging or none. If you don't want to use Google Cloud Logging, you should configure custom logging in your cluster.
There are a couple of ways that you can go about this. You could run a pod per host with your logging agent inside of it and capture logs from any containers that run on the host. This is how Google Container Engine collects logs (using fluentd to send logs to Google Cloud Logging).
You could also configure each of the pods that you want logs from to have a sidecar logging container. This results in many more logging agents running in your system, but gives you the flexibility to customize them for each of your applications.
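As a sketch of the first approach, a node-level logging agent is typically deployed as a DaemonSet so one agent pod runs per host. The name, namespace, image tag, and mounted path below are illustrative assumptions; the agent itself would still need an output configuration pointing at your syslog server or other backend:

```yaml
# Illustrative sketch only: one logging-agent pod per node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent          # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # substitute the agent/image you actually use
        volumeMounts:
        - name: varlog
          mountPath: /var/log         # where the kubelet/container runtime writes logs
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```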

Related

Export logs of Kubernetes cronjob to a path after each run

I currently have a CronJob that schedules a job periodically. I want to export the logs of each pod run to a file at the path temp/logs/FILENAME,
with FILENAME being the timestamp of the run. How can I do that? A solution in Python or a shell command would be appreciated. Thank you.
According to Kubernetes Logging Architecture:
In a cluster, logs should have a separate storage and lifecycle
independent of nodes, pods, or containers. This concept is called
cluster-level logging.
Cluster-level logging architectures require a separate backend to
store, analyze, and query logs. Kubernetes does not provide a native
storage solution for log data. Instead, there are many logging
solutions that integrate with Kubernetes.
Which brings us to Cluster-level logging architectures:
While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
Kubernetes does not provide log aggregation of its own. Therefore, you need a local agent to gather the data and send it to the central log management. See some options below:
Fluentd
ELK Stack
You can find all logs that Pods generate at /var/log/containers/*.log
on each Kubernetes node. You could work with them manually if you prefer, using simple scripts, but keep in mind that Pods can run on any node (if not restricted), and nodes may come and go.
Consider sending your logs to an external system like Elasticsearch or Grafana Loki and managing them there.
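As a minimal sketch of the manual-script route for the CronJob question above: build the timestamped path, then dump the Job's pod logs into it. The `kubectl logs job/<name>` invocation assumes kubectl is installed and pointed at your cluster; the timestamp format is an assumption you can adapt:

```python
import subprocess
from datetime import datetime
from pathlib import Path

def log_path(base_dir="temp/logs", now=None):
    """Build temp/logs/FILENAME, where FILENAME is the run's timestamp."""
    ts = (now or datetime.now()).strftime("%Y%m%d-%H%M%S")
    return Path(base_dir) / f"{ts}.log"

def export_job_logs(job_name, base_dir="temp/logs"):
    """Dump the logs of a Job's pods into a timestamped file.

    Assumes kubectl is configured for the cluster; 'job/<name>' selects
    pods belonging to that Job.
    """
    path = log_path(base_dir)
    path.parent.mkdir(parents=True, exist_ok=True)
    logs = subprocess.run(
        ["kubectl", "logs", f"job/{job_name}", "--all-containers=true"],
        capture_output=True, text=True, check=True,
    ).stdout
    path.write_text(logs)
    return path
```

You would call `export_job_logs("my-cronjob-12345")` after each run, e.g. from a wrapper script or a small sidecar step, since the CronJob itself cannot write outside its pod without a mounted volume.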

How to get all logs from an ECS cluster

Is there some AWS command get to get logs from all services/tasks from an ECS cluster? something like:
aws ecs logs --cluster dev
or
aws ecs describe-clusters --cluster dev logs
for example, there must be some API to retrieve the logs that are shown in the UI.
No, there is no such out-of-the-box option that fetches logs from all services in a cluster, as every container runs in its own space (an EC2 instance).
There are similar options you can try, but before that, you need to understand the logging mechanism of AWS ECS.
logDriver
The log driver to use for the container. The valid values listed for
this parameter are log drivers that the Amazon ECS container agent can
communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
So if you are running multiple containers on the same EC2 instance, then syslog makes sense for you.
Syslog logging driver
The syslog logging driver routes logs to a syslog server. The syslog
protocol uses a raw string as the log message and supports a limited
set of metadata. The syslog message must be formatted in a specific
way to be valid. From a valid message, the receiver can extract the
following information: priority, timestamp, hostname, facility, process name, and process ID.
But the best approach is to have a separate log group for each container. Since syslog does not work with the Fargate launch type, it is better to go with a log group per container.
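For the log-group-per-container approach, each container definition in the task definition carries a `logConfiguration` using the awslogs driver. A sketch, where the group name and region are placeholders:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/dev/my-service",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}
```

With AWS CLI v2 you can then fetch logs per service with `aws logs tail /ecs/dev/my-service --follow`, which is about as close as you get to the hypothetical `aws ecs logs` command in the question.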

Use preinstalled fluentd installation in a K8saaS in the IBM-Cloud?

A K8saaS cluster in the IBM Cloud runs a preinstalled fluentd. May I use it for my own purposes, too?
We are thinking about a logging strategy that is independent of the IBM infrastructure, and we want to store the information in Elasticsearch. May I reuse the fluentd installation provided by IBM to send my log information, or should I install my own fluentd? If so, am I able to install fluentd on the nodes via the Kubernetes API, without any access to the nodes themselves?
The fluentd that is installed and managed by IBM Cloud Kubernetes Service will only connect to the IBM cloud logging service.
There is nothing stopping you from installing your own Fluentd as well, though, to send your logs to your own logging service, whether it runs inside your cluster or outside it. This is best done via a DaemonSet so that it can collect logs from every node in the cluster.

How to Push Kubernetes (EKS) Logs to Cloudwatch logs with separate log streams based on application name

I have a scenario where I need to push logs from applications running on an EKS cluster to separate CloudWatch log streams. I have followed the link below, which pushes all logs to CloudWatch using fluentd. But the issue is that it pushes logs to a single log stream only.
https://github.com/aws-samples/aws-workshop-for-kubernetes
It also pushes all the logs under /var/lib/docker/container/*.log. How can I filter this to get only application-specific logs?
Collectord now supports AWS CloudWatch Logs (and S3/Athena/Glue). It gives you the flexibility to choose which LogGroup and LogStream you want to forward the data to (if the defaults do not work for you).
Installation instructions for CloudWatch
How you can specify LogGroup and LogStream with annotations
I highly recommend reading Setting up comprehensive centralized logging with AWS Services for Kubernetes.
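If you stay with plain fluentd instead, one way to split streams per application is a dedicated `<match>` block per app using the fluent-plugin-cloudwatch-logs output. The tag pattern below assumes the default kubernetes_metadata tag format, and the group/stream names are hypothetical:

```
# Sketch: route only containers whose name starts with "myapp"
# to their own CloudWatch log group and stream.
<match kubernetes.var.log.containers.myapp-**>
  @type cloudwatch_logs
  log_group_name /eks/myapp
  log_stream_name myapp
  auto_create_stream true
</match>
```

Logs from other containers would fall through to whatever catch-all `<match>` you define after this block.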

k8s-visualizer for Kubernetes in Google Cloud Platform

I want to run the k8s-visualizer for Kubernetes in the Google Cloud Platform. I have only found how to run it locally.
How can I run it in the Google Cloud Platform?
The k8s-visualizer is written in a way that depends on the kubectl proxy and runs all Ajax calls against /api/.... It isn't ready to run on the cluster.
If you want to have it on your cluster, you'd have to fork the existing code and adjust all API calls slightly to hit the apiserver.
Once this is done, wrap everything into a container and deploy it into a Pod along with a service.
A good starting point is the open pull requests.
Cheers