How to get all logs from an ECS cluster - amazon-ecs

Is there some AWS command to get logs from all services/tasks in an ECS cluster? Something like:
aws ecs logs --cluster dev
or
aws ecs describe-clusters --cluster dev logs
For example, there must be some API to retrieve the logs that are shown in the console UI.

No, there is no out-of-the-box option that fetches logs from all services based on the cluster, as every container runs in its own space (EC2 instance).
There are similar options you can try, but before that, you need to understand the logging mechanism of Amazon ECS.
logDriver
The log driver to use for the container. The valid values listed for
this parameter are log drivers that the Amazon ECS container agent can
communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
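With the awslogs driver, the destination is configured per container in the task definition's logConfiguration block. A minimal sketch of that fragment; the group name, region, and prefix are placeholders you would replace with your own:

```json
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
    }
}
```

ECS then creates one log stream per container under that group, named from the prefix, container name, and task ID.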
So if you are running multiple containers on the same EC2 instance, then syslog makes sense for you.
Syslog logging driver
The syslog logging driver routes logs to a syslog server. The syslog
protocol uses a raw string as the log message and supports a limited
set of metadata. The syslog message must be formatted in a specific
way to be valid. From a valid message, the receiver can extract the
following information:
But the best approach is to have a single log group for each container. Since syslog does not work in the case of Fargate, it is better to go with a log group per container.
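There is no single command, but with a log group per container you can approximate "all logs for a cluster" from the CLI by walking the running tasks and tailing each task definition's awslogs group. A hedged sketch, assuming every task definition uses the awslogs driver and AWS CLI v2 (which provides `aws logs tail`); the cluster name `dev` is a placeholder:

```shell
#!/bin/sh
# Sketch: tail the awslogs groups behind every running task in a cluster.
# Assumes AWS CLI v2 and that all task definitions use the awslogs driver.
CLUSTER=dev
for task in $(aws ecs list-tasks --cluster "$CLUSTER" \
                --query 'taskArns[]' --output text); do
  # Resolve the task definition behind each running task
  td=$(aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$task" \
         --query 'tasks[0].taskDefinitionArn' --output text)
  # Extract the awslogs-group option of each container definition
  for group in $(aws ecs describe-task-definition --task-definition "$td" \
                   --query 'taskDefinition.containerDefinitions[].logConfiguration.options."awslogs-group"' \
                   --output text); do
    echo "=== $group ==="
    aws logs tail "$group" --since 1h   # add --follow to stream live
  done
done
```

This only reads CloudWatch Logs, so it won't see anything from tasks using other log drivers.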

Related

Export logs of Kubernetes cronjob to a path after each run

I currently have a CronJob that schedules a job periodically and runs in a pattern. I want to export the logs of each pod run to a file at the path temp/logs/FILENAME,
where FILENAME is the timestamp of the run. How am I going to do that? Hopefully someone can provide a solution. If you need to add a script, then please use Python or shell commands. Thank you.
According to Kubernetes Logging Architecture:
In a cluster, logs should have a separate storage and lifecycle
independent of nodes, pods, or containers. This concept is called
cluster-level logging.
Cluster-level logging architectures require a separate backend to
store, analyze, and query logs. Kubernetes does not provide a native
storage solution for log data. Instead, there are many logging
solutions that integrate with Kubernetes.
Which brings us to Cluster-level logging architectures:
While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
Kubernetes does not provide log aggregation of its own. Therefore, you need a local agent to gather the data and send it to the central log management. See some options below:
Fluentd
ELK Stack
You can find all logs that pods are generating at /var/log/containers/*.log
on each Kubernetes node. You could work with them manually if you prefer, using simple scripts, but keep in mind that pods can run on any node (if not restricted), and nodes may come and go.
Consider sending your logs to an external system like Elasticsearch or Grafana Loki and managing them there.
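If you just want the per-run file requested in the question, a small script run after each job can capture the newest pod's logs under a timestamped name. A hedged sketch; the label selector `app=my-cronjob` is a placeholder for whatever labels your CronJob's pods carry:

```shell
#!/bin/sh
# Sketch: save the logs of the most recent CronJob pod to temp/logs/<timestamp>.log.
# "app=my-cronjob" is a placeholder label selector for your CronJob's pods.
mkdir -p temp/logs
POD=$(kubectl get pods -l app=my-cronjob \
        --sort-by=.metadata.creationTimestamp \
        -o name | tail -n 1)
FILENAME=$(date +%Y-%m-%dT%H-%M-%S)
kubectl logs "$POD" > "temp/logs/${FILENAME}.log"
```

Note this still reads logs through the API server, so it only works while the pod object (and its logs) still exist; for durable history you need one of the cluster-level backends above.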

How do I stream multiple logs to AWS CloudWatch from inside a Docker instance?

I am setting up Debian-based containers via AWS ECS on EC2 instances. The container has a number of logs I want in separate CloudWatch streams.
The "expected" setup is to simply stream stdout to CloudWatch, but that only permits one stream.
I tried to install the cloudwatch agent, but ran into myriad problems. System has not been booted with systemd as init system (PID 1). Can't operate. being the starting point.
Is this possible?
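One common approach is to have each log written to its own file and let the CloudWatch agent map each file to a separate stream via its `collect_list`. A hedged sketch of the agent's JSON config; paths, group, and stream names are placeholders (inside a container the agent binary can typically be started directly rather than through systemd, which avoids the PID 1 error — check the agent docs for the exact invocation):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/app/access.log",
            "log_group_name": "my-app",
            "log_stream_name": "access"
          },
          {
            "file_path": "/var/log/app/error.log",
            "log_group_name": "my-app",
            "log_stream_name": "error"
          }
        ]
      }
    }
  }
}
```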

How to connect to AWS ECS cluster?

I have successfully created ECS cluster (EC2 Linux + Networking). Is it possible to login to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can’t find any hints for it in AWS console or AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS will take care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also you can use Cloudwatch to store container logs, so that you don't have to connect to instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
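If you do need to reach the underlying servers, you can resolve the EC2 instance IDs behind the cluster with the CLI and then ssh as usual. A hedged sketch; the cluster name `dev` and key file are placeholders:

```shell
#!/bin/sh
# Sketch: list the EC2 instance IDs registered to an ECS cluster ("dev" is a placeholder).
ARNS=$(aws ecs list-container-instances --cluster dev \
         --query 'containerInstanceArns[]' --output text)
aws ecs describe-container-instances --cluster dev \
    --container-instances $ARNS \
    --query 'containerInstances[].ec2InstanceId' --output text
# Then connect like any other EC2 Linux server:
# ssh -i mykey.pem ec2-user@<instance-public-ip>
```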

How to Push Kubernetes (EKS) Logs to Cloudwatch logs with separate log streams based on application name

I have a scenario where I need to push application logs running on EKS Cluster to separate cloudwatch log streams. I have followed the below link, which pushes all logs to cloudwatch using fluentd. But the issue is, it pushes logs to a single log stream only.
https://github.com/aws-samples/aws-workshop-for-kubernetes
It also pushes all the logs under /var/lib/docker/containers/*.log. How can I filter this to get only application-specific logs?
Collectord now supports AWS CloudWatch Logs (and S3/Athena/Glue). It gives you the flexibility to choose which LogGroup and LogStream you want to forward the data to (if the defaults do not work for you).
Installation instructions for CloudWatch
How you can specify LogGroup and LogStream with annotations
I highly recommend reading Setting up comprehensive centralized logging with AWS Services for Kubernetes
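If you stay with fluentd instead, you can both filter to one application and get per-container streams by matching on the container-derived tag and using fluent-plugin-cloudwatch-logs. A hedged sketch; `myapp` and the group name are placeholders, and the option names should be checked against the plugin's documentation:

```
# Only pick up logs from containers whose file name matches "myapp"
<match kubernetes.var.log.containers.myapp-**>
  @type cloudwatch_logs
  log_group_name /eks/myapp
  use_tag_as_stream true      # one CloudWatch stream per container tag
  auto_create_stream true
</match>
```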

Change logging default of Google container Engine

On the google container engine, my cluster ships container stdout and stderr to google cloud logging.
Is there any way that I can change the logging output to be consumed by a syslog server or an external entity?
Google Container Engine gives you two choices for logging: Google Cloud Logging or none. If you don't want to use Google Cloud Logging, you should configure custom logging in your cluster.
There are a couple of ways that you can go about this. You could run a pod per host with your logging agent inside of it and capture logs from any containers that run on the host. This is how Google Container Engine collects logs (using fluentd to send logs to Google Cloud Logging).
You could also configure each of the pods that you want logs from to have a sidecar logging container. This results in many more logging agents running in your system, but gives you the flexibility to customize them for each of your applications.
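The sidecar approach can be sketched as a pod spec where the app and a log-shipping container share a volume. The image names and paths here are placeholders, not a tested configuration:

```yaml
# Sketch: app container writes to a shared volume; a sidecar ships the logs
# (e.g. a fluentd image with a syslog output). Images/paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: my-syslog-forwarder:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}
```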