How do I stream multiple logs to AWS CloudWatch from inside a Docker instance?

I am setting up Debian-based containers via AWS ECS on EC2 instances. The container has a number of logs I want in separate CloudWatch streams.
The "expected" setup is to simply stream stdout to CloudWatch, but that only permits for one stream.
I tried to install the cloudwatch agent, but ran into myriad problems. System has not been booted with systemd as init system (PID 1). Can't operate. being the starting point.
Is this possible?
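For context, the single-stream "expected" setup mentioned above is the awslogs log driver set on the container definition in the ECS task definition; a minimal sketch, with placeholder group and region values:

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
    }
}

With this, everything the container writes to stdout/stderr lands in a single CloudWatch stream per container, which is exactly the limitation described above.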

Related

AWS EKS - Dead container cleanup

I am using Terraform to create infrastructure in an AWS environment. Among other services, we also create AWS EKS using the terraform-aws-modules/eks/aws module. The EKS cluster is primarily used to spin up dynamic containers that handle asynchronous job execution. Once a given task is completed, the container releases its resources and terminates.
What I have noticed is that the dead containers lie on the EKS cluster forever. This results in too many dead containers just sitting on EKS and consuming storage. I came across a few blogs mentioning that Kubernetes has a garbage collection process, but none describes how it can be configured using Terraform or explicitly for AWS EKS.
Hence I am looking for a solution that helps specify a garbage collection policy for dead containers on AWS EKS. If it's not achievable via Terraform, I am OK with using kubectl with AWS EKS.
These two kubelet flags will cause the node to clean up Docker images when filesystem usage reaches those percentages. https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-lifecycle
--image-gc-high-threshold="85"
--image-gc-low-threshold="80"
But you also probably want to set --maximum-dead-containers 1 so that running multiple containers from the same image doesn't leave dead containers around.
In EKS you can add these flags to the UserData section of your EC2 instance/Autoscaling group.
#!/bin/bash
set -o xtrace
# Pass the garbage-collection flags from above to the kubelet via --kubelet-extra-args
/etc/eks/bootstrap.sh --apiserver-endpoint ..... --kubelet-extra-args '--image-gc-high-threshold=85 --image-gc-low-threshold=80 --maximum-dead-containers=1'

How to record the Linux commands executed inside a Kubernetes container?

For auditing purposes, how can we record all the Linux commands that have been executed inside a Kubernetes container?
This is possible using eBPF. There are a few Kubernetes tools that can do session auditing; one of them is Teleport, which acts as a bastion host for services and is capable of recording the commands run in a pod's shell (bash, ash, etc.).
https://goteleport.com/
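As a rough sketch of what that looks like in practice, assuming Teleport is already deployed for the cluster (the proxy address, cluster name, and pod name below are placeholders), interactive sessions opened through Teleport's tsh CLI are recorded for later playback:

# Authenticate against the Teleport proxy (placeholder address)
tsh login --proxy=teleport.example.com
# Fetch kubeconfig credentials for the target cluster (placeholder name)
tsh kube login my-cluster
# Shell sessions opened through the Teleport proxy are recorded
kubectl exec -it my-pod -- /bin/sh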

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for it in the AWS console or AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (Docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
You can also use CloudWatch to store container logs, so that you don't have to connect to the instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
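For reference, creating a service from the CLI looks roughly like this; the cluster, service, and task definition names are placeholders:

aws ecs create-service \
    --cluster dev \
    --service-name my-service \
    --task-definition my-task:1 \
    --desired-count 1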

Gather resource usage by process in a kubernetes cluster

I am searching for a tool similar to Prometheus + Grafana that gathers and records resource usage, especially memory usage, by process ID or process name.
We have two components that run different processes; they have a memory leak and I want to find which process is leaking.
Weave Scope shows all the processes of each pod and their resource usage, but only live; I want something similar that stores the data over time, like a Prometheus graph.
There is a Zabbix-based solution that lets you monitor this at the container level.
Dockbix Agent XXL is an agent for Zabbix capable of monitoring all Docker containers on your host.
You need to deploy it on all nodes; it will collect data from your containers and send it to your Zabbix server.
No classic rpm/deb package installation or Zabbix module compilation is required. Just start the dockbix-agent-xxl container, and Docker container metrics will be collected from the Docker daemon API or cgroups.
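A minimal launch sketch: the image name comes from the project, but the mounts and the environment variable are assumptions to verify against the Dockbix Agent XXL README:

# Run the agent with host networking and access to the Docker daemon
docker run -d --name=dockbix-agent-xxl \
    --net=host --privileged \
    -v /:/rootfs -v /var/run:/var/run \
    -e "ZA_Server=<your-zabbix-server-ip>" \
    monitoringartist/dockbix-agent-xxl-limited:latest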

How to get all logs from an ECS cluster

Is there an AWS command to get logs from all services/tasks in an ECS cluster? Something like:
aws ecs logs --cluster dev
or
aws ecs describe-clusters --cluster dev logs
For example, there must be some API to retrieve the logs that are shown in the ECS console UI.
No, there is no such out-of-the-box option that retrieves logs from all services in a cluster, as every container runs in its own space (EC2 instance).
There are similar options you can try, but before that you need to understand the logging mechanism of AWS ECS.
logDriver
The log driver to use for the container. The valid values listed for
this parameter are log drivers that the Amazon ECS container agent can
communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
So if you are running multiple containers on the same EC2 instance, syslog may make sense for you.
Syslog logging driver
The syslog logging driver routes logs to a syslog server. The syslog protocol uses a raw string as the log message and supports a limited set of metadata. The syslog message must be formatted in a specific way to be valid. From a valid message, the receiver can extract metadata such as the priority, timestamp, hostname, facility, and process name and ID.
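As a rough sketch, routing a container's logs to a local syslog daemon via the task definition; the address and tag values are placeholders, and syslog-address and tag are standard Docker syslog driver options:

"logConfiguration": {
    "logDriver": "syslog",
    "options": {
        "syslog-address": "udp://127.0.0.1:514",
        "tag": "{{.Name}}"
    }
}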
But the best approach is to have a separate log group for each container. Since syslog does not work in the case of Fargate, it is better to go with a log group per container.
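With a log group per container, each container's logs can then be tailed individually from the CLI (the group name is a placeholder; aws logs tail requires AWS CLI v2):

aws logs tail /ecs/dev/web --follow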