AWS CloudWatch Logs datetime_format config combines multiple lines into one line - amazon-cloudwatchlogs

I use AWS CloudWatch to collect the logs of an AWS ECS Fargate task.
The config is in the following pic:
And I get the following logs, which is weird: some lines that start with a datetime are combined into one log entry. I think every line that starts with a datetime should be treated as a single log entry.

The %L option of awslogs-datetime-format is documented incorrectly. I had the same problem, looked at the driver source code, and found that %L matches .\d{3} Here's a source link.
This means the leading . before the milliseconds is already part of what %L matches, so you don't add a separator yourself. You can change the format like this and it will work:
awslogs-datetime-format: '%Y-%m-%d %H:%M:%S%L'
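For reference, a minimal sketch of passing the corrected format to the awslogs driver with plain docker run; the region, log group, and image names are placeholders, and on ECS/Fargate the same option names go into the task definition's logConfiguration options instead:
docker run \
  --log-driver awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-task-logs \
  --log-opt awslogs-datetime-format='%Y-%m-%d %H:%M:%S%L' \
  my-image
# Lines that do not match the datetime pattern are appended to the
# previous CloudWatch log event instead of starting a new one.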

Related

How to show timestamps for each line in Argo Workflows pods?

I'm trying to figure out how to show a timestamp for each line of STDOUT of an Argo Workflows pod. The init and wait containers by default show a timestamp, but never the main container.
The Argo CLI has a --timestamp flag when viewing logs.
Also the argo-java-client has a logOptionsTimestamps property that also enables timestamps.
However I cannot find a similar option when defining a Workflow in YAML. I've gone through the field reference guide but haven't been able to find something to enable timestamps in the main container.
Does anyone know if this is possible, or how to enable them?
Thanks,
Weldon
The reason init and wait log statements have timestamps is that the Argo executable's logger writes timestamps by default.
The --timestamps option does not cause the containers themselves to log timestamps. It just decorates each log line with a timestamp (kubectl has a similar option).
As far as I know, there's no way to declaratively cause code running in the main container to log timestamps. You'd have to modify the code itself to use a logger which inserts timestamps.
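If decorating the lines at read time is enough, here is a small sketch of the CLI options mentioned above (workflow and pod names are placeholders):
# Argo CLI: prefix each log line with a timestamp when viewing logs
argo logs my-workflow --timestamps
# kubectl offers the same decoration for the pod's main container
kubectl logs my-workflow-pod-123 -c main --timestamps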

Kubernetes Log Splitting (Stdout/Stderr)

When I call kubectl logs pod_name, I get stdout and stderr combined. Is it possible to request only stdout or only stderr? Likewise, I am wondering whether it is possible to do so through the k8s REST interface. I've searched for several hours and read through the repository but could not find anything.
Thanks!
No, this is not possible. To my knowledge, at the moment of writing this, Kubernetes supports only one logs API endpoint, which returns all logs (stdout and stderr combined).
If you want to access them separately, you should consider using a different logging driver or querying the logs directly from Docker.
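If you do fall back to Docker on the node, one thing that helps is that docker logs replays the container's stdout and stderr on the matching streams of the command itself (with the default json-file driver), so you can separate them with shell redirection; the container name below is a placeholder:
# stdout only
docker logs my-container 2>/dev/null
# stderr only
docker logs my-container 2>&1 >/dev/null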

GCP Stackdriver Logging: log format in bucket changed from one folder per container to stdout/stderr

I have a question similar to the one described here: GKE kubernetes container stdout logs format changed
In the old version of Stackdriver I had one sink with a filter like this:
resource.type=container,
resource.namespace_id=[NAMESPACE_NAME]
resource.pod_id=[POD_NAME]
and the logs were stored in the bucket nicely, like this:
logName=projects/[PROJECT-NAME]/logs/[CONTAINER-NAME]
...so I had a folder of logs for each container.
But now I have updated my Stackdriver logging + monitoring to the latest version, and I have two folders, stdout and stderr, which contain all logs for all containers!
logName=projects/[PROJECT-NAME]/logs/stdout
logName=projects/[PROJECT-NAME]/logs/stderr
All logs from many containers are stored in these two folders! This is pretty inconvenient =(
I've read about this in the docs: https://cloud.google.com/monitoring/kubernetes-engine/migration#changes_in_log_entry_contents
The logName field might change. Stackdriver Kubernetes Engine Monitoring log entries use stdout or stderr in their log names whereas Legacy Stackdriver used a wider variety of names, including the container name. The container name is still available as a resource label.
...but I can't find a solution! Please help me: how can I get one folder of logs per container, like it was in the old version of Stackdriver?
Here is a workaround that has been suggested:
Create a different sink for each of your containers, filtered by resource.labels.container_name.
Export each sink to a different bucket.
Note: if you configure each separate sink to the same bucket, the logs will be combined.
More details at Google Issue Tracker
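A rough sketch of that workaround with the gcloud CLI, repeated once per container; the sink, bucket, and container names are placeholders:
# One sink + one bucket per container keeps the logs separated
gcloud logging sinks create my-app-sink \
  storage.googleapis.com/my-app-logs-bucket \
  --log-filter='resource.type="k8s_container" AND resource.labels.container_name="my-app"'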

Filebeat kafka output use filename as key

I want to use Filebeat 5.4.0 to ship logs to Kafka. My logs are all Docker container logs, in /var/lib/docker/containers/*/${container_name}.log, or soft-linked as /var/log/containers/${appname}-${container_name}.log.
I want to save all app logs to one topic in Kafka, and my requirements are:
Make sure the logs from the same container go to the same partition, in order.
Each message must contain the appname and the container_name it came from.
And I'm facing two problems:
How do I read logs from a soft link?
How do I get the appname and container_name from the filename and set them as the key of output.kafka?
Beats are supposed to be lightweight; if you want to do more filtering, that is what Logstash is for. You can use Filebeat + Logstash + Kafka and apply Logstash's split filter before sending to Kafka.
You can also use the 'type' property in Filebeat to map the log paths, like below:
...
  paths:
    - "/var/log/container/${appname}-${container_name}"
  document_type: log
output.kafka:
  ...
  key: '%{[type]}'
  ...

stackdriver gcloud log write throughput

I am looking into the gcloud logging command line. I started with a classic sample:
gcloud beta logging write --payload-type=struct my-test-log "{\"message\": \"My second entry\", \"weather\": \"aaaaa\"}"
It works fine, so I checked the throughput with the following code. It runs very slowly (about 2 records a second). Is this the best way to do it?
Here is my sample code:
tail -F -q -n0 /root/logs/general/*.log | while read -r line
do
  echo "$line"
  b=$(date)
  gcloud beta logging write --payload-type=struct my-test-log "{\"message\": \"My second entry $b\", \"weather\": \"aaaaa\"}"
done
If you assume each command execution takes around 150ms at best, you can only write a handful of entries every second. You can try using the API directly to send the entries in batches. Unfortunately, the command line can currently only write one entry at a time. We will look into adding the capability to write multiple entries at a time.
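As an illustration, here is a rough sketch of batching several entries into a single call to the Cloud Logging v2 entries.write REST endpoint with curl; the project ID, log name, and payloads are placeholders:
curl -s -X POST "https://logging.googleapis.com/v2/entries:write" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "logName": "projects/my-project/logs/my-test-log",
    "resource": {"type": "global"},
    "entries": [
      {"jsonPayload": {"message": "entry 1", "weather": "aaaaa"}},
      {"jsonPayload": {"message": "entry 2", "weather": "aaaaa"}}
    ]
  }'
One HTTP round trip then carries many entries instead of launching one gcloud process per entry.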
If you want to stream a large number of messages fast, you may want to look into Pub/Sub.