How to send HAProxy container logs to multiple Splunk instances - haproxy

I have HAProxy running in a container. While reading the documentation I learned that we can forward these logs to Splunk Enterprise using the Docker splunk logging driver. The other approach is to send the logs to a syslog server and collect them from there with a Splunk forwarder.
Question: Is there any other approach we can use without creating a third container (a syslog server, etc.)? Can the splunk driver send the logs to multiple Splunk instances?
Can a Splunk forwarder read logs directly from the HAProxy container?
I am trying to avoid creating a third container: the first is the HAProxy container, and the second could be a Splunk forwarder container that sends logs to multiple Splunk instances.
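For reference, this is the kind of splunk driver configuration I have been looking at (the HEC URL and token below are placeholders, and the driver takes a single splunk-url):
# placeholder HEC endpoint and token; adjust splunk-insecureskipverify to match your TLS setup
docker run -d \
  --name haproxy \
  --log-driver=splunk \
  --log-opt splunk-url=https://splunk.example.com:8088 \
  --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
  --log-opt splunk-insecureskipverify=true \
  haproxy:2.4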

Related

How to push Nginx logs to MongoDB?

I have this somewhat unusual scenario and need a better way of handling it.
I have the following setup running as Docker containers:
A simple Python web server
An nginx reverse proxy configured to route traffic to the above Python server
I have set up Docker volumes to mount the nginx logs to a host path, and every time I access my simple web page it produces the usual nginx access logs.
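For context, the volume mount looks roughly like this in my docker-compose file (the service names, image tag, and host path here are placeholders, not my exact file):
version: "3"
services:
  web:
    build: ./app                               # the simple Python web server
  nginx:
    image: nginx:1.21
    ports:
      - "80:80"
    volumes:
      - /var/log/myapp/nginx:/var/log/nginx    # nginx access/error logs mounted to the host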
My requirement is:
access to the page should result in the creation of a new document in the MongoDB collection
My problem is: how can I get notified when nginx writes a new log entry so that I can send the data to MongoDB?
My workaround: since I have mounted the nginx logs to a host path, I can have some Python code running that watches the file at that host path and, every time a modification happens, sends the new content to MongoDB. But this approach feels really unreliable. Is there a better way of doing this at the container level?
Thank you!

Apache Druid failing to connect to ZooKeeper; Apache Druid is deployed as a Docker image in one container

I am trying to deploy Apache Druid in a Docker container. The image builds successfully, and all the services, including ZooKeeper, start normally when the Apache Druid Docker image is deployed.
Here is my setup: I am deploying the Druid Docker image on a remote Docker host, which uses Docker Swarm internally. I have configured a different container name and hostname for each Apache Druid service, and I have configured an external network; I found out that Swarm is starting those services on different hosts. I have configured a "link" to zookeeper for the Druid services and vice versa.
But the MiddleManager, Coordinator, and Broker are failing to connect to ZooKeeper. Following is the error:
org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/IP Address:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-19T22:04:05,673 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/IP Address:2181.: Connection refused
So I have different services running on a Docker network, on different nodes (Docker on Linux). These services are part of Apache Druid: MiddleManager, Broker, Router, etc. They are all defined in one single docker-compose file.
The services start but then are not able to connect to ZooKeeper, which is part of the Apache Druid package. I found out from my infra team that these services are launched on different nodes within the network.
I have defined and used an external network, and I am also defining links. How do I configure the services to talk to each other? My docker-compose file is in a comment below.
Requesting inputs.
Thanks and regards, Shubhada
I have fixed this issue by setting the Druid host to gateway.docker.internal.
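A minimal sketch of the shared-network idea discussed above could look like this; it is not the compose file from the comment, and the image tags, service names, and the druid_zk_service_host variable are assumptions:
version: "3.7"
services:
  zookeeper:
    image: zookeeper:3.5
    networks:
      - druid-net
  broker:
    image: apache/druid:0.17.0
    command: broker
    environment:
      - druid_zk_service_host=zookeeper   # assumed env var; lets the Broker resolve ZooKeeper by service name
    networks:
      - druid-net
networks:
  druid-net:
    external: true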

How to get all logs from an ECS cluster

Is there some AWS command to get logs from all services/tasks in an ECS cluster? Something like:
aws ecs logs --cluster dev
or
aws ecs describe-clusters --cluster dev logs
For example, there must be some API to retrieve the logs that are shown in the ECS console UI.
No, there is no such out-of-the-box option that gets logs from all services based on the cluster, as every container runs in its own space (EC2 instance).
There is a similar option that you can try, but before that, you need to understand the logging mechanism of AWS ECS.
logDriver
The log driver to use for the container. The valid values listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
So if you are running multiple containers on the same EC2 instance, then syslog makes sense for you.
Syslog logging driver
The syslog logging driver routes logs to a syslog server. The syslog protocol uses a raw string as the log message and supports a limited set of metadata. The syslog message must be formatted in a specific way to be valid. From a valid message, the receiver can extract the following information:
But the best approach is to have a separate log group for each container. Since syslog does not work in the case of Fargate, it is better to go with a log group per container.
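As a sketch of that per-container log group approach with the awslogs driver, the logConfiguration block of each container definition would look roughly like this (the group name, region, and stream prefix are placeholders):
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/dev/my-service",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs"
  }
}
With that in place, each service's logs land in their own CloudWatch log group, which you can then read per group (for example with aws logs) rather than per cluster.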

How to push Kubernetes (EKS) logs to CloudWatch Logs with separate log streams based on application name

I have a scenario where I need to push logs from applications running on an EKS cluster to separate CloudWatch log streams. I have followed the link below, which pushes all logs to CloudWatch using fluentd. The issue is that it pushes logs to a single log stream only.
https://github.com/aws-samples/aws-workshop-for-kubernetes
It also pushes all the logs under /var/lib/docker/containers/*.log. How can I filter this down to only application-specific logs?
Collectord now supports AWS CloudWatch Logs (and S3/Athena/Glue). It gives you the flexibility to choose which LogGroup and LogStream you want to forward the data to (if the defaults do not work for you).
Installation instructions for CloudWatch
How you can specify LogGroup and LogStream with annotations
I highly recommend reading Setting up comprehensive centralized logging with AWS Services for Kubernetes.
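If you stay with the fluentd approach from the question instead, a match block along these lines (assuming the fluent-plugin-cloudwatch-logs plugin is installed; the app name, log group, and region are placeholders) both restricts collection to one application's containers and gives it its own group and streams:
# route only containers from pods named myapp-* to a dedicated group,
# using the fluentd tag as the log stream name
<match kubernetes.var.log.containers.myapp-**>
  @type cloudwatch_logs
  log_group_name /eks/dev/myapp
  use_tag_as_stream true
  auto_create_stream true
  region us-east-1
</match>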

Change the logging default of Google Container Engine

On Google Container Engine, my cluster ships container stdout and stderr to Google Cloud Logging.
Is there any way that I can change the logging output to be consumed by a syslog server or an external entity?
Google Container Engine gives you two choices for logging: Google Cloud Logging or none. If you don't want to use Google Cloud Logging, you should configure custom logging in your cluster.
There are a couple of ways that you can go about this. You could run a pod per host with your logging agent inside of it and capture logs from any containers that run on the host. This is how Google Container Engine collects logs (using fluentd to send logs to Google Cloud Logging).
You could also configure each of the pods that you want logs from to have a sidecar logging container. This results in many more logging agents running in your system, but gives you the flexibility to customize them for each of your applications.
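A minimal sketch of the first option (one agent pod per host) is a DaemonSet that mounts the node's log directory; the agent image is an assumption, and its forwarding configuration (e.g. to your syslog server) is omitted here:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16     # any syslog-capable logging agent works here
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log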