Send Kubernetes pod logs to Splunk

I am using Amazon EKS and I have a server (call it X) that is connected to the control plane and can use kubectl.
I am able to get pod logs from server X by running the following command:
kubectl logs -f podname -n namespace
Now my goal is to send these pod logs to Splunk, for which I am using splunk-connect-for-kubernetes.
But as per the configuration in the values.yaml file, the Kubernetes system logs are forwarded to Splunk instead of the pod logs.
I would specifically like to send the pod logs, i.e. my application logs, to Splunk. Is there any way to achieve this?

One option you have is to use a Fluentd/Fluent Bit combination to read the container logs and send them to Splunk.
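For example, a Fluent Bit agent that tails the container log files can forward them to a Splunk HTTP Event Collector (HEC) with an output section roughly like the sketch below. The host, port and token are placeholders for your own Splunk endpoint, and the Match pattern assumes the usual kube.* tag applied by the tail input / kubernetes filter:
[OUTPUT]
    Name          splunk
    Match         kube.*
    # Placeholder values -- point these at your own Splunk HEC endpoint
    Host          splunk.example.com
    Port          8088
    Splunk_Token  <your-HEC-token>
    TLS           On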

Related

How do you retrieve pod logs by labelSelector when using the k8s HTTP API?

I would like to collect the logs from one or more related pods using a labelSelector and the Kubernetes HTTP API. However, I don't see any way to do this without first knowing all the pod names, e.g.
{{baseUrl}}/api/v1/namespaces/:namespace/pods/:name/log?container=<container>&follow=true&insecureSkipTLSVerifyBackend=true&limitBytes=<bytes>&pretty=true&previous=true&sinceSeconds=<seconds>&tailLines=<lines>&timestamps=true
Is this possible, or should I mount a container with kubectl and use that to get the logs I want?
I can get the logs using kubectl like so:
kubectl logs -l job=myjob -n test -c main
I would assume there is a similar way to retrieve logs by labelSelector using the API.
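As far as I know the log subresource itself does not accept a labelSelector, but you can get the same effect in two calls: list the pods that match the selector, then fetch each pod's logs. A rough sketch with curl and jq, assuming the API server address is in $APISERVER and a bearer token is in $TOKEN:
# List the names of pods matching the label selector job=myjob in namespace "test"
PODS=$(curl -s -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/test/pods?labelSelector=job%3Dmyjob" \
  | jq -r '.items[].metadata.name')

# Fetch the logs of the "main" container of each matching pod
for POD in $PODS; do
  curl -s -H "Authorization: Bearer $TOKEN" \
    "$APISERVER/api/v1/namespaces/test/pods/$POD/log?container=main"
done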

How to let Fluentd collect logs from a container outside of the k8s cluster?

I have an EFK (ElasticSearch, Fluentd, Kibana) being deployed in a Kubernetes cluster. I can get the logs from pods in the cluster.
However, I have a container which is outside of the cluster (on a different server, running with Docker), and I want to use Fluentd to collect the logs of this container.
I know the easiest way is to deploy this container inside the current Kubernetes cluster. But due to some design considerations, I have to put this container outside of the Kubernetes cluster.
Is there any way to let the current Fluentd collect logs from the container which is outside of the Kubernetes cluster? Is there any setting I have to change in Fluentd?
Thanks.
In Kubernetes, containerized applications that log to stdout and stderr have their log streams captured and redirected to JSON files on the nodes. The Fluentd Pod will tail these log files, filter log events, transform the log data, and ship it off to the Elasticsearch cluster we deployed earlier.
This solves the log collection problem for Docker containers inside the cluster: Fluentd is deployed as a DaemonSet in the k8s cluster.
In addition to container logs, the Fluentd agent will tail Kubernetes system component logs like the kubelet, kube-proxy, and Docker logs. To see a full list of sources tailed by the Fluentd logging agent, consult the kubernetes.conf file used to configure the logging agent.
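For reference, the container-log source in a typical kubernetes.conf looks roughly like the sketch below; it assumes the Docker json-file log driver (containerd/CRI-O nodes write a different log format):
<source>
  @type tail
  @id in_tail_container_logs
  # Container stdout/stderr ends up as JSON files under this path on each node
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>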
Follow this doc for more information.

How can I sniff all DNS records from Kubernetes?

I wish to sniff and extract all DNS records from Kubernetes: clientIP, serverIP, date, QueryType, etc.
I have set up a Kubernetes service.
It is online and running. There I created several containerized micro-services that generate DNS queries (HTTP requests to external addresses). How can I sniff this traffic? Is there a way to extract logs with DNS records?
Given that you use CoreDNS as your cluster DNS service, you can configure it to log queries, errors, etc. to stdout. CoreDNS has been available as an alternative to kube-dns since k8s version 1.11, so if you're running a cluster newer than 1.11 there's a good chance that you're using CoreDNS.
The CoreDNS service usually™️ lives in the kube-system namespace and can be reconfigured using the provided ConfigMap.
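A minimal way to get at that ConfigMap (the ConfigMap and Deployment are usually both named coredns):
# Edit the Corefile in place
kubectl -n kube-system edit configmap coredns

# If the reload plugin is not enabled in the Corefile, restart the CoreDNS pods to pick up the change
kubectl -n kube-system rollout restart deployment coredns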
Example on how to log everything to stdout, taken from the README:
. {
...
log
...
}
When you've reconfigured CoreDNS you can check the Pod logs with:
kubectl logs -n kube-system <POD NAME>
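If you don't know the pod names, CoreDNS pods normally carry the k8s-app=kube-dns label, so you can also follow the logs of all of them at once:
kubectl logs -n kube-system -l k8s-app=kube-dns -f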
I have successfully extracted DNS logs using the answer above. My new problem is that I can't see resolution data, i.e. RRDATA such as the resolved IP or other response info. Is there a way to see that too?

How to see the application logs in a Pod

We are moving towards microservices and using K8S for cluster orchestration. We are building infra using Dynatrace and a Prometheus server for metrics collection, but they are not yet in good shape.
Our Java application on one of the Pods is not working. I want to see the application logs.
How do I access these logs?
Assuming the application logs to stdout/err, kubectl logs -n namespacename podname.
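A couple of variants that are often useful when a pod is misbehaving (standard kubectl flags):
# Follow the logs of a specific container in the pod
kubectl logs -f podname -c containername -n namespacename

# Logs from the previous container instance, useful if the pod crashed and restarted
kubectl logs --previous podname -n namespacename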

Can't shut down InfluxDB in Kubernetes

I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue.
The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
Then if I say:
kubectl get pods --namespace=kube-system
I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.
You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active.
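Roughly, on the master node (the exact file and directory names vary between kube-up.sh versions, so list the directory first and remove whatever the InfluxDB/Grafana manifests are actually called):
# See which addon manifests are present
ls -R /etc/kubernetes/addons/

# Remove the monitoring manifests; "cluster-monitoring" is an assumption, use the names the listing shows
sudo rm -r /etc/kubernetes/addons/cluster-monitoring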
You can also rebuild your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see the other environment settings that affect the cluster kube-up.sh builds in cluster/aws/config-default.sh.
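For example, from the root of the Kubernetes release you used to bring the cluster up (assuming the script lives at cluster/kube-up.sh, as in the upstream release):
# Disable the monitoring addon for the next cluster build
export ENABLE_CLUSTER_MONITORING=none
./cluster/kube-up.sh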