Logging k8s kubectl command activity by user profile in Splunk - kubernetes

Disclaimer: I am neither a K8s expert nor a K8s administrator, and I have only limited knowledge of how to query data in Splunk. So please skip this if you can't help, and DON'T close it without understanding what is being asked; I am happy to clarify. This will benefit people who run into the same question in the future.
+++++++++++++++++++++++++++++++++++
We are using K8s on-prem, there are tons of namespaces, and users have access to pretty much every namespace. Somebody could accidentally issue a kubectl delete command and delete anything: a pod, a service, roles, or even the cluster. My objective in this thread: is there any way we can trace who is running every kubectl operation?
I found the link below, which can help with auditing k8s: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
If auditing is enabled in k8s, how can we trace who has executed every kubectl operation? As I said, I am not a k8s admin, but I want to know whether there is a clear path to trace this in the logs, from k8s back to Splunk.
Our K8s admin said auditing has already been set up, but kubectl commands with user details are NOT flowing from Rancher / Fluentd to Splunk. Do we need any specific configuration, which the K8s admin would have to set, to turn this on? Any help would be appreciated.
thanks
N.B.: this is a reopened thread from a closed one.

You can use the log backend. With this configuration, all of your audit logs will be written to disk, where Fluentd can collect them. The logs are created on the master nodes, so the Fluentd DaemonSet needs to run there as well.
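For reference, a minimal sketch of what that setup can look like; the flag values and policy rules below are illustrative, and the actual paths and rules are whatever your admin configures on the kube-apiserver:

kube-apiserver flags:
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit/audit.log
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100

audit-policy.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who created, changed, or deleted anything (request metadata only).
  - level: Metadata
    verbs: ["create", "update", "patch", "delete"]
  # Drop the read-only noise.
  - level: None
    verbs: ["get", "list", "watch"]

Each resulting audit event is a JSON object carrying the requester in user.username (and user.groups), the verb, and the target object in objectRef, so once Fluentd ships that file to Splunk you can search on those fields to see who deleted what.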

Related

Deployed a service on k8s but it's not showing any pods even when it failed

I have deployed a k8s service, however it's not showing any pods. This is what I see:
kubectl get deployments
It should be created in the default namespace.
kubectl get nodes (this shows me nothing)
How do I troubleshoot a failed deployment? The test-control-plane node is the one deployed by kind, which is the k8s distribution I'm using.
kubectl get nodes
If the above command does not show anything, it means there are no nodes in your cluster, so where would your workload run?
You need at least one worker node in the K8s cluster so the deployment can schedule the pod on it and run the application.
You can check the worker nodes using the same command:
kubectl get nodes
You can debug further and check the reason for the issue using:
kubectl describe deployment <name of your deployment>
To find out what really went wrong, first follow the steps described by Harsh Manvar in his answer. Perhaps that information will help you find the problem. If not, check the logs of your deployment: try to list your pods, see which ones did not start properly, then check their logs.
You can also use kubectl describe on the pods to see in more detail what went wrong. Since you are using kind, I am including a list of known errors for you.
You can also see this visual guide on troubleshooting Kubernetes deployments and 5 Tips for Troubleshooting Kubernetes Deployments.
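A typical sequence looks like the following; the deployment name myapp and the app=myapp label are placeholders for your own resources:

kubectl get nodes
kubectl get deployment myapp
kubectl describe deployment myapp
kubectl get pods -l app=myapp
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp

The events listed by the last command usually show scheduling failures explicitly, e.g. that no nodes were available to schedule the pods.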

Kubernetes - keeping the execution logs of a pod

I'm trying to keep the execution logs of containers in Kubernetes.
I added successfulJobsHistoryLimit: 5 and failedJobsHistoryLimit: 5 to my CronJob YAML in order to keep the execution history, but when I try to view the logs of those pods I get this error
I assume it is because the pods have been deleted, since when I go to a running pod I can see the logs.
So is there a way to keep the logs in this part of Kubernetes, or is there something I have to set up in order to get this functionality?
Sorry if the question has been asked before, but I didn't really find anything and I'm new to Kubernetes.
Thanks for the replies.
Looking at this problem from a bigger perspective, it's generally a good idea to have your logs collected by logging agents or pushed directly to an external service, as described in the official documentation.
Taking advantage of the Kubernetes logging architecture explained here, you can also try to fetch the logs directly from the log-rotated files on the node hosting the pods. Please note that this option might depend on the specific Kubernetes implementation, as log files might be deleted when pod eviction is triggered.
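For example; the pod, job, and path names below are placeholders, and the exact layout under /var/log depends on your distribution and container runtime:

kubectl logs <pod-name> --previous          # last terminated container of a pod that still exists
kubectl logs job/<job-name>                 # logs of a Job kept around by the history limits
ls /var/log/pods/<namespace>_<pod>_<uid>/   # on the node that ran the pod
ls /var/log/containers/                     # symlinks to the individual container log files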

Logging application logs in DataDog

Using the DataDog official docs, I am able to get the K8s stdout/stderr logs into the DataDog UI; my goal is to also get the application logs that my Spring Boot application writes to a certain location in my pod.
Configurations done in the cluster:
Created a ServiceAccount in my cluster along with a cluster role and cluster role binding
Created a K8s secret to hold the DataDog API key
Deployed the DataDog Agent as a DaemonSet on all nodes
Configurations done in the app:
Downloaded datadog.jar and instrumented my app with it at startup
Exposed ports 8125 and 8126
Added the environment tags DD_TRACE_SPAN_TAGS and DD_TRACE_GLOBAL_TAGS in the deployment file
Changed the pattern in logback.xml
Added the logs config in the deployment file
Added the env tags in the deployment file
After doing the above configurations I am able to see the stdout/stderr logs, whereas I want to see my application logs in the DataDog UI.
If someone has done this, please let me know what I am missing here.
If required, I can share the configurations as well. Thanks in advance.
When installing Datadog in your K8s cluster, you install a node logging agent as a DaemonSet with various volume mounts on the hosting nodes. Among other things, this gives Datadog access to the pod logs at /var/log/pods and the container logs at /var/lib/docker/containers.
Kubernetes and the underlying Docker engine will only include output from stdout and stderr in those two locations (see here for more information). Everything that containers write to log files residing inside the containers will be invisible to K8s, unless more configuration is applied to extract that data, e.g. by applying the sidecar container pattern.
So, to get things working in your setup, configure Logback to log to stdout rather than to /var/app/logs/myapp.log.
Also, if you don't use APM, there is no need to instrument your code with datadog.jar and do all that tracing setup (setting up ports etc.).
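Container log collection also has to be enabled on the Agent itself. A minimal sketch of the relevant part of the Agent DaemonSet container spec; the env var names are the documented Datadog ones, while the secret name and key are placeholders for whatever you created:

env:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: datadog-secret   # your existing API key secret
        key: api-key
  - name: DD_LOGS_ENABLED
    value: "true"
  - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
    value: "true"

With that in place, whatever the Spring Boot app writes to stdout (e.g. via Logback's ConsoleAppender) is picked up from /var/log/pods and shows up as application logs in the DataDog UI.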

How to send kubectl logs output over mail in Azure DevOps

I have an Azure DevOps build job to get the log of a deployment pod.
command: kubectl logs deployment/myapp
I am getting the output on the summary page of the Azure DevOps pipeline, but I also want to send the same to the team with the log as an attachment. I am not finding any option in Azure DevOps for that.
Basically, your k8s (pod) logs will be gone after the pod has been terminated (although you can keep them around for a little while). For debugging or any other purpose, you need centralized logging of your k8s logs (use tools such as Filebeat, Fluentd, or Fluent Bit to forward your k8s logs to Elasticsearch).
E.g. some software (tools) for centralized logging: Elasticsearch, Graylog, ...
https://www.elastic.co/fr/what-is/elk-stack
And then you can save, export, and analyze your logs; you can do anything you want with your stored k8s logs.
Hope this helps!
Edit: I use GCP as my cloud solution, and in GCP, by default, Fluentd is used to forward the k8s logs into Logging. Logging has an Export feature, and I think you can look for something similar to Logging in your cloud solution, Azure.
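If all you need is to get the log off the summary page as a file that can be attached to a mail or Teams message, one workaround is to capture it into a file and publish it as a build artifact. A sketch of the pipeline steps; the deployment name is taken from the question, the artifact name is illustrative:

steps:
  - script: kubectl logs deployment/myapp > $(Build.ArtifactStagingDirectory)/myapp.log
    displayName: Capture deployment logs
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)
      ArtifactName: myapp-logs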

How to access the Kubernetes API in Go and run kubectl commands

I want to access my Kubernetes cluster API in Go, to do what the kubectl command does and get the available namespaces in my k8s cluster, which is running on Google Cloud.
My sole purpose is to get the namespaces available in my cluster, as the kubectl command does; kindly let me know if there is any alternative.
You can start with kubernetes/client-go, the Go client for Kubernetes, made for talking to a Kubernetes cluster (not through kubectl though: directly through the Kubernetes API).
It includes a NamespaceLister, which helps list Namespaces.
See "Building stuff with the Kubernetes API — Using Go" from Vladimir Vivien
Michael Hausenblas (Developer Advocate at Red Hat) points in the comments to the documentation at using-client-go.cloudnative.sh:
A versioned collection of snippets showing how to use client-go.
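A minimal sketch with client-go that lists the namespaces the way kubectl get namespaces would. It assumes a recent client-go (where List takes a context) and a kubeconfig at the default location; when running inside the cluster you would build the config with rest.InClusterConfig instead:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List all namespaces through the API, like `kubectl get namespaces`.
	namespaces, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range namespaces.Items {
		fmt.Println(ns.Name)
	}
}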