I have an Azure DevOps build job that gets the log of a deployment's pod.
command: kubectl logs deployment/myapp
I am getting the output on the summary page of the Azure DevOps pipeline, but I also want to send it to Teams with the log as an attachment. I am not finding any option in Azure DevOps for that.
Basically, your k8s pod logs are gone after the pods have been terminated (although you can keep them around for a little while). For debugging or any other purpose, you need centralized logging for your k8s logs: use a tool such as Filebeat, Fluentd, or Fluent Bit to forward them to Elasticsearch.
For example, some software (tools) for centralized logging: Elasticsearch, Graylog, ...
https://www.elastic.co/fr/what-is/elk-stack
And then you can save, export, and analyze your logs; you can do anything you want with your stored k8s logs.
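For instance, a minimal Fluent Bit sketch of that forwarding step (classic config syntax); the Elasticsearch host elasticsearch.logging.svc and the log path are assumptions, not a definitive setup:

    # Tail container logs on each node, enrich them with Kubernetes
    # metadata, and ship them to Elasticsearch (host is an assumption).
    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Parser  docker
        Tag     kube.*

    [FILTER]
        Name    kubernetes
        Match   kube.*

    [OUTPUT]
        Name             es
        Match            kube.*
        Host             elasticsearch.logging.svc
        Port             9200
        Logstash_Format  On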
Hope this helps!
Edit: I use GCP as my cloud solution, and in GCP, by default, Fluentd forwards your k8s logs into Cloud Logging. Cloud Logging has an Export feature; I think you can look for something similar to Logging in your cloud solution, Azure.
Is there a dynamic way to pull log data from inside my containers?
All of my searches return results saying that Azure Logs/Azure Sentinel can read data about AKS containers as they exist in K8s (online, running, failed, etc.), but not the actual in-container logs. Examples of search results asking for this:
https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-log-query
https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-livedata-overview
https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md#enable-monitoring-of-a-new-azure-kubernetes-service-aks-cluster
...all of these provide documentation on monitoring containers (as they live in K8s) but not the app-level logs in the containers...
Is anyone aware of a technology or capability for Azure Logs/Azure Sentinel to consume in-container, on-disk container logs (e.g. inside the container: /var/log, /var/application/logs, etc.)?
Thanks!
Assuming you're referring to Linux containers, you only need to have the OMS agent enabled and pointing to the right workspace, and the logs get streamed over easily.
The ContainerLog table would show you the same thing as kubectl logs <pod>. Anything that's sent to stdout and stderr from your container should be available in the Log Analytics workspace. So if your logs are not being sent to either, you could just write a small script as part of your container that sends those logs to stdout.
Here's how I'm able to get SMTP logs from my container:
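A minimal sketch of such a wrapper entrypoint, assuming the mail server logs to /var/log/mail.log (both the start command and the path are placeholders):

    #!/bin/sh
    # Start the application in the background (placeholder command),
    # then stream its on-disk log to the container's stdout so the
    # OMS agent picks it up in the ContainerLog table.
    /usr/local/bin/start-smtp &
    exec tail -F /var/log/mail.log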
I am new here; I tried to search for the topic before posting, so this may have been discussed before. Please let me know before being too harsh on me :)
In my project, after making changes to either the DevOps toolsets or the infrastructure, we always do some manual sanity tests. These normally include:
Building a new image and updating the Helm chart
Pushing the image to Artifactory, performing a "helm upgrade", and seeing if it runs.
I want to automate the whole thing and would like to get advice from the community. Here are some requirements:
Validate that the Jenkins agent is able to talk to the cluster (I can do this with kubectl get all -n <some_namespace_jenkins_user_has_access_to>)
Validate that the cluster has access to GitHub (let's say I am using Argo CD to sync YAMLs)
Validate that the cluster has access to Artifactory and is able to pull an image (I don't want to build a new image with a new tag and update the Helm chart just to force the cluster to pull a new image)
All of the above should be doable on the command line (so that I can implement it using Jenkins Groovy)
Any suggestion is welcome.
Thanks guys
Your best bet is probably a combination of custom Jenkins scripts (i.e. running kubectl in Jenkins) and some in-cluster checks (e.g. using kuberhealthy).
So, when your Jenkins pipeline is triggered, it could do the following:
Check connectivity to the cluster
Build and push an image, etc.
Trigger in-cluster checks to test whether the cluster has access to GitHub and Artifactory, e.g. by launching a custom Job in the cluster (see the sketch below), or by creating a KuberhealthyCheck custom resource if you use kuberhealthy
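A hedged sketch of such a custom Job; the image and the Artifactory URL are placeholders:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: connectivity-check
    spec:
      backoffLimit: 0
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: check
              # Small curl image; the Job fails if either endpoint is
              # unreachable. The Artifactory host is a placeholder.
              image: curlimages/curl
              command:
                - sh
                - -c
                - curl -sSf https://github.com && curl -sSf https://artifactory.example.com/api/system/ping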
During all this, the Jenkins pipeline writes the results of its tests as metrics to a Pushgateway, which is scraped by your Prometheus. The in-cluster checks also push their results as metrics to the Pushgateway, or expose them via kuberhealthy if you decide to use it. In the end, you have the results of all checks in the same Prometheus instance, where you can react to them, e.g. by creating Prometheus alerts or Grafana dashboards.
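For the Pushgateway part, a minimal sketch of pushing one check result from a shell step; the host and metric names are assumptions:

    # Record the outcome of a check as a gauge (1 = ok, 0 = failed);
    # the job label groups all metrics from this pipeline.
    echo "sanity_check_cluster_reachable 1" \
      | curl --data-binary @- http://pushgateway.example.com:9091/metrics/job/jenkins_sanity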
Disclaimer: I am neither a K8s expert nor a K8s administrator, and I have limited knowledge of Splunk logs and how to access data using Splunk queries. So please ignore this if you can't help, and DON'T close it without understanding what the ask is; I am happy to clarify. This will benefit people who run into the same question in the future.
+++++++++++++++++++++++++++++++++++
We are using K8s on-prem; there are tons of namespaces, and users have access to pretty much every namespace. Somebody could accidentally issue a kubectl delete command and delete anything: a pod, a service, roles, or the cluster. My objective in this thread is: is there any way we can trace who is running every kubectl operation?
I found the link below, which can help with auditing k8s: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
If auditing is enabled in k8s, how can we trace who has executed every kubectl operation? As I said, I am not a k8s admin, but I want to know if there is a clear path and way to trace this in logs from k8s back to Splunk.
Our K8s admin said auditing has been set up already, but kubectl commands with user details are NOT flowing from Rancher/Fluentd to Splunk. Do we need any specific configuration, which the K8s admin would need to set, to turn it on? Any help would be appreciated.
thanks
N.B.: this is an open thread continuing from a closed one.
You can use the log backend. With this configuration, all of your audit logs are written to disk, where Fluentd can collect them. The logs are created on the master nodes, so a Fluentd DaemonSet should be present on them.
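A minimal sketch of that setup, following the Kubernetes audit documentation; the file paths are examples:

    # Minimal audit policy: the Metadata level records who issued which
    # verb against which resource, enough to trace kubectl operations.
    cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
    EOF

    # kube-apiserver flags enabling the log backend (paths are examples):
    #   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    #   --audit-log-path=/var/log/kubernetes/audit/audit.log
    #   --audit-log-maxage=7
    #   --audit-log-maxbackup=4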
I am trying to deploy an application on a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make the connection between Jenkins and Kubernetes. On the code side, I can't share more details here.
I just want to know if there is any way to make this connection (Jenkins and Kubernetes) using the Jenkinsfile, so that I can use it to deploy the application on Kubernetes.
The following is the technology stack, which might clarify my issue:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server where the pipeline is created to deploy the application on Kubernetes.
An on-premises Kubernetes cluster.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. Since your Jenkins is not a pod inside your Kubernetes cluster, what I would recommend is uploading a working kubectl config as a secret file in the Jenkins credentials manager.
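A hedged sketch of the service account part; the namespace, names, and the edit role are assumptions:

    # Create a dedicated service account for Jenkins and grant it
    # deploy rights in one namespace (names and role are examples).
    kubectl create serviceaccount jenkins -n myapp
    kubectl create rolebinding jenkins-deploy \
      --clusterrole=edit --serviceaccount=myapp:jenkins -n myapp

    # On Kubernetes >= 1.24, request a token for that account:
    kubectl create token jenkins -n myapp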
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere for your job to access, and then in your Jenkinsfile you can run commands with "kubectl --kubeconfig= ...".
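A hedged Jenkinsfile sketch of that last step; the credential ID kubeconfig-jenkins and the manifest path are assumptions (this uses the Credentials Binding plugin):

    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    // Bind the uploaded kubeconfig secret file to a
                    // temporary path for the duration of the block.
                    withCredentials([file(credentialsId: 'kubeconfig-jenkins',
                                          variable: 'KUBECONFIG_FILE')]) {
                        sh 'kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f k8s/deployment.yaml'
                    }
                }
            }
        }
    }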
As the team gets more comfortable with the Google Cloud Platform and Kubernetes, the ability to track what changes are being applied to the environment becomes more important. We're applying YAML files with kubectl apply (mostly deployments, services, and configmaps). Is there a way to see what changes are being applied via kubectl?
You can use Kubernetes auditing to do what you need.
If you're using GKE with a cluster version > 1.8.3, audit logging is available by default in Stackdriver Logging.
https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging
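A hedged sketch of reading those entries from the command line; the filter values are assumptions and may vary by cluster version:

    # List recent delete operations recorded in the GKE audit logs.
    gcloud logging read \
      'resource.type="k8s_cluster" AND protoPayload.methodName:"delete"' \
      --limit=20 --format=json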
If you're not using GKE, you could also read these logs using Fluentd by specifying the audit log directory in the Fluentd config.
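A minimal Fluentd sketch of that; the path is an example and must match the --audit-log-path set on the apiserver:

    # Tail the JSON audit log written by the kube-apiserver.
    <source>
      @type tail
      path /var/log/kubernetes/audit/audit.log
      pos_file /var/log/fluentd/kube-audit.pos
      tag kube-apiserver-audit
      <parse>
        @type json
      </parse>
    </source>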