I am using the official Kubernetes Dashboard (kubernetesui/dashboard:v2.4.0) to manage my cluster and I've noticed that, when I select a pod and look into the logs, the displayed log output is quite short, something like 50 lines.
If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or shell into the Kubernetes server and use kubectl logs in order to see what's going on.
Is there any way to configure the Dashboard so that more lines of logs get displayed?
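For completeness, this is roughly the CLI workaround I fall back to today (pod and namespace names are placeholders):

# Show the last 500 lines instead of the Dashboard's short view
kubectl logs my-pod -n my-namespace --tail=500
# Or dump everything from the last hour to a file
kubectl logs my-pod -n my-namespace --since=1h > my-pod.log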
AFAIK, it is not possible with kubernetesui/dashboard:v2.4.0. On the list of dashboard arguments that allow for customization, there is no option to change the number of log lines displayed.
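If you want to double-check which arguments your own Dashboard instance is running with, something like this should list them (deployment and namespace names assume a standard v2.x install):

# Print the container args of the Dashboard deployment
kubectl -n kubernetes-dashboard get deployment kubernetes-dashboard \
  -o jsonpath='{.spec.template.spec.containers[0].args}'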
As a workaround you can use a Prometheus + Grafana combination or the ELK stack with Kibana as separate dashboards for logs/metrics, however depending on the size and scope of your k8s cluster it might be overkill. There are also alternative open-source k8s dashboards such as Skooner (formerly known as k8dash), however I am not sure whether it offers more visibility into workload logs.
If anyone is interested: as the feature I was looking for does not exist yet, I have submitted a feature request on GitHub. You can see it here: https://github.com/kubernetes/dashboard/issues/6700
We constantly have issues with our OpenShift deployments. Credentials suddenly go missing (or suddenly the wrong credentials are configured), deployments are suddenly scaled up and down, etc.
Nobody on the team is aware of having done anything. However, based on my recent experience, I am quite sure these changes are made unknowingly.
Is there any way to check the history of modifications to a resource, e.g. the last "oc/kubectl apply -f", ideally with the contents that were modified and the user who made the change?
For a one-off issue, you can also look at the ReplicaSets present in that namespace and examine them for differences. Depending on how much history you keep, it may have already been lost, if it was present to begin with.
Try:
kubectl get rs -n my-namespace
Or, if dealing with DeploymentConfigs, the ReplicationControllers:
oc get rc -n my-namespace
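To actually compare two of them, a rough sketch like this works (the ReplicaSet names here are placeholders; pick the ones you see in the list above):

# Dump two ReplicaSets and diff them to spot what changed between revisions
kubectl get rs my-app-7d4b9c8f6 -n my-namespace -o yaml > rs-old.yaml
kubectl get rs my-app-5f6d8e9b2 -n my-namespace -o yaml > rs-new.yaml
diff rs-old.yaml rs-new.yaml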
For credentials, assuming those are in a secret and not the deployment itself, you wouldn't have that history without going to audit logs.
You need to configure and enable the audit log; check out the oc manual here.
"In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers..."
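As a rough sketch, on OpenShift 4.x the audit profile is raised on the cluster APIServer resource; verify the exact profile names against the docs for your version:

# Log request bodies for write requests in addition to request metadata
oc patch apiserver cluster --type=merge \
  -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'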
K8s offers only scant functionality for tracking changes. Most prominently, I would look at kubectl rollout history for Deployments, DaemonSets and StatefulSets. Still, this will only tell you when and what was changed, but not who did it.
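A minimal sketch of what that looks like (the Deployment name is a placeholder):

# List the recorded revisions of a Deployment
kubectl rollout history deployment/my-app
# Show the pod template that was recorded for a specific revision
kubectl rollout history deployment/my-app --revision=2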
OpenShift does not seem to offer much on top, since audit logging is cumbersome to configure and analyze.
With a problem like yours, the best remedy I see would be to revoke the team's direct access to production K8s and mandate that changes be rolled out via a pipeline. That way you can use Git to track who did what.
I'm trying to keep the execution logs of containers in Kubernetes.
I added successfulJobsHistoryLimit: 5 and failedJobsHistoryLimit: 5 to my CronJob YAML in order to keep the execution history, but when I try to view the logs of the pods I get this error.
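For reference, this is roughly how the relevant part of my CronJob looks (names, schedule and image are placeholders; batch/v1 requires Kubernetes >= 1.21, older clusters use batch/v1beta1):

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: my-job
            image: busybox
            command: ["sh", "-c", "echo hello"]
EOF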
I assume this is because the pods have been deleted, since when I go to a running pod I can see its logs.
So is there a way of keeping the logs in this part of Kubernetes, or is there something that I have to set up in order to have this functionality?
Sorry if the question has been asked before, but I didn't really find anything and I'm new to Kubernetes.
Thanks for the replies.
Looking at this problem from a bigger-picture perspective, it's generally a good idea to have your logs stored via logging agents or pushed directly to an external service, as per the official documentation.
Taking advantage of the Kubernetes logging architecture explained here, you can also try to fetch the logs directly from the log-rotated files on the node hosting the pods. Please note that this option might depend on the specific Kubernetes implementation, as log files might be deleted when pod eviction is triggered.
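A rough sketch of that second option, assuming the usual kubelet log layout (paths and file names can vary by distribution and container runtime; "my-cronjob" is a placeholder):

# On the node that hosted the pod:
ls /var/log/pods/
# /var/log/containers holds per-container symlinks named <pod>_<namespace>_<container>-<id>.log
sudo tail -n 200 /var/log/containers/*my-cronjob*.log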
Is there a way that I can get the release logs for a particular K8s release within my K8s cluster, given that the ReplicaSets related to that deployment are no longer serving pods?
For example, kubectl rollout history deployment/pod1-dep would return:
1
2 <- failed deploy
3 <- Latest deployment successful
If I want to pick the logs related to the events in revision 2, would that be possible, or is there a way to achieve such functionality?
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As David Maze rightly suggested in his comment above:
"Once a pod is deleted, its logs are gone with it. If you have some sort of external log collector, that will generally keep historical logs for you, but you needed to have set that up before you attempted the update."
So the answer to your particular question is: no, you can't get such logs once those particular pods are deleted.
I have deployed a service using Cloud Run on GKE, which uses Knative as an abstraction over K8s. The default MaxRevisionTimeoutSeconds is set to 600s in the Knative default config, but according to this PR it is customizable.
I couldn't find anything about this in the official Knative documentation; can anybody help me out here?
UPDATE:
After digging a bit more into the Knative source code and documentation, it looks like MaxRevisionTimeoutSeconds is defined in the ConfigMap config-defaults, so that is what has to be updated with a custom value.
From this it looks like we can use something called an operator to modify the ConfigMap resource, but it did not work, probably because GCP's installation does not use the operator to install the Knative components. Anyway, I went on to install the operator and then used the KnativeServing resource to overwrite config-defaults. But this also did not work when I tried re-deploying the service.
The next solution is to directly edit config-defaults using kubectl edit. I even tried doing this but encountered weird behavior: after editing the YAML, when I used kubectl describe to check the changed value, it sometimes showed the modified value, sometimes showed the old value, and sometimes didn't show that particular key-value pair at all. It also doesn't work when trying to re-deploy the service after making this edit.
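For reference, this is what I was effectively trying to set, expressed as a patch instead of kubectl edit (namespace and key name as in the upstream Knative Serving config-defaults ConfigMap; Cloud Run on GKE may manage this ConfigMap differently):

# Raise the cluster-wide maximum revision timeout to 1200s
kubectl patch configmap config-defaults -n knative-serving --type merge \
  -p '{"data":{"max-revision-timeout-seconds":"1200"}}'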
If anyone can help me with this, it would be really great.
MaxRevisionTimeoutSeconds is a cluster-global setting which enforces the max value for TimeoutSeconds on each Revision. This value exists so that cluster administrators can set upper bounds on the amount of time a single HTTP request can be in the system. Knowing an upper bound can be useful when configuring graceful shutdown settings on the HTTP routing components to prevent dropped requests during upgrades.
It's possible that Cloud Run on GKE has overridden these configurations so that the underlying Istio and Knative components can be upgraded on a predictable schedule. (If you have a 10% upgrade budget and it takes 10m to drain a component, you are upgrading in roughly ten batches, so your minimum upgrade time is probably around 110m, taking into account additional scheduling / image fetch / startup time.)
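For comparison, the per-Revision timeout that MaxRevisionTimeoutSeconds caps is set on the Service's revision template; a minimal sketch (the service name and timeout value are placeholders, and the value must stay at or below the cluster maximum):

cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      timeoutSeconds: 900
      containers:
      - image: gcr.io/knative-samples/helloworld-go
EOF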
I've created a deployment like this:
kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 -l name=my-app --replicas=1
Now I go to the Kubernetes Dashboard:
https://172.0.0.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
But I don't see my-app listed there.
Is it possible to use the Kubernetes Dashboard to view Deployments? I'd like to use the Dashboard to do things like view the deployment's memory/CPU usage, check logs, etc.
Kubernetes Dashboard is pretty limited at the moment, and only supports ReplicationControllers. If you create a ReplicationController then you will be able to see the Pods connected to it, check their memory and CPU usage, and view their logs.
Work is being done to improve Dashboard and in the future it should support other Kubernetes resources besides ReplicationControllers. You can see some mockups in the GitHub repo.
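Until then, the Deployment created above is still visible from the CLI, for example:

# Confirm what kubectl run created and inspect it
kubectl get deployment my-app
kubectl describe deployment my-app
kubectl get pods -l name=my-app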
I'm one of the Dashboard UI maintainers.
Deployments will be shown in the UI in the next release (a few weeks from now). I'm sorry this wasn't done before, but we had a tight schedule. If you want to test the feature sooner, use the v1.1.0-beta2 version of the UI, which will be released next week.