Problem statement:
Currently we are running k8s in multiple environments, e.g. dev, uat, staging.
It is very difficult for us to tell which one we are looking at just from the k8s dashboard UI.
Is there any facility to customize the k8s dashboard so that it indicates, somewhere in the header or footer, which cluster or environment we are using?
Since K8S is open source, you should have the ability to do whatever you want. You will of course need to play with the code and build your own custom dashboard image.
You can start off from here
https://github.com/kubernetes/dashboard/tree/master/src/app/frontend
This feature was released back in 2017, with the introduction of the settings ConfigMap. You just need to set the values of the kubernetes-dashboard-settings ConfigMap in the kubernetes-dashboard namespace. You don't even need to restart the dashboard service/deployment.
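A minimal sketch of setting a cluster name this way, assuming (as in recent dashboard versions) the settings are stored as a JSON string under the _global key of that ConfigMap; the exact keys may differ between versions, so check the ConfigMap's current contents first:

# Hypothetical example: clusterName is displayed in the dashboard header.
# Inspect the existing settings before overwriting them.
kubectl get configmap kubernetes-dashboard-settings -n kubernetes-dashboard -o yaml
kubectl patch configmap kubernetes-dashboard-settings \
  -n kubernetes-dashboard \
  --type merge \
  -p '{"data":{"_global":"{\"clusterName\":\"uat\"}"}}'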
I am using the official Kubernetes dashboard (kubernetesui/dashboard:v2.4.0) to manage my cluster, and I've noticed that when I select a pod and look into the logs, the number of displayed log lines is quite short, around 50 lines or so.
If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or shell into the Kubernetes server and use kubectl logs in order to see what's going on.
Is there any way to configure the dashboard in a way so that more lines of logs get displayed?
AFAIK, it is not possible with kubernetesui/dashboard:v2.4.0. The list of dashboard arguments that allow for customization has no option to change the number of log lines displayed.
As a workaround you can use a Prometheus + Grafana combination or the ELK stack's Kibana as separate dashboards for logs/metrics, although depending on the size and scope of your k8s cluster that might be overkill. There are also alternative open-source k8s dashboards such as Skooner (formerly known as k8dash), but I am not sure whether it offers more workload log visibility.
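In the meantime, the kubectl logs workaround you mention can be tuned to show as much history as you need; a quick sketch (the pod and namespace names are hypothetical):

# Fetch the last 1000 lines instead of the dashboard's short default
kubectl logs my-pod -n my-namespace --tail=1000
# Or show everything from the last hour and follow new output
kubectl logs my-pod -n my-namespace --since=1h -f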
If anyone is interested: as the feature that I was looking for does not exist yet, I have submitted a feature request on GitHub. You can see it here: https://github.com/kubernetes/dashboard/issues/6700
I have a requirement where my client applications have almost the same properties, and even the URL is the same because they run behind a load balancer; the only thing that differs between them is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading environment variables from a Kubernetes Secret.
The second is using Helm (https://helm.sh/).
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you use the Secret option, you would probably create two separate Secrets with the env variables you need and load them based on the app name; or, if the apps are set up in different namespaces, copy the Secret over to each one, as those resources do not work across namespaces.
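A minimal sketch of the Secret approach, as described above; all names here (client-a-env, CLIENT_ID, the image) are hypothetical placeholders:

# Hypothetical Secret holding one client's differing environment properties
apiVersion: v1
kind: Secret
metadata:
  name: client-a-env
type: Opaque
stringData:
  CLIENT_ID: "client-a"
  FEATURE_FLAG: "enabled"
---
# Deployment loading every key of that Secret as an environment variable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-a
  template:
    metadata:
      labels:
        app: client-a
    spec:
      containers:
        - name: app
          image: my-app:latest   # hypothetical image
          envFrom:
            - secretRef:
                name: client-a-env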
If you use Helm, you will have to write your own chart and either put the env variables into values.yaml or mix the two approaches and load the Secret from inside Kubernetes.
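With Helm, one way to do this is to keep the per-client properties in values.yaml and render them in the deployment template; a sketch with hypothetical keys (env, CLIENT_ID, FEATURE_FLAG):

# values.yaml: per-client environment properties, overridable per install
env:
  CLIENT_ID: client-a
  FEATURE_FLAG: enabled

# templates/deployment.yaml (container spec excerpt): render the values as env vars
env:
{{- range $key, $value := .Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
{{- end }}

You could then install the same chart twice with different values, e.g. helm install client-a ./mychart --set env.CLIENT_ID=client-a.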
This will work on Kubernetes, I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.
I am creating a server group and I want to add a label to the deployment. I can't find any option in the Spinnaker UI to add one. Any help on this?
The current version of the Kubernetes cloud provider (v1) does not support configuring labels on Server Groups.
The new Kubernetes Provider (v2), which is manifest-based, allows you to configure labels. This version, however, is still in alpha.
Sources
https://github.com/spinnaker/spinnaker/issues/1624
https://www.spinnaker.io/reference/providers/kubernetes-v2/
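With the manifest-based v2 provider, labels are simply part of the manifest you deploy; a minimal hypothetical Deployment where the name and label keys are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server-group
  labels:
    team: payments         # hypothetical label
    environment: staging   # hypothetical label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-server-group
  template:
    metadata:
      labels:
        app: my-server-group
    spec:
      containers:
        - name: app
          image: nginx:1.21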
After some intense Google and SO searching I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.
Can anyone shed light on this? When would I rather use either of those?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
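For example (the Deployment and container names are hypothetical), the imperative update and its declarative, configuration-file equivalent look like this:

# Imperative: change only the container image; the Deployment rolls out the change
kubectl set image deployment/my-app my-app=ecr.us-east-1.amazonaws.com/my-app:v2

# Declarative: edit the image tag in your YAML file and re-apply it;
# the Deployment controller performs the rolling update automatically
kubectl apply -f my-app-deployment.yaml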
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
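A sketch of such a patch (the Deployment, container, and variable names are hypothetical); kubectl patch defaults to a strategic merge patch, which matches containers by name, so the new env var is merged in rather than replacing the list:

# Add an environment variable to an existing container via strategic merge patch
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","env":[{"name":"NEW_FLAG","value":"true"}]}]}}}}'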
Later, I shifted my deployment processes to use Helm, a really neat and k8s-native package management tool. I can highly recommend having a look at it.
I've created a deployment like this:
kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 -l name=my-app --replicas=1
Now I go to the Kubernetes Dashboard:
https://172.0.0.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
But I don't see my-app listed there.
Is it possible to use the Kubernetes Dashboard to view Deployments? I'd like to use the dashboard to do things like view the Deployment's mem/CPU usage, check logs, etc.
Kubernetes Dashboard is pretty limited at the moment, and only supports ReplicationControllers. If you create a ReplicationController then you will be able to see the Pods connected to it, check their memory and CPU usage, and view their logs.
Work is being done to improve Dashboard and in the future it should support other Kubernetes resources besides ReplicationControllers. You can see some mockups in the GitHub repo.
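Until then, the same information is available from the CLI; for example, using the name and label from your kubectl run command:

# Confirm the Deployment exists and inspect it while the dashboard lacks support
kubectl get deployments -l name=my-app
kubectl describe deployment my-app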
I'm one of the Dashboard UI maintainers.
Deployments will be shown in the UI in the next release (a few weeks from now). I'm sorry this wasn't done before, but we had a tight schedule. If you want to test the features sooner, use the v1.1.0-beta2 version of the UI, which will be released next week.