I am trying to determine how a Kubernetes cluster was provisioned (e.g. using minikube, kops, k3s, kind, or kubeadm).
I have looked at config files to establish this distinction but didn't find anything conclusive.
Is there some way one can identify what was used to provision a Kubernetes cluster?
Any help is appreciated, thanks.
Usually, but not always, you can look at the cluster definitions in your ~/.kube/config; there is an entry per cluster, and the name often hints at the type.
Again, it's not 100% reliable.
Another option is to check the pods and namespaces: if you see names containing minikube, it is almost certainly minikube; likewise for k3s, Rancher, etc.
If you see a namespace matching *cattle*, it can be Rancher running on top of k3s or RKE.
To summarize: there is no single answer to how to figure out how your cluster was deployed, but you can find hints.
If you see the kubeadm-config ConfigMap in the kube-system namespace, the cluster was provisioned using kubeadm.
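For example, a couple of commands that can surface these hints (names such as minikube, k3s, or cattle-* in the output are typical of those installers, but this is a heuristic, not a guarantee):

# Look for the kubeadm-config ConfigMap (present on kubeadm-provisioned clusters)
kubectl -n kube-system get configmap kubeadm-config
# Scan namespaces and nodes for tell-tale names (minikube, k3s, cattle-*, ...)
kubectl get namespaces
kubectl get nodes -o wide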
If I understood correctly, they (the KEDA ScaledJobs) are not visible in the cluster's dashboard (the default Kubernetes dashboard).
How can I list them and, if needed, modify them?
If you have KEDA installed on the Kubernetes cluster, you can simply use kubectl to list the ScaledJobs and modify them.
To list the ScaledJobs:
kubectl get ScaledJob
Then, to modify one of them:
kubectl edit ScaledJob {ScaledJobName}
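If the ScaledJob lives in another namespace, or you want a non-interactive change, something along these lines should also work (my-namespace is a placeholder, and maxReplicaCount is just one illustrative ScaledJob spec field):

# List ScaledJobs across all namespaces
kubectl get scaledjob --all-namespaces
# Patch a single field instead of opening an editor (illustrative field and value)
kubectl -n my-namespace patch scaledjob {ScaledJobName} --type merge -p '{"spec":{"maxReplicaCount":10}}'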
I have a pod running in a StatefulSet, but it needs to know the hostname or address of all pods running in another StatefulSet in order to communicate with them. The second StatefulSet is created by a separate Helm chart. Can the pod work this out dynamically? Can I inject this information into the pod through an environment variable, similar to setting .Status.ip?
Edit: each StatefulSet has its own headless Service.
As discussed in the comments, the way to go here is to use a Service resource, as this gives you a stable DNS name within the cluster to reach all the pods that are targeted by that Service.
The DNS name for the service is:
the service's name, if you access it from within the same namespace
<my-service-name>.<namespace-name>.svc.cluster.local if you access it from another namespace, where cluster.local is the cluster domain, which may differ from cluster to cluster depending on the cluster's configuration
If you need further configuration options, e.g. when you want to deploy your chart into different cloud environments where the cluster domain actually differs, you can use kustomize.io to adjust your configuration at apply time.
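As a minimal sketch (all names below, such as other-headless, other-sts, and other-ns, are placeholders, assuming the other chart creates a headless Service like this): each StatefulSet pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, and you can hand the service's DNS name to your own pod through an environment variable.

# Headless Service assumed to be created by the other chart (placeholder names)
apiVersion: v1
kind: Service
metadata:
  name: other-headless
  namespace: other-ns
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: other-sts
  ports:
    - port: 8080
---
# Your own StatefulSet: pass the other service's DNS name in as an env var.
# Individual pods are then reachable as other-sts-0.other-headless.other-ns.svc.cluster.local, etc.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-headless
  selector:
    matchLabels:
      app: my-sts
  template:
    metadata:
      labels:
        app: my-sts
    spec:
      containers:
        - name: app
          image: my-app:latest    # placeholder image
          env:
            - name: OTHER_STS_SERVICE
              value: other-headless.other-ns.svc.cluster.local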
I have been reading for several days about how to deploy a Kubernetes cluster from scratch. It's all ok until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks there are many options, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet itself makes use of etcd.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or a local service on the master node, and, once the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd to the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd to the new cluster."
Am I wrong with this approach?
I can't find useful information on this topic.
Kubernetes has three groups of components: master components, node components, and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
While implementing Kubernetes you have to implement etcd as part of it. In a multi-node architecture you can run etcd on independent nodes or alongside the master nodes, as per your requirement. You can find more details here. If you are looking for a step-by-step guide for a multi-node architecture, follow this document. If you need a single-node Kubernetes, go for minikube.
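For instance, with kubeadm the control-plane etcd member runs as a static pod on the master node by default, so after bootstrapping you can inspect it like this (a sketch, assuming a default stacked-etcd kubeadm install):

# Bootstrap a control-plane node (default: stacked etcd as a static pod)
kubeadm init
# The etcd static pod manifest lives on the master node
ls /etc/kubernetes/manifests/etcd.yaml
# And the etcd pod shows up in kube-system
kubectl -n kube-system get pods -l component=etcd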
I have several deployments working on minikube locally and am trying to deploy them on GCP with Kubernetes.
When I describe a pod created by a deployment (which created a ReplicaSet that spawned the pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
It indicates it landed on one of the Kubernetes VMs.
I'm having trouble with deployments that work fine separately but fail here due to lack of resources (e.g. CPU), even though I provisioned a VM above the requirements. I suspect the cluster is placing the pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a VM to be orchestrated by Kubernetes?
Do I enlarge the Kubernetes nodes?
Or something else altogether?
It was a resource problem: the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
Here is the node pool documentation
Here is a Terraform Kubernetes deployment
Here is the machine type documentation
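For reference, creating a larger node pool with autoscaling looks roughly like this (cluster, pool, zone, and machine-type values are placeholders; check the linked docs for the exact flags):

# Create a node pool with bigger machines, more disk, and autoscaling (placeholder names)
gcloud container node-pools create bigger-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --disk-size=100 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5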
In CoreOS we can define a service as:
[X-Fleet]
Global=true
This will make sure that this particular service runs on all the nodes.
How do I achieve the same thing for a pod in Kubernetes?
You probably want to use a DaemonSet, which is a way to run a daemon on every node in a Kubernetes cluster.
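A minimal sketch of such a DaemonSet (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      containers:
        - name: my-daemon
          image: my-daemon:latest   # placeholder image

The scheduler then runs one copy of this pod on every node (subject to taints and node selectors), which is the Kubernetes equivalent of fleet's Global=true.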