So I first start a minikube cluster (1.25.2) using minikube start. Then I bring up a kind cluster using some kind-Calico-conf.yml file:
$ kind create cluster --name=calico --config=./kind-Calico-conf.yml
When I later delete this Calico cluster (kind delete cluster --name=calico), I can see that the minikube cluster is gone too. Why is this happening?
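To see what state each tool is actually in, it can help to compare their views of the clusters before and after the delete (a diagnostic sketch; it assumes the default minikube profile and the calico kind cluster from above):

$ minikube status                # state of the minikube VM/profile
$ kind get clusters              # kind clusters that still exist
$ kubectl config get-contexts    # contexts left in ~/.kube/config
$ docker ps                      # containers backing kind (and minikube, if it uses the docker driver)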
I am trying to determine how a Kubernetes cluster was provisioned (using minikube, kops, k3s, kind, or kubeadm).
I have looked at config files to establish this distinction but didn't find anything that tells the tools apart.
Is there some way one can identify what was used to provision a Kubernetes cluster?
Any help is appreciated, thanks.
Usually, but not always, you can look at the cluster definitions in your ~/.kube/config: you will see an entry per cluster, and the name often hints at the tool that created it.
Again, it's not 100% reliable.
Another option is to check the pods and namespaces: if you see minikube-specific resources it is almost certainly minikube, and likewise for k3s, Rancher, etc.
If you see a namespace matching *cattle*, it can be Rancher on top of k3s or RKE.
To summarize: there is no single answer to how to figure out how your cluster was deployed, but you can find hints.
If you see the kubeadm ConfigMap object in the kube-system namespace, it means the cluster was provisioned using kubeadm.
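For example, a few quick checks that surface those hints (a sketch; the ConfigMap is named kubeadm-config on kubeadm-provisioned clusters):

$ kubectl get configmap kubeadm-config -n kube-system   # present on kubeadm clusters
$ kubectl config view --minify                          # cluster/context names often hint at the tool
$ kubectl get namespaces                                # e.g. cattle-system suggests Rancher
$ kubectl get nodes -o wide                             # OS image and container runtime can be further hints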
My Rancher cluster has been set up for around 3 weeks and everything works fine. But there is one problem while installing MetalLB: there is no kube-proxy in my cluster, not even a kube-proxy pod on any node, so I could not follow the installation guide's step of editing the kube-proxy ConfigMap.
To me, it is really strange to have a cluster without kube-proxy.
My setup for the Rancher cluster is below:
Cluster Provider: RKE
Provisioning: Use existing nodes and create a cluster using RKE
Network Plugin: canal
Maybe there is something I misunderstand, because NodePort and ClusterIP services work correctly for me.
Finally, I found my kube-proxy. It runs as a host process, not a Docker container.
In Rancher, we should edit cluster.yml to add extra args for kube-proxy; Rancher will then apply them on every node of the cluster automatically.
root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16
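For reference, a minimal sketch of what those extra args might look like in an RKE cluster.yml (assuming RKE's services.kubeproxy.extra_args convention; the values mirror the process line above, so adjust them to your own network):

services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs            # switch from iptables to IPVS
      ipvs-scheduler: lc          # least-connection scheduling
      ipvs-strict-arp: "true"     # needed by MetalLB in L2 mode when using IPVS

After changing cluster.yml, re-running rke up (or updating the cluster through the Rancher UI) should roll the change out to all nodes.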
I have been reading for several days about how to deploy a Kubernetes cluster from scratch. It's all ok until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks like there are many options, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet itself depends on etcd (the API server stores those objects there), so this feels like a chicken-and-egg problem.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or a local service on the master node, and, when the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd into the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd into the new cluster."
Am I wrong with this approach?
I don't find useful information on this topic.
Kubernetes has three groups of components: master components, node components, and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
While implementing Kubernetes, you have to deploy etcd as part of it. In a multi-node architecture you can run etcd on independent nodes or alongside the master nodes, as per your requirement. You can find more details here. If you need a multi-node architecture and are looking for a step-by-step guide, follow this document. If you need a single-node Kubernetes, go for minikube.
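As one concrete option (a sketch, assuming you bootstrap the control plane with kubeadm and run etcd on its own nodes), the API server can be pointed at an external etcd cluster via kubeadm's ClusterConfiguration; the endpoints and certificate paths below are placeholders:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                   # placeholder addresses of your etcd members
      - https://10.0.0.10:2379
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key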
When I do a minikube start I get:
Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
However, if I do this again I get the same output.
Is this creating a new cluster, reprovisioning the existing cluster or just doing nothing?
minikube start resumes the old cluster that is paused/stopped.
But if you delete the minikube machine with minikube delete, or delete it manually from VirtualBox, then minikube start will create a new cluster.
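A quick way to see this behaviour (a sketch using standard minikube commands):

$ minikube status    # shows whether the existing VM/cluster is Running or Stopped
$ minikube stop      # stops the VM; the cluster state is kept
$ minikube start     # resumes the same cluster
$ minikube delete    # removes the VM; the next start provisions a fresh cluster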
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a cluster containing only one node.
So basically the minikube VM is the only node in the Kubernetes cluster, and any repeated attempt to start minikube still targets that single node.
$ kubectl get nodes
This returns the minikube node.
But I came across this link, which provides info on creating a multi-node cluster.
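For completeness, newer minikube releases (roughly 1.10+) can also do this natively (a sketch; the profile name is arbitrary):

$ minikube start --nodes 2 -p multinode-demo   # start a two-node cluster in its own profile
$ kubectl get nodes                            # should now list both nodes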
I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue.
The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
Then if I say:
kubectl get pods --namespace=kube-system
I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.
You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active.
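On the master, that would be something along these lines (a sketch; the exact subdirectory and file names under /etc/kubernetes/addons/ vary between kube-up versions, so list first, and the monitoring path below is hypothetical):

$ sudo ls -R /etc/kubernetes/addons/                      # find the InfluxDB/Grafana manifests
$ sudo rm -r /etc/kubernetes/addons/cluster-monitoring/   # hypothetical path; remove whichever directory holds them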
You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see other environment settings that impact the cluster that kube-up.sh builds in cluster/aws/config-default.sh.
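That would look roughly like this (a sketch, assuming you run kube-up.sh from a Kubernetes release checkout with the AWS provider, as in the question):

$ export KUBERNETES_PROVIDER=aws           # assumption: the AWS provider from the question
$ export ENABLE_CLUSTER_MONITORING=none    # disable the InfluxDB/Grafana monitoring addon
$ cluster/kube-up.sh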