In CoreOS we can define a service as
[X-Fleet]
Global=true
This will make sure that this particular service runs on all the nodes.
How do I achieve the same thing for a pod in Kubernetes?
You probably want to use a DaemonSet: a way to run a copy of a pod on every node in a Kubernetes cluster.
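A minimal sketch of such a DaemonSet (the names and image here are illustrative, not from the original question); the DaemonSet controller schedules one copy of this pod on every node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon          # illustrative name
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      containers:
      - name: my-daemon
        image: busybox     # illustrative image
        command: ["sh", "-c", "while true; do sleep 3600; done"]
```

Unlike a Deployment, there is no `replicas` field: the number of pods is determined by the number of (matching) nodes.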
I am trying to determine how a Kubernetes cluster was provisioned (using minikube, kops, k3s, kind, or kubeadm).
I have looked at config files to establish this distinction but didn't find any.
Is there some way one can identify what was used to provision a Kubernetes cluster?
Any help is appreciated, thanks.
Usually, but not always, you can view the cluster definitions in your ~/.kube/config; there is an entry per cluster, often hinting at the type.
Again, it's not 100% reliable.
Another option is to check the pods and namespaces: if you see minikube in the names, it is almost certainly minikube; likewise for k3s, Rancher, etc.
If you see a cattle-* namespace, it can be Rancher on top of k3s or RKE.
To summarize: there is no single answer for figuring out how your cluster was deployed, but you can find hints for it.
If you see a kubeadm-config ConfigMap in the kube-system namespace, it means the cluster was provisioned using kubeadm.
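For example, assuming a working kubeconfig, these checks surface the hints mentioned above (the commands are standard kubectl; which names actually appear depends on the provisioner):

```shell
# kubeadm stores its configuration in a ConfigMap in kube-system:
kubectl get configmap kubeadm-config -n kube-system

# minikube names its node "minikube"; node names often reveal the tool:
kubectl get nodes

# look for provisioner-specific namespaces (e.g. cattle-* for Rancher):
kubectl get namespaces
```

If the first command returns a ConfigMap, the cluster is kubeadm-based; otherwise move on to the node and namespace names.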
Is there a way to make a Kubernetes cluster deploy the StatefulSet first and then all the other deployments?
I'm working in GKE and I have a Redis pod which I want up and ready first, because the other deployments depend on a connection to it.
You can use an init container in the other deployments. Because init containers run to completion before any app containers start, they offer a mechanism to block or delay app container startup until a set of preconditions is met.
The init container can run a script that performs a readiness check against the Redis pods.
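A sketch of such an init container, assuming a Redis Service named `redis` listening on port 6379 (the app name and image are illustrative); the init container loops until a TCP connection to Redis succeeds, and only then does the app container start:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-redis
        image: busybox
        # block until the redis service accepts TCP connections
        command: ['sh', '-c',
          'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']
      containers:
      - name: my-app
        image: my-app:latest # illustrative image
```

Note this only checks that the port is open; for a stricter precondition, the init container could instead run `redis-cli ping` against the service.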
How do I reach a specific Pod in a DaemonSet without hostNetwork? My Pods in the DaemonSet are stateful, and I prefer to have at most one worker on each node (that's why I used a DaemonSet).
My original implementation used hostNetwork so the worker Pods could be found by node IP by outside clients. But in many production environments hostNetwork is disabled, so we would have to create one NodePort service for each Pod of the DaemonSet. This is not flexible and obviously cannot work in the long run.
Some more background on how my application is stateful:
The application works HDFS-style, where workers (datanodes) register with masters (namenodes) using their hostname. The masters and outside clients need to go to a specific worker for the data it hosts.
hostNetwork is an optional setting and is not necessary; you can connect to your pods without it.
To communicate with the pods of a DaemonSet, you can specify hostPort in the DaemonSet's pod spec to expose them on the node. You can then reach each pod directly via the IP of the node it is running on.
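For example, a DaemonSet pod spec with hostPort might look like this (names, image, and port numbers are illustrative); each pod becomes reachable at its node's IP on the given port:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: worker               # illustrative name
spec:
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: my-worker:latest   # illustrative image
        ports:
        - containerPort: 8080
          hostPort: 8080          # reachable at <node-ip>:8080
```

Unlike hostNetwork, this exposes only the listed port on the node rather than sharing the node's entire network namespace.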
Another approach for stateful applications is a StatefulSet, which gives pods stable network identifiers. However, it requires a headless service for the network identity of the Pods, and you are responsible for creating that service.
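A headless service is an ordinary Service with `clusterIP: None` (names and port below are illustrative). Each pod of a StatefulSet bound to it then gets a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`, e.g. `worker-0.worker.default.svc.cluster.local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: worker            # must match serviceName in the StatefulSet
spec:
  clusterIP: None         # this is what makes the service headless
  selector:
    app: worker
  ports:
  - port: 8080
```

This gives masters and clients a per-pod hostname to register and connect with, without exposing node IPs at all.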
I have been reading for several days about how to deploy a Kubernetes cluster from scratch. It's all ok until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks there are many options, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet itself relies on etcd (through the API server), which is a chicken-and-egg problem.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or a local service on the master node and, when the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd to the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd to the new cluster."
Am I wrong with this approach?
I don't find useful information on this topic.
Kubernetes has 3 kinds of components: master components, node components, and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
While implementing Kubernetes you have to deploy etcd as part of it. In a multi-node architecture you can run it on independent nodes or alongside the master node, as per your requirements. You can find more details here. If you are looking for a step-by-step guide to a multi-node architecture, follow this document. If you need single-node Kubernetes, go for minikube.
In Kelsey Hightower's Kubernetes Up and Running, he gives two commands :
kubectl get daemonSets --namespace=kube-system kube-proxy
and
kubectl get deployments --namespace=kube-system kube-dns
Why does one use daemonSets and the other deployments?
And what's the difference?
Kubernetes Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and to upgrade them in a controlled way. For example, you define how many replicas (pods) of your app you want in the deployment definition, and Kubernetes will spread that many replicas of your application over the nodes. If you ask for 5 replicas over 3 nodes, some nodes will run more than one replica of your app.
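For example, the replica count from the paragraph above is a single field in the Deployment spec (names and image are illustrative); the scheduler decides which nodes the 5 pods land on:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # illustrative name
spec:
  replicas: 5              # 5 pods, spread over the available nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # illustrative image
```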
DaemonSets also manage groups of replicated Pods, but they adhere to a one-Pod-per-node model, either across the entire cluster or across a subset of nodes. A DaemonSet will not run more than one replica per node. Another advantage of a DaemonSet is that if you add a node to the cluster, the DaemonSet automatically spawns a pod on that node, which a Deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
Let's take the example you mentioned in your question: why is kube-dns a Deployment and kube-proxy a DaemonSet?
The reason is that kube-proxy is needed on every node in the cluster to maintain the iptables rules, so that every node can reach every pod no matter which node it resides on. Hence we make kube-proxy a DaemonSet: when another node is added to the cluster later, kube-proxy is automatically spawned on it.
kube-dns's responsibility is to resolve a service name to its IP, and one replica of kube-dns is enough for that; it does not need to run on every node. Hence we make kube-dns a Deployment.