Kubernetes has master and minion nodes.
Will (can) Kubernetes run specified Docker containers on the master node(s)?
I guess another way of saying it is: can a master also be a minion?
Thanks for any assistance.
Update 2015-08-06: As of PR #12349 (available in 1.0.3, and in 1.1 when it ships), the master node is now one of the available nodes in the cluster, and you can schedule pods onto it just like any other node.
A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine.
This is going to change significantly when Issue #6087 is implemented.
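If you want to check how your own cluster is set up today, a quick way (the master node name below is just a placeholder) is to list the nodes and inspect the master's taints:
kubectl get nodes
kubectl describe node <master-node-name> | grep -i taint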
You need to remove the taint from your master node to run containers on it, although this is not recommended.
Run this on your master node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Courtesy of Alex Ellis' blog post here.
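Note that on newer Kubernetes versions (roughly 1.24 and later, depending on how your cluster was set up) the master taint has been replaced by a control-plane taint, so the equivalent command there is:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-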
You can try this code:
kubectl label node [name_of_node] node-short-name=node-1
Create a YAML file (first.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: nginxtest
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-short-name: node-1
Create the pod:
kubectl create -f first.yaml
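To confirm the pod actually landed on the labeled node, check the NODE column:
kubectl get pod nginxtest -o wide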
Do you know if it's possible with the Kubernetes CLI to create a file on a specific node?
I mean, you are working from the master and you want to create a file on node2 only.
You can deploy a Pod on a specific node by specifying nodeName in its manifest, for example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01
If you are on the master node, which naturally has network connectivity with the worker nodes, why don't you simply use the scp command?
But answering your specific question: no, you cannot create a file on a worker node simply by using the kubectl command.
You can create a Pod based on a specific YAML manifest which will be scheduled on a specific node and will create such a file on that node for you. It can be scheduled onto that node automatically, based on the node affinity rule defined in a local PersistentVolume (by the way, this is one of its advantages over hostPath), which is used for sharing data between your Pod and the node. So basically your Pod can write directly to a specific path on a specific node, onto which it is automatically scheduled based on the nodeAffinity defined in the local PV.
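As a rough sketch of the local PersistentVolume part (the names, the path /mnt/data and the node name node2 are assumptions for illustration):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data            # pre-existing directory on node2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
A Pod that mounts a PersistentVolumeClaim bound to this volume will be scheduled onto node2 and can then write files under /mnt/data on that node.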
I am fairly new to Kubernetes, and this is what I am able to understand so far:
a cluster is a collection of node(s)
each node can have a set of running container(s)
a set of tightly coupled container(s) can itself be grouped together to form a pod (regardless of the node on which the containers are running).
First of all, am I correct so far?
Secondly, going through the docs about kube-scheduler, it says,
Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
and the docs also say pods are,
The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster.
My question, or rather confusion, is: since we already have containers running on different nodes, why do we need an additional node to run a pod on?
a cluster is a collection of node(s)
each node can have a set of running container(s)
You are correct.
a set of tightly coupled container(s) can itself be grouped together to form a pod (regardless of the node on which the containers are running).
All containers belonging to a pod run on the same node.
My question, or rather confusion, is: since we already have containers running on different nodes, why do we need an additional node to run a pod on?
It's not the pod itself that actually runs; the only things that actually run on your nodes are containers. A pod is just a logical grouping of containers and is the basic unit in Kubernetes for creating containers. (The Docker logo is a whale, and a group of whales is called a pod, if you want a parallel to remember this.) So if the containers that belong to the pod are running, the pod is said to be running.
In the following pod specification, the nginx-container and debian-container containers belong to the pod named two-containers. When you create this pod object, kube-scheduler selects a node to run this pod (i.e., to run the two containers) and assigns that node to the pod. The kubelet running on that node then gets notified and starts the two containers. Since the two containers belong to the same pod, they run in the same network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
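Once the pod is up, you can check that the file written by the debian container is visible from the nginx container (it should print "Hello from the debian container"):
kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html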
Numbers 1 & 3 are correct.
For number 2, I would say: each node can have a set of pods, and each pod can have one or more containers.
As for your last question: let's say you create a deployment with 3 pods. 2 of them are deployed to node A and consume all of its resources (no memory or CPU left), so the 3rd pod stays in the Pending state as long as there is no new node to run it on.
There are also the concepts of horizontal pod autoscaling and cluster autoscaling:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ & https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
These should further clear up your confusion.
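As a quick illustration of horizontal pod autoscaling (the deployment name my-app is just a placeholder), you could create an autoscaler and watch it with:
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
kubectl get hpa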
Using the KOPS tool, I deployed a cluster with:
1 Master
2 slaves
1 Load Balancer
Now, I am trying to deploy an Argo Workflow, but I don't know the process. Will it be installed on a worker node or on the master of the k8s cluster I built? How does it work?
Basically, if anyone can describe the functional flow or steps of deploying an Argo Workflow on Kubernetes, it would be nice. First, I need to understand whether it is deployed on the master or on a worker node.
Usually, kops creates a Kubernetes cluster with taints on the master node that prevent regular pods from being scheduled on it.
However, there have been issues with some cluster network implementations, and sometimes you end up with a cluster without taints on the master.
You can change taints on the master node by running the following commands:
add taints (no pods on master):
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
remove taints (allow to schedule pods on master):
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to know whether the taints are applied to the master node or not, run the following command:
kubectl get node node-master --export -o yaml
Find the spec: section. If the taints are present, you should see something like this:
...
spec:
  externalID: node-master
  podCIDR: 192.168.0.0/24
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
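As for where Argo Workflows ends up: assuming you install it into the usual argo namespace, you can see which nodes its pods were scheduled on with:
kubectl get pods -n argo -o wide
On a kops cluster with the master taint in place, they should land on the worker nodes.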
I am running a cluster with 1 master and 1 node. Now, when I run a DaemonSet, it only shows 1 desired node, while it should be 2. There is no error I could find anywhere in the describe output or logs, but the DaemonSet only chooses 1 node to run on. I am using Kubernetes 1.9.1.
Any idea what I could be doing wrong? Or how to debug it?
TIA.
This happens if the k8s master node has the node-role.kubernetes.io/master: NoSchedule taint and the DaemonSet's pods have no toleration for it.
A node-role.kubernetes.io/master: NoSchedule toleration is needed in k8s 1.6 or later to schedule daemonsets on master nodes.
Add the following toleration to the DaemonSet's YAML file to make k8s schedule daemonsets on the master node too:
...
kind: DaemonSet
spec:
  ...
  template:
    ...
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
Taints of the master node can be checked by:
kubectl describe node <master node>
Tolerations of a pod can be checked by:
kubectl describe pod <pod name>
More info about daemonsets is in https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/.
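After adding the toleration, you can verify that the DaemonSet now targets both nodes (the DaemonSet name is a placeholder); the DESIRED column should show 2:
kubectl get daemonset <daemonset-name>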
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
We have deployed Kubernetes using OpenStack Heat on CoreOS. The below command for fetching nodes does not return any results:
kubectl -s http://<Master FIP>:8080 get nodes
On looking at the minion, we saw that the kubelet cannot talk to the master; the kubelet on the minion reports errors.
On the master node, the hyperkube controller container displays the errors below (10.0.0.4 is the private IP of the master):
W0909 17:42:34.411146 1 request.go:347] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
I0909 17:42:34.465422 1 endpoints_controller.go:322] Waiting for pods controller to sync, requeuing service default/kubernetes
W0909 17:43:04.249935 1 nodecontroller.go:433] Unable to find Node: 10.0.0.4, deleting all assigned Pods.
E0909 17:43:04.284611 1 nodecontroller.go:434] pods "kube-apiserver-10.0.0.4" not found
I am not sure how we should debug this. Could someone please suggest what could be wrong?
Thanks
This was resolved by making the hyperkube version the same on the master and minion nodes. In our case, we updated it to v1.3.4 (we used gcr.io/google_containers/hyperkube:v1.3.4).
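One way to double-check for such a mismatch (adjust the address and commands for your setup) is to compare the server version reported by the API server with the hyperkube image tag running on each node:
kubectl -s http://<Master FIP>:8080 version
docker ps --format '{{.Image}}' | grep hyperkube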