How to migrate to custom node logging on Kubernetes? - kubernetes

With an existing Kubernetes Cluster (e.g. v 1.2.2 on GCE) that has set ENABLE_NODE_LOGGING=true and LOGGING_DESTINATION=gcp, what is the recommended way to stop those pods from running on each node and deploy a replacement DaemonSet that uses a custom fluentd configuration and docker image?
This should take into consideration future Kubernetes upgrades as well.

If you set those configuration parameters when starting your cluster, it will create a manifest file on each node that configures fluentd to send container logs to Google Cloud Logging. You can remove those manifest files and the kubelet will stop the fluentd containers (and you should also modify your instance template to change the parameters; otherwise any new nodes created, replacing broken nodes or scaling up your number of nodes, will continue to create fluentd containers).
Alternatively, if you modify the configuration parameter and run upgrade.sh to upgrade your nodes to a newer version of Kubernetes then your nodes will not have the manifest file and you won't be running the fluentd container any longer.
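An added example of the replacement DaemonSet the question asks about (this is not from the original answer or from any Kubernetes project manifest): a custom fluentd image reading its configuration from a ConfigMap, mounting the node's log directories read-only and running under its own ServiceAccount. Assumptions: the ServiceAccount fluentd-custom already exists, the image name is a placeholder, and the fluentd output section is left for you to fill in, since replacing the gcp output is the whole point of the migration.

```yaml
# Added example, not from the original thread. Assumed: ServiceAccount
# fluentd-custom exists, and registry.example.com/fluentd-custom:1.0 is a
# placeholder image. apps/v1 is the current DaemonSet API; on a 1.2-era
# cluster it was extensions/v1beta1.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-custom-config
  namespace: kube-system
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /tmp/containers.log.pos
      tag kubernetes.*
      format json
    </source>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-custom
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: fluentd-custom}
  template:
    metadata:
      labels: {app: fluentd-custom}
    spec:
      serviceAccountName: fluentd-custom   # dedicated identity, not "default"
      tolerations:
      - operator: Exists                   # also collect on tainted control-plane nodes
      containers:
      - name: fluentd
        image: registry.example.com/fluentd-custom:1.0
        volumeMounts:
        - {name: varlog, mountPath: /var/log, readOnly: true}
        - {name: containers, mountPath: /var/lib/docker/containers, readOnly: true}
        - {name: config, mountPath: /fluentd/etc}   # official fluentd image config path
      volumes:
      - name: varlog
        hostPath: {path: /var/log}
      - name: containers
        hostPath: {path: /var/lib/docker/containers}
      - name: config
        configMap: {name: fluentd-custom-config}
```

Because the collector is a DaemonSet rather than a per-node static manifest, it survives node replacement and upgrade.sh without touching the instance template; the pos_file here lives in the container filesystem, so move it to a hostPath if you need read positions to survive restarts.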

Related

What is minikube config specifying?

According to the minikube handbook the configuration commands are used to "Configure your cluster". But what does that mean?
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
Are these the values it will reserve on the host machine in preparation for use?
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
Thanks for the help in advance.
Answering the question:
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
In short, it will be a limit for the whole resource (either a VM, a container, etc. depending on the --driver used). It will be used for the underlying OS, Kubernetes components and the workload that you are trying to run on it.
Are these the values it will reserve on the host machine in preparation for use?
I'd reckon this would be related to the --driver you are using and how it's handling the resources. I personally doubt it's reserving 100% of the CPU and memory you've passed in the $ minikube start and I'm more inclined to the idea that it uses how much it needs during specific operations.
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
By default, when you create a minikube instance with $ minikube start ... you will create a single node cluster capable of being a control-plane node and a worker node simultaneously. You will be able to run your workloads (like an nginx-deployment) without adding an additional node.
You can add a node to your minikube ecosystem with just: $ minikube node add. This will make another node marked as a worker (with no control-plane components). You can read more about it here:
Minikube.sigs.k8s.io: Docs: Tutorials: Multi node
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
As said previously, you don't need to delete the minikube cluster to add another node. You can run $ minikube node add to add a node on a minikube host. There are also options to delete/stop/start nodes.
Personally speaking, if the workload that you are trying to run requires multiple nodes, I would try to consider other Kubernetes clusters built on top of/with:
Kubeadm
Kubespray
Microk8s
This would allow you to have more flexibility on where you want to create your Kubernetes cluster (as far as I know, minikube works within a single host (like your laptop for example)).
A side note!
There is an answer (written more than 2 years ago) which shows the way to add a Kubernetes cluster node to a minikube here:
Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
Additional resources:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
Github.com: Kubernetes sigs: Kubespray
Microk8s.io

How can I get kubectl to recognize the newly scaled az aks nodepool nodes?

I updated my Azure AKS nodepool size from within the Azure Portal to go from 2 to 4 nodes. When I run az aks nodepool show ..., I see that the count has correctly been updated. However, when I run kubectl get nodes, I still only see the two nodes that previously existed.
According to the Kubernetes documentation on node management,
There are two main ways to have Nodes added to the API server:
The kubelet on a node self-registers to the control plane
You, or another human user, manually add a Node object
(Emphasis mine)
My expectation, therefore, is that having scaled up my node pool, these new nodes should automatically register, and kubectl get nodes should just pick them up, but this appears to not be the case.
Now that my nodepool has more nodes, how do I get my AKS cluster to recognize and utilize them? Once kubectl get nodes shows them, will applying an updated manifest (with more replicas) be all I need to do to use the additional hardware?
It's difficult to see without access to your setup. But you can see:
Check that the control plane hasn't been automatically upgraded to a new version that is incompatible with the kubelet version in your nodepool when it registers with the cluster. (Best if the versions match)
Connect to the nodes that are not registering (ssh) and check the logs as to why the kubelet is not starting, i.e. systemctl status kubelet.
Check that you can connect to the port (i.e. 8443) and IP address where your kube-apiserver is listening from these nodes that are not registering, i.e. curl <ip-address>:8443
Possible solution:
Upgrade the VM image of your node pool to use one compatible with the control plane.
Remove the firewall rule preventing your nodes from accessing the kube-apiserver
will applying an updated manifest (with more replicas) be all I need to do to use the additional hardware?
Should work.
✌️

Is it possible/advisable to turn off the NodeRestriction plugin on EKS?

I am trying to set up a job scheduler (airflow) on an EKS cluster to replace a scheduler (Jenkins) we're running directly on an ec2. This job scheduler should be able to deploy pods to the EKS cluster it's running on.
However, whenever I try to deploy the pod (with a pod manifest), I get the following error message:
Error from server (Forbidden): error when creating "deployment.yaml": pods "simple-pod" is forbidden: pod does not have "kubernetes.io/config.mirror" annotation, node "ip-xx.ec2.internal" can only create mirror pods
I believe the restriction has to do with the NodeRestriction plugin on the kube-apiserver running on the EKS Control Plane.
I have looked through documentation to see if I can turn this plugin off, however it does not appear to be possible through kubectl, and only possible by modifying the kube-apiserver configuration on control plane itself.
Is it possible to turn off this plugin? Or, is it possible to label a node or pod to mark that it is not subject to this plugin? More broadly, is running a job scheduler on EKS that assigns jobs on the same cluster a bad design choice?
If we wanted to containerize and deploy our job scheduler, do we need to instantiate a separate EKS cluster/other service to run it on?

How to deploy Kube-Controller-Master?

I have installed Kubernetes with minikube, which is a single node cluster.
There is a yaml file to deploy the controller master but it is showing
Back-off restarting failed container Error syncing pod
Can someone solve this issue?
The link for the yaml file is here: https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/admin/high-availability/kube-controller-manager.yaml
The Kubernetes controller manager is a core component of Kubernetes and is already running in every Kubernetes cluster, usually in the form of a standalone pod managed by the Kubernetes addon manager. Minikube uses localkube, which integrates the controller manager together with other Kubernetes core components in a single binary to simplify setup of single-node clusters for testing purposes. If you want to change options of the integrated controller manager or other components, use the --extra-config option of minikube start.
The example you linked is a custom deployment of the controller manager used for highly available multi-master clusters. If you want to test this you need to set up your cluster manually, minikube is not the right tool for this.

DaemonSets on Google Container Engine (Kubernetes)

I have a Google Container Engine cluster with 21 nodes; there is one pod in particular that I need to always be running on a node with a static IP address (for outbound purposes).
Kubernetes supports DaemonSets
This is a way to have a pod be deployed to a specific node (or in a set of nodes) by giving the node a label that matches the nodeSelector in the DaemonSet. You can then assign a static IP to the VM instance that the labeled node is on. However, GKE doesn't appear to support the DaemonSet kind.
$ kubectl create -f go-daemonset.json
error validating "go-daemonset.json": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl create -f go-daemonset.json --validate=false
unable to recognize "go-daemonset.json": no kind named "DaemonSet" is registered in versions ["" "v1"]
When will this functionality be supported and what are the workarounds?
If you only want to run the pod on a single node, you actually don't want to use a DaemonSet. DaemonSets are designed for running a pod on every node, not a single specific node.
To run a pod on a specific node, you can use a nodeSelector in the pod specification, as documented in the Node Selection example in the docs.
edit: But for anyone reading this that does want to run something on every node in GKE, there are two things I can say:
First, DaemonSet will be enabled in GKE in version 1.2, which is planned for March. It isn't enabled in GKE in version 1.1 because it wasn't considered stable enough at the time 1.1 was cut.
Second, if you want to run something on every node before 1.2 is out, we recommend creating a replication controller with a number of replicas greater than your number of nodes and asking for a hostPort in the container spec. The hostPort will ensure that no more than one pod from the RC will be run per node.
DaemonSets are still an alpha feature and Google Container Engine supports only production Kubernetes features. Workaround: build your own Kubernetes cluster (GCE, AWS, bare metal, ...) and enable alpha/beta features.
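An added example covering both routes described in the answers above (it is not from the original answers or from the Kubernetes docs): a pod pinned to a single labelled node with nodeSelector, and the pre-1.2 replication controller plus hostPort trick for one pod per node. Assumptions: one node has been labelled static-egress=true, and both image names are placeholders.

```yaml
# Added example, not from the original answers. Assumed: a node labelled
# static-egress=true (kubectl label nodes <node> static-egress=true) and
# placeholder images under registry.example.com.
# Route 1: pin one pod to the node that owns the static IP.
apiVersion: v1
kind: Pod
metadata:
  name: egress-worker
spec:
  nodeSelector:
    static-egress: "true"        # only a node carrying this label qualifies
  containers:
  - name: worker
    image: registry.example.com/egress-worker:1.0
---
# Route 2: one pod per node without DaemonSet. replicas exceeds the node
# count, and the hostPort makes a second pod on the same node unschedulable
# because the port is already taken, so the surplus replicas stay Pending.
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-agent
spec:
  replicas: 30                    # the cluster in the question has 21 nodes
  selector:
    app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/node-agent:1.0
        ports:
        - containerPort: 8080
          hostPort: 8080          # bound on the node IP, so at most one per node
```

A hostPort is reachable on the node's own address, so keep the node firewall rules for that port as tight as the workload allows, and expect kubectl get pods to show the extra replicas Pending by design.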