Change proxy mode in Kubernetes

I have installed a Kubernetes cluster on Ubuntu 16 using this tutorial.
Everything works, but I need to change the proxy mode to IPVS, and I don't know how to change the kube-proxy mode using kubectl or anything else.

kubectl is mostly for managing the Kubernetes workload; here you need to modify the control plane itself. Since you created your cluster with kubeadm, you can use kubeadm to enable IPVS. You would add this to your config file for kubeadm init:
...
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
...
Here's an article from github.com/kubernetes with more detailed instructions. Depending on your Kubernetes version, you can pass it as a flag to kubeadm init instead of using the above configuration.
Edit: Here's a link on how to use kubeadm to edit an existing cluster: How to use kubeadm upgrade to change some features in kubeadm-config
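If you would rather switch an already running kubeadm cluster without re-initializing it, a common approach is to edit the kube-proxy ConfigMap that kubeadm creates and then recreate the kube-proxy pods. A rough sketch, assuming the ConfigMap and label names that kubeadm normally uses (verify them on your cluster):

# Edit the KubeProxyConfiguration stored by kubeadm and set: mode: "ipvs"
kubectl -n kube-system edit configmap kube-proxy

# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pods -l k8s-app=kube-proxy

# Check that the new pods came up with the IPVS proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs

Note that the IPVS kernel modules (ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh plus the conntrack module) and the ipset package must be available on the nodes, otherwise kube-proxy falls back to iptables mode.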

Related

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, which briefly explains how to reserve compute resources on a node using the kubelet flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube on macOS, and as far as I can tell, Minikube is set up so that kubectl is used alongside the minikube command.
For local learning purposes on Minikube I don't need to set this (maybe it can't even be done on Minikube), but:
How could this be done on a node, say, in a Kubernetes development environment?
This could be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (a sketch follows after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults
2. For a live cluster, consider reconfiguring it using the steps "Generate the configuration file" and "Push the configuration file to the control plane" described in "Reconfigure a Node's Kubelet in a Live Cluster"
3. I didn't test it, but for Minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
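As a sketch of option 1 above, the reservations from the question can be expressed as a KubeletConfiguration document in the kubeadm init config file. The values below are placeholders only; size them for your nodes:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# resources reserved for Kubernetes system daemons (kubelet, container runtime, ...)
kubeReserved:
  cpu: "200m"
  memory: "512Mi"
# resources reserved for OS system daemons (sshd, udev, ...)
systemReserved:
  cpu: "200m"
  memory: "512Mi"
# hard eviction thresholds
evictionHard:
  memory.available: "200Mi"

This document is appended (separated by ---) to the InitConfiguration/ClusterConfiguration documents produced by kubeadm config print init-defaults and passed to kubeadm init --config.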
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS to be in a CrashLoopBackoff which turned out to be the case because Ubuntu was configured to use systemd-resolved and so the resolv.conf had a loopback resolver configured. After reading the docs for coredns, I found out that a solution for that would be to change the resolvConf parameter for kubelet - either via commandline arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several pieces of information:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overridden by the kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running some kubeadm upgrade command afterwards and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. Keep in mind that the kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.
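A sketch of that workflow with placeholder names (this relies on the Dynamic Kubelet Configuration feature described above):

# 1. Wrap your edited kubelet configuration file in a ConfigMap
kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml

# 2. Point the Node object at that ConfigMap
kubectl patch node <node-name> -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'

As for extracting the existing kubeadm configuration so you can feed it back to kubeadm: it is stored in a ConfigMap that you can read with kubectl -n kube-system get configmap kubeadm-config -o yaml.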

Is Kubernetes high availability using kubeadm possible without failover/load balancer?

I am trying to achieve k8s high availability using kubeadm. I am following the document k8s HA using kubeadm.
In the official document, it is recommended to have a failover mechanism/load balancer for the kube-apiserver. I tried keepalived but, in the case of a setup on AWS/GCP instances, it ends up in a split-brain situation because multicast is not supported, so I am not able to use it. Is there any way out of this?
Kubernetes is a container-orchestration system for automating deployment, scaling, and management of containerized applications.
Kubernetes plays best in highly available, load-balanced environments.
As @jaxxstorm mentioned, cloud providers give you the possibility to use native load balancers, and I also suggest that as a good starting point for a High Availability attempt. You may be interested in the GCP documentation.
kubeadm on a homebrewed Kubernetes environment requires some additional work, and from my point of view it is good to set up Kubernetes The Hard Way first and then start to play with kubeadm.
OK, I assume the servers for the installation are ready. To create a simple multi-master cluster, you need 3 master nodes (10.0.0.50-52) and a load balancer (10.0.0.200).
Generate a token and save the output to a file:
kubeadm token generate
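For example:

kubeadm token generate > /tmp/kubeadm-token.txt   # keep this value, it goes into the "token" field of the config below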
Create a kubeadm config file:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - "http://10.0.0.50:2379"
  - "http://10.0.0.51:2379"
  - "http://10.0.0.52:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "10.0.0.50"
- "10.0.0.51"
- "10.0.0.52"
- "10.0.0.200"
- "127.0.0.1"
token: "YOUR KUBEADM TOKEN"
tokenTTL: "0"
Copy the config file to all nodes.
Do initialization on the first master instance:
kubeadm init --config /path/to/config.yaml
This first master instance will now have all the certificates and keys necessary for our master cluster.
Copy the /etc/kubernetes/pki directory structure to the same location on the other masters.
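For example, assuming root SSH access from the first master to the other two:

scp -r /etc/kubernetes/pki root@10.0.0.51:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.0.0.52:/etc/kubernetes/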
On other master servers:
kubeadm init --config /path/to/config.yaml
Now let’s start to set up the load balancer:
Copy /etc/kubernetes/admin.conf into $HOME/.kube/config,
then edit $HOME/.kube/config and replace
server: 10.0.0.50
with
server: 10.0.0.200
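For example, assuming the old address only appears in the server line:

sed -i 's/10.0.0.50/10.0.0.200/' $HOME/.kube/config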
Check if nodes are working fine:
kubectl get nodes
On all workers execute:
kubeadm join --token YOUR_CLUSTER_TOKEN 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3
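The --discovery-token-ca-cert-hash above is just an example value; you can compute the hash for your own cluster from the CA certificate on one of the masters:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'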
And that’s it! If everything was set up cleanly, you should now have a highly available cluster.
I found the "HA Kubernetes cluster via Kubeadm" tutorial useful; thank you @Nate Baker for the inspiration.
No, you need a load balancer to have HA with kubeadm.
If you're using AWS/GCP, why not consider using the native load balancers for those environments, like ELB or a Google Cloud Load Balancer?
You definitely need nginx/haproxy + keepalived for failover and high availability.

Kubernetes cluster name change

I'm creating a cluster with kubeadm init --with-stuff (Kubernetes 1.8.4, for reasons). I can set up nodes, weave, etc. But I have a problem setting the cluster name. When I open admin.conf or a different config file I see:
name: kubernetes
When I run kubectl config get-clusters:
NAME
kubernetes
Which is the default. Is there a way to set the cluster name during init (there is no command-line parameter)? Or is there a way to change this after the init? The current name is referenced in many files in /etc/kubernetes/.
Best Regards,
Kamil
You can now do so using kubeadm's config file. PR here:
https://github.com/kubernetes/kubernetes/pull/60852
Using the kubeadm config, you just set the following at the top level:
clusterName: kubernetes
No, you cannot change the name of a running cluster, because it is used for discovery inside the cluster, and changing it would require a near-simultaneous change across the whole cluster.
Sadly, you also cannot change the name of the cluster before init. Here is the issue on GitHub.
Update: from version 1.12, kubeadm allows you to change the cluster name before the "init" stage.
To do it (verified for versions >= 1.15; for lower versions the commands can differ, as they changed at some point between versions 1.12 and 1.15), you need to set the clusterName value in a cluster configuration file like this:
Save the default configuration to a file (the cluster config is optional, so we need this step first so we don't have to write it from scratch) with the kubeadm config print init-defaults > init-config.yaml command.
Set the clusterName value in the config.
Run kubeadm init with a config argument: kubeadm init --config init-config.yaml
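A minimal sketch of the relevant part of init-config.yaml (shown here with the v1beta2 API; use whatever apiVersion kubeadm config print init-defaults emits for your version):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: my-cluster   # replaces the default "kubernetes"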

Can't run Kubernetes dashboard after installing Kubernetes cluster on rancher/server

Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
I tried two ways to install a Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
Agent hosts disconnected sometimes.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
Set some labels on the rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both of them have this issue: kubernetes-ingress-lbs has 0 services and 0 containers, and I can't access the Kubernetes dashboard.
Why wasn't it installed by Rancher?
And is it necessary to add those labels for a Kubernetes cluster?
Here is a correctly deployed Kubernetes cluster on Rancher server:
After turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
Since the Kubernetes version in use is v1.5.4, you should prepare in advance to pull the Docker images below:
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (like docker-compose.yml, rancher-compose.yml and so on) yourself.
When you enable "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not be able to load.
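If you have kubectl access to the cluster, a quick way to check whether the dashboard container exists at all (the exact pod name depends on the Rancher catalog version):

kubectl -n kube-system get pods | grep -i dashboard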
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the Kubernetes default template >> enable the add-on service and save the new template with your preferred name.
Now launch the cluster using the customized template.