Can I extend serviceNodePortRange in a running kops cluster without a restart - kubernetes

I have a kops cluster running on AWS. I would like to extend the service node port range of that cluster without restarting the cluster.
Is it possible? If yes, how can it be done?

You can change it, but it does require a reboot of the control plane nodes (but not the worker nodes). This is due to the "immutable" nature of the kOps configuration.
To change the range, add this to your cluster spec:
spec:
  kubeAPIServer:
    serviceNodePortRange: <range>
See the cluster spec documentation for more information.
Also ensure the range does not conflict with the ports kOps requires.
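For reference, a minimal sketch of how this change is typically applied with the kops CLI (the $CLUSTER_NAME variable is a placeholder):

# Edit the cluster spec and add the kubeAPIServer.serviceNodePortRange field
kops edit cluster $CLUSTER_NAME
# Push the new configuration to the state store / cloud resources
kops update cluster $CLUSTER_NAME --yes
# Roll the nodes that need the new configuration (here, the control plane)
kops rolling-update cluster $CLUSTER_NAME --yes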

Related

K8s anti affinity for different clusters

I have a deployment file for a k8s cluster in the cloud with an anti-affinity rule which prevents multiple pods of the same deployment from landing on the same node. This works well, but not for my local k8s, which uses a single node. I can't seem to find a way to use the same deployment file for the remote cluster and the local cluster.
I have tweaked the affinity and node selector rules to no avail.
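One common way to keep a single deployment file working on both multi-node and single-node clusters (not stated in this thread, so treat it as a suggestion) is to express the anti-affinity as a preference rather than a hard requirement; the label app: my-app below is a placeholder:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" spreads pods across nodes when several exist,
          # but still allows scheduling when there is only one node
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname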

How can I find out how a Kubernetes Cluster was provisioned?

I am trying to determine how a Kubernetes cluster was provisioned. (Either using minikube, kops, k3s, kind or kubeadm).
I have looked at config files to establish this distinction but didn't find any.
Is there some way one can identify what was used to provision a Kubernetes cluster?
Any help is appreciated, thanks.
Usually, but not always, you can view the cluster definitions in your ~/.kube/config, where you will see an entry per cluster that often hints at the type.
Again, it's not 100% reliable.
Another option is to check the pods and namespaces: if you see minikube-related resources it is almost certainly minikube, and similarly for k3s, Rancher, etc.
If you see a namespace matching *cattle*, it can be Rancher with k3s or with RKE.
To summarize: there is no single answer to how to figure out how your cluster was deployed, but you can find hints.
If you see the kubeadm-config ConfigMap in the kube-system namespace, then the cluster was provisioned using kubeadm.
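A few commands that make the hints above concrete (a sketch only; which resources you actually find will vary by distribution):

# kubeadm-provisioned clusters carry this ConfigMap
kubectl get configmap kubeadm-config -n kube-system
# namespaces often reveal the distribution (e.g. cattle-system for Rancher)
kubectl get namespaces
# node names, OS images and container runtimes are another strong hint
kubectl get nodes -o wide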

Affinity and anti-affinity between pods: ensure the webapp connects to the local Redis cache

In the documentation about affinity and anti-affinity rules for Kubernetes there is a practical use case around a web application and a local Redis cache.
The Redis deployment has podAntiAffinity configured to ensure the scheduler does not co-locate its replicas on a single node.
The web application deployment has a pod affinity rule to ensure the app is scheduled onto a node with a pod that has the label store (Redis).
To connect to Redis from the webapp we would have to define a Service.
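For reference, a sketch of the web application side of that pattern; label values such as app: store and app: web-store are assumed from the docs example rather than taken from this question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-store
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAffinity:
          # schedule next to a pod carrying the Redis label on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: store
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web-app
        image: nginx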
Question: how can we be sure that the webapp will always use the Redis instance that is co-located on the same node and not another one? If I read the version history correctly, the iptables mode for kube-proxy became the default from Kubernetes v1.2.
Reading the docs about the iptables mode for kube-proxy, it says that by default, kube-proxy in iptables mode chooses a backend at random.
So my answer to the question would be:
No, we can't be sure. If you want to be sure, then put Redis and the webapp in one pod?
This can be configured in the (redis) Service, but in general it is not recommended:
Setting spec.externalTrafficPolicy to the value Local will only proxy requests to local endpoints, never forwarding traffic to other nodes
This is a complex topic, read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
https://kubernetes.io/docs/concepts/services-networking/service/
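As a rough illustration of the setting quoted above (the Service name, selector and port are placeholders, not taken from the thread):

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: NodePort
  selector:
    app: store
  ports:
  - port: 6379
    targetPort: 6379
  # externalTrafficPolicy only affects traffic arriving via the node port or
  # load balancer; for purely in-cluster traffic, the newer, separate field
  # spec.internalTrafficPolicy: Local is the analogous setting.
  externalTrafficPolicy: Local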

How to deploy an etcd cluster on a Kubernetes cluster with a previous etcd service

I have been reading for several days about how to deploy a Kubernetes cluster from scratch. It's all ok until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks there are many options, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet makes use of etcd.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or a local service on the master node, and, when the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd to the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd to the new cluster."
Am I wrong with this approach?
I don't find useful information on this topic.
Kubernetes has 3 components: master components, node components and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
While implementing Kubernetes you have to deploy etcd as part of it. If it is a multi-node architecture, you can run etcd on independent nodes or alongside the master nodes, as per your requirement. You can find more details here. If you are looking for a step-by-step guide for a multi-node architecture, follow this document. If you need a single-node Kubernetes, go for minikube.
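As one concrete example, if the control plane were bootstrapped with kubeadm (an assumption; the answer above does not name a tool), an etcd cluster running on independent nodes is wired in through the ClusterConfiguration; the endpoints and certificate paths below are placeholders:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.10:2379
    - https://10.0.0.11:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key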

How to install kubernetes cluster on Rancher cluster?

I use the Rancher server on several servers:
Master
Node1
Node2
Node3
Maybe I only need the Rancher agent on the node servers.
I also want to make a Kubernetes cluster on these servers, so install the Kubernetes master on the Rancher master and the Kubernetes nodes (kubelet) on the Rancher nodes. Is that right?
Or can the Kubernetes nodes not be installed using the Rancher server, so that I have to install them myself?
You will need a Rancher Agent on any server you want Rancher to place containers on. Rancher can deploy Kubernetes for you. I believe what you want to do is add all of the nodes, including the Rancher master, to a single Cattle environment (the default environment is Cattle). When adding the Rancher Server host, make sure you set CATTLE_AGENT_IP=. Once the hosts are registered, you will want to set host labels on the nodes. For nodes 1, 2 and 3 you will set the label compute=true. On the Rancher Server host you will set 2 host labels: etcd=true and orchestration=true.
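A hedged sketch of how host labels can be passed when registering a Rancher 1.x agent; the registration URL, agent version and host IP are placeholders taken from the Rancher UI, not from this answer:

# run on each host to be added; CATTLE_HOST_LABELS sets the labels at registration
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP=<host-ip> \
  -e CATTLE_HOST_LABELS='compute=true' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> <registration-url>

Labels can also be added or changed later from the host's detail page in the Rancher UI.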
Once the labels are set up, click on Catalog and search for Kubernetes. You can probably stick with most of the defaults, but CHANGE plane isolation to required.
Rancher should deploy the Kubernetes management servers on the same host as your Rancher Server, and the remaining nodes will be Kubernetes minions.