Is it possible to upgrade a k8s registered cluster in Rancher?

I'm considering registering a Kubernetes cluster in Rancher. After that, how should I handle upcoming Kubernetes upgrades? Can they be handled by Rancher itself?
I have only found information about upgrading a registered k3s cluster.

You should be able to do it from the cluster view (if your cluster was installed via Rancher), as documented; a quick way to verify the upgrade afterwards is sketched after the steps:
From the Global view, find the cluster for which you want to upgrade Kubernetes. Select ⋮ > Edit.
Expand Cluster Options.
From the Kubernetes Version drop-down, choose the version of Kubernetes that you want to use for the cluster.
Click Save.
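Once the upgrade completes, a minimal sanity check from any machine with kubectl access to the cluster might look like this (the context name is a placeholder):
# Point kubectl at the cluster (context name is hypothetical)
kubectl config use-context my-rancher-cluster
# Version reported by the API server should match the version you selected
kubectl version --short
# Each node's kubelet version updates as the upgrade rolls through
kubectl get nodes -o wide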

Related

GKE and NodeLocal DNSCache

We have a deployment of Kubernetes on Google Cloud Platform. Recently we hit one of the well-known issues related to a problem with kube-dns that happens under a high volume of requests: https://github.com/kubernetes/kubernetes/issues/56903 (it's more related to SNAT/DNAT and conntrack, but the end result is that kube-dns goes out of service).
After a few days of digging into that topic we found that k8s already has a solution, which is currently in alpha (https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/).
The solution is to run a caching CoreDNS instance as a DaemonSet on each k8s node. So far so good.
The problem is that after you create the DaemonSet you have to tell the kubelet to use it via the --cluster-dns option, and we can't find any way to do that in the GKE environment. Google bootstraps the cluster with a "configure-sh" script in the instance metadata. There is an option to edit the instance template and "hardcode" the required values, but that is not really an option: if you upgrade the cluster or use horizontal autoscaling, all of the modified values will be lost.
The last idea was to use a custom startup script that pulls the configuration and updates the metadata server, but that is too complicated a task.
As of 2019/12/10, GKE now supports this through the gcloud CLI in beta:
Kubernetes Engine
Promoted NodeLocalDNS Addon to beta. Use --addons=NodeLocalDNS with gcloud beta container clusters create. This addon can be enabled or disabled on existing clusters using --update-addons=NodeLocalDNS=ENABLED or --update-addons=NodeLocalDNS=DISABLED with gcloud container clusters update.
See https://cloud.google.com/sdk/docs/release-notes#27300_2019-12-10
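Putting that release note into concrete commands, a minimal sketch (the cluster name and zone are placeholders):
# Enable NodeLocal DNSCache when creating a new cluster (beta at the time of that release note)
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --addons=NodeLocalDNS
# Or toggle it on an existing cluster
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --update-addons=NodeLocalDNS=ENABLED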
You can spin up another kube-dns deployment, e.g. in a different node pool, and thus have two nameservers in the pods' resolv.conf.
This would mitigate the evictions and other failures and generally allow you to completely control your kube-dns service across the whole cluster.
In addition to what was mentioned in this answer: with beta support on GKE, the node-local caches now listen on the kube-dns service IP, so there is no need for a kubelet flag change.
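A quick way to confirm the cache is in place after enabling the addon; the label selector below is an assumption based on the standard NodeLocal DNSCache manifests and may differ on your GKE version:
# One node-local-dns pod should be running per node
kubectl -n kube-system get pods -l k8s-app=node-local-dns -o wide
# The kube-dns service ClusterIP that the node-local caches intercept
kubectl -n kube-system get svc kube-dns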

Kubernetes Helm chart initiation with Kubernetes cluster

I am implementing continuous integration and continuous deployment using Ansible, Docker, Jenkins and Kubernetes. I have already created a Kubernetes cluster with 1 master and 2 worker nodes using Ansible and kubespray. I have 30-40 microservice applications, so I need to create that many services and deployments.
My Confusion
When I am using the Kubernetes package manager Helm, do I need to initiate my chart on the master node or on the base machine from which I deployed my Kubernetes cluster?
If I am initiating it from the master, can I use kubectl to deploy over SSH to the remote worker nodes?
If I am initiating it from outside the Kubernetes cluster nodes, can I use the kubectl command to deploy into the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of Helm components. This explanation provides a good graphic to represent the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
In either configuration, you would always be making the Helm deployment from your local machine with kubectl access to your cluster.
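To make that concrete, here is a rough sketch of the Helm v2 (Tiller) workflow run from the machine holding your kubeconfig, not from the master or worker nodes; the chart path, release name and namespace are made up for illustration:
# Confirm kubectl is pointed at the kubespray-built cluster
kubectl config current-context
kubectl get nodes
# One-time: install Tiller into the cluster (Helm v2)
helm init
# Wait for the tiller-deploy pod to be ready in kube-system
kubectl -n kube-system get pods -l app=helm
# Deploy one of the ~30-40 microservices from its chart
helm install ./charts/my-microservice --name my-microservice --namespace my-apps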
Hope this helps!
If you are looking for a way to run the helm client inside your Kubernetes cluster, check out the concept of the Helm Operator.
I would also recommend looking into the term "GitOps" - a set of practices that combines Git with Kubernetes and sets Git as the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices:
flux (uses Helm-Operator)
Jenkins-x (uses helm as part of the release pipeline; check out this session on YT to see it in action)

Use Calico for policy and networking on AWS EKS?

AWS EKS makes use of their own CNI plugin and there are docs that allow you to install Calico for managing policy. For a number of reasons, I'd like to have Calico manage networking as well.
Based on the installation instructions, I can't seem to find a way to make either option work:
etcd
Doesn't seem viable as I can't find a way to access the EKS control plane etcd endpoints. If I were to deploy my own etcd pods inside the cluster, I need to use the AWS CNI plugin for those to get an IP address, so that doesn't work. I could bring my own etcd cluster outside of Kubernetes, but that seems a bit ridiculous.
Kubernetes API datastore
This option wants me to change settings on the controller, which I don't have access to in the AWS EKS managed control plane.
The short answer is that, as of this writing, neither EKS nor GKE gives you direct access to any of the control plane components: etcd, kube-apiserver, kube-controller-manager, coredns/kube-dns, kube-scheduler.
They do have some docs on how to install Calico on an EKS cluster, but if you want more control you'll have to set up your own standalone cluster.
They might allow you access to the master components in the future but the bottom line is that EKS is a 'managed' service where they are supposed to take care of all your control plane components.
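For completeness, the supported policy-only setup from the EKS docs boils down to applying a Calico manifest on top of the AWS VPC CNI; a rough sketch follows (the manifest URL changes between releases, so grab the current one from the EKS or Calico documentation):
# Apply the policy-only Calico manifest referenced in the EKS docs (file name is a placeholder)
kubectl apply -f calico.yaml
# Confirm the calico-node DaemonSet is running on every node
kubectl -n kube-system get daemonset calico-node
kubectl -n kube-system get pods -l k8s-app=calico-node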

How to deploy Kube-Controller-Master?

I have installed Kubernetes with minikube, which is a single node cluster.
There is a YAML file to deploy the controller manager, but it is showing:
Back-off restarting failed container Error syncing pod
Can someone help me solve this issue?
The link to the YAML file is here: https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/admin/high-availability/kube-controller-manager.yaml
The Kubernetes controller manager is a core component of Kubernetes and is already running in every Kubernetes cluster, usually in the form of a standalone pod managed by the Kubernetes addon manager. Minikube uses localkube, which integrates the controller manager together with other Kubernetes core components into a single binary to simplify the setup of single-node clusters for testing purposes. If you want to change options of the integrated controller manager or other components, use the --extra-config option of minikube start.
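For example, a hedged sketch of passing controller-manager options through minikube (the exact key names depend on the minikube version and bootstrapper, so check minikube start --help):
# localkube-style field name (older minikube releases)
minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt"
# kubeadm-style flag name (newer releases take the component's own flag names)
minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s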
The example you linked is a custom deployment of the controller manager used for highly available multi-master clusters. If you want to test this you need to set up your cluster manually, minikube is not the right tool for this.

Rancher connect to kubernetes instead of start kubernetes

Rancher is designed (as best as I can tell) to own and run a Kubernetes cluster. Rancher does provide a configuration so that kubectl can interact with the Kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing Kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to existing k8s clusters, then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the Rancher forums:
There is not, most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we setup.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209