Installing a database driver in Superset running on Kubernetes

How can I install a database driver in Superset running on top of Kubernetes?
Based on the documentation, the recommended way is to add the pip installation to the Docker image's requirements file, but in the Kubernetes scenario I really don't want to helm delete everything and then reinstall just to add a driver.
Can I simply kubectl exec into the pods (superset and worker) and pip install manually, or do I need to change the requirements file and do a complete Helm reinstall?
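For reference, a minimal sketch of both options, assuming the chart's default deployment names (superset and superset-worker), psycopg2-binary as an example driver, and a release named superset from a repo aliased superset — all illustrative, check your own chart and values:

    # Quick, non-persistent test: install the driver into the running containers.
    # Anything installed this way is lost when the pods restart.
    kubectl exec -it deploy/superset -- pip install psycopg2-binary
    kubectl exec -it deploy/superset-worker -- pip install psycopg2-binary

    # Persistent alternative: change the image/values and upgrade the release
    # in place -- no helm delete required
    helm upgrade superset superset/superset -f values.yaml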

Related

kOps: Should I upgrade node AMI images when upgrading a Kubernetes cluster to a new version?

I am using kOps to perform a manual cluster upgrade (from 1.17 to 1.18) as explained at https://kops.sigs.k8s.io/operations/updates_and_upgrades/#upgrading-kubernetes
I've noticed that kOps does not update the AMI defined at spec.image in the node instance groups, which means that after the cluster upgrade the nodes keep using the same base OS despite the Kubernetes upgrade. But if you install 1.18 from scratch, kOps uses the latest image available for that version.
Should I update the image and configure it the same as the one kOps would use for an installation from scratch?
In 1.18 the default AMI has moved from Debian to Ubuntu; should I take any precautions due to the change of operating system?
If you edit the manifests directly and run kops update, then you also need to update the images yourself. The alternative is to let kOps do it for you by running kops upgrade cluster, which updates the remote state and sets the correct defaults.
Regarding the image change, I don't see any major issues there. What you can do is note the current AMI so you can do a "sort of rollback" by putting the old image back and updating the cluster (or applying the previous version of the manifest, assuming you have S3 revisions on the state).
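A minimal sketch of that upgrade flow (the cluster name is illustrative):

    # Let kOps update the cluster spec, including default images, in the remote state
    kops upgrade cluster my-cluster.example.com --yes
    # Apply the updated spec to the cloud resources
    kops update cluster my-cluster.example.com --yes
    # Replace the nodes so they actually boot from the new image
    kops rolling-update cluster my-cluster.example.com --yes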
There was a bug up until kOps 1.18.2 where Ubuntu images were considered "custom" and therefore not upgraded by kops upgrade. See this bug
As of 1.18.2, you should see upgrades for Ubuntu as well.
There is no particular need to take any precautions when switching from Debian to Ubuntu unless you are using kOps hooks that are Debian-specific. kOps will take care of this change for you.

Install Kubernetes on a Debian stretch server without systemd

I am trying to install Kubernetes on a Debian 9 (stretch) server, which is in the cloud and therefore can't do virtualization, and which doesn't have systemd. I'm also aiming for a really minimal configuration, not a big cluster.
I've found Minikube (https://docs.gitlab.com/charts/development/minikube/index.html), which is supposed to run without virtualization using Docker, but it requires systemd, as mentioned at https://github.com/kubernetes/minikube/issues/2704 (and yes, I get the related error message).
I also found k3s (https://github.com/rancher/k3s), which can run under either systemd or OpenRC, but when I install OpenRC following https://wiki.debian.org/OpenRC I don't have the "net" service it depends on.
Then I found microk8s (https://microk8s.io/), which needs systemd simply because snapd needs systemd.
Is there some other alternative or solution to the problems mentioned? Or has Poettering already bribed everyone?
Since you are well off the beaten path, you can probably just run things by hand with k3s. It's a single executable, AFAIK. See https://github.com/rancher/k3s#manual-download as a simple starting point. You will eventually want some kind of service monitor to restart things if they crash; if not systemd, then perhaps Upstart (which is not packaged for Deb9) or Runit (which itself usually runs under supervision).
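A minimal sketch of running k3s by hand, assuming you fetch the binary from the releases page (the version placeholder is illustrative):

    # Download the single k3s binary and make it executable
    wget -O /usr/local/bin/k3s https://github.com/rancher/k3s/releases/download/<version>/k3s
    chmod +x /usr/local/bin/k3s
    # Start the server directly; no init system is involved
    /usr/local/bin/k3s server &
    # k3s bundles kubectl, so you can verify the node came up
    /usr/local/bin/k3s kubectl get nodes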

Does kube-proxy come with a standard k8s installation on Ubuntu, or is it a separate package?

I am trying to install Kubernetes on Ubuntu 16.04.
I am able to install the other Kubernetes components, but I don't know if kube-proxy is installed. Should I get a separate binary package for it, or does it come prepackaged with the Kubernetes apt-get installation?
In most cases installing kube-proxy on the node itself is not required, as a common pattern is running kube-proxy as a DaemonSet in your cluster.
In the regular apt-get packages you would normally find kubectl, kubeadm and kubelet. If you use kubeadm to create the cluster, it will automatically prepare kube-proxy as well (in the form of a container, like the rest of the Kubernetes control plane). Therefore, you don't need to install it separately.
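On a kubeadm-provisioned cluster you can confirm this yourself; the label shown is the one kubeadm applies to its kube-proxy pods:

    # kube-proxy runs as a DaemonSet in the kube-system namespace
    kubectl get daemonset kube-proxy -n kube-system
    kubectl get pods -n kube-system -l k8s-app=kube-proxy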
If you use the official Kubernetes tarball and manually install the cluster yourself, you will need to configure kube-proxy just like the rest of the components, but the binaries are included in the tarball. This documentation shows the essential options for configuring it: https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy. Another resource is Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md
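For the manual route, a minimal sketch of what starting the binary looks like (the kubeconfig path and cluster CIDR are illustrative; a real setup wraps this in whatever process supervisor you use):

    # Run kube-proxy directly on a worker node
    kube-proxy \
      --kubeconfig=/var/lib/kube-proxy/kubeconfig \
      --cluster-cidr=10.200.0.0/16 \
      --proxy-mode=iptables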

Enabling component manager in gcloud CLI

I made a fresh install of gcloud for Ubuntu as instructed here. I want to use the additional components offered by gcloud, like kubectl and docker.
So when I typed gcloud components install kubectl, I got an error saying that the component manager is disabled for this installation.
This is because you installed google-cloud-sdk with a package manager like apt-get or yum.
kubectl:
If you look here you can see how to install additional components. Basically sudo apt-get install kubectl.
If by docker you mean docker-credential-gcr, then I don't know if there's a way to install it using a package manager; I can't seem to find one. Perhaps you can try the GitHub repo. Mind you, you don't need it for commands like gcloud docker -- push gcr.io/your-project/your-image:version.
If you mean actual docker for building images and running them locally, that's standalone software which you need to install separately, instructions here.
Alternatively, you can uninstall google-cloud-sdk with apt-get and then reinstall it with the interactive installer, which does support gcloud components install.
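A sketch of both routes, assuming a Debian/Ubuntu machine with the cloud-sdk apt repository already configured:

    # Route 1: install components as apt packages
    sudo apt-get update
    sudo apt-get install kubectl

    # Route 2: switch to the interactive installer, which re-enables
    # the component manager
    sudo apt-get remove google-cloud-sdk
    curl https://sdk.cloud.google.com | bash
    exec -l $SHELL
    gcloud components install kubectl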

Can we install Kubernetes in a complete offline mode with kubeadm?

I need to install a Kubernetes cluster in complete offline mode. I can follow all the instructions at http://kubernetes.io/docs/getting-started-guides/scratch/ and install from binaries, but that seems like an involved setup. The installation using kubeadm is pretty easy, but I don't see any docs on whether I can install the cluster by downloading the .deb packages locally.
Any pointers to that direction are much appreciated.
I don't think that anyone has documented this yet. The biggest thing needed is to get the right images pre-loaded on every machine in the cluster. After that things should just work.
There was some discussion of this in this PR: https://github.com/kubernetes/kubernetes/pull/36759.
If I had the bandwidth I'd implement a kubeadm list-images so we could do docker save $(kubeadm list-images) | gzip > kube-images.tar.gz. You could manually construct that list by reading code and such.
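That subcommand has since landed as kubeadm config images list (the second answer below uses it), so the save step sketched above now works roughly like this:

    # On a machine with internet access: pull and archive the required images
    kubeadm config images pull
    docker save $(kubeadm config images list) | gzip > kube-images.tar.gz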
Can we install Kubernetes in a complete offline mode with kubeadm?
Yes, I've already set up several offline clusters (1.15.x) with Ansible and kubeadm. Mainly you need to prepare the following things on a USB drive and bring it to your offline environment:
.deb/.rpm files to install Ansible
.deb/.rpm files to install Docker
.deb/.rpm files to install kubeadm, kubectl and kubelet
Docker images for the Kubernetes cluster (you can get the list with kubeadm config images list)
Docker images for the Kubernetes addons (flannel/calico, dashboard, etc.)
Your Ansible playbooks
The installation steps are as follows:
Install Ansible with dpkg or rpm (manually)
Install Docker with dpkg or rpm (via Ansible tasks)
Install kubeadm, kubectl and kubelet with dpkg or rpm (via Ansible tasks)
docker load all the Docker images (via Ansible tasks)
Run kubeadm init and kubeadm join (via Ansible tasks)
There may be lots of details here. Feel free to leave your comments.
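For the last two steps, a minimal sketch of what runs on the offline nodes once everything is copied over (the CIDR, address, token and hash are illustrative placeholders):

    # On every node: load the pre-saved images
    docker load -i kube-images.tar.gz

    # On the control-plane node (10.244.0.0/16 is the flannel default pod CIDR)
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # On each worker, with the values printed by kubeadm init
    kubeadm join <control-plane-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>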