Pip installing a package inside of a Kubernetes cluster

I have installed Apache Superset from its Helm Chart in a Google Cloud Kubernetes cluster. I need to pip install a package that is not installed when installing the Helm Chart. If I connect to the Kubernetes bash shell like this:
kubectl exec -it superset-4934njn23-nsnjd /bin/bash
Inside, there's no python available, no pip, and apt-get doesn't find most of the packages.
I understand that the packages installed in the container are listed in the Dockerfile. I suppose I need to fork the Docker image, modify the Dockerfile, push the image to a container registry, and make a new Helm Chart that runs this container.
But all this seems too complicated for a simple pip install. Is there a simpler way to do this?
Links:
Docker- https://hub.docker.com/r/amancevice/superset/
Helm Chart - https://github.com/helm/charts/tree/master/stable/superset

As @Murli mentioned, you should use pip3. However, one thing you should remember is that Helm is for managing k8s, i.e. what goes into the cluster should be traceable. So I recommend the following:
$ helm inspect values stable/superset > values.yaml
Modify the values.yaml. In my case, I added jenkins-job-builder to the pip3 line:
initFile: |-
  pip3 install jenkins-job-builder
  /usr/local/bin/superset-init --username admin --firstname admin --lastname user --email admin@fab.org --password admin
  superset runserver
and just pass the values.yaml to helm install.
$ helm install --values=values.yaml stable/superset
That's it.
$ kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 -- which jenkins-jobs
/usr/local/bin/jenkins-jobs
$
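If Superset is already installed from the chart (as in the question), the same values.yaml can also be applied to the existing release with helm upgrade instead of a fresh install; the release name below is illustrative (look it up with helm list):
$ helm list
$ helm upgrade my-superset --values=values.yaml stable/superset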

The Dockerfile seems to be installing the python3 package.
Try python3 or pip3 instead of python/pip.

Build the container image yourself: a little more dev work, and many fewer alerts from PagerDuty.
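If you do go the custom-image route, it usually amounts to a couple of lines on top of the existing image rather than a full fork. A minimal sketch, assuming the amancevice/superset image from the question as the base and reusing jenkins-job-builder as the example package (tag and package are illustrative):
FROM amancevice/superset:latest
# If the base image runs as a non-root user, you may need a USER root line here first (check the upstream Dockerfile)
RUN pip3 install jenkins-job-builder
Build and push the image to your registry, then point the chart at it, e.g. via --set image.repository=... --set image.tag=... (the exact value keys depend on the chart's values.yaml).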

Related

Entando 6 Installation Issue

I have been trying to install Entando 6 on my Mac following the instructions on http://docs.entando.com, but when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to successfully go through with the installation?
(screenshot: deployment failure)
I am also new to Kubernetes and so far have not been able to access any logs to understand what the root cause of the failure is. Help on that is more than welcome as well.
Thanks.
If you're in a local development environment the best bet would be to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider, try the updated instructions linked at the end of this answer.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch VM
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell: multipass shell ubuntu-lts
Install k3s: curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
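On the logs question: standard kubectl commands are enough to see why a deployer pod such as quickstart-kc-deployer is failing. A quick sketch (the pod name is whatever kubectl get pods prints; sudo matches the k3s setup above):
sudo kubectl get pods -n entando
sudo kubectl describe pod <pod-name> -n entando    # the Events section at the bottom usually shows the root cause
sudo kubectl logs <pod-name> -n entando            # container logs; add -f to follow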

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears the --name flag is no longer needed, so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers; the command above now works, however I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
mymssql-mssql-linux-service   NodePort   10.107.98.68   <none>        1433:32489/TCP   7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check if the stable repo is added or not
helm repo list
If not then add
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
And then run the following to install mssql-linux
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm in version 3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
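Once the repo is added and updated, you can confirm that Helm 3 can resolve the chart before installing:
helm search repo mssql-linux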
Update:
First of all, if you are using Helm, keep everything in the Helm values; it makes things cleaner and easier to find later than mixing kubectl and Helm (I am referring to exposing the service via kubectl).
Ad. 1,2. You have to read some docs to understand Kubernetes services.
With the expose command and type NodePort you are exposing your MSSQL server on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -owide, so your database is available on <node IP>:32489. This approach is very tricky; it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
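For example, instead of kubectl expose, the same NodePort service can be requested through the chart itself (release and chart names as in the question):
helm upgrade mymssql stable/mssql-linux --reuse-values --set service.type=NodePort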
Ad. 3 For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to the MSSQL server on localhost:1433.
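On the password: the chart generates and stores the sa password in a Kubernetes secret, so the literal b64enc quote from the template is not the password itself. A hedged sketch, assuming the secret follows the usual <release>-mssql-linux-secret / sapassword naming (run kubectl get secrets first to confirm the actual name and key):
kubectl get secrets
kubectl get secret mymssql-mssql-linux-secret -o jsonpath='{.data.sapassword}' | base64 --decode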
In case the chart you want to use is not published to a repository, you can install it directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

How to backup Helm3 nginx controller configs and update the running LoadBalancer service?

Very new to kubernetes. I've been getting confused by documentation and example differences between Helm2 and 3.
I installed the stable/nginx-ingress chart via helm install app-name stable/nginx-ingress.
1st question:
I need to update the externalTrafficPolicy to Local. I learned later I could have set that during the install process via adding --set controller.service.externalTrafficPolicy=Local to the helm command.
How can I update the LoadBalancer service with the new setting without removing the ingress controller and reinstalling?
2nd question:
Helm3 just downloaded and set up the ingress controller and didn't save anything locally. Is there a way to backup all my k8s cluster configs (other than the ones I've created manually)?
To upgrade and dump the YAML deployed (for a backup of the ingress release)
helm upgrade <your-release-name> stable/nginx-ingress \
--reuse-values \
--set controller.service.externalTrafficPolicy=Local \
--output yaml
For a public chart you may want to set the --version option to the chart version you already have installed, in case you don't want any updates from newer chart versions to be applied along with the setting.
For complete dumps, have a look through this GitHub issue. All the options there are a bit dodgy though, with edge cases. I would recommend having everything re-deployable from something like git, all the way from the cluster to the apps. Anyone who makes edits by hand can then be shot (well... at least have clusters regularly redeployed on them :)
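For backing up just this Helm release rather than a full cluster dump, Helm 3 itself can reproduce what was deployed (release name as installed in the question):
helm get values app-name --all > nginx-ingress-values.yaml      # effective values of the release
helm get manifest app-name > nginx-ingress-manifest.yaml        # rendered Kubernetes YAML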
Is there a way to backup all my k8s cluster configs
kubectl cluster-info dump shows some info about the k8s cluster.
Configs and manifests (yaml files) of k8s itself will be at /etc/kubernetes/ on the master node.
I've been able to dump manifests of all resources in all namespaces in k8s using the following bash script, please edit as needed:
#!/usr/bin/env bash
# Dump manifests of every resource type that has at least one object in any namespace.
while read -r line
do
  output=$(kubectl get "$line" --all-namespaces -o yaml 2>/dev/null | grep '^items:')
  if ! grep -q "\[\]" <<< "$output"; then
    echo -e "\n======== $line manifests ========\n"
    kubectl get "$line" --all-namespaces -o yaml
  fi
done < <(kubectl api-resources | awk '{print $1}' | grep -v '^NAME')
Above bash script was tested with:
k8s v1.16.3
Ubuntu Bionic 18.04.3 OS
bash version 4.4.20(1)-release (x86_64-pc-linux-gnu)
I suggest not using the dump/manifests of an existing k8s cluster to create a new k8s cluster; just keep them as a backup, and use an installer like kubeadm to re-install k8s.
I've been getting confused by documentation and example differences between Helm2 and 3.
If you're interested, check the helm-2to3 tool - it migrates configs and data from Helm 2 to Helm 3 using commands like helm 2to3 move config.
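helm-2to3 is distributed as a Helm plugin; the usual installation is via helm plugin install pointed at the project's GitHub repository:
helm plugin install https://github.com/helm/helm-2to3.git
helm 2to3 move config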

Create a deployment from a pod in kubernetes

For a use case I need to create deployments from a pod when a script is being executed from inside the pod.
I am using google container engine for my cluster.
How do I configure the container inside the pod to be able to run commands like kubectl create -f deployment.yaml?
P.S A bit clueless about it at the moment.
Your container is going to need to have kubectl available. There are some container images available for this; personally, I can't vouch for any of them.
Personally I'd probably build my own and download the latest kubectl. A Dockerfile like this is probably a good starting point
FROM alpine:latest
RUN apk --no-cache add curl
RUN curl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl
This will build you a container image with kubectl, so you can then run all the kubectl commands you want.
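Beyond having the kubectl binary in the image, the pod's service account also needs RBAC permission to create deployments; kubectl picks up the mounted service-account token automatically when run in-cluster. A minimal sketch (namespace, names and the use of the default service account are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-creator
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-creator-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: deployment-creator
  apiGroup: rbac.authorization.k8s.io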

Local Kubernetes on CentOS

I am trying to install Kubernetes locally on my CentOS. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, yet I don't see the server started on port 8080. Is there a way to know what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm (see the official kubeadm setup documentation for the detailed steps). With this you will get the latest released Kubernetes.
If you really want to use the script to bring up the cluster, I did the following:
Install the required packages
yum install -y git docker etcd
Start docker process
systemctl enable --now docker
Install golang
Install the latest Go version, because the default CentOS golang package is old and Kubernetes needs at least go1.7 to compile
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In a new terminal
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes
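A quick sanity check that the cluster actually schedules workloads (nginx is only an example image; on older kubectl versions kubectl run creates a deployment rather than a bare pod):
kubectl run test-nginx --image=nginx
kubectl get pods -w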