AWS EKS EFS-CSI Driver version and how to upgrade it - kubernetes

With lots of known issues[1][2] in the EFS-CSI driver, I'm planning to upgrade the driver on a running cluster.
I have read most of the AWS documentation but couldn't get any straightforward answers to the following questions:
How to view the current efs-csi driver version in an EKS cluster.
How to upgrade the efs-csi driver to a specific version in a running cluster.
[1] https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/616
[2] https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/673

If you already have the efs-csi driver installed and used Helm to install it, the upgrade is straightforward. According to the documentation:
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa
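To answer the first question, a simple way to see which driver version is currently installed, assuming the driver runs in kube-system under its default resource names, is to inspect the image tags of the controller deployment and the node daemonset:
# The image tag (e.g. ...aws-efs-csi-driver:v1.x.y) is the installed driver version
kubectl get deployment efs-csi-controller -n kube-system \
-o jsonpath='{.spec.template.spec.containers[*].image}'
kubectl get daemonset efs-csi-node -n kube-system \
-o jsonpath='{.spec.template.spec.containers[*].image}'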
Regards

Related

AKS: MongoError: not master

I'm using a MongoDB replica set in Azure Kubernetes Service. I have two pods running for MongoDB and have created a service to connect to both pods, which has been working perfectly fine. But now it gives this error while connecting to the secondary pod:
[amqp] Channel consume error: MongoError: not master
errmsg: 'not master',
code: 10107,
codeName: 'NotMaster'
Can you please help me in case I'm missing something?
Source of MongoDB: Bitnami MongoDB Helm chart
I think you could try to use the externalAccess.enabled=true parameter so you don't have to create the services manually. In combination with that, you could also use externalAccess.autoDiscovery.enabled=true:
$ helm install mongodb bitnami/mongodb \
--set architecture=replicaset \
--set externalAccess.enabled=true \
--set externalAccess.autoDiscovery.enabled=true \
--set rbac.create=true
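One more thing worth checking, as a sketch only (the host names, credentials and the replica set name rs0 below are Bitnami chart defaults assumed for illustration, not taken from your setup): a NotMaster error usually means the client is sending writes to a secondary member, so make sure the connection string lists all members together with the replicaSet option, which lets the driver discover and route writes to the primary:
# Hypothetical connection string; replace hosts, password and replica set name with your own
mongo "mongodb://root:<password>@mongodb-0.mongodb-headless:27017,mongodb-1.mongodb-headless:27017/?replicaSet=rs0"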
By the way, it would be good to see more details of your installation parameters and environment so that we can help better.

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears that --name is no longer needed, so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers: the command above now works, however I cannot connect to the database using SQL Server Management Studio from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
mymssql-mssql-linux-service NodePort 10.107.98.68 1433:32489/TCP 7s
3) Then try to connect to the database using SQL Server Management Studio 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Server Management Studio.
Check whether the stable repo is added:
helm repo list
If not, then add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
And then run the following to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
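If you want to confirm that the chart is now resolvable before installing, a quick check (Helm 3 syntax) is:
helm search repo stable/mssql-linux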
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm version 3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and Helm (I am referring to exposing the service via kubectl).
Ad. 1, 2: You have to read some docs to understand Kubernetes Services.
With the expose command and type NodePort you are exposing your SQL Server on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -o wide, so your database is available on <node-ip>:32489. This approach is very tricky; it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3: For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to SQL Server on localhost:1433.
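One more thing to verify is the sa password itself. The chart stores it in a Kubernetes Secret, so rather than guessing you can read it back and decode it. The secret name and key below are assumptions based on the chart's usual naming for a release called mymssql; check kubectl get secrets for the exact name:
# List the secrets created by the release, then decode the sa password
kubectl get secrets
kubectl get secret mymssql-mssql-linux-secret -o jsonpath='{.data.sapassword}' | base64 --decode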
In case the chart you want to use is not published to a hub, you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

Helm install Kong creating postgresql container and services in DB less mode

Helm is creating the postgresql-0 pod and the postgresql and postgresql-headless services even in DB-less mode. Below is my command.
helm install stable/kong --set ingressController.enabled=true --set postgresql.enabled=false --set env.database=off
When I use a YAML file it does not create these components, but with Helm it does. Please let me know if I am missing something.
You can set ingressController.enabled=true, postgresql.enabled=false and env.database=off in values.yaml instead and try again.
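For example, a minimal values file equivalent to the --set flags above might look like the sketch below (the file name values-dbless.yaml is only an example, and the database value must be quoted so YAML does not parse off as a boolean):
cat > values-dbless.yaml <<'EOF'
ingressController:
  enabled: true
postgresql:
  enabled: false
env:
  database: "off"
EOF
helm install stable/kong -f values-dbless.yaml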

Istio installation failed

I followed the instructions at https://github.com/IBM/cloud-native-starter/blob/master/documentation/IKSDeployment.md on my Mac; Kubernetes is running on IBM Cloud.
The command
$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
hangs/does not return.
I couldn't validate the Istio installation either:
$ istioctl verify-install
Error: unknown command "verify-install" for "istioctl"
Run 'istioctl --help' for usage.
We were able to solve the problem by following the prerequisite steps for the workshop: https://github.com/IBM/cloud-native-starter/blob/master/workshop/00-prerequisites.md#361-automated-creation-of-a-cluster-with-istio-for-the-workshop
Check out our managed Istio offering. To install Istio on your Kubernetes cluster in IBM Cloud, run one of the following:
Install Istio
ic ks cluster-addon-enable istio --cluster xxxx
Install Istio with extra (tracing and monitoring)
ic ks cluster-addon-enable istio-extras --cluster xxxx
Install Istio with Bookinfo
ic ks cluster-addon-enable istio-sample-bookinfo --cluster xxxx
In the above commands, xxxx is the name of your cluster.
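To confirm Istio is up after enabling the add-on, a simple check that does not depend on the istioctl version is to list the pods in the istio-system namespace:
kubectl get pods -n istio-system
# All istio-* pods should eventually reach Running status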

Unintended persistent storage in PostgreSQL with Helm

Short version: PostgreSQL deployed via Helm is persisting data between deployments unintentionally. How do I make sure data is cleared?
Long version: I'm currently deploying PostgreSQL via Helm this way, using it for a local development database for an application I'm building:
helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set service.type=LoadBalancer
When I'm done (or if I mess up the database so badly that I need to clear it), I uninstall it:
helm del --purge testpg
(which confirms removal, and kubectl get all confirms it is gone)
However, when I spin the database up again, I'm surprised to see that the data and schema are still there.
How is the data persisting and how do I make sure I have a clean database each time?
Other details:
My Kubernetes Cluster is running in Docker Desktop v2.0.0.3
Your cluster may have a default volume provisioner configured.
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior
So even if you have not configured a storage class, a volume will be provisioned and assigned.
You need to set helm value persistence.enabled to false.
The value is true by default:
https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
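The reason the data survives is that helm del --purge removes the chart's resources but not the PersistentVolumeClaim created through the StatefulSet's volumeClaimTemplates, so the next install simply reattaches the old volume. As a sketch (the PVC name below follows the chart's usual data-<release>-postgresql-0 pattern and is an assumption; check kubectl get pvc for the real name), you can either disable persistence or delete the leftover claim:
# Option A: run without persistent storage, so data disappears with the release
helm install stable/postgresql -n testpg --set persistence.enabled=false
# Option B: after helm del --purge, remove the leftover claim
kubectl get pvc
kubectl delete pvc data-testpg-postgresql-0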