When I deploy the Bitnami PostgreSQL Helm Chart (chart version 10.9.4, appVersion: 11.13.0), the passwords for any user are not updated or changed after the first installation.
Let's say that for the first installation I use this command:
helm install postgresql \
  --set postgresqlUsername=rpuser \
  --set postgresqlPassword=rppass \
  --set postgresqlDatabase=reportportal \
  --set postgresqlPostgresPassword=password2 \
  -f ./reportportal/postgresql/values.yaml \
  ./charts/postgresql
Deleting the Helm release also deletes the StatefulSet. After that, if I try to install PostgreSQL the same way but with different password values, they won't be updated and will keep the previous values from the first installation.
Is there something I'm missing regarding where the users' passwords are stored? Do I have to remove the PV and PVC, or do they have nothing to do with this? (I know I can change the passwords using psql commands, but I'm failing to understand why this happens)
The database password and all of the other database data is stored in the Kubernetes PersistentVolume. Helm won't delete the PersistentVolumeClaim by default, so even if you helm uninstall && helm install the chart, it will still use the old database data and the old login credentials.
helm uninstall doesn't have an option to delete the PVC. This matches the standard behavior of a Kubernetes StatefulSet (there is an alpha option to automatically delete the PVC, but it needs to be enabled at the cluster level and also requires modifying the StatefulSet object; see the sketch after the commands below). When you uninstall the chart, you also need to delete the PVC manually:
helm uninstall postgresql
kubectl delete pvc postgresql-primary-data-0
helm install postgresql ... ./charts/postgresql
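For completeness, the alpha option mentioned above is the StatefulSetAutoDeletePVC feature gate (alpha as of Kubernetes 1.23). With the gate enabled on the cluster, the relevant part of the StatefulSet spec would look roughly like this; this is only a sketch, since the chart owns the StatefulSet and the object name below is hypothetical:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-postgresql   # hypothetical name; check kubectl get statefulsets
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove the PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep the PVCs when the StatefulSet is scaled down
  # ...rest of the StatefulSet spec managed by the chart...
For most cases, though, deleting the PVC by hand as shown above is the simpler path.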
I have a postgres pod installed through a Helm chart (I used the Bitnami one).
Now I want to connect this postgres to Keycloak through a Helm chart. Can anyone tell me how to connect them in a detailed manner? I tried setting the external database host, database name, database password and port, but it is not working.
Am I doing this correctly?
Command I used:
helm install keycloak-test-db \
  --set postgresql.enabled=false \
  --set externalDatabase.host=localhost \
  --set externalDatabase.user=postgres \
  --set externalDatabase.password=9CD1SRNlrI \
  --set externalDatabase.database=postgres \
  --set externalDatabase.port=5432 \
  bitnami/keycloak
I'm setting postgresql.enabled=false because I don't want to install the postgres chart again.
I used the external database method; is this correct, or do I have to enable something else?
I tried connecting the existing postgres pod to the newly installed Keycloak through the Helm chart, but it is not working.
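The same settings can also be kept in a values file. A minimal sketch of what that might look like - note that the host below is an assumption: localhost inside the Keycloak pod refers to the pod itself, so the in-cluster Service name of the existing PostgreSQL release (check kubectl get svc) is what Keycloak most likely needs:
postgresql:
  enabled: false
externalDatabase:
  host: postgresql        # assumed Service name of the existing release, not localhost
  port: 5432
  user: postgres
  password: 9CD1SRNlrI
  database: postgres
and then install with (the file name keycloak-values.yaml is hypothetical):
helm install keycloak-test-db -f keycloak-values.yaml bitnami/keycloak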
Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears the --name flag is no longer needed, so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers: the command above now works; however, I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
mymssql-mssql-linux-service   NodePort   10.107.98.68   <none>        1433:32489/TCP   7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check if the stable repo is added or not:
helm repo list
If it is not, then add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
And then run the command below to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm version 3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later than mixing kubectl and helm - I am referring to exposing the service via kubectl.
Ad. 1,2. You have to read some docs to understand Kubernetes services.
With the expose command and type NodePort you are exposing your MSSQL server on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -owide, so your database is available on <node-ip>:32489. This approach is very tricky; it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3 For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to MSSQL on localhost:1433.
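If the sa login still fails, it may also be worth double-checking the password the chart actually generated. A sketch, assuming the chart stores it in a Secret named mymssql-mssql-linux-secret under the key sapassword (both names are assumptions derived from the release name - check kubectl get secrets for the real ones):
kubectl get secret mymssql-mssql-linux-secret -o jsonpath="{.data.sapassword}" | base64 --decode
With the port-forward above still running, SQL Studio Manager should then accept Server Name localhost,1433, Login sa and the decoded password.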
In case the chart you want to use is not published to a hub, you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Helm is creating a postgresql-0 container and the postgresql and postgresql-headless services even in DB-less mode. Below is my command.
helm install stable/kong --set ingressController.enabled=true --set postgresql.enabled=false --set env.database=off
When I use a YAML file it is not creating these components, but with Helm it is. Please let me know if I am missing something.
You can update values.yaml with ingressController.enabled=true, postgresql.enabled=false and env.database=off and try again, roughly as sketched below.
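A minimal sketch of those overrides as a values file - the structure simply mirrors the --set paths from the command above, and off is quoted so a YAML parser does not turn it into a boolean (the file name kong-values.yaml is hypothetical):
ingressController:
  enabled: true
postgresql:
  enabled: false
env:
  database: "off"
helm install stable/kong -f kong-values.yaml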
I have a jhub release deployed successfully in my cluster. I then changed the config to pull another Docker image, as stated in the documentation.
This time, while running the same old command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.2 \
--values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub, then tried the same command again, in vain: the same error.
I read a few GH issues and found that either the YAML file was invalid or that the --force flag worked. However, in my case, neither of these applies.
I expect to get this release deployed and also to learn how to edit current releases.
Note: As you would find in the aforementioned documentation, there is a pvc created.
I had the same issue when I was trying to update my config.yaml file in GKE. What worked for me was to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] helm version to verify that you get an output similar to the one in the documentation
Add the repo:
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
Run the upgrade:
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
After changes in my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
Works with kubectl 1.10.0 and helm 2.3.0. I guess this upgrades Tiller to a compatible Helm version.
Don't forget to set the KUBECONFIG variable before using this command - this step itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
In my case the cluster.server field had been changed in the config, but I left the context.name and current-context fields the same as in the previous config; I'm not sure if that matters. I faced the same issue on the first try to deploy a new release with helm, but after the first successful deploy it's enough to change the KUBECONFIG variable.
I hope it helps.
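A sketch of that sequence, assuming a hypothetical kubeconfig path and that Tiller lives in kube-system (kubectl config current-context is just a sanity check that you are pointing at the intended cluster):
export KUBECONFIG=./new-cluster.kubeconfig   # hypothetical path
kubectl config current-context               # verify the active context
helm init --tiller-namespace=kube-system --upgrade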
Added the following to my gCloud setup. I run it every time I update my config.yaml file. Make sure to be connected to the correct Kubernetes cluster before running it.
update.sh
# Installs Helm.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# Make Helm aware of the JupyterHub Helm chart repo.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# Re-installs the chart configured by your config.yaml.
RELEASE=jhub
JUPYTERHUB_VERSION=0.9.0
helm upgrade $RELEASE jupyterhub/jupyterhub \
--version=${JUPYTERHUB_VERSION} \
--values config.yaml
Short version: PostgreSQL deployed via Helm is persisting data between deployments unintentionally. How do I make sure data is cleared?
Long version: I'm currently deploying PostgreSQL via Helm this way, using it for a local development database for an application I'm building:
helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set service.type=LoadBalancer
When I'm done (or if I mess up the database so badly that I need to clear it), I uninstall it:
helm del --purge testpg
(which confirms removal, and kubectl get all confirms that everything is gone)
However, when I spin the database up again, I'm surprised to see that the data and schema are still there.
How is the data persisting and how do I make sure I have a clean database each time?
Other details:
My Kubernetes Cluster is running in Docker Desktop v2.0.0.3
Your cluster may have a default volume provisioner configured.
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior
So even if you have no storage class configured, a volume will be assigned.
You need to set the Helm value persistence.enabled to false (a sketch follows below).
The value is true by default:
https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
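A minimal sketch of the install command with persistence disabled - it reuses the flags from the question and only adds the last --set; with persistence off the chart typically falls back to an emptyDir, so data disappears together with the pod:
helm install stable/postgresql -n testpg \
  --set global.postgresql.postgresqlDatabase=testpg \
  --set global.postgresql.postgresqlUsername=testpg \
  --set global.postgresql.postgresqlPassword=testpg \
  --set global.postgresql.servicePort=5432 \
  --set service.type=LoadBalancer \
  --set persistence.enabled=false
Alternatively, keep persistence enabled and delete the PVC between installs (kubectl delete pvc <pvc-name> - check kubectl get pvc for the exact name).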