Installing Keycloak with an existing PostgreSQL that was deployed through the Bitnami postgresql Helm chart

I have a PostgreSQL pod installed through a Helm chart (I used the Bitnami one).
Now I want to connect Keycloak, installed through its Helm chart, to this existing PostgreSQL. Can anyone tell me how to connect them in a detailed manner? I tried setting the external database host, database name, database password, and port, but it is not working.
Am I doing this correctly?
Command I used:
helm install keycloak-test-db --set postgresql.enabled=false --set externalDatabase.host=localhost --set externalDatabase.user=postgres --set externalDatabase.password=9CD1SRNlrI --set externalDatabase.database=postgres --set externalDatabase.port=5432 bitnami/keycloak
I'm setting postgresql.enabled=false because I don't want to install the PostgreSQL chart again.
I used the external database method; is this correct, or do I have to enable something else?
I tried connecting the existing PostgreSQL pod to the newly installed Keycloak through the Helm chart, but it is not working.
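A likely culprit is externalDatabase.host=localhost: inside the Keycloak pod, localhost refers to the Keycloak pod itself, not to the PostgreSQL pod. A sketch of the same command pointing at the in-cluster service instead, assuming the existing PostgreSQL release is named "postgres" in the "default" namespace (check the real service name with kubectl get svc):

```shell
# Sketch: connect Keycloak to an existing in-cluster PostgreSQL service.
# "postgres-postgresql" is an assumed service name; substitute the one
# created by your PostgreSQL release.
helm install keycloak-test-db bitnami/keycloak \
  --set postgresql.enabled=false \
  --set externalDatabase.host=postgres-postgresql.default.svc.cluster.local \
  --set externalDatabase.user=postgres \
  --set externalDatabase.password=9CD1SRNlrI \
  --set externalDatabase.database=postgres \
  --set externalDatabase.port=5432
```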

Related

PostgreSQL bitnami Helm Chart does not update the user password

When I deploy the Bitnami PostgreSQL Helm Chart (chart version 10.9.4, appVersion: 11.13.0), the passwords for any user are not updated or changed after the first installation.
Let's say that for the first installation I use this command:
helm install postgresql \
  --set postgresqlUsername=rpuser \
  --set postgresqlPassword=rppass \
  --set postgresqlDatabase=reportportal \
  --set postgresqlPostgresPassword=password2 \
  -f ./reportportal/postgresql/values.yaml \
  ./charts/postgresql
Deleting the Helm release also deletes the StatefulSet. After that, if I try to install PostgreSQL the same way but with different password values, these won't be updated and will keep the previous ones from the first installation.
Is there something I'm missing regarding where the users' passwords are stored? Do I have to remove the PV and PVC, or do they have nothing to do with this? (I know I can change the passwords using psql commands, but I'm failing to understand why this happens.)
The database password and all of the other database data is stored in the Kubernetes PersistentVolume. Helm won't delete the PersistentVolumeClaim by default, so even if you helm uninstall && helm install the chart, it will still use the old database data and the old login credentials.
helm uninstall doesn't have an option to delete the PVC. This matches the standard behavior of a Kubernetes StatefulSet (there is an alpha option to automatically delete the PVC but it needs to be enabled at the cluster level and also requires modifying the StatefulSet object). When you uninstall the chart, you also need to manually delete the PVC:
helm uninstall postgresql
kubectl delete pvc postgresql-primary-data-0
helm install postgresql ... ./charts/postgresql
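If you don't know the exact PVC name, a sketch that deletes all PVCs belonging to the release by label instead, assuming the chart applies the standard app.kubernetes.io/instance label (Bitnami charts do):

```shell
# Sketch: uninstall the release, then delete every PVC it created so the
# next install starts with fresh credentials. The label selector assumes
# the chart labels its PVCs with the release name.
helm uninstall postgresql
kubectl delete pvc -l app.kubernetes.io/instance=postgresql
```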

AKS: MongoError: not master

I'm using a MongoDB replica set in Azure Kubernetes Service. I have two pods running for MongoDB, and I have created a service to connect both pods, which was working perfectly fine. But now it gives an error while connecting to the secondary pod:
[amqp] Channel consume error: MongoError: not master
errmsg: 'not master',
code: 10107,
codeName: 'NotMaster'
Can you please help in case I'm missing something?
Source of MongoDB: Bitnami MongoDB Helm
I think you could try the externalAccess.enabled=true parameter so you don't have to create the services manually. In combination with that, you could also use externalAccess.autoDiscovery.enabled=true.
$ helm install mongodb bitnami/mongodb \
--set architecture=replicaset \
--set externalAccess.enabled=true \
--set externalAccess.autoDiscovery.enabled=true \
--set rbac.create=true
By the way, it would be good to see more details of your installation parameters and environment so that we can help better.
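The "not master" error itself usually means a client is sending writes to a secondary. Independent of the external-access settings, the driver needs a replica-set-aware connection string so it can discover and target the primary. A sketch with placeholder names:

```shell
# Hypothetical connection string: list the members and pass replicaSet so
# the driver routes writes to the current primary. Host names and the
# replica set name are placeholders; use your actual headless-service DNS.
export MONGODB_URI="mongodb://mongodb-0.mongodb-headless:27017,mongodb-1.mongodb-headless:27017/mydb?replicaSet=rs0"
```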

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears the --name flag is no longer needed, so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers; the command above now works, however I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
mymssql-mssql-linux-service NodePort 10.107.98.68 1433:32489/TCP 7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check whether the stable repo is added:
helm repo list
If not, add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
Then run the command below to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm v3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and Helm - I am referring to exposing the service via kubectl.
Ad. 1, 2. You have to read some docs to understand Kubernetes Services.
With the expose command and type NodePort you are exposing your SQL Server on port 32489 of the Kubernetes nodes. You can check the node IPs with kubectl get nodes -owide, so your database is available on <NodeIP>:32489. This approach is very tricky: it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3. For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to SQL Server on localhost:1433.
In case the chart you want to use is not published to a repository, you can install the package directly using the path to the unpacked chart directory.
For example (works with Helm v3.2.4, where --name is no longer used):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

Helm install Kong creating postgresql container and services in DB less mode

Helm is creating the postgresql-0 container and the postgresql and postgresql-headless services even in DB-less mode. Below is my command.
helm install stable/kong --set ingressController.enabled=true --set postgresql.enabled=false --set env.database=off
When I use a plain YAML manifest it does not create these components, but with Helm it does. Please let me know if I am missing something.
You can set ingressController.enabled=true, postgresql.enabled=false, and env.database=off in values.yaml and try again.
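Those values can be put in a file; a minimal sketch, with key paths assumed to match the stable/kong chart (note that "off" must be quoted in YAML, or it is parsed as the boolean false):

```yaml
# values.yaml - sketch, assuming the stable/kong chart layout
ingressController:
  enabled: true
postgresql:
  enabled: false
env:
  database: "off"
```

Then install with: helm install stable/kong -f values.yaml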

Unintended persistent storage in PostgreSQL with Helm

Short version: PostgreSQL deployed via Helm is persisting data between deployments unintentionally. How do I make sure data is cleared?
Long version: I'm currently deploying PostgreSQL via Helm this way, using it for a local development database for an application I'm building:
helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set service.type=LoadBalancer
When I'm done (or if I mess up the database so bad and need to clear it), I uninstall it:
helm del --purge testpg
(which confirms removal, and kubectl get all confirms everything is gone)
However, when I spin the database up again, I'm surprised to see that the data and schema are still there.
How is the data persisting and how do I make sure I have a clean database each time?
Other details:
My Kubernetes Cluster is running in Docker Desktop v2.0.0.3
Your cluster may have a default volume provisioner configured.
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior
So even if you have no storage class configured, a volume will be assigned.
You need to set the Helm value persistence.enabled to false.
The value is true by default:
https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
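Putting that together, a sketch of the install command from the question with persistence disabled (the only addition is the last flag):

```shell
# Sketch: same install as in the question, but with persistence disabled,
# so the data lives only inside the pod and is gone after uninstall.
helm install stable/postgresql -n testpg \
  --set global.postgresql.postgresqlDatabase=testpg \
  --set global.postgresql.postgresqlUsername=testpg \
  --set global.postgresql.postgresqlPassword=testpg \
  --set global.postgresql.servicePort=5432 \
  --set service.type=LoadBalancer \
  --set persistence.enabled=false
```

Alternatively, keep persistence enabled and delete the PVC manually after helm del --purge.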