Failing to connect to the postgres statefulset database installed from the stable/postgresql helm chart - kubernetes

Have you guys played with the stable/postgresql helm chart?
I successfully installed the release using this command within a GKE context:
$helm install --name pg-set -f ./values-production.yaml stable/postgresql --set postgresqlDatabase=nixmind-db
It went fine, but I can't connect to the database afterwards for checks/tests, because the connection fails due to the password.
I tried all of the following ways but got the same error:
$export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pg-set-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$kubectl run pg-set-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pg-set-postgresql -U postgres -d nixmind-db
psql: FATAL: password authentication failed for user "postgres"
pod "pg-set-postgresql-client" deleted
pod default/pg-set-postgresql-client terminated (Error)
$export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pg-set-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$kubectl run pg-set-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="POSTGRESS_PASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pg-set-postgresql -U postgres -d nixmind-db
psql: FATAL: password authentication failed for user "postgres"
pod "pg-set-postgresql-client" deleted
pod default/pg-set-postgresql-client terminated (Error)
$export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pg-set-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$kubectl run pg-set-postgresql-client --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="POSTGRESQL_PASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pg-set-postgresql -U postgres -d nixmind-db
If you don't see a command prompt, try pressing enter.
psql: fe_sendauth: no password supplied
pod default/pg-set-postgresql-client terminated (Error)
$export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pg-set-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$kubectl run pg-set-postgresql-client --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="POSTGRESQL_PASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pg-set-postgresql -U postgres -d nixmind-db
If you don't see a command prompt, try pressing enter.
psql: fe_sendauth: no password supplied
pod default/pg-set-postgresql-client terminated (Error)
I then tried connecting locally from a client pod and running the command:
$kubectl run pg-set-postgresql-client --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="POSTGRES_PASSWORD=$POSTGRES_PASSWORD" --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
I have no name!@pg-set-postgresql-client:/$ env
.............................................................................
POSTGRESQL_PASSWORD=m7RWxRvpSk
POSTGRESQL_USERNAME=postgres
POSTGRESQL_NUM_SYNCHRONOUS_REPLICAS=0
POSTGRESQL_INITDB_ARGS=
KUBERNETES_SERVICE_HOST=10.63.240.1
NAMI_VERSION=1.0.0-1
.............................................................................
I have no name!@pg-set-postgresql-client:/$ psql --host pg-set-postgresql -U postgres -d nixmind-db
Password for user postgres:
psql: FATAL: password authentication failed for user "postgres"
This seems to be a problem that has been reported several times, but without a real solution...
How do you guys test this release after installation?
What am I missing? Is the password retrieved from the cluster secrets the one configured on the master database?
Maybe the command psql --host pg-set-postgresql -U postgres -d nixmind-db actually hits a slave and not the master?
Why doesn't the psql client read the password from the environment variables as expected?

It's all OK.
It was a character issue with my terminal; the password retrieved from the cluster secret is the correct one, and the database is accessible.
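As a side note on the last question above: psql itself only reads PGPASSWORD (or a ~/.pgpass file); the POSTGRESQL_* variables are consumed by the Bitnami image's entrypoint, not by the client, which is why the POSTGRESQL_PASSWORD attempts ended in "fe_sendauth: no password supplied". To rule out a terminal/encoding issue like the one above, you can dump the secret's exact bytes first (a minimal sketch, reusing the release and namespace from the question):
$kubectl get secret --namespace default pg-set-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode | od -c
This prints every byte of the password, so stray carriage returns or dropped characters show up before the value is pasted into psql.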

Related

Error installing TimescaleDB with K8S / Helm : MountVolume.SetUp failed for volume "certificate" : secret "timescaledb-certificate" not found

I just tried to install TimescaleDB Single with Helm in minikube on Ubuntu 20.04.
After installing via:
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
I got the message:
➜ ~ helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
NAME: timescaledb
LAST DEPLOYED: Fri Aug 7 17:17:59 2020
NAMESPACE: espace-client-v2
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
timescaledb.espace-client-v2.svc.cluster.local
To get your password for superuser run:
# superuser password
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
# admin password
PGPASSWORD_ADMIN=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_admin_PASSWORD}" | base64 --decode)
To connect to your database, chose one of these options:
1. Run a postgres pod and connect using the psql cli:
# login as superuser
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_POSTGRES" \
--command -- psql -U postgres \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
# login as admin
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_ADMIN" \
--command -- psql -U admin \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
2. Directly execute a psql session on the master node
MASTERPOD="$(kubectl get pod -o name --namespace espace-client-v2 -l release=timescaledb,role=master)"
kubectl exec -i --tty --namespace espace-client-v2 ${MASTERPOD} -- psql -U postgres
It seemed to have installed well.
But then, when executing:
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
Error from server (NotFound): secrets "timescaledb-credentials" not found
After that, I realized the pod had not even been created, and it gives me the following errors:
MountVolume.SetUp failed for volume "certificate" : secret "timescaledb-certificate" not found
Unable to attach or mount volumes: unmounted volumes=[certificate], unattached volumes=[storage-volume wal-volume patroni-config timescaledb-scripts certificate socket-directory timescaledb-token-svqqf]: timed out waiting for the condition
What should I do?
I got it working. The page https://github.com/timescale/timescaledb-kubernetes doesn't give much detail about the installation process, but you can go here:
https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
I had to use kustomize to generate content:
./generate_kustomization.sh my-release
and then it generated several files:
credentials.conf kustomization.yaml pgbackrest.conf timescaledbMap.yaml tls.crt tls.key
then I did:
kubectl kustomize ./
which generated a k8s yaml file, which I saved with the name timescaledbMap.yaml.
Finally, I did:
kubectl apply -f timescaledbMap.yaml
Then it created all the necessary secrets, and I could install the chart. Hope it helps others.
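For anyone following the same route, the whole sequence looks roughly like this (a sketch, assuming the timescaledb-kubernetes repo is cloned locally; the exact location of generate_kustomization.sh inside the repo may differ, and the release name passed to the script has to match the one later given to helm install):
git clone https://github.com/timescale/timescaledb-kubernetes
cd timescaledb-kubernetes/charts/timescaledb-single
./generate_kustomization.sh timescaledb
kubectl kustomize ./ > timescaledb-secrets.yaml
kubectl apply -f timescaledb-secrets.yaml --namespace espace-client-v2
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
That way the timescaledb-credentials and timescaledb-certificate secrets should already exist when the pods try to mount them.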

Restart=Never causes the MongoDB pod to terminate

I am trying to follow the instructions here: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
1) helm install mongorelease --set mongodbRootPassword=secretpassword,mongodbUsername=my-user,mongodbPassword=my-password,mongodbDatabase=my-database bitnami/mongodb
which says:
To connect to your database run the following command:
kubectl run --namespace default mongorelease-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
I run the command above (replacing $MONGODB_ROOT_PASSWORD with my password) and I see this error:
error: invalid restart policy: 'Never'
See 'kubectl run -h' for help and examples
I remove the single quotes around Never and see this:
MongoDB shell version v4.2.5
connecting to: mongodb://mongorelease-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
2020-04-11T10:04:52.187+0000 E QUERY [js] Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2020-04-11T10:04:52.189+0000 F - [main] exception: connect failed
2020-04-11T10:04:52.189+0000 E - [main] exiting with code 1
pod "mongorelease-mongodb-client" deleted
pod default/mongorelease-mongodb-client terminated (Error)
I then remove --restart=Never from the command and run it again. It then works as expected and I can interact with MongoDB, however I am presented with this warning:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
What is the command I should be using?
--restart=Never creates a pod. So you can instead run this command with --generator=run-pod/v1 to create a pod. This avoids the use of --restart=Never, and the deprecation warning will not appear either.
kubectl run --rm --grace-period=1 --force=true --generator=run-pod/v1 --namespace default mongorelease-mongodb-client --tty -i --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
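As an additional note, on newer kubectl releases the --generator flag is itself deprecated and kubectl run only creates pods, so --restart=Never is the form that keeps working; depending on your shell, dropping the quotes around Never is enough. Exporting the root password from the chart's secret first also avoids substitution mistakes. A sketch, assuming the release name from the question and the Bitnami chart's usual secret name and key (both may differ):
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongorelease-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
kubectl run --namespace default mongorelease-mongodb-client --rm --tty -i --restart=Never --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD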

How to connect to OpenLDAP created by the official helm chart?

Using Helm 3, I installed OpenLDAP:
helm install openldap stable/openldap
Got this message:
NAME: openldap
LAST DEPLOYED: Sun Apr 12 13:54:45 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
OpenLDAP has been installed. You can access the server from within the k8s cluster using:
openldap.default.svc.cluster.local:389
You can access the LDAP adminPassword and configPassword using:
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode; echo
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_CONFIG_PASSWORD}" | base64 --decode; echo
You can access the LDAP service, from within the cluster (or with kubectl port-forward) with a command like (replace password and domain):
ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
Test server health using Helm test:
helm test openldap
You can also consider installing the helm chart for phpldapadmin to manage this instance of OpenLDAP, or install Apache Directory Studio, and connect using kubectl port-forward.
However, I can't use this command to search content on the LDAP server in the k8s cluster:
export LDAP_ADMIN_PASSWORD=[REAL_PASSWORD_GET_ABOVE]
ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
I got this error:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
I also logged in to the pod to run:
kubectl exec -it openldap -- /bin/bash
# export LDAP_ADMIN_PASSWORD=[REAL_PASSWORD_GET_ABOVE]
# ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
I got the same error.
As it's stated in the notes:
NOTES:
OpenLDAP has been installed. You can access the server from within the k8s cluster using:
openldap.default.svc.cluster.local:389
You can access the LDAP adminPassword and configPassword using:
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode; echo
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_CONFIG_PASSWORD}" | base64 --decode; echo
You can access the LDAP service, from within the cluster (or with kubectl port-forward) with a command like (replace password and domain):
ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
Test server health using Helm test:
helm test openldap
You can also consider installing the helm chart for phpldapadmin to manage this instance of OpenLDAP, or install Apache Directory Studio, and connect using kubectl port-forward.
You can do:
$ kubectl port-forward services/openldap 3389:389
Forwarding from 127.0.0.1:3389 -> 389
Forwarding from [::1]:3389 -> 389
Handling connection for 3389
From another shell, outside the Kubernetes cluster:
$ kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode; echo
l3dkQByvzKKboCWQRyyQl96ulnGLScIx
$ ldapsearch -x -H ldap://localhost:3389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w l3dkQByvzKKboCWQRyyQl96ulnGLScIx
This was also already mentioned in a comment by @Totem.
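If the failing ldapsearch was run from outside the cluster, openldap.default.svc.cluster.local will not resolve there, which is exactly what the port-forward above works around. To test from inside the cluster instead, a throwaway client pod also works (a sketch; the image is an assumption and only needs to ship the OpenLDAP client tools):
export LDAP_ADMIN_PASSWORD=$(kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode)
kubectl run ldap-client --rm --tty -i --restart=Never --namespace default --image osixia/openldap:1.3.0 --command -- ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
Note that $LDAP_ADMIN_PASSWORD is expanded by the local shell before the pod starts, so the exported value is what reaches ldapsearch.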

Do we have a command to execute multiple arguments in Kubernetes?

I have a pod running in Kubernetes and I need to run two commands in one line.
Say,
kubectl exec -it <pod name> -n <namespace > -- bash -c redis-cli
The above command will open redis-cli.
I want to run one more command after the exec in the same line, i.e. info. I am trying the below, which is not working:
kubectl exec -it <pod name> -n <namespace > -- bash -c redis-cli -- info
You have to put your command and all of its parameters between quotes.
In your example it would be:
kubectl exec -it <pod_name> -n <namespace> -- bash -c 'redis-cli info'
From the Bash manual, on bash -c: if the -c option is present, then commands are read from the first non-option argument command_string.
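A quick way to see what the quoting changes is to echo the arguments instead of running redis-cli (a minimal sketch; any pod with bash will do, just replace the placeholders):
kubectl exec -it <pod_name> -n <namespace> -- bash -c 'echo redis-cli info'
# prints "redis-cli info": both words are part of the single command string
kubectl exec -it <pod_name> -n <namespace> -- bash -c echo redis-cli info
# prints an empty line: without quotes only echo is the command string; redis-cli is assigned to $0 and info to $1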
Another option (which in my opinion is a better approach) is to get the output from the command with a throwaway pod, which is created, runs, and is deleted right after, like this:
kubectl run --namespace <YOUR_NAMESPACE> <TEMP_RANDOM_POD_NAME> --rm --tty -i --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:5.0.7-debian-10-r0 -- bash -c 'redis-cli -h redis-master -a $REDIS_PASSWORD info'
In my case the password was stored in an env var called $REDIS_PASSWORD, and I'm connecting to a server in a pod called redis-master.
I left the command as I ran it to show that you can use as many parameters as needed.
POC:
user#minikube:~$ kubectl run --namespace default redis-1580466120-client --rm --tty -i --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:5.0.7-debian-10-r0 -- bash -c 'redis-cli -h redis-master -a $REDIS_PASSWORD info'
10:41:10.65
10:41:10.66 Welcome to the Bitnami redis container
10:41:10.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
10:41:10.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
10:41:10.67 Send us your feedback at containers@bitnami.com
10:41:10.67
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
# Server
redis_version:5.0.7
redis_git_sha1:00000000
redis_git_dirty:0
...
{{{suppressed output}}}
...
# CPU
used_cpu_sys:1.622434
used_cpu_user:1.313600
used_cpu_sys_children:0.013942
used_cpu_user_children:0.008014
# Cluster
cluster_enabled:0
# Keyspace
pod "redis-1580466120-client" deleted
I didn't get your question; do you want to get the information from redis-cli?
kubectl exec -it <pod name> -n <namespace> -- bash -c 'redis-cli info'
Did you try to link your commands using &&?
kubectl exec -it <pod name> -n <namespace> -- bash -c redis-cli && info

Can't connect to postgres installed from stable/postgresql helm chart

I'm trying to install postgresql via helm. I'm not overriding any settings, but when I try to connect, I get a "password authentication failed" error:
$ helm install goatsnap-postgres stable/postgresql
NAME: goatsnap-postgres
LAST DEPLOYED: Mon Jan 27 12:44:22 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
goatsnap-postgres-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/goatsnap-postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
$ kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
DCsSy0s8hM
$ kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=DCsSy0s8hM" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
psql: FATAL: password authentication failed for user "postgres"
pod "goatsnap-postgres-postgresql-client" deleted
pod default/goatsnap-postgres-postgresql-client terminated (Error)
I've tried a few other things, all of which get the same error:
Run kubectl run [...] bash, launch psql, type the password at the prompt
Run kubectl port-forward [...], launch psql locally, type the password
Un/re-install the chart a few times
Use helm install --set postgresqlPassword=[...], use the explicitly-set password
I'm using OSX 10.15.2, k8s 1.15.5 (via Docker Desktop 2.2.0.0), helm 3.0.0, postgres chart postgresql-7.7.2, postgres 11.6.0
I think I had it working before (although I don't see evidence in my scrollback), and I think I updated Docker Desktop since then, and I think I saw something about a K8s update in the notes for the Docker update. So if I'm remembering all those things correctly, maybe it's related to the k8s update?
Whoops, never mind—I found the resolution in an issue on helm/charts. The password ends up in a persistent volume, so if you uninstall and reinstall the chart, the new version retains the password from the old one, not the value from kubectl get secret. I uninstalled the chart, deleted the old PVC and PV, and reinstalled, and now I'm able to connect.
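For anyone hitting the same thing, the cleanup looks roughly like this (a sketch; the PVC name follows the chart's default data-<release>-postgresql-0 naming and may differ in your cluster):
helm uninstall goatsnap-postgres
kubectl get pvc --namespace default
kubectl delete pvc data-goatsnap-postgres-postgresql-0 --namespace default
helm install goatsnap-postgres stable/postgresql
If your storage class's reclaim policy is Retain, the released PV (and the old data on it) will still be around after deleting the PVC, so delete it too for a truly fresh start. After a clean reinstall, the password from kubectl get secret matches what the new volume was initialized with.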