How to connect to PostgreSQL on a Kubernetes cluster? - postgresql

I have deployed my Crunchy Data PostgreSQL database on my Kubernetes cluster.
However, I am not sure how to connect to the database remotely.
What command can I use to connect remotely so I can create a new database?
Is there a kubectl command to go with psql?

I was able to find what I needed on another forum. Executing kubectl exec with the pod name gets me to a bash prompt:
kubectl exec -it <POD_NAME> -- bash

kubectl get pods
kubectl exec -it <POD_NAME> -- bash
su postgres
psql
In the above, postgres is the user name.
You will get:
postgres=#
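If you only need to run a single SQL statement (for example, to create the new database), you can also pass it straight through kubectl exec instead of opening an interactive shell first. A minimal sketch, assuming the postgres OS user inside the pod can log in locally and that mydb is just a placeholder database name:
# Hypothetical one-liner: run psql as the postgres user inside the pod and create a database
kubectl exec -it <POD_NAME> -- su - postgres -c "psql -c 'CREATE DATABASE mydb;'"
# Or jump straight into an interactive psql session:
kubectl exec -it <POD_NAME> -- su - postgres -c psql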

Related

Configure command to use for shell on pod

In k9s: Is there a way to configure the command which is used when starting a shell inside the pod?
I have looked at their documentation and briefly browsed the source without finding any hints on how a shell command could be specified.
The kubectl command-line tool must be configured to communicate with your cluster.
Create the Pod:
kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
Verify that the container is running:
kubectl get pod shell-demo
Get a shell to the running container:
kubectl exec --stdin --tty shell-demo -- /bin/bash
In your shell, list the root directory by running this command inside the container:
ls /
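If you only need a single command rather than an interactive shell, kubectl exec can also run it directly; a small sketch using the same shell-demo pod:
# Run one-shot commands in the container without opening a shell
kubectl exec shell-demo -- ls /
kubectl exec shell-demo -- env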
Please refer to this document: Configure command to use for shell on pod.

How to log in as root to a running pod in Kubernetes

I tried multiple syntaxes, including the one given below, but no luck yet:
kubectl exec -u root -it testpod -- bash
Error: unknown shorthand flag: 'u' in -u
See 'kubectl exec --help' for usage.
It is kubectl version 1.22.
There is no option available in kubectl exec to specify the user.
That is because the user is decided either in the container image or in the pod.spec.containers.securityContext.runAsUser field.
So, to achieve what you want on a running container, just do kubectl exec -it testpod -- bash and then issue su - root from inside the container.
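If you control the pod spec, here is a minimal sketch of the runAsUser approach mentioned above (the pod name testpod-as-root and the busybox image are placeholders, not taken from the original question):
# Run the container as root (UID 0) via spec.containers[].securityContext.runAsUser
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: testpod-as-root
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 0
EOF
# The exec'd shell then starts as root:
kubectl exec -it testpod-as-root -- sh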

kubectl -- create -h works but kubectl create -h doesn't

Running 'minikube' on Windows 10, why does minikube kubectl create -h not work, but minikube kubectl -- create -h does (w.r.t. showing help for create)?
This is the way minikube works:
Minikube has a subcommand kubectl that will execute the kubectl bundled with minikube (because you can also have one installed outside of minikube, on your plain system).
Minikube has to know the exact command to pass to its kubectl, so minikube splits the command at --.
The -- is used to differentiate minikube's arguments from kubectl's.
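For example (a sketch; any kubectl arguments work the same way), everything after the -- is handed verbatim to the bundled kubectl:
minikube kubectl -- create -h
minikube kubectl -- get pods --all-namespaces
minikube kubectl -- version --client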

Can't connect to postgres installed from stable/postgresql helm chart

I'm trying to install postgresql via helm. I'm not overriding any settings, but when I try to connect, I get a "password authentication failed" error:
$ helm install goatsnap-postgres stable/postgresql
NAME: goatsnap-postgres
LAST DEPLOYED: Mon Jan 27 12:44:22 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
goatsnap-postgres-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/goatsnap-postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
$ kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
DCsSy0s8hM
$ kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=DCsSy0s8hM" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
psql: FATAL: password authentication failed for user "postgres"
pod "goatsnap-postgres-postgresql-client" deleted
pod default/goatsnap-postgres-postgresql-client terminated (Error)
I've tried a few other things, all of which get the same error:
Run kubectl run [...] bash, launch psql, type the password at the prompt
Run kubectl port-forward [...], launch psql locally, type the password
Un/re-install the chart a few times
Use helm install --set postgresqlPassword=[...], use the explicitly-set password
I'm using OSX 10.15.2, k8s 1.15.5 (via Docker Desktop 2.2.0.0), helm 3.0.0, postgres chart postgresql-7.7.2, postgres 11.6.0
I think I had it working before (although I don't see evidence in my scrollback), and I think I updated Docker Desktop since then, and I think I saw something about a K8s update in the notes for the Docker update. So if I'm remembering all those things correctly, maybe it's related to the k8s update?
Whoops, never mind—I found the resolution in an issue on helm/charts. The password ends up in a persistent volume, so if you uninstall and reinstall the chart, the new version retains the password from the old one, not the value from kubectl get secret. I uninstalled the chart, deleted the old PVC and PV, and reinstalled, and now I'm able to connect.
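For reference, the cleanup looked roughly like this (a sketch; the exact PVC name comes from kubectl get pvc and will differ per release, so the one below is only a guess):
helm uninstall goatsnap-postgres
# List the PVC left behind by the old release and delete it
# (the bound PV follows if its reclaim policy is Delete):
kubectl get pvc
kubectl delete pvc data-goatsnap-postgres-postgresql-0
# Reinstall, then read the freshly generated password from the secret:
helm install goatsnap-postgres stable/postgresql
kubectl get secret goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode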

Kubernetes Connectivity

I have a pod running a Java application. This Java application talks to a MySQL server which is on-prem. The MySQL server accepts connections from 192.* IPs.
The pod is running on EKS worker nodes with 192.* IPs. I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* IP) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
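As a rough sketch of that last suggestion, you would add a return route for the pod network on the MySQL host (or its default router) via a reachable next hop. The CIDR and gateway below are placeholders only, not values from the question; substitute your cluster's actual pod CIDR and worker-node/router IP:
# Placeholders: 172.16.0.0/16 = pod network CIDR, 192.168.1.10 = reachable next hop
sudo ip route add 172.16.0.0/16 via 192.168.1.10
# Verify the return path from the MySQL host:
ip route show | grep 172.16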