Cannot see PostgreSQL in Kubernetes Through a Browser

I am testing a PostgreSQL configuration in Kubernetes.
Windows 11
Hyper-V
Minikube
Everything works (or seems to work) fine.
I can connect to the database via
kubectl exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=pg_test
Password:
psql (13.6)
Type "help" for help.
pg_test=#
I can also view the database through DBeaver.
But when I try to connect from any browser at
localhost:5432
I get errors such as:
Firefox cannot connect,
ERR_CONNECTION_REFUSED
I have no proxy configured.
When I try
kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
... this line repeats indefinitely for each connection attempt
Handling connection for 5432
Handling connection for 5432
...
Here is my YAML config file
...
apiVersion: v1
data:
  db: pg_test
  user: admin
kind: ConfigMap
metadata:
  name: postgres-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      env: prod
      domain: infrastructure
  template:
    metadata:
      labels:
        env: prod
        domain: infrastructure
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 70
        fsGroup: 70
      containers:
      - name: kubia-postgres
        image: postgres:13-alpine
        env:
        - name: POSTGRES_PASSWORD
          value: admin
          # valueFrom:
          #   secretKeyRef:
          #     name: postgres-secret
          #     key: password
        - name: POSTGRES_USER
          value: admin
          # valueFrom:
          #   configMapKeyRef:
          #     name: postgres-config
          #     key: user
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: db
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-test-volume
          mountPath: /var/lib/postgresql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      volumes:
      - name: postgres-test-volume
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    env: prod
    domain: infrastructure
spec:
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: pgsql
  clusterIP: None
  selector:
    env: prod
    domain: infrastructure
What am I doing wrong?

If you want to access your Postgres instance using a web browser, you need to deploy and configure something like pgAdmin.
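For reference, a minimal in-cluster pgAdmin setup could look like the sketch below. The dpage/pgadmin4 image and its PGADMIN_DEFAULT_EMAIL / PGADMIN_DEFAULT_PASSWORD variables are how that image is normally configured, but treat the names and credentials here as placeholders for your own:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin
        image: dpage/pgadmin4        # web UI for PostgreSQL
        env:
        - name: PGADMIN_DEFAULT_EMAIL
          value: admin@example.com   # placeholder login
        - name: PGADMIN_DEFAULT_PASSWORD
          value: admin               # placeholder password
        ports:
        - containerPort: 80          # pgAdmin serves HTTP here
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin
spec:
  type: NodePort                     # reachable from outside the cluster via the node
  selector:
    app: pgadmin
  ports:
  - port: 80
    targetPort: 80
You would then open the URL printed by minikube service pgadmin --url in the browser and register your postgres-service host inside pgAdmin.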

You haven't exposed the service outside the cluster; you were only tunneling the port to your localhost. To do so you will need one of these Kubernetes service options:
Port forwarding
NodePort: maps a port on each node (host) to your service.
ClusterIP: gives your service an internal IP that can be referred to in-cluster.
LoadBalancer: assigns an IP or a cloud provider's load balancer to the service, effectively making it available to external traffic.
Since you are using Minikube, you should try a LoadBalancer or a ClusterIP.
By the way, you are creating a service without a type and you are not giving it an IP (clusterIP: None makes it headless).
The important parts for a service to work in development are the selector labels, the port, and the type.
Exposing an IP || Docs
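For example, a second, non-headless Service of type NodePort could expose the database outside the cluster. This is only a sketch that reuses the selector labels from your manifest; the name postgres-external is made up:
apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: NodePort            # publishes the port on every node
  selector:
    env: prod
    domain: infrastructure  # same labels your StatefulSet pods carry
  ports:
  - port: 5432
    targetPort: 5432
With Minikube you can then get the endpoint with minikube service postgres-external --url and point psql or DBeaver at it; a plain browser will still show a connection error, because PostgreSQL does not speak HTTP.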

Related

Can't access postgres via service from the postgres container itself

I'm trying to verify that my postgres pod is accessible via the service that I've just set up. As of now, I cannot verify this. What I'm able to do is to log into the container running postgres itself, and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure if this is a valid test of whether other pods in the cluster could talk to postgres via the service or if there is a problem with how I'm doing the test, or if there is a fundamental problem in my service or pod configurations.
I'm doing this all on a minikube cluster.
Setup the pod and service:
$> kubectl create -f postgres-pod.yml
$> kubectl create -f postgres-service.yml
postgres-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    env: prod
    creation_method: manual
    domain: infrastructure
spec:
  containers:
  - image: postgres:13-alpine
    name: kubia-postgres
    ports:
    - containerPort: 5432
      protocol: TCP
    env:
    - name: POSTGRES_PASSWORD
      value: dave
    - name: POSTGRES_USER
      value: dave
    - name: POSTGRES_DB
      value: tmp
    # TODO:
    # volumes:
    # - name: postgres-db-volume
postgres-service.yml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    name: postgres
Check that the service is up with kubectl get services:
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
postgres-service ClusterIP 10.110.159.21 <none> 5432/TCP 71m
Then, log in to the postgres container:
$> kubectl exec --stdin --tty postgres -- /bin/bash
from there, attempt to hit the service's IP:
bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp
psql: error: could not connect to server: Connection refused
Is the server running on host "10.110.159.21" and accepting
TCP/IP connections on port 5432?
So using this approach I am not able to connect to the postgres server using the IP of the service.
I'm unsure of several steps in this process:
Is the select-by-name block in the service configuration YAML correct?
Can you access the IP of a service from pods that are "behind" the service?
Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?
Hello, hope you are enjoying your Kubernetes journey!
I wanted to try this on my kind (Kubernetes in Docker) cluster locally. So this is what I've done:
First I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
After this I created my cluster with this command:
kind create cluster --config=config.yaml
Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):
apiVersion: v1
kind: Namespace
metadata:
  name: so-tests
From there, my environment was set up, so I had to deploy a Postgres on it, but here is what I've changed:
1- Instead of creating a singleton pod, I created a StatefulSet (whose aim is to deploy databases).
2- I decided to keep using your Docker image "postgres:13-alpine" and added a security context to run as the native postgres user (not dave nor root). To find out the id of the postgres user, I first deployed the StatefulSet without the security context and executed these commands:
❯ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit
So, once I knew that the id of the postgres user was 70, I just added this to the StatefulSet manifest:
securityContext:
  runAsUser: 70
  fsGroup: 70
3- Instead of adding configuration and secrets as environment variables directly in the pod spec of the StatefulSet, I decided to create a Secret and a ConfigMap:
First let's create a Kubernetes Secret with your password in it; here is the manifest (obtained from this command: "k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run=client"):
apiVersion: v1
data:
  password: ZGF2ZQ==
kind: Secret
metadata:
  name: postgres-secret
After this I created a ConfigMap to store our Postgres config; here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml):
apiVersion: v1
data:
  db: tmp
  user: dave
kind: ConfigMap
metadata:
  name: postgres-config
Since it is just for testing purposes, I didn't set up dynamic volume provisioning for the StatefulSet, nor a pre-provisioned volume. Instead I configured a simple emptyDir to store the Postgres data (/var/lib/postgresql/data).
N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead (this came from here: Create a new volume when pod restart in a statefulset).
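If you did want the RAM-backed variant, the volume stanza would simply become (same volume name as in the manifests below):
volumes:
- name: postgres-test-volume
  emptyDir:
    medium: Memory   # tmpfs; data is lost on pod restart and counts against the container's memory limit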
Since it is a StatefulSet, it has to be exposed by a headless Kubernetes service (https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services).
Here are the manifests:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      env: prod
      domain: infrastructure
  template:
    metadata:
      labels:
        env: prod
        domain: infrastructure
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 70
        fsGroup: 70
      containers:
      - name: kubia-postgres
        image: postgres:13-alpine
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: user
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: db
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-test-volume
          mountPath: /var/lib/postgresql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      volumes:
      - name: postgres-test-volume
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    env: prod
    domain: infrastructure
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
    name: pgsql
  clusterIP: None
  selector:
    env: prod
    domain: infrastructure
---
apiVersion: v1
data:
  password: ZGF2ZQ==
kind: Secret
metadata:
  name: postgres-secret
---
apiVersion: v1
data:
  db: tmp
  user: dave
kind: ConfigMap
metadata:
  name: postgres-config
---
I deployed this using:
kubectl apply -f postgres.yaml
I tested connecting from inside the postgres-0 pod to my db with the $POSTGRES_USER and $POSTGRES_PASSWORD credentials:
❯ k exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
I listed the databases:
tmp=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-------+----------+------------+------------+-------------------
postgres | dave | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
template1 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
tmp | dave | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)
and I connected to the "tmp" db:
tmp=# \c tmp
Password:
You are now connected to database "tmp" as user "dave".
Successful.
I also tried to connect to the database using the IP, as you tried:
bash-5.1$ ip a | grep /24
inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0
bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Successful.
I then downloaded DBeaver (from here: https://dbeaver.io/download/) to test access from outside of my cluster:
with a kubectl port-forward:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
I created the connection in DBeaver, and could easily access the db "tmp" from localhost:5432 with dave:dave credentials:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
Perfect.
Same as before (with DBeaver), I tried to connect to the db using a port-forward, not of the pod, but of the service:
❯ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
It worked as well!
I also created a standalone pod, based on our config, to access the db that is in another pod (via the service name as hostname); here is the manifest of the pod:
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: test
spec:
  terminationGracePeriodSeconds: 20
  securityContext:
    runAsUser: 70
    fsGroup: 70
  containers:
  - name: kubia-postgres
    image: postgres:13-alpine
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres-secret
          key: password
    - name: POSTGRES_USER
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: user
    - name: POSTGRES_DB
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: db
    ports:
    - containerPort: 5432
      protocol: TCP
    volumeMounts:
    - name: postgres-test-volume
      mountPath: /var/lib/postgresql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  volumes:
  - name: postgres-test-volume
    emptyDir: {}
Here is the result of the connection from inside the test pod:
bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Here is how you can access it from outside the pod/namespace (make sure that there are no network rules blocking the connection):
StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local
i.e: postgres-0.postgres-service.so-tests.svc.cluster.local
To access the StatefulSet's workloads from outside the cluster, here is a good start: How to expose a headless service for a StatefulSet externally in Kubernetes
Hope this helped you. Thank you for your question.
Bguess
You cannot, at least with minikube, access the IP of a service from the pod "behind" that service if there is only one (1) replica.
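One way to take that variable out of the equation is to test from a throwaway client pod instead of from the postgres pod itself; a sketch, reusing the dave/tmp credentials from the question:
kubectl run pg-client --rm -it --image=postgres:13-alpine \
  --env="PGPASSWORD=dave" -- psql -h postgres-service -U dave -d tmp -c '\l'
If that also fails, kubectl get endpoints postgres-service will show whether the service selector is actually matching the pod.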

Can't connect Docker Desktop Kubernetes (Windows) services to local Postgres db via Spring Boot

I am not able to connect a dockerized Spring Boot API managed by Kubernetes via Docker Desktop (Windows) to a local instance of Postgres. Error is as follows:
org.postgresql.util.PSQLException: Connection to postgres-db-svc.default.svc.cluster.local:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Up until trying to connect to a local DB, everything has been working fine (external clients can connect to pods via Ingress, pods can communicate with each other, etc).
I think my configuration is off somewhere here.
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2017-12-27T18:38:34Z
  name: test-docker-config
  namespace: default
  resourceVersion: "810136"
  uid: 352c4572-eb35-11e7-887b-42010a8002b8
data:
  spring_datasource_platform: postgres
  spring_datasource_url: jdbc:postgresql://postgres-db-svc.default.svc.cluster.local/sandbox
  spring_datasource_username: postgres
  spring_datasource_password: password
  spring_jpa_properties_hibernate_dialect: org.hibernate.dialect.PostgreSQLDialect
Service
kind: Service
apiVersion: v1
metadata:
  name: postgres-db-svc
spec:
  type: ExternalName
  externalName: kubernetes.docker.internal
  ports:
  - port: 5432
In my hosts file, "kubernetes.docker.internal" is mapped to 127.0.0.1
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tester
spec:
  selector:
    matchLabels:
      app: tester
  template:
    metadata:
      labels:
        app: tester
    spec:
      containers:
      - name: tester
        imagePullPolicy: IfNotPresent
        image: test-spring-boot
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_DATASOURCE_PLATFORM
          valueFrom:
            configMapKeyRef:
              name: test-docker-config
              key: spring_datasource_platform
        - name: SPRING_DATASOURCE_URL
          valueFrom:
            configMapKeyRef:
              name: test-docker-config
              key: spring_datasource_url
        - name: SPRING_DATASOURCE_USERNAME
          valueFrom:
            configMapKeyRef:
              name: test-docker-config
              key: spring_datasource_username
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: test-docker-config
              key: spring_datasource_password
        - name: SPRING_JPA_PROPERTIES_HIBERNATE_DIALECT
          valueFrom:
            configMapKeyRef:
              name: test-docker-config
              key: spring_jpa_properties_hibernate_dialect
Spring Boot application.properties
spring.datasource.platform=${SPRING_DATASOURCE_PLATFORM:postgres}
spring.datasource.url=${SPRING_DATASOURCE_URL:jdbc:postgresql://localhost:5432/sandbox}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:postgres}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:password}
spring.jpa.properties.hibernate.dialect=${SPRING_JPA_PROPERTIES_HIBERNATE_DIALECT:org.hibernate.dialect.PostgreSQLDialect}
Kubernetes with Docker Desktop runs in the same Docker VM, so I'm assuming the /etc/hosts file that you are referring to is the one on your Windows machine.
I'm also assuming that you ran Postgres exposing 5432 with something like this:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres
First, 127.0.0.1 is not going to work because Docker just exposes the ports on the IP address of the VM.
Secondly, Kubernetes will not be able to find kubernetes.docker.internal because the pods use CoreDNS to resolve names and it has no idea about that domain.
I suggest you do this:
Don't use kubernetes.docker.internal as it's already used by Docker for its own purposes. Use something like mypostgres.local.
Get the IP address of your Docker VM. Run ping docker.local or look in C:\Windows\System32\drivers\etc\hosts for something like this:
# Added by Docker Desktop
10.xx.xx.xx host.docker.internal
Or look at the output of ipconfig /all and find the IP of the Docker Desktop VM.
Use hostAliases in your podSpec, so that the /etc/hosts file is actually modified in your pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tester
spec:
  selector:
    matchLabels:
      app: tester
  template:
    metadata:
      labels:
        app: tester
    spec:
      hostAliases:
      - ip: "10.xx.xx.xx" # The IP of your VM
        hostnames:
        - "mypostgres.local"
      containers:
      - name: tester
        imagePullPolicy: IfNotPresent
        image: test-spring-boot
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
Don't use an ExternalName service, as those work only through CoreDNS. If you want the ExternalName service to work you will have to modify your CoreDNS config so that it hardcodes the entry for mypostgres.local (a sketch follows).
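If you do go that route, the change would be a hosts block added to the Corefile in the coredns ConfigMap (kube-system namespace); a sketch, using the placeholder VM IP from above:
# add inside the existing .:53 { ... } block of the Corefile
hosts {
    10.xx.xx.xx mypostgres.local   # the Docker Desktop VM IP
    fallthrough
}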
Note: another option is just to run PostgreSQL in Kubernetes and expose it using a regular ClusterIP service (sketched below).
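A sketch of that last option, assuming the in-cluster Postgres pods carry a label such as app: postgres (the label is an assumption; the service name matches the JDBC URL from your ConfigMap):
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-svc
spec:
  type: ClusterIP            # the default type; an in-cluster virtual IP
  selector:
    app: postgres            # must match the labels on your Postgres pods
  ports:
  - port: 5432
    targetPort: 5432
The datasource URL jdbc:postgresql://postgres-db-svc.default.svc.cluster.local/sandbox from the ConfigMap would then resolve to this service.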

Kubernetes Pod with multiple containers can't connect to each other (DNS issue?!)

For our CI pipeline I set up a Kubernetes pod config (see below). There is one issue: the PHP app can't connect to the MySQL container because it can't resolve the host "mysql".
Error message:
mysqli_connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known
pod config:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: php
    image: docker.pkg.github.com/foo-org/bar-php/bar-php:latest
  - name: nginx
    image: docker.pkg.github.com/foo-org/bar-nginx/bar-nginx:latest
    command:
    - cat
    tty: true
  - name: mysql
    image: docker.pkg.github.com/foo-org/bar-mysql/bar-mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: bazz
    ports:
    - containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 5
    tty: true
  imagePullSecrets:
  - name: ci-gh-registry
This runs in GKE but I guess this doesn't make a difference?
Any ideas why and how to fix it?
Provide the host as 127.0.0.1 or localhost instead of mysql; containers in a pod share the same network namespace and communicate over localhost.

GKE pods not connecting to Cloudsql

My app can't seem to connect to the proxy, and thus to my Cloud SQL database.
Below are my setup:
my-simple-app.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      name: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        name: web
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
      - image: joelaw/nameko-hello:0.2
        name: web
        env:
        - name: DB_HOST
          value: 127.0.0.1
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        ports:
        - containerPort: 3000
          name: http-server
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=spheric-veric-task:asia-southeast1:authdb:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
I had set up the secrets correctly, I suppose.
Below are some data that I collected from the instance:
The pod live happily:
web-69c7777c68-s2jt6 2/2 Running 0 9m
web-69c7777c68-zbwtv 2/2 Running 0 9m
When I run: kubectl logs web-69c7777c68-zbwtv -c cloudsql-proxy
It recorded this:
2019/04/04 03:25:35 using credential file for authentication; email=auth-db-user@spheric-verve-228610.iam.gserviceaccount.com
2019/04/04 03:25:35 Listening on /cloudsql/spheric-veric-task:asia-southeast1:authdb:5432/.s.PGSQL.5432 for spheric-veric-task:asia-southeast1:authdb:5432
2019/04/04 03:25:35 Ready for new connections
Since the app is not configured to connect to the db, what I did was shell into the pod with:
kubectl exec -it web-69c7777c68-mrdpn -- /bin/bash
# Followed by installing postgresql driver:
apt-get install postgresql
# Trying to connect to cloudsql:
psql -h 127.0.0.1 -p 5432 -U
When I run psql in the container:
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Can any of you kindly advise what I should do to connect to the DB?
You are specifying the instance connection string wrong, and so the proxy is listening on a Unix socket in the /cloudsql/ directory instead of on a TCP port.
To tell the proxy to listen on a TCP port, use the following:
-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432
Otherwise, the following format creates a unix socket (defaulting to the /cloudsql directory):
-instances=<INSTANCE_CONNECTION_NAME>
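Applied to the deployment from the question, the proxy container's command would then look like this (a sketch; only the -instances flag changes):
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
  name: cloudsql-proxy
  command: ["/cloud_sql_proxy", "--dir=/cloudsql",
            "-instances=spheric-veric-task:asia-southeast1:authdb=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
After that, psql -h 127.0.0.1 -p 5432 from the app container should reach the proxy.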

Using a pod without using the node ip

I have a postgres pod running locally on a CoreOS VM.
I am able to access postgres using the IP of the minion (node) it is on, but I'm attempting to set it up in such a manner as to not have to know exactly which minion the pod is on and still be able to use postgres.
Here is my pod
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
  labels:
    role: postgres-client
spec:
  containers:
  - image: postgres:latest
    name: postgres
    ports:
    - containerPort: 5432
      hostPort: 5432
      name: pg-port
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: nfs.server
      path: /
And here is a service I tried to set up, but it doesn't seem correct:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres-client
I'm guessing that the selector for your service is not finding any matching backends.
Try changing
app: postgres-client
to
role: postgres-client
in the service definition (or vice versa in the pod definition above).
The label selector has to match both the key and value (i.e. role and postgres-client). See the Labels doc for more details.
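A sketch of the service with the suggested change applied, matching the role: postgres-client label already on the pod:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    role: postgres-client   # now matches the pod's label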