Can't access postgres via service from the postgres container itself - postgresql

I'm trying to verify that my postgres pod is accessible via the service I've just set up. So far I cannot verify this. What I am able to do is log into the container running postgres itself and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure whether this is a valid test of whether other pods in the cluster could talk to postgres via the service, whether there is a problem with how I'm doing the test, or whether there is a fundamental problem in my service or pod configuration.
I'm doing this all on a minikube cluster.
Set up the pod and service:
$> kubectl create -f postgres-pod.yml
$> kubectl create -f postgres-service.yml
postgres-pod.yml
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
env: prod
creation_method: manual
domain: infrastructure
spec:
containers:
- image: postgres:13-alpine
name: kubia-postgres
ports:
- containerPort: 5432
protocol: TCP
env:
- name: POSTGRES_PASSWORD
value: dave
- name: POSTGRES_USER
value: dave
- name: POSTGRES_DB
value: tmp
# TODO:
# volumes:
# - name: postgres-db-volume
postgres-service.yml
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
ports:
- port: 5432
targetPort: 5432
selector:
name: postgres
Check that the service is up with kubectl get services:
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    35d
postgres-service   ClusterIP   10.110.159.21   <none>        5432/TCP   71m
Then, log in to the postgres container:
$> kubectl exec --stdin --tty postgres -- /bin/bash
from there, attempt to hit the service's IP:
bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp
psql: error: could not connect to server: Connection refused
Is the server running on host "10.110.159.21" and accepting
TCP/IP connections on port 5432?
So using this approach I am not able to connect to the postgres server using the IP of the service.
I'm unsure of several steps in this process:
Is the selector block (selecting by name) in the service configuration YAML correct?
Can you access the IP of a service from pods that are "behind" the service?
Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?

Hello, hope you are enjoying your Kubernetes journey!
I wanted to try this on my kind (Kubernetes in Docker) cluster locally. So this is what I've done:
First I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
After this I created my cluster with this command:
kind create cluster --config=config.yaml
Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):
apiVersion: v1
kind: Namespace
metadata:
name: so-tests
With my environment set up, I had to deploy postgres on it, but here is what I changed:
1- Instead of creating a singleton pod, I created a StatefulSet (which is the workload type intended for databases).
2- I kept your Docker image "postgres:13-alpine" and added a security context to run as the native postgres user (neither dave nor root). To find out the id of the postgres user, I first deployed the StatefulSet without the security context and executed these commands:
❯ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit
Once I knew that the id of the postgres user was 70, I just added this to the StatefulSet manifest:
securityContext:
runAsUser: 70
fsGroup: 70
3- Instead of adding configuration and secrets as environment variables directly in the pod spec of the StatefulSet, I decided to create a Secret and a ConfigMap:
First, let's create a Kubernetes Secret with your password in it. Here is the manifest (obtained from this command: "k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run=client"):
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
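Just to note: ZGF2ZQ== is simply base64 of "dave"; if you generate such values yourself, use echo -n so that no trailing newline gets encoded:
$ echo -n "dave" | base64
ZGF2ZQ==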
After this I created a ConfigMap to store our postgres config. Here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml):
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
Since this is just for testing purposes, I didn't set up dynamic volume provisioning for the StatefulSet, nor a pre-provisioned volume. Instead I configured a simple emptyDir to store the postgres data (/var/lib/postgresql/data).
N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. (this came from here Create a new volume when pod restart in a statefulset)
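For example, a RAM-backed variant of the volume used in the manifests below would look like this (just a sketch; the manifests here keep the default disk-backed emptyDir):
volumes:
- name: postgres-test-volume
  emptyDir:
    medium: Memory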
Since it is a StatefulSet, it has to be exposed by a headless Kubernetes service (https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services).
Here are the manifests:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 2
selector:
matchLabels:
env: prod
domain: infrastructure
template:
metadata:
labels:
env: prod
domain: infrastructure
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
labels:
env: prod
domain: infrastructure
spec:
ports:
- port: 5432
protocol: TCP
targetPort: 5432
name: pgsql
clusterIP: None
selector:
env: prod
domain: infrastructure
---
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
---
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
---
I deployed this using:
kubectl apply -f postgres.yaml
I then exec'd into the postgres-0 pod and connected to my db with the $POSTGRES_USER and $POSTGRES_PASSWORD credentials:
❯ k exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
I listed the databases:
tmp=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-------+----------+------------+------------+-------------------
postgres | dave | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
template1 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
tmp | dave | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)
and I connected to the "tmp" db:
tmp=# \c tmp
Password:
You are now connected to database "tmp" as user "dave".
Successful.
I also tried to connect to the database using the pod IP, as you tried:
bash-5.1$ ip a | grep /24
inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0
bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Successful.
I then downloaded DBeaver (from here: https://dbeaver.io/download/) to test access from outside my cluster:
with a kubectl port-forward:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
I created the connection in DBeaver and could easily access the db "tmp" from localhost:5432 with dave:dave credentials:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
perfect.
Same as before (with DBeaver), I tried to connect to the db using a port-forward, not of the pod but of the service:
❯ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
It worked as well!
I also created a standalone pod, based on our config, to access the db that is in another pod (via the service name as hostname). Here is the manifest of the pod:
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
app: test
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
Here is the result of the connection from inside the test pod:
bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Here is how you can access it from outside the pod/namespace (make sure there are no network rules blocking the connection):
StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local
i.e: postgres-0.postgres-service.so-tests.svc.cluster.local
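For example, from another pod you could connect with something like this (a sketch, assuming the same credentials as above):
psql --username=dave -W --host=postgres-0.postgres-service.so-tests.svc.cluster.local --port=5432 --dbname=tmp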
To access StatefulSet workloads from outside the cluster, here is a good start: How to expose a headless service for a StatefulSet externally in Kubernetes
Hope this helped you. Thank you for your question.
Bguess

You cannot, at least with minikube, access the IP of a service from the pod "behind" that service if there is only one (1) replica.
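A quick way to test the service from a separate client pod (a sketch, reusing the image, service name and credentials from the question) is something like:
kubectl run pg-client --rm -it --env="PGPASSWORD=dave" --image=postgres:13-alpine -- psql -h postgres-service -U dave tmp
If the service selector matches the pod's labels, this should drop you into a psql prompt; if not, kubectl get endpoints postgres-service will show no addresses.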

Related

Cannot see PostgreSQL in Kubernetes Through a Browser

I am testing a PostgreSQL configuration in kubernetes.
Windows 11
HyperV
Minikube
Everything works (or seems to work) fine
I can connect to the database via
kubectl exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=pg_test
Password:
psql (13.6)
Type "help" for help.
pg_test=#
I can also view the database through DBeaver.
But when I try to connect from any browser,
localhost:5432
I get errors such as:
Firefox cannot connect,
ERR_CONNECTION_REFUSED
I have no proxy
When I try
kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
... this line repeats indefinitely for connection attempts
Handling connection for 5432
Handling connection for 5432
...
Here is my YAML config file
...
apiVersion: v1
data:
db: pg_test
user: admin
kind: ConfigMap
metadata:
name: postgres-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 2
selector:
matchLabels:
env: prod
domain: infrastructure
template:
metadata:
labels:
env: prod
domain: infrastructure
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
value: admin
# valueFrom:
# secretKeyRef:
# name: postgres-secret
# key: password
- name: POSTGRES_USER
value: admin
# valueFrom:
# configMapKeyRef:
# name: postgres-config
# key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
labels:
env: prod
domain: infrastructure
spec:
ports:
- port: 5432
targetPort: 5432
protocol: TCP
name: pgsql
clusterIP: None
selector:
env: prod
domain: infrastructure
What am I doing wrong?
If you want to access your Postgres instance using a web browser, you need to deploy and configure something like pgAdmin.
You haven't opened the service to the internet. You were only tunneling the port to your localhost. To do so you will need one of these Kubernetes service options:
Port forwarding
NodePort: Maps a port on your host to the service.
ClusterIP: Gives your service an internal IP to be referred to in-cluster.
LoadBalancer: Assigns an IP or a cloud provider's load balancer to the service, effectively making it available to external traffic.
Since you are using Minikube, you should try a LoadBalancer or a ClusterIP.
By the way, you are creating a service without a type, and with clusterIP: None you are not giving it an IP (it is headless).
The important parts for a service to work in development are the selector labels, the port, and the type.
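For example, a NodePort variant of your service (a sketch, reusing the labels and ports from your manifest) could look like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  selector:
    env: prod
    domain: infrastructure
  ports:
  - port: 5432
    targetPort: 5432
You could then get a reachable address with minikube service postgres-service --url.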
Exposing an IP || Docs

Can't connect Docker Desktop Kubernetes (Windows) services to local Postgres db via Spring Boot

I am not able to connect a dockerized Spring Boot API managed by Kubernetes via Docker Desktop (Windows) to a local instance of Postgres. Error is as follows:
org.postgresql.util.PSQLException: Connection to postgres-db-svc.default.svc.cluster.local:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Up until trying to connect to a local DB, everything has been working fine (external clients can connect to pods via Ingress, pods can communicate with each other, etc).
I think somewhere my configuration is off here.
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2017-12-27T18:38:34Z
name: test-docker-config
namespace: default
resourceVersion: "810136"
uid: 352c4572-eb35-11e7-887b-42010a8002b8
data:
spring_datasource_platform: postgres
spring_datasource_url: jdbc:postgresql://postgres-db-svc.default.svc.cluster.local/sandbox
spring_datasource_username: postgres
spring_datasource_password: password
spring_jpa_properties_hibernate_dialect: org.hibernate.dialect.PostgreSQLDialect
Service
kind: Service
apiVersion: v1
metadata:
name: postgres-db-svc
spec:
type: ExternalName
externalName: kubernetes.docker.internal
ports:
- port: 5432
In my hosts file, "kubernetes.docker.internal" is mapped to 127.0.0.1
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: tester
spec:
selector:
matchLabels:
app: tester
template:
metadata:
labels:
app: tester
spec:
containers:
- name: tester
imagePullPolicy: IfNotPresent
image: test-spring-boot
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 8080
env:
- name: SPRING_DATASOURCE_PLATFORM
valueFrom:
configMapKeyRef:
name: test-docker-config
key: spring_datasource_platform
- name: SPRING_DATASOURCE_URL
valueFrom:
configMapKeyRef:
name: test-docker-config
key: spring_datasource_url
- name: SPRING_DATASOURCE_USERNAME
valueFrom:
configMapKeyRef:
name: test-docker-config
key: spring_datasource_username
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
configMapKeyRef:
name: test-docker-config
key: spring_datasource_password
- name: SPRING_JPA_PROPERTIES_HIBERNATE_DIALECT
valueFrom:
configMapKeyRef:
name: test-docker-config
key: spring_jpa_properties_hibernate_dialect
Spring Boot application.properties
spring.datasource.platform=${SPRING_DATASOURCE_PLATFORM:postgres}
spring.datasource.url=${SPRING_DATASOURCE_URL:jdbc:postgresql://localhost:5432/sandbox}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:postgres}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:password}
spring.jpa.properties.hibernate.dialect=${SPRING_JPA_PROPERTIES_HIBERNATE_DIALECT:org.hibernate.dialect.PostgreSQLDialect}
Kubernetes with Docker runs in the same docker VM, so I'm assuming the /etc/hosts file that you are referring to is the one on your Windows machine.
I'm also assuming that you ran Postgres exposing 5432 with something like this:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres
First, 127.0.0.1 is not going to work because docker just exposes the ports on the IP address of the VM.
Secondly, Kubernetes will not be able to find kubernetes.docker.internal because the pods use CoreDNS to resolve names, and it has no idea about that domain.
I suggest you do this:
Don't use kubernetes.docker.internal as it's already used by docker for its own purposes. Use something like mypostgres.local.
Get the IP address of your docker VM. Run ping docker.local or look under C:\Windows\System32\drivers\etc\hosts for something like this:
# Added by Docker Desktop
10.xx.xx.xx host.docker.internal
or look at the output of ipconfig /all and find the IP of the Docker Desktop VM.
Use hostAliases in your podSpec, so that the /etc/hosts file is actually modified in your pod.
apiVersion: apps/v1
kind: Deployment
metadata:
name: tester
spec:
selector:
matchLabels:
app: tester
template:
metadata:
labels:
app: tester
spec:
hostAliases:
- ip: "10.xx.xx.xx" # The IP of your VM
hostnames:
- "mypostgres.local"
containers:
- name: tester
imagePullPolicy: IfNotPresent
image: test-spring-boot
resources:
limits:
memory: "128Mi"
cpu: "500m"
Don't use an ExternalName service as those work only with CoreDNS. If you want the ExternalName service to work you will have to modify your CoreDNS config so that it hardcodes the entry for mypostgres.local
Note: Another option is just to run PostgreSQL in Kubernetes and expose it using a regular ClusterIP service.
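As a rough sketch of that alternative (assuming a postgres Deployment labeled app: postgres in the default namespace), the in-cluster Service could be as simple as:
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-svc
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
and the existing JDBC URL jdbc:postgresql://postgres-db-svc.default.svc.cluster.local/sandbox would then resolve to it.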

How to get into postgres in Kubernetes with local dev minikube

Can somebody please help me? This is my first post here, and I am really excited to start posting here and helping people, but I need help first.
I am deploying my own Postgres database on Minikube. For db, password and username I am using secrets.
Data is encoded with base64
POSTGRES_USER = website_user
POSTGRES_DB = website
POSTGRES_PASSWORD = pass
I also exec'd into the container to check these env vars, and they were there.
The problem is when I try to log into postgres with psql. I checked the minikube IP and typed the correct password (pass) after this command:
psql -h 192.168.99.100 -U website_user -p 31315 website
Error
Password for user website_user:
psql: FATAL: password authentication failed for user "website_user"
Also if I exec into my pod:
kubectl exec -it postgres-deployment-744fcdd5f5-7f7vx bash
And try to log into postgres, I get:
psql -h $(hostname -i) -U website_user -p 5432 website
Error:
Password for user website_user:
psql: FATAL: password authentication failed for user "website_user"
I am missing something here. I also tried ps aux in the container, and everything seems to be fine; the postgres processes are running.
kubectl get all
Output:
NAME READY STATUS RESTARTS AGE
pod/postgres-deployment-744fcdd5f5-7f7vx 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m
service/postgres-service NodePort 10.109.235.114 <none> 5432:31315/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/postgres-deployment 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/postgres-deployment-744fcdd5f5 1 1 1 18m
# Secret store
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
POSTGRES_USER: d2Vic2l0ZV91c2VyCg==
POSTGRES_PASSWORD: cGFzcwo=
POSTGRES_DB: d2Vic2l0ZQo=
---
# Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /data/postgres-pv
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
type: local
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
volumeName: postgres-pv
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_USER
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_DB
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_PASSWORD
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-volume-mount
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
ports:
- port: 5432
protocol: TCP
targetPort: 5432
type: NodePort
You created all your values with:
$ echo "value" | base64
whereas you should instead use: $ echo -n "value" | base64
From the official man page of echo:
Description
Echo the STRING(s) to standard output.
-n = do not output the trailing newline
TL;DR: You need to edit your Secret definition with new values:
$ echo -n "website_user" | base64
$ echo -n "website" | base64
$ echo -n "pass" | base64
You created your Secret with a trailing newline. Please take a look at the example below:
POSTGRES_USER:
$ echo "website_user" | base64
output: d2Vic2l0ZV91c2VyCg== which is the same as yours
$ echo -n "website_user" | base64
output: d2Vic2l0ZV91c2Vy which is the correct value
POSTGRES_PASSWORD:
$ echo "pass" | base64
output: cGFzcwo= which is the same as yours
$ echo -n "pass" | base64
output: cGFzcw== which is the correct value
POSTGRES_DB:
$ echo "website" | base64
output: d2Vic2l0ZQo= which is the same as yours
$ echo -n "website" | base64
output: d2Vic2l0ZQ== which is the correct value
Your Secret should look like this:
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
POSTGRES_USER: d2Vic2l0ZV91c2Vy
POSTGRES_PASSWORD: cGFzcw==
POSTGRES_DB: d2Vic2l0ZQ==
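To double-check the stored values before connecting, you can decode them straight from the Secret, for example:
kubectl get secret postgres-credentials -o jsonpath='{.data.POSTGRES_USER}' | base64 -d
which should print website_user with no trailing newline.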
If you recreate it with the new Secret, you should be able to connect to the database:
root@postgres-deployment-64d697868c-njl7q:/# psql -h $(hostname -i) -U website_user -p 5432 website
Password for user website_user:
psql (9.6.6)
Type "help" for help.
website=#
Please take a look at these additional links:
Github.com: Kubernetes: issues: Config map vs secret to store credentials for Postgres deployment
Kubernetes.io: Secrets

How to run pgAdmin in OpenShift?

I'm trying to run a pgAdmin container (the one I'm using comes from here) in an OpenShift cluster where I don't have admin privileges and the admin does not want to allow containers to run as root for security reasons.
The error I'm currently receiving looks like this:
Error with Standard Image
I created a Dockerfile that creates that directory ahead of time based on the image linked above and I get this error:
Error with Edited Image
Is there any way to run pgAdmin within OpenShift? I want to be able to let DB admins log into the instance of pgAdmin and configure the DB from there, without having to use the OpenShift CLI and port forwarding. When I use that method the port-forwarding connection drops very frequently.
Edit1:
Is there a way that I should edit the Dockerfile and entrypoint.sh file found on pgAdmin's github?
Edit2:
It looks like this is a bug with pgAdmin... :/
https://www.postgresql.org/message-id/15470-c84b4e5cc424169d%40postgresql.org
To work around these errors, you need to add a writable volume to the container and set pgadmin's configuration to use that directory.
Permission Denied: '/var/lib/pgadmin'
Permission Denied: '/var/log/pgadmin'
The OpenShift/Kubernetes YAML example below demonstrates this by supplying a custom /pgadmin4/config_local.py as documented here. This allows you to run the image as a container with regular privileges.
Note the configuration files' base directory (/var/lib/pgadmin/data) still needs to be underneath the mount point (/var/lib/pgadmin/), as pgadmin's initialization code tries to create/change ownership of that directory, which is not allowed on mount-point directories inside the container.
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Secret
metadata:
labels:
app: pgadmin-app
name: pgadmin
type: Opaque
stringData:
username: admin
password: DEFAULT_PASSWORD
- apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
serviceaccounts.openshift.io/oauth-redirectreference.pgadmin: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"pgadmin"}}'
labels:
app: pgadmin-app
name: pgadmin
- apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: pgadmin-app
name: pgadmin
data:
config_local.py: |-
import os
_BASEDIR = '/var/lib/pgadmin/data'
LOG_FILE = os.path.join(_BASEDIR, 'logfile')
SQLITE_PATH = os.path.join(_BASEDIR, 'sqlite.db')
STORAGE_DIR = os.path.join(_BASEDIR, 'storage')
SESSION_DB_PATH = os.path.join(_BASEDIR, 'sessions')
servers.json: |-
{
"Servers": {
"1": {
"Name": "postgresql",
"Group": "Servers",
"Host": "postgresql",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "dbuser",
"SSLMode": "prefer",
"SSLCompression": 0,
"Timeout": 0,
"UseSSHTunnel": 0,
"TunnelPort": "22",
"TunnelAuthentication": 0
}
}
}
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: pgadmin
labels:
app: pgadmin-app
spec:
replicas: 1
selector:
app: pgadmin-app
deploymentconfig: pgadmin
template:
metadata:
labels:
app: pgadmin-app
deploymentconfig: pgadmin
name: pgadmin
spec:
serviceAccountName: pgadmin
containers:
- env:
- name: PGADMIN_DEFAULT_EMAIL
valueFrom:
secretKeyRef:
key: username
name: pgadmin
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: pgadmin
- name: PGADMIN_LISTEN_PORT
value: "5050"
- name: PGADMIN_LISTEN_ADDRESS
value: 0.0.0.0
image: docker.io/dpage/pgadmin4:4
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
name: pgadmin
ports:
- containerPort: 5050
protocol: TCP
readinessProbe:
failureThreshold: 10
initialDelaySeconds: 3
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
volumeMounts:
- mountPath: /pgadmin4/config_local.py
name: pgadmin-config
subPath: config_local.py
- mountPath: /pgadmin4/servers.json
name: pgadmin-config
subPath: servers.json
- mountPath: /var/lib/pgadmin
name: pgadmin-data
- image: docker.io/openshift/oauth-proxy:latest
name: pgadmin-oauth-proxy
ports:
- containerPort: 5051
protocol: TCP
args:
- --http-address=:5051
- --https-address=
- --openshift-service-account=pgadmin
- --upstream=http://localhost:5050
- --cookie-secret=bdna987REWQ1234
volumes:
- name: pgadmin-config
configMap:
name: pgadmin
defaultMode: 0664
- name: pgadmin-data
emptyDir: {}
- apiVersion: v1
kind: Service
metadata:
name: pgadmin-oauth-proxy
labels:
app: pgadmin-app
spec:
ports:
- name: 80-tcp
protocol: TCP
port: 80
targetPort: 5051
selector:
app: pgadmin-app
deploymentconfig: pgadmin
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: pgadmin-app
name: pgadmin
spec:
port:
targetPort: 80-tcp
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: pgadmin-oauth-proxy
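A quick way to try this out (assuming the list above is saved as pgadmin.yaml in your project) would be:
oc apply -f pgadmin.yaml
oc get route pgadmin
and then open the route's hostname in a browser; the oauth-proxy sidecar handles the OpenShift login in front of pgAdmin.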
OpenShift by default doesn't allow containers to run with root privileges; you can add the anyuid Security Context Constraint (SCC) to the default service account of the project where you are deploying the container.
Adding a SCC for the project:
$ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:default
scc "anyuid" added to: ["system:serviceaccount:data-base-administration:default"]
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
PGAdmin deployed:
$ oc describe pod pgadmin4-4-fjv4h
Name: pgadmin4-4-fjv4h
Namespace: data-base-administration
Priority: 0
PriorityClassName: <none>
Node: host/IP
Start Time: Mon, 18 Feb 2019 23:22:30 -0400
Labels: app=pgadmin4
deployment=pgadmin4-4
deploymentconfig=pgadmin4
Annotations: openshift.io/deployment-config.latest-version=4
openshift.io/deployment-config.name=pgadmin4
openshift.io/deployment.name=pgadmin4-4
openshift.io/generated-by=OpenShiftWebConsole
openshift.io/scc=anyuid
Status: Running
IP: IP
Controlled By: ReplicationController/pgadmin4-4
Containers:
pgadmin4:
Container ID: docker://ID
Image: dpage/pgadmin4@sha256:SHA
Image ID: docker-pullable://docker.io/dpage/pgadmin4@sha256:SHA
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 18 Feb 2019 23:22:37 -0400
Ready: True
Restart Count: 0
Environment:
PGADMIN_DEFAULT_EMAIL: secret
PGADMIN_DEFAULT_PASSWORD: secret
Mounts:
/var/lib/pgadmin from pgadmin4-1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-74b75 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin4-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-74b75:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-74b75
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51m default-scheduler Successfully assigned data-base-administration/pgadmin4-4-fjv4h to host
Normal Pulling 51m kubelet, host pulling image "dpage/pgadmin4@sha256:SHA"
Normal Pulled 51m kubelet, host Successfully pulled image "dpage/pgadmin4@sha256:SHA"
Normal Created 51m kubelet, host Created container
Normal Started 51m kubelet, host Started container
I have already replied to a similar issue for a local installation: OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
For the Docker image, you can map /pgadmin4/config_local.py (or override settings with environment variables); check the Mapped Files and Directories section at https://hub.docker.com/r/dpage/pgadmin4/.
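If you prefer the environment-variable route, the dpage/pgadmin4 documentation also describes a PGADMIN_CONFIG_<NAME> prefix whose values are parsed as Python literals (hence the nested quotes). As a sketch, the same paths as in the config_local.py example above could be set like this, assuming a writable /var/lib/pgadmin volume:
env:
- name: PGADMIN_CONFIG_LOG_FILE
  value: "'/var/lib/pgadmin/data/logfile'"
- name: PGADMIN_CONFIG_SQLITE_PATH
  value: "'/var/lib/pgadmin/data/sqlite.db'"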
This might work if you create a pgadmin user via the Dockerfile, and give it permission to write to /var/log/pgadmin.
You can create a user in the Dockerfile using the RUN command; something like this:
# Create pgadmin user
ENV HOME=/pgadmin
RUN mkdir -p ${HOME} && \
mkdir -p ${HOME}/pgadmin && \
useradd -u 1001 -r -g 0 -d ${HOME} -s /bin/bash \
-c "Default Application User" pgadmin
# Set user home and permissions with group 0 and writeable.
RUN chmod -R 700 ${HOME} && chown -R 1001:0 ${HOME}
# Create the log folder and set permissions
RUN mkdir /var/log/pgadmin && \
chmod 0700 /var/log/pgadmin && \
chown 1001:0 /var/log/pgadmin
# Run as 1001 (pgadmin)
USER 1001
Adjust your pgadmin install so it runs as 1001, and I think you should be set.

Trouble connecting to postgres from outside Kubernetes cluster

I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a statefulset, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
role: service
spec:
selector:
app: postgres
type: NodePort
ports:
- name: postgres
port: 5432
targetPort: 5432
protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: postgres
labels:
app: postgres
role: service
spec:
replicas: 1
selector:
matchLabels:
app: postgres
role: service
serviceName: postgres
template:
metadata:
labels:
app: postgres
role: service
spec:
containers:
- name: postgres
image: postgres:9.6
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
name: postgres
protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
type: NodePort
ports:
- port: 5432
targetPort: 5432
protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
spec:
containers:
- name: postgres-container
image: postgres:9.6
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-12-07T05:29:22Z
name: postgres-service
namespace: default
resourceVersion: "194827"
selfLink: /api/v1/namespaces/default/services/postgres-service
uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
clusterIP: 10.109.120.251
externalTrafficPolicy: Cluster
ports:
- nodePort: 32254
port: 5432
protocol: TCP
targetPort: 5432
selector:
app: postgres-container
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
I just deployed postgres and exposed its service through NodePort, and the following are my pod and service.
[root@master postgres]# kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-7ff9df5765-2mpsl 1/1 Running 0 1m
[root@master postgres]# kubectl get svc postgres
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgres NodePort 10.100.199.212 <none> 5432:31768/TCP 20s
And this is how I connected to postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or clusterIP) and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to postgresql.
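In the minikube setup from the question that would mean something like this (a sketch, reusing the NodePort 32254 shown there):
psql --host=$(minikube ip) --port=32254 --username=postgres --dbname=postgres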
I had this challenge when working with PostgreSQL database server in Kubernetes using Minikube.
Below is my statefulset yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql-db
spec:
serviceName: postgresql-db-service
replicas: 2
selector:
matchLabels:
app: postgresql-db
template:
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: postgresql-db
image: postgres:latest
ports:
- containerPort: 5432
name: postgresql-db
volumeMounts:
- name: postgresql-db-data
mountPath: /data
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgresql-db-secret
key: DATABASE_PASSWORD
- name: PGDATA
valueFrom:
configMapKeyRef:
name: postgresql-db-configmap
key: PGDATA
volumeClaimTemplates:
- metadata:
name: postgresql-db-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database on the IP address and port with your defined database username and password.
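For example, with the address shown above (and assuming the default postgres superuser, since the StatefulSet sets no POSTGRES_USER), a client on the host could connect with:
psql -h 127.0.0.1 -p 61427 -U postgres -d postgres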