How to access pgsql from pgadmin on kubernetes

I am playing with Kubernetes. I have created a StatefulSet running PostgreSQL. I have created a service with ClusterIP: None. I have launched a pod with pgAdmin 4. I can get to pgAdmin from my browser. When I try to reach my pgsql server from pgAdmin, it tells me that either the IP or the port is not accessible. The error message displays the IP address, so I know that it is resolving the right pod name.
This is MicroK8s on Ubuntu.
Here are my configs.
--- pomodoro-pgsql StatefulSet ---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pomodoro-pgsql
  namespace: pomodoro-services
spec:
  selector:
    matchLabels:
      app: pomodoro-pgsql
      env: development
  serviceName: pomodoro-pgsql
  replicas: 1
  template:
    metadata:
      labels:
        app: pomodoro-pgsql
        env: development
    spec:
      containers:
      - name: pomodoro-pgsql
        image: localhost:32000/pomodoro-pgsql
        env:
        - name: POSTGRES_PASSWORD
          value: blahblah
        - name: POSTGRES_USER
          value: blahblah
        - name: POSTGRES_DB
          value: blahblah
        ports:
        - name: pgsql
          containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Gi
      accessModes:
      - ReadWriteOnce
--- pomodoro-pgsql Headless Service ---
apiVersion: v1
kind: Service
metadata:
  name: pomodoro-pgsql
  namespace: pomodoro-services
spec:
  clusterIP: None
  selector:
    app: pomodoro-pgsql
    env: development
  ports:
  - name: pgsql
    port: 5432
--- pgadmin4 Pod ---
apiVersion: v1
kind: Pod
metadata:
  name: pomodoro-pgadmin
  namespace: pomodoro-services
  labels:
    env: development
spec:
  containers:
  - name: pomodoro-pgadmin
    image: localhost:32000/pomodoro-pgadmin
    env:
    - name: PGADMIN_DEFAULT_EMAIL
      value: blahblah
    - name: PGADMIN_DEFAULT_PASSWORD
      value: blahblah
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
--- pgadmin4 Service ---
apiVersion: v1
kind: Service
metadata:
  name: pomodoro-pgadmin
  namespace: pomodoro-services
spec:
  type: NodePort
  ports:
  - port: 5002
    targetPort: 80
  selector:
    app: pomodoro-pgadmin
    env: development
I am able to see the IP addresses through dig:
microk8s.kubectl run `
--namespace pomodoro-services `
-it srvlookup `
--image=tutum/dnsutils --rm `
--restart=Never `
-- dig SRV pomodoro-pgsql.pomodoro-services.svc.cluster.local
Here is the error from pgadmin. Note that the IP is correct for the pod.
Unable to connect to server:
could not connect to server: Operation timed out
Is the server running on host "pomodoro-pgsql-0.pomodoro-pgsql.pomodoro-services.svc.cluster.local" (10.10.10.219) and accepting
TCP/IP connections on port 5432?
Here are the logs from the pgsql pod
2019-01-02 04:23:05.576 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2019-01-02 04:23:05.576 UTC [1] LOG: listening on IPv6 address "::", port 5432
2019-01-02 04:23:05.905 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-01-02 04:23:06.430 UTC [21] LOG: database system was shut down at 2019-01-01 20:01:36 UTC
2019-01-02 04:23:06.630 UTC [1] LOG: database system is ready to accept connections
As requested, here are the results from kubectl get services (IPs have been changed.)
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
pomodoro-pgadmin     NodePort    10.1.18.45   <none>        5002:30437/TCP   12h
pomodoro-pgsql       ClusterIP   None         <none>        5432/TCP         46h
pomodoro-ping-rapi   ClusterIP   10.1.18.36   <none>        8888/TCP         47h
[update 1/2/2019] I connected to another container in the cluster and tried to telnet, and then to psql into postgres. I could not connect with either program. I could run psql on the container running the postgresql server. My current theory is that the server has exposed 5432 locally, but it is filtered from other pods.
I have confirmed that /var/lib/postgresql/data/postgresql.conf contains the following:
listen_addresses = '*'
Using microk8s.kubectl port-forward pomodoro-pgsql-0 5432:5432 I was able to connect to 5432 through telnet.
_> telnet localhost 5432
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
[update 1/2/2019]
Results of kubectl exec pomodoro-pgsql-0 -- nslookup pomodoro-pgsql
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'pomodoro-pgsql': Try again
command terminated with exit code 1
Results of kubectl exec pomodoro-pgsql-0 -- nslookup pomodoro-pgsql-0
Name: pomodoro-pgsql-0
Address 1: 10.1.1.19 pomodoro-pgsql-0.pomodoro-pgsql.pomodoro-services.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Note: IPs change when the computer is restarted.

The problem was a firewall rule on the computer running MicroK8s. I found out that this is documented on the MicroK8s site, where the docs tell us to do this:
sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
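For what it's worth, a quick way to confirm the policy took effect and persists (a sketch assuming a plain iptables setup; skip it if you manage the firewall with ufw or firewalld):
sudo iptables -L FORWARD -n | head -n 1   # should print "Chain FORWARD (policy ACCEPT)"
sudo netfilter-persistent save            # provided by the iptables-persistent package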

Related

Cannot see PostgreSQL in Kubernetes Through a Browser

I am testing a PostgreSQL configuration in kubernetes.
Windows 11
HyperV
Minikube
Everything works (or seems to work) fine
I can connect to the database via
kubectl exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=pg_test
Password:
psql (13.6)
Type "help" for help.
pg_test=#
I can also view the database through DBeaver.
But when I try to connect from any browser,
localhost:5432
I get errors such as:
Firefox cannot connect,
ERR_CONNECTION_REFUSED
I have no proxy
when I try
kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
... this line repeats indefinitely for connection attempts
Handling connection for 5432
Handling connection for 5432
...
Here is my YAML config file
...
apiVersion: v1
data:
  db: pg_test
  user: admin
kind: ConfigMap
metadata:
  name: postgres-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      env: prod
      domain: infrastructure
  template:
    metadata:
      labels:
        env: prod
        domain: infrastructure
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 70
        fsGroup: 70
      containers:
      - name: kubia-postgres
        image: postgres:13-alpine
        env:
        - name: POSTGRES_PASSWORD
          value: admin
          # valueFrom:
          #   secretKeyRef:
          #     name: postgres-secret
          #     key: password
        - name: POSTGRES_USER
          value: admin
          # valueFrom:
          #   configMapKeyRef:
          #     name: postgres-config
          #     key: user
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: db
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-test-volume
          mountPath: /var/lib/postgresql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      volumes:
      - name: postgres-test-volume
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    env: prod
    domain: infrastructure
spec:
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: pgsql
  clusterIP: None
  selector:
    env: prod
    domain: infrastructure
What am I doing wrong?
If you want to access your Postgres instance using a web browser, you need to deploy and configure something like pgAdmin.
You haven't opened the service up to the outside; you were only tunneling the port to your localhost. To do so you will need one of these options:
Port forwarding
NodePort: maps a port to your host's port.
ClusterIP: gives your service an internal IP to be referred to in-cluster.
LoadBalancer: assigns an IP, or a cloud provider's load balancer, to the service, effectively making it available to external traffic.
Since you are using Minikube, you should try a LoadBalancer or a ClusterIP.
By the way, you are creating a service without a type and you are not giving it an IP.
The important parts for a service to work in development are the selector labels, the port, and the type.
Exposing an IP || Docs
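For example, a NodePort variant of the service (a sketch reusing the labels from the question, not a verified drop-in fix):
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  selector:
    env: prod
    domain: infrastructure
  ports:
  - port: 5432
    targetPort: 5432
    # if nodePort is omitted, Kubernetes picks one in the 30000-32767 range
With Minikube, minikube service postgres-service --url then prints a host:port you can point a client such as pgAdmin or psql at (a browser still won't speak the Postgres protocol).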

Kubernetes deployment works but can't connect to postgresql from pgAdmin4

I have a deployment and service yaml file. I use minikube to run Kubernetes in my WSL.
postgres-deployment.yaml:
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  replicas: 1
  serviceName: postgresql-db-service
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
      - name: postgresql-db
        image: postgres:latest
        volumeMounts:
        - name: postgresql-db-disk
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          value: testpassword
        - name: PGDATA
          value: /data/pgdata
  # Volume Claim
  volumeClaimTemplates:
  - metadata:
      name: postgresql-db-disk
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 25Gi
postgres-service.yaml:
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgresql-db
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
I run them with:
# kubectl apply -f postgres-deployment.yaml
# kubectl apply -f postgres-service.yaml
The deployment works, I get the Cluster IP of the service with kubectl get all.
I run pgAdmin with the command:
docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
    -d dpage/pgadmin4
I try to connect to Postgres but I am unable to connect.
EDIT:
I changed the user for the connection to Postgres; it still doesn't work.
I tried to change the LoadBalancer to ClusterIP and NodePort; it doesn't work either.
I tried to change my OS to Ubuntu, in case of some weird WSL issues, it doesn't work either.
To access Postgres locally, I have to use NodePort.
We need to find the node IP and the node port.
To find the node's internal IP, do:
$ kubectl get nodes -o wide
For the port we can do kubectl describe svc postgres-db-lb or kubectl get svc.
In pgAdmin the hostname should be <node-ip>:<node-port>.
We can also do minikube service postgres-db-lb to find the URL.
EDIT
Or more simply minikube service <NAME_OF_SERVICE>.
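As a rough sketch of the whole lookup (the command names are real; the node IP, node port, and the default postgres user are assumptions to fill in from your own output):
kubectl get nodes -o wide        # INTERNAL-IP column gives the node IP
kubectl get svc postgres-db-lb   # PORT(S) shows 5432:<node-port>
psql -h <node-ip> -p <node-port> -U postgres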

Kubernetes Pod can't connect to local postgres server with Hasura image

I'm following this tutorial to connect a Hasura Kubernetes pod to my local Postgres server.
When I create the deployment, the pod's container fails to connect to postgres (CrashLoopBackOff and keeps retrying), but doesn't give any reason why. Here are the logs:
{"type":"pg-client","timestamp":"2020-05-03T06:22:21.648+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v1.2.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://USER:@localhost:5432/my_db
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
I'm using postgres://USER:@localhost:5432/MY_DB as the postgres URL - is "localhost" the correct address here?
I verified that the above postgres URL works when I try (no password):
> psql postgres://USER:@localhost:5432/my_db
psql (12.2)
Type "help" for help.
> my_db=#
How else can I troubleshoot it? The logs aren't very helpful...
If I got you correctly, the issue is that the Pod (from "inside" Minikube) cannot access Postgres installed on the host machine (the one that runs Minikube itself) via localhost.
If that is the case, please check this thread.
... Minikube VM can access your host machine's localhost on 192.168.99.1 (127.0.0.1 from Minikube would still be Minikube's own localhost).
Technically, for the Pod, localhost is the Pod itself. The host machine and Minikube are connected via a bridge. You can find out the exact IP addresses and routes with ifconfig and route -n on your Minikube host.
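A sketch of the corresponding change to the Hasura env, assuming the 192.168.99.1 bridge address mentioned above (verify it for your driver, and make sure the host's Postgres listens on that interface and allows the connection in pg_hba.conf):
env:
- name: HASURA_GRAPHQL_DATABASE_URL
  # the host as seen from inside the Minikube VM, not 127.0.0.1
  value: postgres://USER:@192.168.99.1:5432/my_db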

Connection Refused on Kubernetes Pod (Plex)

Kubernetes setup on a bare-metal, three-node local cluster.
Plex deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: plex
  labels:
    app: plex
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: plex
  template:
    metadata:
      labels:
        name: plex
    spec:
      containers:
      - name: plex
        image: plexinc/pms-docker:plexpass
        imagePullPolicy: Always
        ports:
        - containerPort: 32400
          hostPort: 32400
        volumeMounts:
        - name: nfs-plex-meta
          mountPath: "/data"
        - name: nfs-plex
          mountPath: "/config"
      volumes:
      - name: nfs-plex-meta
        persistentVolumeClaim:
          claimName: nfs-plex-meta
      - name: nfs-plex
        persistentVolumeClaim:
          claimName: nfs-plex
Deployment is happy. Pod is happy.
I've tried exposing the Pod via NodePort, ClusterIP, HostPort, and LoadBalancer (MetalLB), and in every permutation I get a connection refused error in the browser or via curl.
NodePort Example:
$ kubectl expose deployment plex --type=NodePort --name=plex
service/plex exposed
$ kubectl describe svc plex
Name: plex
Namespace: default
Labels: app=plex
Annotations: <none>
Selector: name=plex
Type: NodePort
IP: 10.111.13.7
Port: <unset> 32400/TCP
TargetPort: 32400/TCP
NodePort: <unset> 30275/TCP
Endpoints: 10.38.0.0:32400
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ curl 10.111.13.7:32400
curl: (7) Failed to connect to 10.111.13.7 port 32400: Connection refused
$ curl 10.38.0.0:32400
curl: (7) Failed to connect to 10.38.0.0 port 32400: Connection refused
$ curl 192.168.1.11:32400
curl: (7) Failed to connect to 192.168.1.110 port 32400: Connection refused
$ curl 192.168.1.11:30275
curl: (7) Failed to connect to 192.168.1.110 port 30275: Connection refused
What am I missing here?
So of those, only the last might be right. The IP in that output is a cluster IP, which usually (though not always, it’s up to your CNI plugin and config) is only accessible inside the cluster from other pods. NodePort means that the service is also reachable on that port on any node IP. This might be getting blocked by a firewall on your node though, so check that. Also make sure that’s a valid node IP.
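A quick way to check both suggestions (a sketch; the node port 30275 comes from the describe output above, and the firewall commands assume iptables/ufw are what the node uses):
kubectl get nodes -o wide                  # confirm the real node INTERNAL-IPs
curl http://<node-internal-ip>:30275       # hit the NodePort, not the container port
sudo iptables -L FORWARD -n | head -n 1    # a DROP policy here blocks pod traffic
sudo ufw status                            # or whatever firewall the node runs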

Trouble connecting to postgres from outside Kubernetes cluster

I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a StatefulSet, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
    protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      role: service
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
        role: service
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
          name: postgres
          protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
  type: NodePort
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
    spec:
      containers:
      - name: postgres-container
        image: postgres:9.6
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-07T05:29:22Z
  name: postgres-service
  namespace: default
  resourceVersion: "194827"
  selfLink: /api/v1/namespaces/default/services/postgres-service
  uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
  clusterIP: 10.109.120.251
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32254
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres-container
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
I just deployed Postgres and exposed its service through a NodePort; the following are my pod and service.
[root@master postgres]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-7ff9df5765-2mpsl   1/1     Running   0          1m
[root@master postgres]# kubectl get svc postgres
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.100.199.212   <none>        5432:31768/TCP   20s
And this is how I connected to Postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or cluster IP), and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.
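In the Minikube setup from the question, the equivalent lookup would be roughly this sketch (32254 is the nodePort from the service output above; user and database are the ones defined in the deployment):
minikube ip                                   # node IP of the single-node cluster
psql --host=$(minikube ip) --port=32254 --username=postgres --dbname=postgres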
I had this challenge when working with a PostgreSQL database server in Kubernetes using Minikube.
Below is my statefulset yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  serviceName: postgresql-db-service
  replicas: 2
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
      - name: postgresql-db
        image: postgres:latest
        ports:
        - containerPort: 5432
          name: postgresql-db
        volumeMounts:
        - name: postgresql-db-data
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgresql-db-secret
              key: DATABASE_PASSWORD
        - name: PGDATA
          valueFrom:
            configMapKeyRef:
              name: postgresql-db-configmap
              key: PGDATA
  volumeClaimTemplates:
  - metadata:
      name: postgresql-db-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database on the IP address and port with your defined database username and password.
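For example, with the URL above the connection from the host would look something like this sketch (the username and database are placeholders, since they come from the secret and ConfigMap in the StatefulSet):
# keep the `minikube service --url` terminal open; the tunnel closes when it exits
psql -h 127.0.0.1 -p 61427 -U <your-username> -d <your-database>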