Unable to log in to Postgres inside a Kubernetes cluster from the outside

I want to simply log in to a Postgres DB from outside my K8s cluster. I've created the following config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PORT
              value: '5432'
            - name: POSTGRES_DB
              value: postgres
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432
Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5432: "default/postgres-srv:5432"
I've checked kubectl get services and attempted to use the endpoint and the cluster IP. Neither of these worked.
psql "postgresql://postgres:password@[ip]:5432/postgres"
The pod is running and the logs say everything is ready. Is there anything I'm missing here? I'm running the cluster on DigitalOcean.
edit:
I want to be able to access the DB from my host (sub.domain.com). I've bounced the deployments and still can't get in. The only config I've targeted is what is shown above. I do have an A record for my domain and can access my other exposed pods via my ingress-nginx service.

You can expose TCP and UDP services with ingress-nginx configuration.
For example using GKE with ingress-nginx, nfs-server-provisioner and the bitnami/postgresql helm charts:
kubectl create secret generic -n default postgresql \
--from-literal=postgresql-password=$(openssl rand -base64 32) \
--from-literal=postgresql-replication-password=$(openssl rand -base64 32)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -n default postgres bitnami/postgresql \
--set global.storageClass=nfs-client \
--set existingSecret=postgresql
Patch the ingress-nginx tcp-services ConfigMap:
kubectl patch cm -n ingress-nginx tcp-services -p '{"data": {"5432": "default/postgres-postgresql:5432"}}'
Update the controller's Service for the proxied port (i.e. kubectl edit svc -n ingress-nginx ingress-nginx):
- name: postgres
  port: 5432
  protocol: TCP
  targetPort: 5432
Note: you may have to update the existing ingress-nginx controller deployment's args (i.e. kubectl edit deployments.apps -n ingress-nginx nginx-ingress-controller) to include --tcp-services-configmap=ingress-nginx/tcp-services, and bounce the ingress-nginx controller if you edit the deployment spec (i.e. kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=0 && kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=3).
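If you prefer not to open an editor, here is a minimal sketch of adding that flag with a JSON patch, assuming the controller Deployment is named nginx-ingress-controller as above (the container index may differ in your install); patching the pod template also triggers a new rollout, so a separate bounce is usually not needed:
# append the --tcp-services-configmap flag to the controller container's args
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--tcp-services-configmap=ingress-nginx/tcp-services"}]'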
Test the connection:
export PGPASSWORD=$(kubectl get secrets -n default postgresql -o jsonpath={.data.postgresql-password} |base64 -d)
docker run --rm -it \
-e PGPASSWORD=${PGPASSWORD} \
--entrypoint psql \
--network host \
postgres:13-alpine -U postgres -d postgres -h example.com
Note: I manually created an A record in Google Cloud DNS to resolve the hostname to the clusters external IP.
Update: in addition to creating the ingress-nginx config, installing the bitnami/postgresql chart, etc., it was necessary to disable "Proxy Protocol" on the Load Balancer to get the connections working for a deployment on DigitalOcean (otherwise Postgres logs LOG: invalid length of startup packet).
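For reference, a sketch of how that toggle might be applied from kubectl rather than the DigitalOcean console, assuming the load balancer is managed by the DigitalOcean cloud controller through the ingress-nginx controller's Service (verify the annotation name against DigitalOcean's documentation):
kubectl annotate svc -n ingress-nginx ingress-nginx \
  service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol="false" --overwrite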

Related

I'm unable to access k8's Service through Postman

I created a simple Scala application that uses akka-http to add and get user data; akka-http is running on "localhost" on port "8080".
Http().newServerAt("localhost", 8080).bind(route)
After that I create a "deployment" and "service" given below:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-label
  template:
    metadata:
      name: docker-pod
      labels:
        app: docker-label
    spec:
      containers:
        - name: docker-container
          image: akkahttp-k8s:0.1.0-SNAPSHOT
          ports:
            - containerPort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: docker-service
spec:
  type: NodePort
  selector:
    app: docker-label
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 32100
My service and deployment are in "running" state and showing no error.
pods:
NAME                                 READY   STATUS    RESTARTS   AGE
docker-deployment-79756959c6-rdknq   1/1     Running   0          6s
docker-deployment-79756959c6-thjzt   1/1     Running   0          6s
service:
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
docker-service   NodePort   10.103.170.1   <none>        8080:32100/TCP   4m35s
But when I try to access the service through Postman, it throws an error: "Error: connect ECONNREFUSED 192.168.49.2:32100".
When I tried "port-forward docker-deployment-79756959c6-rdknq 8080:8080", I could interact with my pod successfully through Postman using "http://localhost:8080". Why am I unable to interact with my pod through the service? Where am I making a mistake?
Kindly help me to deal with this issue.
I'm using a generic container that listens on 8080:
https://github.com/kubernetes-up-and-running/kuard
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-label
  template:
    metadata:
      name: docker-pod
      labels:
        app: docker-label
    spec:
      containers:
        - name: docker-container
          image: gcr.io/kuar-demo/kuard-amd64:blue
          ports:
            - containerPort: 8080
NOTE containerPort: 8080
and service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: docker-service
spec:
  type: NodePort
  selector:
    app: docker-label
  ports:
    - port: 8080
      targetPort: 8080
NOTE targetPort: 8080 (matches the Pod's port which is the container's port)
Then:
NAMESPACE="75488823"
kubectl create namespace ${NAMESPACE}
kubectl apply --filename=./deployment.yaml \
--namespace=${NAMESPACE}
kubectl apply --filename=./service.yaml \
--namespace=${NAMESPACE}
Then access the service using kubectl port-forward
kubectl port-forward deployment/docker-deployment \
--namespace=${NAMESPACE} \
8080:8080
And:
curl \
--silent \
--get \
--output /dev/null \
--write-out '%{response_code}' \
http://localhost:8080
Should yield 200 (success)
Then access the service using any (!) node's IP address. You may need to determine the {HOST} value for yourself. You want any node's public IP address.
# Get any node's IP
HOST=$(\
kubectl get nodes \
--output=jsonpath="{.items[0].status.addresses[0].address}")
# Get the service's `nodePort`
PORT=$(\
kubectl get service/docker-service \
--namespace=${NAMESPACE} \
--output=jsonpath="{.spec.ports[0].nodePort}")
curl \
--silent \
--get \
--output /dev/null \
--write-out '%{response_code}' \
http://${HOST}:${PORT}
Should also yield 200 (success).
Tidy:
kubectl delete namespace/${NAMESPACE}

Kubernetes deployment works but can't connect to postgresql from pgAdmin4

I have a deployment and service yaml file. I use minikube to run Kubernetes in my WSL.
postgres-deployment.yaml:
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  replicas: 1
  serviceName: postgresql-db-service
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
        - name: postgresql-db
          image: postgres:latest
          volumeMounts:
            - name: postgresql-db-disk
              mountPath: /data
          env:
            - name: POSTGRES_PASSWORD
              value: testpassword
            - name: PGDATA
              value: /data/pgdata
  # Volume Claim
  volumeClaimTemplates:
    - metadata:
        name: postgresql-db-disk
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 25Gi
postgres-service.yaml:
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgresql-db
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 5432
I run them with:
# kubectl apply -f postgres-deployment.yaml
# kubectl apply -f postgres-service.yaml
The deployment works, I get the Cluster IP of the service with kubectl get all.
I run the pgAdmin with the command:
docker run -p 80:80 \
  -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
  -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
  -d dpage/pgadmin4
I try to connect to Postgres but I am unable to connect.
EDIT:
I changed the user for the connection to postgres; it still doesn't work.
I tried changing the LoadBalancer to ClusterIP and NodePort; that doesn't work either.
I tried changing my OS to Ubuntu, in case of some weird WSL issue; it doesn't work either.
To access Postgres locally, you have to use a NodePort.
We need to find the node's IP and the node port.
To find the node's internal IP, do:
$ kubectl get nodes -o wide
For the port, we can do kubectl describe svc postgres-db-lb or kubectl get svc.
In pgAdmin the host should be <node-ip> and the port <node-port>.
We can also do minikube service postgres-db-lb to find the URL.
EDIT
Or more simply minikube service <NAME_OF_SERVICE>.
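A minimal sketch of pulling those two values with jsonpath (assuming the service name postgres-db-lb from above; the first listed node address is used):
# any node's internal IP
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# the service's assigned nodePort
NODE_PORT=$(kubectl get svc postgres-db-lb -o jsonpath='{.spec.ports[0].nodePort}')
echo "${NODE_IP} ${NODE_PORT}"  # enter these as host and port in pgAdmin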

Not able to access service by service name on kubernetes

I am using the manifest below. I have a simple server which prints the pod name on /hello. I was going through the Kubernetes documentation and it mentioned that we can access a service via its service name as well. But that is not working for me. As this is a service of type NodePort, I am able to access it using the IP of one of the nodes. Is there something wrong with my manifest?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myhttpserver
  labels:
    day: zero
    name: httppod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httppod
      day: zero
  template:
    metadata:
      labels:
        day: zero
        name: httppod
    spec:
      containers:
        - name: myappcont
          image: agoyalib/trial:tryit
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: servit
  labels:
    day: zeroserv
spec:
  type: NodePort
  selector:
    day: zero
    name: httppod
  ports:
    - name: mine
      port: 8080
      targetPort: 8090
Edit: I created my own mini k8s cluster and I am doing these operations on the master node.
From what I understand when you say
As this is a service of type NodePort, I am able to access it using IP of one of the nodes
You're accessing your service from outside your cluster. That's why you can't access it using its name.
To access a service using its name, you need to be inside the cluster.
Below is an example where you use a pod based on centos in order to connect to your service using its name :
# Here we're just creating a pod based on centos
$ kubectl run centos --image=centos:7 --generator=run-pod/v1 --command sleep infinity
# Now let's connect to that pod
$ kubectl exec centos -ti bash
[root@centos /]# curl servit:8080/hello
You need to be inside the cluster, meaning you can access it from another pod.
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup servit
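The service also resolves by its fully qualified in-cluster DNS name, <service>.<namespace>.svc.cluster.local. A quick sketch, assuming servit lives in the default namespace:
# resolve the service's fully qualified DNS name from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup servit.default.svc.cluster.local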

Make rabbitmq cluster publicly accesible

I am using this helm chart to configure rabbitmq on k8s cluster:
https://github.com/helm/charts/tree/master/stable/rabbitmq
How can I make the cluster accessible through a public endpoint? Currently, I have a cluster with the configuration below. I am able to access the management portal via the given hostname (a public endpoint, which is fine). But when I checked inside the management portal, the cluster is only accessible via an internal IP and/or hostname, which is rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local and rabbit@<private_ip>. I want to make the cluster public so all other services which are outside of the VNET can connect to it.
helm install stable/rabbitmq --name rabbitmq \
--set rabbitmq.username=xxx \
--set rabbitmq.password=xxx \
--set rabbitmq.erlangCookie=secretcookie \
--set rbacEnabled=true \
--set ingress.enabled=true \
--set ingress.hostName=rabbitmq.xxx.com \
--set ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
--set resources.limits.memory="256Mi" \
--set resources.limits.cpu="100m"
I haven't tried this with Helm; I built and deployed to Kubernetes directly from .yaml config files, following the templates from the Helm chart.
To publish your RabbitMQ service outside the cluster:
1. You need an external IP.
If you are using Google Cloud, run these commands:
gcloud compute addresses create rabbitmq-service-ip --region asia-southeast1
gcloud compute addresses describe rabbitmq-service-ip --region asia-southeast1
>address: 35.240.xxx.xxx
Change rabbitmq-service-ip to the name you want, and change the region to your own.
2. Configure the Helm parameters (see the sketch after this list):
service.type=LoadBalancer
service.loadBalancerSourceRanges=35.240.xxx.xxx/32 # IP address you got from gcloud
service.port=5672
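A sketch of passing those values when installing the chart (Helm 2 syntax, matching the question; the exact value names depend on the chart version, so check its values.yaml):
helm install stable/rabbitmq --name rabbitmq \
  --set rabbitmq.username=xxx \
  --set rabbitmq.password=xxx \
  --set service.type=LoadBalancer \
  --set service.loadBalancerSourceRanges="{35.240.xxx.xxx/32}" \
  --set service.port=5672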
3. Deploy, then try to telnet to your RabbitMQ service:
telnet 35.240.xxx.xxx 5672
Trying 35.240.xxx.xxx...
Connected to 149.185.xxx.xxx.bc.googleusercontent.com.
Escape character is '^]'.
Gotcha! It worked.
FYI:
Here is a base template if you want to write the .yaml yourself and deploy without Helm.
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  type: LoadBalancer
  loadBalancerIP: 35.xxx.xxx.xx
  ports:
    # the port that this service should serve on
    - port: 5672
      name: rabbitmq
      targetPort: 5672
      nodePort: 32672
  selector:
    name: rabbitmq
deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: rabbitmq
      annotations:
        prometheus.io/scrape: "false"
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.6.8-management
          ports:
            - containerPort: 5672
              name: rabbitmq
          securityContext:
            capabilities:
              drop:
                - all
              add:
                - CHOWN
                - SETGID
                - SETUID
                - DAC_OVERRIDE
            readOnlyRootFilesystem: true
        - name: rabbitmq-exporter
          image: kbudde/rabbitmq-exporter
          ports:
            - containerPort: 9090
              name: exporter
      nodeSelector:
        beta.kubernetes.io/os: linux
Hope this helps!
From the Helm values you passed, I see that you have configured your RabbitMQ service with an Nginx ingress.
You should create a DNS record for your ingress.hostName (rabbitmq.xxx.com) pointing at the IP (on GCP) or CNAME (on AWS) of your nginx-ingress load balancer. That DNS hostname (rabbitmq.xxx.com) is your public endpoint for accessing your RabbitMQ service.
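A quick way to find the address to point that record at is to look at the external address of the nginx-ingress controller's Service (names and namespaces vary by install method, so this is just a sketch):
# the EXTERNAL-IP (GCP) or hostname (AWS) shown for the controller's LoadBalancer
# Service is the target for the rabbitmq.xxx.com DNS record
kubectl get svc --all-namespaces | grep -i ingress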
Ensure that your nginx-ingress controller is running in your cluster in order for the ingresses to work. If you are unfamiliar with ingresses:
- Official Ingress Docs
- Nginx Ingress installation guide
- Nginx Ingress helm chart
Hope this helps!

Trouble connecting to postgres from outside Kubernetes cluster

I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a StatefulSet, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      role: service
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
        role: service
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
  type: NodePort
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-07T05:29:22Z
  name: postgres-service
  namespace: default
  resourceVersion: "194827"
  selfLink: /api/v1/namespaces/default/services/postgres-service
  uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
  clusterIP: 10.109.120.251
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32254
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres-container
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
I just deployed Postgres and exposed its service through a NodePort; the following is my pod and service.
[root@master postgres]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-7ff9df5765-2mpsl   1/1     Running   0          1m
[root@master postgres]# kubectl get svc postgres
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.100.199.212   <none>        5432:31768/TCP   20s
And this is how I connected to Postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or cluster IP) and the port is the nodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.
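A couple of ways to find a usable node IP (which address is actually reachable depends on your environment):
# on minikube
minikube ip
# on any cluster: use the INTERNAL-IP (or EXTERNAL-IP, if assigned and reachable)
kubectl get nodes -o wide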
I had this challenge when working with PostgreSQL database server in Kubernetes using Minikube.
Below is my statefulset yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  serviceName: postgresql-db-service
  replicas: 2
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
        - name: postgresql-db
          image: postgres:latest
          ports:
            - containerPort: 5432
              name: postgresql-db
          volumeMounts:
            - name: postgresql-db-data
              mountPath: /data
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: DATABASE_PASSWORD
            - name: PGDATA
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: PGDATA
  volumeClaimTemplates:
    - metadata:
        name: postgresql-db-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database on the IP address and port with your defined database username and password.
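For example, with psql (the address and port are the hypothetical values printed above; adjust the username and database to your setup, and use the password from your POSTGRES_PASSWORD secret):
psql --host=127.0.0.1 --port=61427 --username=postgres --dbname=postgres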