Kubernetes service not working as expected - postgresql

I am failing to deploy postgres (single node, official image) on Kubernetes and to let services access Postgres via a ClusterIP Service.
The config is rather simple - Namespace, Deployment, Service:
---
apiVersion: v1
kind: Namespace
metadata:
  name: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: database
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.1
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: pg
  namespace: database
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    name: postgres
    port: 5432
    targetPort: 5432
To test, I executed a "/bin/bash" into the pod and ran a simple psql command to check the connection. All works well locally:
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -U admin postgresdb -c "\t"
Tuples only is on.
But as soon as I try to access postgres via service, the command fails:
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -h pg -U admin postgresdb -c "\t"
psql: could not connect to server: Connection timed out
Is the server running on host "pg" (10.245.102.15) and accepting
TCP/IP connections on port 5432?
This is tested on a DigitalOcean single-node cluster (1.12.3).
Postgres listens on * on the correct port, and pg_hba.conf looks like this by default:
...
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
host    all             all             all                     md5
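For reference, the listen address and the active HBA file can be double-checked from inside the pod with something like this (same psql flags as the test above; <pod-name> is a placeholder):
kubectl --kubeconfig $k8sconf -n database exec -it <pod-name> -- psql -U admin postgresdb -c "SHOW listen_addresses;"
kubectl --kubeconfig $k8sconf -n database exec -it <pod-name> -- psql -U admin postgresdb -c "SHOW hba_file;"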
To reproduce, see this gist.
Execute via (please use a fresh cluster and read through):
export k8sconf=/path/to/your/k8s/config/file
kubectl --kubeconfig $k8sconf apply -f https://gist.githubusercontent.com/sontags/c364751e7f0d8ba1a02a9805efc68db6/raw/01b1808348541d743d6a861402cfba224bee8971/database.yaml
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- /bin/bash /reproducer/runtest.sh
Any hint as to why the service does not allow connections, or any other tests I could perform?

Hard to tell without access to your cluster. This works fine on my AWS cluster. Some things to look at (a few check commands are sketched after this list):
Is kube-proxy running on all nodes?
Is your network overlay/CNI running on all nodes?
Does this happen with the pg pod only? What about other pods?
DNS seems to be fine, since pg is being resolved to 10.245.102.15.
Are your nodes allowing IP forwarding on the Linux side?
Are your DigitalOcean firewall rules allowing traffic from any source on port 5432? Note that the PodCidr and the K8s Service IP range are different from the hostCidr (of your droplets).
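A few commands that may help check these points (kube-system is the usual namespace for kube-proxy, DNS and the CNI pods; adjust if your cluster differs):
# Does the pg Service actually have an endpoint pointing at the postgres pod?
kubectl --kubeconfig $k8sconf -n database get endpoints pg
# Are kube-proxy, CoreDNS/kube-dns and the CNI pods healthy on every node?
kubectl --kubeconfig $k8sconf -n kube-system get pods -o wide
# On the node itself: is IP forwarding enabled?
sysctl net.ipv4.ip_forward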

Related

pgadmin remotely access postgresql service in kubernetes

I launch a pod from Rancher and my pgsql daemon is running fine.
Then an Ingress is set up with a target (the pod name) and port 5432.
Then kubectl is used to start port forwarding.
After these steps are completed, I can access the db from within the kubernetes cluster using
kubectl exec -it pod/<pod_name> -n <ns_name> -- psql -U postgres
This ran fine.
Then I tried to connect to the db using pgadmin on my laptop. It always failed with
Unable to connect to server:
Could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "pgsql.kube.xx.yy.com" (###.##.###.##) and accepting
TCP/IP connections on port 5432?
I can connect to the db from another pod in the k8s cluster.
This works for me from another pod:
./psql --host <pod.ip> -U postgres -d metastore -p 5432
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: u-abcdefg
    field.cattle.io/ingressState: '{"c######M=":"p#####q:sslcerts","c#####g==":"statefulset:pgsql-###:pgsql-####"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["##.##.##.##"],"port":443,"protocol":"HTTPS","serviceName":"pgsql-###:ingress-5###","ingressName":"pgsql-###:posgres-ingres","hostname":"pgsql-##.kube.##.###.###s","allNodes":true}]'
  creationTimestamp: "2021-11-10T19:01:30Z"
  generation: 5
  labels:
    cattle.io/creator: norman
  name: posgres-ingres
  namespace: pgsql-###
  resourceVersion: "343048085"
  selfLink: /apis/extensions/v1beta1/namespaces/pgsql-###/ingresses/posgres-ingres
  uid: 10###-###-########-######
spec:
  rules:
  - host: pgsql-##.kube.##.##.##
    http:
      paths:
      - backend:
          serviceName: ingress-########
          servicePort: 5432
  tls:
  - hosts:
    - pgsql-###.kube.##.##.###
    secretName: sslcerts
status:
  loadBalancer:
    ingress:
    - ip: ##.##.##.##
    - ip: ##.##.##.##
    - ip: ###.###.###.###
    - ip: ###.###.###.###
Your suggestions would be greatly appreciated.
As stated in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
PostgreSQL is a SQL database and doesn't use HTTP as its protocol. From the Postgres documentation:
PostgreSQL uses a message-based protocol for communication between frontends and backends (clients and servers). The protocol is supported over TCP/IP and also over Unix-domain sockets.
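So to reach Postgres from outside the cluster, the usual approach is a Service of type LoadBalancer (or NodePort) in front of the database instead of an Ingress rule. A minimal sketch; the Service name and the app: pgsql selector are placeholders, not values from the question:
apiVersion: v1
kind: Service
metadata:
  name: pgsql-external
  namespace: pgsql-###
spec:
  type: LoadBalancer    # or NodePort if no cloud load balancer is available
  selector:
    app: pgsql          # assumed label on the postgres pods
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
pgAdmin on the laptop can then connect to the load balancer address (or <node-ip>:<node-port>) on 5432 directly, without going through the HTTP(S) Ingress.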

How to access a database that is only accessible from the Kubernetes cluster locally?

I have a situation where I have a Kubernetes cluster that has access to a Postgres instance (which does not run in the Kubernetes cluster). The Postgres instance is not accessible from anywhere else.
What I would like to do is connect with my database tools locally. I have found kubectl port-forward, but I think this would only be a solution if the Postgres instance ran as a pod. What I basically need is a pod that forwards everything sent to port 8432 on to the Postgres instance; then I could use the port forward.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
What is the right way to do this?
You can create a Service (with manually defined Endpoints) for your PostgreSQL instance:
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: ipAddressOfYourPGInstance
  ports:
  - port: 5432
And then use:
kubectl port-forward service/postgresql 5432:5432
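With the port-forward running, the external instance should then be reachable from the workstation like a local Postgres, for example (user and database names are placeholders):
psql -h 127.0.0.1 -p 5432 -U youruser -d yourdb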
Alternatively, you can run a Postgres client such as pgAdmin inside the cluster, connect it to the Postgres instance, and expose that pod via an Ingress so you can access its UI over a URL.
For the Postgres client you can use: https://hub.docker.com/r/dpage/pgadmin4/
You can set this up as the pg client and use it.
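A rough sketch of such a pgAdmin deployment (the image and its PGADMIN_DEFAULT_* variables are documented on the Docker Hub page above; all names and credentials here are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        env:
        - name: PGADMIN_DEFAULT_EMAIL
          value: admin@example.com   # placeholder login
        - name: PGADMIN_DEFAULT_PASSWORD
          value: changeme            # placeholder password
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin
spec:
  selector:
    app: pgadmin
  ports:
  - port: 80
    targetPort: 80
The pgadmin Service speaks plain HTTP, so it can be exposed through an Ingress rule, which is the kind of traffic Ingress is meant for.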

Connect to Kubernetes mongo db in different namespace

Can anyone point out how to connect to the mongo db instance using the mongo client, either from the command line or from .NET Core programs with connection strings?
We have created a sample cluster in digitalocean with a namespace, let's say mongodatabase.
We installed the mongo statefulset with 3 replicas. We are able to successfully connect with the below command
kubectl --kubeconfig=configfile.yaml -n mongodatabase exec -ti mongo-0 mongo
But when we connect from a different namespace or from default namespace with the pod names in the below format, it doesn't work.
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0.mongo.mongodatabase.cluster.svc.local mongo
where mongo-0.mongo.mongodatabase.cluster.svc.local follows the pattern pod-0.service_name.namespace.cluster.svc.local (we also tried pod-0.statefulset_name.namespace.cluster.svc.local and pod-0.service_name.statefulsetname.namespace.cluster.svc.local), etc.
Can anyone help with the correct DNS name/connection string to be used when connecting with the mongo client on the command line, and also from programs like Java/.NET Core?
Also should we use kubernetes deployment instead of statefulsets here?
You need to reference the mongo service by namespaced dns. So if your mongo service is mymongoapp and it is deployed in mymongonamespace, you should be able to access it as mymongoapp.mymongonamespace.
To test, I used the bitnami/mongodb docker client. As follows:
From within mymongonamespace, this command works
$ kubectl config set-context --current --namespace=mymongonamespace
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
But when I switched to namespace default it didn't work
$ kubectl config set-context --current --namespace=default
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
Qualifying the host with the namespace then works
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp.mymongonamespace
This is how you can get inside mongo-0 pod
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0 sh
I think you are looking for this DNS for Services and Pods.
You can have a fully qualified domain name (FQDN) for a Service or for a Pod.
Also please have a look at this kubernetes: Service located in another namespace, as I think it will provide you with answer on how to access it from different namespace.
An example would look like this:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster’s KubeDNS Server also returns an A record for the Pod’s fully qualified hostname. For example, given a Pod with the hostname set to “busybox-1” and the subdomain set to “default-subdomain”, and a headless Service named “default-subdomain” in the same namespace, the pod will see its own FQDN as “busybox-1.default-subdomain.my-namespace.svc.cluster.local”. DNS serves an A record at that name, pointing to the Pod’s IP. Both pods “busybox1” and “busybox2” can have their distinct A records.
The Endpoints object can specify the hostname for any endpoint addresses, along with its IP.
Note: Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster.local), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
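Applied to the names in the question, and assuming the headless Service is called mongo and the cluster uses the default cluster.local domain, the per-pod names follow pod.service.namespace.svc.cluster.local (note svc.cluster.local, not cluster.svc.local as tried above). A sketch of what the command line and a driver connection string might look like (the replica set name rs0 is an assumption):
# mongo shell from any namespace:
mongo mongo-0.mongo.mongodatabase.svc.cluster.local:27017
# connection string for Java/.NET Core drivers, listing all three replicas:
mongodb://mongo-0.mongo.mongodatabase.svc.cluster.local:27017,mongo-1.mongo.mongodatabase.svc.cluster.local:27017,mongo-2.mongo.mongodatabase.svc.cluster.local:27017/?replicaSet=rs0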
Your question about Deployments vs StatefulSets should be a different question. But the answer is that the StatefulSet is used when you want "Stable Persistent Storage" kubernetes.io.
Also from the same page: "stable is synonymous with persistence across Pod (re)scheduling". So basically your mongo instance is backed by a PersistentVolume and you want the volume reattached after the pod is rescheduled.
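For reference, this is roughly how a StatefulSet declares per-pod storage; a minimal sketch with assumed names, not the asker's actual manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: mongodatabase
spec:
  serviceName: mongo            # headless Service providing the per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica, reattached on rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi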

"istio-init" required to run before additional initContainers

Using a standard Istio deployment in a Kubernetes cluster, I am trying to add an initContainer to my pod deployment which does additional database setup.
Using the cluster IP of the database doesn't work either. But I can connect to the database from my computer using port-forwarding.
This container is fairly simple:
spec:
  initContainers:
  - name: create-database
    image: tmaier/postgresql-client
    args:
    - sh
    - -c
    - |
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE ROLE user WITH LOGIN PASSWORD 'password';"
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "GRANT ALL PRIVILEGES ON DATABASE fusionauth TO user; ALTER DATABASE fusionauth OWNER TO user;"
From what I can see, this Kubernetes initContainer runs before the "istio-init" container. Is that the reason why it cannot resolve db-host:5432 to the IP of the pod running the Postgres service?
The error message in the init-container is:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
The same command from the fully initialized pod works just fine.
You can't access services inside the mesh without the Envoy sidecar, and your init container runs alone, with no sidecar. In order to reach the DB service from an init container, you need to expose the DB with a ClusterIP Service whose name differs from the Istio VirtualService of that DB.
You could create a service named db-direct like:
apiVersion: v1
kind: Service
metadata:
  name: db-direct
  labels:
    app: db
spec:
  type: ClusterIP
  selector:
    app: db
  ports:
  - name: db
    port: 5432
    protocol: TCP
    targetPort: 5432
And in your init container use db-direct:5432.
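For example, a quick connectivity check from the init container could then look like this (reusing the question's environment variables):
psql "postgresql://$DB_USER:$DB_PASSWORD@db-direct:5432" -c "SELECT 1"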

How to connect MySQL running on Kubernetes

I have deployed my application on Google gcloud container engine. My application requires MySQL. The application is running fine and connecting to MySQL correctly.
But I want to connect to the MySQL database from my local machine using a MySQL client (Workbench, or the command line). Can someone help me with how to expose this to my local machine? And how can I open a MySQL command line in the gcloud shell?
I have run the below commands, but there is no external IP:
$ kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
app-mysql   1         1         1            1           2m
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
app-mysql-3323704556-nce3w   1/1       Running   0          2m
$ kubectl get service
NAME        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
app-mysql   11.2.145.79   <none>        3306/TCP   23h
EDIT
I am using the below YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: mysql
        image: mysql:5.6.22
        env:
        - name: MYSQL_USER
          value: root
        - name: MYSQL_DATABASE
          value: appdb
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
  - port: 3306
Try the kubectl port-forward command.
In your case: kubectl port-forward app-mysql-3323704556-nce3w 3306:3306
See the documentation for all available options.
There are 2 steps involved:
1 ) You first perform port forwarding from localhost to your pod:
kubectl port-forward <your-mysql-pod-name> 3306:3306 -n <your-namespace>
2 ) Connect to database:
mysql -u root -h 127.0.0.1 -p <your-password>
Notice that you might need to change 127.0.0.1 to localhost, depending on your setup.
If host is set to:
localhost - then a socket or pipe is used.
127.0.0.1 - then the client is forced to use TCP/IP.
You can check if your database is listening for TCP connections with netstat -nlp.
Read more in:
Cant connect to local mysql server through socket tmp mysql sock
Can not connect to server
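To make the socket-vs-TCP distinction above concrete (flags as in the standard mysql client; the password is prompted interactively):
# localhost: connects over the Unix socket / named pipe, bypassing the port-forward
mysql -u root -h localhost -p
# 127.0.0.1 (or --protocol=TCP): forces TCP/IP, which is what the port-forward listens on
mysql -u root -h 127.0.0.1 -P 3306 --protocol=TCP -p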
To add to the above answer: when you pass --address 0.0.0.0, kubectl opens port 3306 to the whole network (potentially the internet), not only localhost!
kubectl port-forward POD_NAME 3306:3306 --address 0.0.0.0
Use it with caution, for short debugging sessions only, and on development systems at most. I used it in the following situation:
a colleague who uses Windows
no SSH key ready
an environment that was a playground I was not afraid to expose to the world
You need to add a Service of type LoadBalancer to your deployment. The Service puts a load balancer with a public IP in front of your pod, so it can be accessed over the public internet.
See the documentation on how to add a Service to a Kubernetes deployment. Use the following command to expose your app-mysql deployment:
kubectl expose deployment/app-mysql
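Note that a ClusterIP Service named app-mysql already exists from the YAML above, so the exposed Service may need its own name and an explicit type; a sketch of the whole flow, with app-mysql-lb as a placeholder name:
kubectl expose deployment app-mysql --name=app-mysql-lb --type=LoadBalancer --port=3306 --target-port=3306
kubectl get service app-mysql-lb --watch   # wait until EXTERNAL-IP is populated
mysql -u root -h <EXTERNAL-IP> -P 3306 -p  # then connect from Workbench or the mysql CLI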
You may also need to configure your MySQL service so it allows remote connections. See this link on how to enable remote access on MySQL server: