Using a standard Istio deployment in a Kubernetes cluster, I am trying to add an initContainer to my pod deployment that does additional database setup.
Using the cluster IP of the database doesn't work either, although I can connect to the database from my computer using port-forwarding.
This container is fairly simple:
spec:
  initContainers:
  - name: create-database
    image: tmaier/postgresql-client
    args:
    - sh
    - -c
    - |
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE ROLE user WITH LOGIN PASSWORD 'password';"
      psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "GRANT ALL PRIVILEGES ON DATABASE fusionauth TO user; ALTER DATABASE fusionauth OWNER TO user;"
As far as I can see, this Kubernetes initContainer runs before the "istio-init" container. Is that the reason why it cannot resolve db-host:5432 to the IP of the pod running the Postgres service?
The error message in the init-container is:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
The same command works just fine from a fully initialized pod.
You can't access services inside the mesh without the Envoy sidecar, and your init container runs alone, with no sidecars. To reach the DB service from an init container, you need to expose the DB with a ClusterIP service whose name differs from the Istio VirtualService for that DB.
You could create a service named db-direct like:
apiVersion: v1
kind: Service
metadata:
  name: db-direct
  labels:
    app: db
spec:
  type: ClusterIP
  selector:
    app: db
  ports:
  - name: db
    port: 5432
    protocol: TCP
    targetPort: 5432
And in your init container use db-direct:5432.
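For example, the first psql call from the question would then become (a sketch, reusing the same environment variables):
psql "postgresql://$DB_USER:$DB_PASSWORD@db-direct:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"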
Related
I launched a pod from Rancher and my pgsql daemon is running fine.
Then an ingress is set up with a target (pod name) and port 5432.
Then I use kubectl to start port forwarding.
After these steps are completed, I can access the db from within the Kubernetes cluster using
kubectl exec -it pod/<pod_name> -n <ns_name> -- psql -U postgres
This ran fine.
Then I tried to connect to the db using pgAdmin on my laptop. It always failed with:
Unable to connect to server:
Could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "pgsql.kube.xx.yy.com" (###.##.###.##) and accepting
TCP/IP connections on port 5432?
I can connect to the db from another pod in the k8s cluster.
This works for me in another pod:
./psql --host <pod.ip> -U postgres -d metastore -p 5432
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: u-abcdefg
    field.cattle.io/ingressState: '{"c######M=":"p#####q:sslcerts","c#####g==":"statefulset:pgsql-###:pgsql-####"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["##.##.##.##"],"port":443,"protocol":"HTTPS","serviceName":"pgsql-###:ingress-5###","ingressName":"pgsql-###:posgres-ingres","hostname":"pgsql-##.kube.##.###.###s","allNodes":true}]'
  creationTimestamp: "2021-11-10T19:01:30Z"
  generation: 5
  labels:
    cattle.io/creator: norman
  name: posgres-ingres
  namespace: pgsql-###
  resourceVersion: "343048085"
  selfLink: /apis/extensions/v1beta1/namespaces/pgsql-###/ingresses/posgres-ingres
  uid: 10###-###-########-######
spec:
  rules:
  - host: pgsql-##.kube.##.##.##
    http:
      paths:
      - backend:
          serviceName: ingress-########
          servicePort: 5432
  tls:
  - hosts:
    - pgsql-###.kube.##.##.###
    secretName: sslcerts
status:
  loadBalancer:
    ingress:
    - ip: ##.##.##.##
    - ip: ##.##.##.##
    - ip: ###.###.###.###
    - ip: ###.###.###.###
Your suggestions would be greatly appreciated.
As stated in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
PostgreSQL is a SQL database and does not use HTTP as its protocol. From the Postgres documentation:
PostgreSQL uses a message-based protocol for communication between frontends and backends (clients and servers). The protocol is supported over TCP/IP and also over Unix-domain sockets.
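So instead of an Ingress, expose the database with a Service of type NodePort or LoadBalancer. A minimal sketch, assuming your Postgres pods carry a label like app: pgsql (the service name and selector are placeholders; match them to your StatefulSet):
apiVersion: v1
kind: Service
metadata:
  name: pgsql-external      # hypothetical name
  namespace: pgsql-###
spec:
  type: LoadBalancer
  selector:
    app: pgsql              # placeholder; use your pod labels
  ports:
  - name: postgres
    protocol: TCP
    port: 5432
    targetPort: 5432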
I have a situation where I have a Kubernetes cluster that has access to a Postgres instance (which is not run in the Kubernetes cluster). The Postgres instance is not accessible from anywhere else.
What I would like to do is connect with my database tools locally. What I have found is kubectl port-forward, but I think this would only be a solution if the Postgres instance ran as a pod. What I basically need is a pod that forwards everything sent on port 8432 to the Postgres instance, so I can then use port forwarding.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
What is the right way to do this?
You can create a Service (backed by a manually defined Endpoints object) for your PostgreSQL instance:
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: ipAddressOfYourPGInstance
  ports:
  - port: 5432
And then use:
kubectl port-forward service/postgresql 5432:5432
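With the forward running, you can point your local tools at 127.0.0.1, for example (a sketch, assuming a postgres user; adjust credentials to your instance):
psql -h 127.0.0.1 -p 5432 -U postgres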
Alternatively, you can run a Postgres client (pgAdmin) inside the cluster to connect to the Postgres instance, expose that pod with an Ingress, and access the UI over a URL.
As the Postgres client you can use: https://hub.docker.com/r/dpage/pgadmin4/
You can deploy this as your pg client and use it.
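A minimal sketch of such a pgAdmin deployment (the name and credentials are placeholders; you would still need a Service/Ingress in front of port 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin                        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        env:
        - name: PGADMIN_DEFAULT_EMAIL      # required by the image
          value: admin@example.com         # placeholder
        - name: PGADMIN_DEFAULT_PASSWORD   # use a Secret in practice
          value: changeme                  # placeholder
        ports:
        - containerPort: 80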
In my current non-Kubernetes environment, if I need to access the Postgres database, I just set up an SSH tunnel with:
ssh -L 5432:localhost:5432 user@domain.com
I'm trying to figure out how to do something similar in a test Kubernetes cluster I am setting up in EKS, without creating a great security risk. For example, creating a path in the ingress controller to the database's port is a terrible idea.
The cluster would be set up so that Postgres runs in a pod, but all of the data is on a persistent volume claim so that the data persists when the pod is destroyed.
How would one use pgAdmin to access the database in this kind of setup?
The kubectl command can forward TCP ports into a pod via the kube-apiserver:
kubectl port-forward {postgres-pod-ID} 5432:5432
If you are not using a cluster-admin user, the user will need to be bound to a role that allows it to create pods/portforward in the pod's namespace.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pg-portforward
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
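And a binding of that role to the user, for example (a sketch; the binding name, namespace, and user are hypothetical):
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pg-portforward-binding   # hypothetical name
  namespace: default             # namespace of the postgres pod
subjects:
- kind: User
  name: jane                     # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pg-portforward
  apiGroup: rbac.authorization.k8s.io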
I have a similar kind of setup. My PostgreSQL database is deployed in a pod in a Kubernetes cluster running in AWS. These are the steps I performed to access this remote database from pgAdmin running on my local machine (I followed this video).
1 -> Modify the Service type of db from ClusterIP to NodePort as shown below.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
  - name: dbport
    port: 5432
    nodePort: 31000
  type: NodePort
2 -> Add a new inbound rule for port 31000 to the security group of any node (EC2 instance) of your Kubernetes cluster.
3 -> Connect to this remote database using the public IPv4 address of that node (EC2 instance):
Host address: public IPv4 address of the node EC2 instance
Port: 31000
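For example, from the local machine (a sketch; <node-public-ip> and the postgres user are placeholders):
psql -h <node-public-ip> -p 31000 -U postgres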
I am failing to deploy Postgres (single node, official image) on Kubernetes and to allow services to access it via a ClusterIP service.
The config is rather simple - Namespace, Deployment, Service:
---
apiVersion: v1
kind: Namespace
metadata:
  name: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: database
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.1
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: pg
  namespace: database
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    name: postgres
    port: 5432
    targetPort: 5432
To test, I executed /bin/bash in the pod and ran a simple psql command to check the connection. All works well locally:
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -U admin postgresdb -c "\t"
Tuples only is on.
But as soon as I try to access Postgres via the service, the command fails:
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -h pg -U admin postgresdb -c "\t"
psql: could not connect to server: Connection timed out
Is the server running on host "pg" (10.245.102.15) and accepting
TCP/IP connections on port 5432?
This is tested on a DigitalOcean single-node cluster (1.12.3).
Postgres listens on * on the correct port; pg_hba.conf looks like this by default:
...
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
host    all             all             all                     md5
To reproduce, see this gist.
Execute via (please use a fresh cluster and read through first):
export k8sconf=/path/to/your/k8s/confic/file
kubectl --kubeconfig $k8sconf apply -f https://gist.githubusercontent.com/sontags/c364751e7f0d8ba1a02a9805efc68db6/raw/01b1808348541d743d6a861402cfba224bee8971/database.yaml
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- /bin/bash /reproducer/runtest.sh
Any hint as to why connecting via the service fails, or other tests to perform?
Hard to tell without access to your cluster. This works fine on my AWS cluster. Some things to look at:
Is the kube-proxy running on all nodes?
Is your network overlay/CNI running on all nodes?
Does this happen with the pg pod only? what about other pods?
DNS seems to be fine since pg is being resolved to 10.245.102.15
Are your nodes allowing IP forwarding from the Linux side?
Are your DigitalOcean firewall rules allowing traffic from any source on port 5432? Note that the pod CIDR and the K8s Service IP range are different from the host CIDR (of your droplets). A few quick checks for the first points are sketched below.
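For example (a sketch; the exact pod names depend on your CNI):
# kube-proxy and the CNI pods should be Running on every node
kubectl --kubeconfig $k8sconf -n kube-system get pods -o wide
# IP forwarding should be enabled (run on the node itself)
sysctl net.ipv4.ip_forward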
I have deployed my application on Google Container Engine. My application requires MySQL. The application is running fine and connecting to MySQL correctly.
But I want to connect to the MySQL database from my local machine using a MySQL client (Workbench, or the command line). Can someone help me expose this to my local machine? And how can I open a MySQL command line in Cloud Shell?
I have run the commands below, but there is no external IP:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
app-mysql 1 1 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-mysql-3323704556-nce3w 1/1 Running 0 2m
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-mysql 11.2.145.79 <none> 3306/TCP 23h
EDIT
I am using the YAML file below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: mysql
        image: mysql:5.6.22
        env:
        - name: MYSQL_USER
          value: root
        - name: MYSQL_DATABASE
          value: appdb
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
  - port: 3306
Try the kubectl port-forward command.
In your case: kubectl port-forward app-mysql-3323704556-nce3w 3306:3306
See the documentation for all available options.
There are 2 steps involved:
1) You first perform port forwarding from localhost to your pod:
kubectl port-forward <your-mysql-pod-name> 3306:3306 -n <your-namespace>
2) Connect to the database:
mysql -u root -h 127.0.0.1 -p <your-password>
Notice that you might need to change 127.0.0.1 to localhost, depending on your setup.
If the host is set to:
localhost - then a socket or pipe is used.
127.0.0.1 - then the client is forced to use TCP/IP.
You can check if your database is listening for TCP connections with netstat -nlp.
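For example, on the host running the database (filtering for the MySQL port):
netstat -nlp | grep 3306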
Read more in:
Cant connect to local mysql server through socket tmp mysql sock
Can not connect to server
To add to the above answer: when you add --address 0.0.0.0, kubectl opens port 3306 on all interfaces, so it is reachable from outside the machine too (not only from localhost)!
kubectl port-forward POD_NAME 3306:3306 --address 0.0.0.0
Use it with caution for short debugging sessions only, on development systems at most. I used it in the following situation:
a colleague who uses Windows
didn't have an SSH key ready
the environment was a playground I was not afraid to expose to the world
You need to add a Service of type LoadBalancer to your deployment. The service will put a load balancer with a public IP in front of your pod, so it can be accessed over the public internet.
See the documentation on how to add a service to a Kubernetes deployment. Use the following command to expose your app-mysql deployment:
kubectl expose deployment/app-mysql --type=LoadBalancer --port=3306
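The equivalent manifest would look roughly like this (a sketch; the service name is arbitrary):
apiVersion: v1
kind: Service
metadata:
  name: app-mysql-public   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: app-mysql
  ports:
  - port: 3306
    targetPort: 3306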
You may also need to configure your MySQL server so it allows remote connections. See this link on how to enable remote access on a MySQL server:
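Typically that means the server must listen on all interfaces (the official mysql image does this by default) and the MySQL user must be allowed to connect from remote hosts, sketched here with placeholder credentials:
-- 'appuser' and the password are placeholders; appdb matches MYSQL_DATABASE above
CREATE USER 'appuser'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'%';
FLUSH PRIVILEGES;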