Connect to PostgreSQL from inside a Kubernetes cluster

I set up a series of VMs (192.168.2.100, .101, .104, .105), where the Kubernetes master is on .100 and the two workers are on .101 and .104. I also set up PostgreSQL on 192.168.2.105 and followed this tutorial, but it is still unreachable from within the cluster. I tried the same thing in Minikube inside a test VM, where Minikube and PostgreSQL were installed in the same VM, and it worked just fine.
I changed listen_addresses in the PostgreSQL config file from localhost to *, and allowed connections from 0.0.0.0/0 in pg_hba.conf.
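For reference, the relevant lines look roughly like this (the auth method may differ on your setup):
# postgresql.conf
listen_addresses = '*'

# pg_hba.conf
host    all    all    0.0.0.0/0    md5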
I installed postgresql-12 and postgresql-client-12 on the VM at 192.168.2.105 (listening on port 5432). Now I have added a headless service to Kubernetes, which is as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.168.2.105
    ports:
      - port: 5432
In my deployment I am defining this to access the database:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:11.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: 'my-service:5432'
            - name: DB_DATABASE
              value: postgres
            - name: DB_PASSWORD
              value: admin
            - name: DB_SCHEMA
              value: public
            - name: DB_USER
              value: postgres
            - name: DB_VENDOR
              value: POSTGRES
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
Also, the VMs are bridged, not on NAT.
What am I doing wrong here?

The first thing we have to do is create the headless Service with a custom Endpoints object. The IP in my solution is specific to my machine.
Endpoints with Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service
subsets:
  - addresses:
      - ip: 192.168.2.105
    ports:
      - port: 5432
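Before wiring Keycloak to it, it's worth checking that the Service actually resolves and that Postgres answers. A quick test from a throwaway pod could look like this (it will prompt for the Postgres password):
kubectl get endpoints postgres-service
kubectl run pg-check --rm -it --image=postgres:12 --restart=Never -- \
  psql "host=postgres-service port=5432 user=postgres dbname=postgres" -c 'SELECT 1;'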
As for my particular setup, I haven't defined any Ingress or load balancer, so I'll change the Service type from LoadBalancer to NodePort after it's deployed.
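One way to switch the type on a Service that is already deployed is a patch:
kubectl patch service keycloak -p '{"spec": {"type": "NodePort"}}'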
Now I deployed Keycloak with the .yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: https
      port: 8443
      targetPort: 8443
  selector:
    app: keycloak
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:11.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin" # TODO give username for master realm
            - name: KEYCLOAK_PASSWORD
              value: "admin" # TODO give password for master realm
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: # <Node-IP>:<LoadBalancer-Port/ NodePort>
            - name: DB_DATABASE
              value: "keycloak" # Database to use
            - name: DB_PASSWORD
              value: "admin" # Database password
            - name: DB_SCHEMA
              value: public
            - name: DB_USER
              value: "postgres" # Database user
            - name: DB_VENDOR
              value: POSTGRES
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
After filling in all the required values, it connects successfully to the Postgres server that is hosted on a separate machine, away from the Kubernetes master and worker nodes.
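A quick way to confirm it from the cluster side is to check that the pod becomes Ready and that the Keycloak logs mention the Postgres connection rather than the embedded H2 database:
kubectl get pods -l app=keycloak
kubectl logs deploy/keycloak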

Related

Can we create multiple instances of a Kubernetes cluster & access it through different external IPs?

In brief, I want to create multiple instances of a pod containing MongoDB and mongo-express containers, each accessible through a different external IP. When anything is changed from the mongo-express GUI, the change should be reflected only in that particular instance's Mongo database, not in the others. That means each instance should also create a separate volume.
My current YAML is below (right now it creates a single instance only, which can be accessed through localhost):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "pass"
          volumeMounts:
            - name: mongo-initdb
              mountPath: /docker-entrypoint-initdb.d
      restartPolicy: Always
      volumes:
        - name: mongo-initdb
          configMap:
            name: mongo-initdb
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      hostname: mongo-express
      subdomain: mongodb-service
      containers:
        # Container for Mongo-Service
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              value: "admin"
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              value: "pass"
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
You don't need to create multiple instances of a cluster; you just need two instances of your application: essentially two copies of it, isolated in different namespaces, both reachable from the outside through an externalIP.
Let's call these namespaces instance-1 and instance-2. This way you can even use the same resource names. It works for the pods because the internal DNS name includes the namespace, so the addresses will differ: e.g. mongo-express-service.instance-1 is a different DNS name from mongo-express-service.instance-2. ConfigMaps are namespaced as well, so you can use the same name in both namespaces if you want. The same is true for your Deployment and StatefulSet.
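For example, from any pod that has nslookup available, the two copies resolve independently (assuming the default cluster domain cluster.local):
nslookup mongo-express-service.instance-1.svc.cluster.local
nslookup mongo-express-service.instance-2.svc.cluster.local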
That being said, you can very easily set an externalIP on your LoadBalancer type service (mongo-express) by adding the following field to your Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 0.0.0.0 #<======= CHANGE_ME
Simply create two separate namespaces:
kubectl create namespace instance-1
kubectl create namespace instance-2
Then configure the right externalIPs field for each Service, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 1.1.1.1 #<======= CHANGE_ME
and for your second instance
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 2.2.2.2 #<======= CHANGE_ME
Now, let's assume you have two copies of your YAML file: the first, my-application-1.yaml, with externalIPs set to 1.1.1.1 and all other resources the same; and a second one, my-application-2.yaml, with externalIPs set to 2.2.2.2 and the rest of the resources the same.
Now you can apply all components of your application once to your instance-1 namespace.
# Assuming this is the service with externalIP 1.1.1.1
kubectl apply -n instance-1 -f my-application-1.yaml
And the same components, with modified externalIP to another namespace:
# Assuming this is the service with externalIP 2.2.2.2
kubectl apply -n instance-2 -f my-application-2.yaml
Now you can reach your services easily on external IPs at the endpoints:
1.1.1.1:8081 # This is instance 1
2.2.2.2:8081 # This is instance 2
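You can double-check which external address each copy ended up with:
kubectl get svc mongo-express-service -n instance-1
kubectl get svc mongo-express-service -n instance-2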
I hope this helps!

Why does GKE show backend instances as not healthy?

Following this blog post, I created a GKE Kubernetes cluster.
Subsequently I deployed the Keycloak instances, and if I use a load balancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: load-balancer
  name: load-balancer
  namespace: keycloak
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: keycloak
  selector:
    app: keycloak
I can reach Keycloak.
After that, following the Google documentation, I created an Ingress to access Keycloak over HTTPS:
managed-certificate.yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: keycloak
spec:
  domains:
    - mydomain.com
    - www.mydomain.com
keycloak-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak-service
  namespace: keycloak
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
  externalTrafficPolicy: Cluster
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: keycloak
  annotations:
    kubernetes.io/ingress.global-static-ip-name: keycloak
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: keycloak-service
      port:
        number: 443
I added liveness and readiness probes to the Keycloak deployment definition too.
But with this configuration GKE says that the backend instances are unhealthy, even though they are healthy and running.
I've read in some related questions on Stack Overflow that this is an issue with the NEGs. Should I add firewall rules for the NEGs and the Ingress?
If that is the point, what should the rules be?
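From the GCP docs, the load balancer health checks come from 130.211.0.0/22 and 35.191.0.0/16, so I guess the rule would look roughly like this (network name is a placeholder for mine):
# "my-vpc" is a placeholder for the cluster's VPC network name
gcloud compute firewall-rules create allow-gke-health-checks \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16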
EDIT: keycloak-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak-deployment
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 2
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:latest
          env:
            - name: DB_VENDOR
              value: "POSTGRES"
            - name: DB_ADDR
              value: "postgres"
            - name: DB_DATABASE
              value: "keycloak"
            - name: DB_USER
              value: "keycloak"
            - name: DB_SCHEMA
              value: "public"
            - name: DB_PASSWORD
              value: "password"
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "password"
            - name: KEYCLOAK_STATISTICS
              value: all
            - name: JDBC_PARAMS
              value: "useSSL=false"
            - name: JGROUPS_DISCOVERY_PROTOCOL
              value: "JDBC_PING"
            - name: JGROUPS_DISCOVERY_PROPERTIES
              value: datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500,initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING ( own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, created timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: "0.2"
          startupProbe:
            httpGet:
              path: /health
              port: 9990
            initialDelaySeconds: 120
            failureThreshold: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 9990
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9990
            successThreshold: 3

Not able to connect to the SQL container using the service sqlservice. Not able to figure out the problem. Everything looks fine

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secretssql
                  key: pass
          volumeMounts:
            - name: mysqlvolume
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysqlvolume
          persistentVolumeClaim:
            claimName: sqlpvc
---
apiVersion: v1
kind: Secret
metadata:
  name: secretssql
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  pass: YWRtaW4=
---
apiVersion: v1
kind: Service
metadata:
  name: sqlservice
spec:
  selector:
    app: mysql
  ports:
    - port: 80
I want to connect to the SQL container using the service sqlservice. DNS is resolvable, but when I try to ping the service I get 100% packet loss.
Your service is using port 80:
ports:
  - port: 80
while your pod is listening on port 3306:
ports:
  - containerPort: 3306
Try adjusting your service to use port 3306:
ports:
  - port: 3306
    targetPort: 3306
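Note also that pinging a Service usually fails regardless, because the cluster IP is virtual and does not answer ICMP. A more meaningful check is to connect with a MySQL client, for example (image and client options are an assumption; it will prompt for the root password from your secret):
kubectl run mysql-client --rm -it --image=mysql --restart=Never -- \
  mysql -h sqlservice -P 3306 -uroot -p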

I can't connect my ingress with my service

I have a problem with my Ingress and my Service: when I connect to my server's IP, I can't get it to redirect to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the Ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: bookstack
            - name: MYSQL_PASS
              value: pass
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_USER
              value: user
          image: mysql:5.7
          name: mysql
          ports:
            - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
        - env:
            - name: namespace
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: podname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: nodename
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: DB_DATABASE
              value: bookstack
            - name: DB_HOST
              value: mysql
            - name: DB_PASSWORD
              value: root
            - name: DB_USERNAME
              value: root
          image: solidnerd/bookstack:latest
          name: bookstack
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: http-port
      port: 80
      protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I had not activated the load balancer that Google Kubernetes Engine offers by default; with it inactive, no external IP could be assigned because there was no balancer. There are two solutions: either activate GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to define the readinessProbe and livenessProbe within the Deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, specifically because NodePort exposes the service on every node of your cluster on that specific port. So essentially you would have to point an external load balancer, or whatever the traffic source is, at each of the nodes in your cluster on that NodePort.
Note that if you are using externalTrafficPolicy=Local, only the nodes that have pods for your service will reply.
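To hit a NodePort service directly, look up the node addresses and the port that was assigned, then test with curl (service and namespace names taken from the manifests above; the node IP and port are placeholders):
kubectl get nodes -o wide
kubectl get svc bookstack -n bookstack   # shows the mapping, e.g. 80:3XXXX/TCP
curl http://<node-ip>:<node-port>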

Kubernetes service as env var to frontend usage

I'm trying to configure Kubernetes, and in my project I've separated the UI and the API.
I created one Pod and exposed both as services.
How can I set API_URL inside the pod.yaml configuration so that requests can be sent from the user's browser?
I can't use localhost because the communication isn't between containers.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: <how can I set the API address here?>
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    name: api
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 5000
      targetPort: 5000
      nodePort: 30001
  selector:
    name: project
---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    name: ui
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 80
      targetPort: 5003
      nodePort: 30003
  selector:
    name: project
The service IP is already available in an environment variable inside the pod, because Kubernetes initializes a set of environment variables for each service that exists at the moment the pod is created.
To list all the environment variables of a pod:
kubectl exec <pod-name> -- env
If the pod was created before the service, you must delete it and create it again.
Since you named your service api, one of the variables the command above should list is API_SERVICE_HOST.
But you don't really need to look up the service IP address in environment variables. You can simply use the service name as the hostname: any pod can connect to the service api simply by calling api.default.svc.cluster.local (assuming your service is in the default namespace).
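For in-cluster callers, that could look like this in the pod spec (port taken from the api Service above); note that this DNS name only resolves inside the cluster, so it won't work for requests issued by the user's browser:
env:
  - name: API_URL
    value: "http://api.default.svc.cluster.local:5000"  # resolvable only from inside the cluster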
I created an Ingress to solve this issue and pointed at DNS names instead of IPs.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project
spec:
  tls:
    - secretName: tls
  backend:
    serviceName: ui
    servicePort: 5003
  rules:
    - host: www.project.com
      http:
        paths:
          - backend:
              serviceName: ui
              servicePort: 5003
    - host: api.project.com
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 5000
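The tls secret referenced above has to exist in the same namespace as the Ingress; it can be created from a certificate and key, roughly like this (file paths are placeholders):
kubectl create secret tls tls --cert=path/to/tls.crt --key=path/to/tls.key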
deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: https://api.project.com
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url