Kubernetes Endpoint with SSL

Is it possible to use SSL in Endpoint?
I have an Azure Database for MySQL, which requires an SSL certificate for connection. I use the following:
https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
On Kubernetes I run a NodeJS Pod which communicates with MySQL through an Endpoint like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql-remote
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-remote
subsets:
  - addresses:
      - ip: xx.xxx.xxx.xx
    ports:
      - port: 3306
My deployment.yaml file looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: nodejs
          image: xxx
          ports:
            - containerPort: 80
              name: nodejs
          env:
            - name: HOST
              value: "mysql-remote"
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: nodejssecret
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nodejssecret
                  key: password
First I tried mounting the certificate into the NodeJS Pod as a Volume. That gets very awkward if, for example, 10 different applications need it.
Is there an easier way to use SSL?

Review the Security on Azure Kubernetes Service (AKS) part of the deploying-a-stateful-application-on-azure-kubernetes-service-aks article.
The idea is to create a Secret from the BaltimoreCyberTrustRoot.crt.pem cert and use it in the deployment:
- name: database__connection__ssl
  valueFrom:
    secretKeyRef:
      name: ssl-cert
      key: BaltimoreCyberTrustRoot.crt.pem
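A minimal sketch of creating that Secret from the downloaded certificate (assuming the PEM file sits in the current directory and the Secret name ssl-cert matches the secretKeyRef above):
kubectl create secret generic ssl-cert \
  --from-file=BaltimoreCyberTrustRoot.crt.pem
With --from-file the key defaults to the file name, which is why the key above is BaltimoreCyberTrustRoot.crt.pem; the same Secret can then be referenced by every application that needs the certificate.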

Related

Minikube Kubernetes, Postgres, Spring Boot Cluster - Postgres connection refused

So I have a basic Minikube configuration for a K8s cluster with only 2 Pods, one for the Postgres DB and one for my Spring app. However, I can't get my app to connect to my DB. I know that in Docker such an issue could be solved with networking, but after a lot of research I can't seem to find the problem or a solution.
Currently, given my configuration, I get a Connection refused error from Postgres whenever my Spring app tries to start:
Caused by: org.postgresql.util.PSQLException: Connection to postgres-service:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
My spring-app is a basic REST API with some open endpoints where I query for some data. The app itself works completely fine; here is my application.properties:
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
The way I create my Postgres component is by creating a ConfigMap, a Secret, and finally a Deployment with its Service. They look like this:
postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  postgres-url: postgres-service
  postgres-port: "5432"
  postgres-db: "test"
postgres-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  postgres_user: cm9vdA== # already encoded in base64
  postgres_password: cm9vdA== # already encoded in base64
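For reference, the values under data are plain base64; the cm9vdA== strings above decode to root and can be reproduced with:
echo -n 'root' | base64   # prints cm9vdA==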
postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgresdb
          image: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app.kubernetes.io/name: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
and finally here's my Deployment with its Service for my Spring app
spring-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app-deployment
  labels:
    app: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: spring-app
          image: app # image is pulled from my Docker Hub
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_HOST
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-url
            - name: POSTGRES_PORT
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-port
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: spring-app-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: spring-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30001
A connection refused means that the host you are connecting to does not have the port you mentioned open.
This leads me to think that the postgres Pod isn't running correctly, or that the Service is not pointing to those Pods correctly.
By checking the YAMLs I can see that the Service's pod selector isn't configured correctly:
The Service is selecting Pods with the label app.kubernetes.io/name: postgres.
The Deployment creates Pods with the label app: postgres.
The correct Service manifest should look like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
You can double-check that by describing the Service with kubectl describe service postgres-service.
The output should list the postgres Pods' IPs under Endpoints.
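Once the selector is fixed, the match can be verified with standard kubectl commands; a quick sketch, assuming the label app: postgres from the manifests above:
kubectl get pods -l app=postgres -o wide
kubectl get endpoints postgres-service
The Pod IP shown by the first command should appear in the ENDPOINTS column of the second.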

How to apply the imported realm configuration file of keycloak when deploying on k8s

My file directory looks like this:
deployment.yaml
config.yaml
import/
  realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
      nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:17.0.1
          args:
            - "start-dev"
            - "--import-realm"
          env:
            - name: KEYCLOAK_ADMIN
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: KEYCLOAK_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
            - name: KC_PROXY
              value: "edge"
          volumeMounts:
            - name: keycloak-volume
              mountPath: "/import/realm.json"
              name: "keycloak-volume"
              readOnly: true
              subPath: "realm.json"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 120
      volumes:
        - name: keycloak-volume
          configMap:
            name: keycloak-configmap
And my config.yaml looks like this (where {json_content} is where I copy-paste the content of the imported realm JSON file):
apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
But when I accessed the Keycloak dashboard's web GUI, the imported realm did not show up.
Try it once with:
- mountPath: "/import/realm.json"
  name: "keycloak-volume"
  readOnly: true
  subPath: "realm.json"
On older versions (I think the WildFly-based ones) it was supported to import the Keycloak realm using environment variables, but that has been stopped now: https://github.com/keycloak/keycloak/issues/10216
Also, it's supported in version 18, while you are using 17.
Still, with 17 you can give it a try by passing an argument to the deployment config (see the official import doc):
args:
  - "start-dev"
  - "--import-realm"
Also, if you check the thread, some are suggesting to use the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to a legacy option for importing the realm; do check it out: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
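As a side note, the ConfigMap in the question does not have to be hand-edited; it could be generated straight from the realm export (a sketch, assuming the file lives at import/realm.json and the name matches the Deployment's configMap reference):
kubectl create configmap keycloak-configmap --from-file=realm.json=import/realm.json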

Cannot connect to my MiniKube external service ip/port?

I have a mongo YAML and a web-app (NodeJS) YAML set up like this:
mongo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  # blueprint for pods, creates pods with mongo:5.0 image
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 27017
and the webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  # blueprint for pods, creates pods with the web app image
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nanajanashia/k8s-demo-app:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: USER_NAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: USER_PWD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
            - name: DB_URL
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: mongo-url
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  # default ClusterIP
  # nodeport = external service
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30100
I ran the command for each file:
kubectl apply -f
I checked the status of the webapp, which returned:
app listening on port 3000!
I got the IP address with
minikube ip
and the port was 30100.
Why can I not access this web app? I get a "site can't be reached" error.
If you are on Mac, check your minikube driver. I had to stop and delete minikube, then restart it while specifying the hyperkit driver, like so:
minikube stop
minikube delete
minikube start --vm-driver=hyperkit
The information listed here is pretty useful too.
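If the node IP still is not reachable from the host (common with the docker driver on macOS), minikube can also print or tunnel a working URL for a NodePort service; for the manifests above that would be:
minikube service webapp-service --url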

Not able to connect to SQL container using service sqlservice. Not able to figure out problem. Everything looks fine

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secretssql
                  key: pass
          volumeMounts:
            - name: mysqlvolume
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysqlvolume
          persistentVolumeClaim:
            claimName: sqlpvc
---
apiVersion: v1
kind: Secret
metadata:
  name: secretssql
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  pass: YWRtaW4=
---
apiVersion: v1
kind: Service
metadata:
  name: sqlservice
spec:
  selector:
    app: mysql
  ports:
    - port: 80
I want to connect to the SQL container using the service sqlservice. DNS is reachable, but when I try to ping the service I get 100% packet loss.
Your Service is using port 80:
ports:
  - port: 80
while your Pod is listening on port 3306:
ports:
  - containerPort: 3306
Try adjusting your Service to use port 3306:
ports:
  - port: 3306
    targetPort: 3306
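Once the Service forwards 3306, connectivity can be checked from inside the cluster with a throwaway client Pod (a sketch; the mysql:8 image tag is an assumption):
kubectl run mysql-client --rm -it --image=mysql:8 --restart=Never -- \
  mysql -h sqlservice -P 3306 -u root -p
Note that pinging the Service will still show 100% packet loss either way: a ClusterIP only forwards the declared TCP/UDP ports and does not answer ICMP.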

Expose database to deployment on GKE

I have a Deployment running a Pod that needs access to a Postgres database I am running in the same VPC as the Kubernetes cluster. How do I create a Service that selects the Deployment so that it has access? My Pods keep restarting as the connection times out. I have created firewall rules in the VPC subnet to allow internal communication and have modified pg_hba.conf and postgresql.conf.
My deployment definition is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    name: server
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: gcr.io/api:v1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOSTNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: hostname
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: username
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: name
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: password
This is my Service definition to expose the database, but I don't think I am selecting the Deployment. I have followed the example here.
kind: Service
apiVersion: v1
metadata:
  name: postgres
  label:
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgres
subsets:
  - addresses:
      - ip: 10.0.0.50
    ports:
      - port: 5432
You can use the following to expose the database to a deployment on GKE:
$ kubectl expose deployment name-of-db --type=LoadBalancer --port 80 --target-port 8080
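Alternatively, the selector-less Service plus Endpoints approach from the question does work for a database running outside the cluster, provided the stray empty label: field is dropped and the Service and Endpoints share the same name; a sketch reusing the 10.0.0.50 address from the question:
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgres
subsets:
  - addresses:
      - ip: 10.0.0.50
    ports:
      - port: 5432
The API Pods can then reach the database at postgres:5432 (for example by setting the hostname value in the api-config Secret to postgres).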