How to connect to PostgreSQL from an app using Helm and Kubernetes?

I am really struggling with how my application, which is deployed in the dev namespace, can connect to a PostgreSQL database that I deployed independently using Helm into the database namespace. What I have done so far is below.
The database and my app are deployed in different namespaces. I just copied the names PGHOST and PGPASSWORD from some examples, but I am not sure where I should use these names and whether they have to match something in PostgreSQL.
Is there anything else I should take care of to connect to the database, or anything here that is not best practice? Should I add the namespace to the JDBC URL?
Locally we connect to the database using the parameters below, but what should the connection look like after we deploy our application via Helm? We are using Sequelize as the client library.
const connectionString = `postgres://${global.config.database_username}:${global.config.database_password}@${global.config.database_host}:${global.config.database_port}/${global.config.database_name}`;
postgres values
## Specify PGDATABASE
##
DBName: db
After I deployed postgres:
# of replicas: 3
service name: my-postgres-postgresql-helm
service port: 64000
database name: db
database user: admin
jdbc url: jdbc:postgresql://my-postgres-postgresql-helm:port
deployment.yaml
- name: PGHOST
  valueFrom:
    configMapKeyRef:
      name: {{ .Release.Name }}-configmap
      key: jdbc-url
- name: PGDATABASE
  value: {{ .Values.postgres.database name | quote }}
- name: PGPASSWORD
  value: "64000"
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "my-mp.name" . }}
      key: POSTGRES_PASSWORD
configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
    app.kubernetes.io/name: {{ include "my-mp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "my-mp.chart" . }}
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm..
values.yaml
postgres:
  service name: my-postgres-postgresql-helm
  service port: 64000
  database name: db
  database user: admin

Is this a typo in your question's JDBC URL, jdbc:postgresql://my-postgre...? You have mentioned that the service name is my-postgres-postgresql-helm, so the JDBC URL should be something like jdbc:postgresql://my-postgres-postgresql-helm.database. Note the .database appended to the service name! Since your application pod is running in a different namespace, you should append the namespace name to the service name. Had they been in the same namespace, you wouldn't need it.
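For example, the data entry in your configmaps.yaml would become something like the following sketch (using the service port 64000 and database db listed in the question; the fully qualified form is equivalent):
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm.database:64000/db
  # or fully qualified:
  # jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm.database.svc.cluster.local:64000/db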
Now, if that doesn't fix it, here is how I would debug the issue if I were you:
Check if there are any NetworkPolicies that add restrictions at the namespace level, that is, allow traffic only between specific namespaces or even specific pods, which may prevent traffic from your application pod from reaching your postgres pod.
Make sure the Service for your postgres pod is correct. That is, describing the service should list the pod's IP under Endpoints. If not, check the Service's label selector and make sure it uses the same labels as the postgres pod.
Exec into your application pod and check whether it can reach the service via nslookup using the service name, that is my-postgres-postgresql-helm.database.
If all these tests are positive and working, then most probably it is some other configuration issue. Let me know if this fixes your issue, and good luck.
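For the first check, a NetworkPolicy that explicitly allows the application namespace to reach the postgres pods could look roughly like the sketch below; the policy name and all labels are assumptions that must match your actual namespaces and pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-postgres          # hypothetical name
  namespace: database                  # namespace where postgres runs
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgresql   # assumed label on the postgres pods
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dev   # the application's namespace
      ports:
        - protocol: TCP
          port: 5432                   # the postgres container port, commonly 5432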

If I understand correctly, you have the database and the app in different namespaces, and the point of namespaces is to isolate.
If you really need to access it, you can use the auto-generated DNS entry servicename.namespace.svc.cluster.local
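Applied to the setup in the question, that name could be used directly in the application's environment, roughly like this sketch (PGPORT is the standard libpq variable, added here only for illustration):
env:
  - name: PGHOST
    # <service>.<namespace>.svc.cluster.local
    value: my-postgres-postgresql-helm.database.svc.cluster.local
  - name: PGPORT
    value: "64000"   # the service port from the question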

Related

How to add protocol prefix in Kubernetes ConfigMap

In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in one of my Deployment's configurations:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that intends to communicate with the database. Thus it reads the database's URL from the DB_ADDRESS environment variable. (ignore the default values, those are used only during development)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml I would need to concatenate the protocol prefix with the variable. Any idea how to do this in YAML, or a suggestion for some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.
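A sketch of that parameterization, using hypothetical ConfigMap keys database_port and database_name alongside the existing database_url (the $(VAR) references only work for variables declared earlier in the same env list):
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_port   # hypothetical key, e.g. "5432"
  - name: DB_NAME
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_name   # hypothetical key, e.g. "users"
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):$(DB_PORT)/$(DB_NAME)'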

Kubernetes cache clearing and handling

I am using Kubernetes with Helm 3.8.0 and Windows Docker Desktop configured on WSL2.
Sometimes, after running helm install and retrieving a container, the container that is created behind the scenes is an old container that was created before (even after restarting the computer).
For example: now the YAML declares password: 12345 and database: test; before, I ran the container YAML with password: 11111 and database: my_database.
Now when I do helm install mychart ./mychart --namespace test-chart --create-namespace for the chart in the current folder, the container runs with password: 11111 and database: my_database instead of the new parameters provided. There is no current YAML with the old password, so I don't understand why the container runs with the old one.
I took several actions, such as docker system prune and restarting Windows Docker Desktop, but I still get the old container, which cannot be seen even in Windows Docker Desktop (I have enabled Settings -> Kubernetes -> Show system containers).
After some investigation, I realized that this may be because Kubernetes has its own garbage-collection handling of containers, and that is why I may be referring to an old container even though I didn't mean to.
In my case, I am creating a Job template (I didn't put any line that references this Job in the _helpers.tpl file; I never changed that file, and I don't know whether that may cause a problem).
Here is my job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myChart.fullname" . }}-migration
  labels:
    name: {{ include "myChart.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-300"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: {{ template "myChart.name" . }}
        release: {{ .Release.Namespace }}
    spec:
      initContainers:
        - name: wait-mysql
          image: {{ .Values.mysql.image }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "12345"
            - name: MYSQL_DATABASE
              value: test
          command:
            - /bin/sh
            - -c
            - |
              service mysql start &
              until mysql -uroot -p12345 -e 'show databases'; do
                echo `date +%H:%M:%S`' - Waiting for mysql...'
                sleep 5
              done
      containers:
        - name: migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12}}
      restartPolicy: Never
In the Job there is a database, which is created first and whose data is then populated by code.
Also, are the annotations (hooks) necessary?
After running helm install myChart ./myChart --namespace my-namespace --create-namespace, I realized that I am using a very old container, which I don't really need.
I didn't understand whether writing the metadata as in the following example (from the Garbage Collection docs) really helps, and what to put in uid when I don't know it or don't have it.
metadata:
  ...
  ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
Sometimes I really want to reference an existing pod (or container) from several templates (use the same container, which is not stateless, such as a database container: one template for the pod and another for the Job). How can I do that as well?
Is there any command (on the command line, or some other method) that clears everything cached by garbage collection, or avoids using garbage collection at all? (What are the main benefits of Kubernetes garbage collection?)

Auto-scrape realm metrics from Keycloak with Prometheus-Operator

I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using Prometheus-Operator for monitoring I enabled the metrics endpoint and the service monitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm way more interested in actual realm metrics I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider exposes the metrics endpoints on the regular public-facing HTTP port instead of the http-management port, which is not great for me, but I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I created an additional template that creates a new ServiceMonitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this? So that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints: they all return the accumulated data of all realms. So it is enough to scrape the endpoint of just one realm (e.g. the master realm).
Thanks a lot!
It might help someone: the Bitnami image, and therefore the Helm chart, already includes the metrics-spi-provider, so no further installation action is needed, but metrics must be enabled in the values.
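Based on that, a single non-templated ServiceMonitor scraping only the master realm could look roughly like the sketch below; the object name, label selector, and target namespace are assumptions that need to match your actual Keycloak service:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: keycloak-spi-master              # hypothetical name
  namespace: monitoring
  labels:
    release: my-prom-operator-release    # so Prometheus-Operator picks it up, as in the values above
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak   # assumed label on the Keycloak service
  namespaceSelector:
    matchNames:
      - keycloak                         # namespace where Keycloak runs; adjust as needed
  endpoints:
    - port: http
      path: /auth/realms/master/metrics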

How to create keycloak with operator and external database

I followed this, but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and Keycloak with an external DB:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the DB. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the Keycloak pod created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
So now I know why I cannot connect to the DB: it uses a different DB_ADDR. How can I use the address my-app.postgres (a DB in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; it would look like:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I have set up both Keycloak and Postgres in the same namespace, so it works like a charm.
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to the internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason why it fails is due to the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB pod has different labels, the selector will not work.
I reported this issue to the community. If they reply to me, I will try to fix this bug by submitting a patch.
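In other words, for that generated service to select an externally managed DB pod, the pod would need matching labels; a minimal sketch of the labels to add to the DB pod template:
metadata:
  labels:
    app: keycloak
    component: database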
I was having this same issue. After looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the time of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this, and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently it is not only being ignored, but is also somehow breaking the Keycloak pod initialization process.
After I applied these changes the issue was solved for me.
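Putting those two points together, the secret from the question would end up looking roughly like this sketch (all values are placeholders, shown as stringData for readability; the namespace and service names are assumptions):
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: keycloak                   # namespace where the operator watches for the secret
stringData:
  POSTGRES_DATABASE: keycloak                                   # placeholder
  POSTGRES_EXTERNAL_ADDRESS: my-app.postgres.svc.cluster.local  # full service DNS name, not an IP
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_USERNAME: keycloak                                   # placeholder
  POSTGRES_PASSWORD: changeme                                   # placeholder
  # POSTGRES_HOST intentionally omitted, per the second point above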
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external DB.
Since Keycloak somehow internally translates the real external DB address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the DB certificate, it worked as advertised.

Helm Deployment: Connecting Kubernetes to Postgres DB in Cloud SQL

So I am deploying my Spring Boot app using Helm. I am following a pre-existing formula used by our company to accomplish this task, but for some reason I am unable to.
My postgresql-secrets.yml file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "codes-chart.fullname" . }}-postgresql
  labels:
    app: {{ template "codes-chart.name" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  SPRING_DATASOURCE_URL: {{ .Values.secrets.springDatasourceUrl | b64enc }}
  SPRING_DATASOURCE_USERNAME: {{ .Values.secrets.springDatasourceUsername | b64enc }}
  SPRING_DATASOURCE_PASSWORD: {{ .Values.secrets.springDatasourcePassword | b64enc }}
This picks up the values in the values.yaml file:
secrets:
  springDatasourceUrl: PLACEHOLDER
  springDatasourceUsername: PLACEHOLDER
  springDatasourcePassword: PLACEHOLDER
The placeholders are overridden in Helm using a variable override in the environment.
The secrets are referenced in the envFrom: section of codes-deployment.yaml:
envFrom:
  - configMapRef:
      name: {{ template "codes-chart.fullname" . }}-application
  - secretRef:
      name: {{ template "codes-chart.fullname" . }}-postgresql
My Helm file structure is as follows:
|helm
|-codes
|--configmaps
|---manifest
|----manifest-codes-configmap.yaml
|--templates
|---application-deploy-job.yaml
|---application-manifest-configmap.yaml
|---application-register-job.yaml
|---application-unregister-job.yaml
|---codes-application-configmap.yaml
|---codes-deployment.yaml
|---codes-hpa.yaml
|---codes-ingress.yaml
|---codes-service.yaml
|---postgresql-secret.yaml
|--values.yaml
|--Chart.yaml
The issue seems to be with SPRING_DATASOURCE_URL:
If I use the private IP of the Cloud SQL DB, it says it is not accepting connections.
If I use the JDBC URL format, for example:
jdbc:postgresql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
then I get a 403 authentication error.
What am I doing wrong?
403 Forbidden:
The server understood the request but is refusing to fulfill it.
A 403 is returned for authenticated users with insufficient permissions.
403 indicates that the resource cannot be provided. This may be because it is known that no level of authentication is sufficient, or because the user is already authenticated and does not have the required authority.
Let me add some examples:
https://www.baeldung.com/kubernetes-helm
https://medium.com/zoom-techblog/from-zero-to-kubernetes-4fd354423e6a
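For reference, a hedged sketch of how the secrets values might be overridden when using the Cloud SQL socket-factory URL format from the question; every name and credential below is a placeholder, and the Cloud SQL Postgres socket-factory library must be on the application's classpath. In this setup, a 403 from the socket factory commonly points to the workload's service account lacking the Cloud SQL Client role, which is worth checking:
secrets:
  # placeholder project, region, instance, and database names
  springDatasourceUrl: "jdbc:postgresql://google/mydb?cloudSqlInstance=my-project:us-central1:my-instance&socketFactory=com.google.cloud.sql.postgres.SocketFactory"
  springDatasourceUsername: myuser        # placeholder
  springDatasourcePassword: mypassword    # placeholder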