Helm Deployment: Connecting Kubernetes to Postgres DB in Cloud SQL - postgresql

So I am deploying my Spring Boot app using Helm. I am following a pre-existing formula used by our company to try and accomplish this task, but for some reason I am unable to.
My postgresql-secrets.yml file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "codes-chart.fullname" . }}-postgresql
  labels:
    app: {{ template "codes-chart.name" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  SPRING_DATASOURCE_URL: {{ .Values.secrets.springDatasourceUrl | b64enc }}
  SPRING_DATASOURCE_USERNAME: {{ .Values.secrets.springDatasourceUsername | b64enc }}
  SPRING_DATASOURCE_PASSWORD: {{ .Values.secrets.springDatasourcePassword | b64enc }}
This picks up the following values from the values.yaml file:
secrets:
  springDatasourceUrl: PLACEHOLDER
  springDatasourceUsername: PLACEHOLDER
  springDatasourcePassword: PLACEHOLDER
The placeholders are being overwritten in Helm using a variable override from the environment.
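Such an override might look like this at deploy time (a minimal sketch; the release name and chart path are placeholders, not taken from the question):
# Hedged example: CI/environment values replace the PLACEHOLDERs via --set
helm upgrade --install codes-release ./helm/codes \
  --set secrets.springDatasourceUrl="$SPRING_DATASOURCE_URL" \
  --set secrets.springDatasourceUsername="$SPRING_DATASOURCE_USERNAME" \
  --set secrets.springDatasourcePassword="$SPRING_DATASOURCE_PASSWORD"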
The secrets are referenced in the envFrom: section of codes-deployment.yaml:
envFrom:
  - configMapRef:
      name: {{ template "codes-chart.fullname" . }}-application
  - secretRef:
      name: {{ template "codes-chart.fullname" . }}-postgresql
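If it helps while debugging, one quick way to confirm the secret values actually reach the container (the pod name here is a placeholder):
# List the Spring datasource variables as seen inside the running container
kubectl exec <codes-pod-name> -- env | grep SPRING_DATASOURCE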
My Helm file structure is as follows:
|helm
|-codes
|--configmaps
|---manifest
|----manifest-codes-configmap.yaml
|--templates
|---application-deploy-job.yaml
|---application-manifest-configmap.yaml
|---application-register-job.yaml
|---application-unregister-job.yaml
|---codes-application-configmap.yaml
|---codes-deployment.yaml
|---codes-hpa.yaml
|---codes-ingress.yaml
|---codes-service.yaml
|---postgresql-secret.yaml
|--values.yaml
|--Chart.yaml
The issue seems to be with the SPRING_DATASOURCE_URL:
If I use the private IP of the Cloud SQL DB, then it says it is not accepting connections.
If I use the JDBC URL format, e.g.:
jdbc:postgresql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
then I get a 403 authentication error.
What am I doing wrong?

403 Forbidden:
The server understood the request, but is refusing to fulfill it.
A 403 is returned for authenticated users with insufficient permissions.
403 indicates that the resource cannot be provided. This may be because it is known that no level of authentication is sufficient, but it may also be because the user is already authenticated and does not have the required authority.
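With the Cloud SQL socket factory, a 403 like this usually points at the credentials the app runs with: the service account needs the Cloud SQL Client role and the Cloud SQL Admin API must be enabled for the project. A hedged sketch of the IAM grant, assuming the pod authenticates as SA_EMAIL in project PROJECT_ID (both placeholders):
# Allow the pod's service account to connect to Cloud SQL instances in the project
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/cloudsql.client"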
Let me add some examples:
https://www.baeldung.com/kubernetes-helm
https://medium.com/zoom-techblog/from-zero-to-kubernetes-4fd354423e6a

Related

Using SecretProviderClass with Ingress basic Auth

I'm trying to set up basic auth in Ingress. The "nginx.ingress.kubernetes.io/auth-secret" I have stored in K8s secrets using a SecretProviderClass. The secret is mounted correctly. As per this documentation (https://kubernetes.github.io/ingress-nginx/examples/auth/basic/), the secret should hold the credentials under an "auth" key in its data. Hence, in my deployment file I created an environment variable named "BASIC_AUTH_VALUE" to achieve this.
env:
  - name: SECRET_AUTH
    valueFrom:
      secretKeyRef:
        name: {{ include "ui.fullname" . }}-azure-csi
        key: FRONTEND_BASIC_AUTH
        optional: false
  - name: BASIC_AUTH_VALUE
    value: data.auth:$(SECRET_AUTH)
Then in my ingress file, I set the annotations as below
nginx.ingress.kubernetes.io/auth-secret: BASIC_AUTH_VALUE
Even then I still get a 503 error. The pod is up and running and there isn't anything in the logs that I can find.
I have tried several options but all in vain so far. Any guidance will be of great help. Thanks.
I found a solution. I had to adapt the SecretProviderClass's secretObjects as below
secretObjects:
  - data:
    {{- range $secret := .Values.azureSecretsCSI.secrets }}
      - key: {{ $secret.k8sName }}
        objectName: {{ $secret.azName }}
    {{- end }}
    secretName: {{ include "ui.fullname" . }}-auth-azure-csi
    type: Opaque
Where "{{ $secret.k8sName }}" must be "auth" is derived from values.yaml file as below
azureSecretsCSI:
  tenantId: XXX
  kvName: XXX
  secrets:
    - azName: XXX
      k8sName: auth
And then in the Ingress annotations, use the name of the secret created via the SecretProviderClass instead of an environment variable (which is what I was trying to do and which wasn't working):
nginx.ingress.kubernetes.io/auth-secret: {{ include "ui.fullname" . }}-auth-azure-csi
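To confirm the CSI driver really synced a Kubernetes Secret containing the auth key the ingress controller expects, something like this can be used (the rendered secret name is shown as a placeholder):
# Decode the "auth" key of the synced secret; it only exists while a pod mounts the CSI volume
kubectl get secret <ui-fullname>-auth-azure-csi -o jsonpath='{.data.auth}' | base64 -d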

How to connect postgresql from app using helm and kubernetes?

I am really struggling with how my application, which is deployed in the dev namespace, can connect to the PostgreSQL database that I deployed independently using Helm in the database namespace. What I have done so far is below.
The database and my app are deployed in different namespaces. I just copied the names PGHOST and PGPASSWORD from some examples, but I am not sure where I should use these names and whether they have to match something on the PostgreSQL side.
Should I take care of anything else to connect to the database, or is there anything here that is not best practice? Should I add a namespace to the JDBC URL?
Locally we connect to the database using the parameters below, but what should this look like after we deploy our application via Helm? We are using Sequelize as the client library.
const connectionString = `postgres://${global.config.database_username}:${global.config.database_password}@${global.config.database_host}:${global.config.database_port}/${global.config.database_name}`;
postgres values
## Specify PGDATABASE
##
DBName: db
After I deployed Postgres:
# of replicas: 3
service name: my-postgres-postgresql-helm
service port: 64000
database name: db
database user: admin
jdbc url: jdbc:postgresql://my-postgres-postgresql-helm:port
deployment.yaml
- name: PGHOST
  valueFrom:
    configMapKeyRef:
      name: {{ .Release.Name }}-configmap
      key: jdbc-url
- name: PGDATABASE
  value: {{ .Values.postgres.database name | quote }}
- name: PGPASSWORD
  value: "64000"
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "my-mp.name" . }}
      key: POSTGRES_PASSWORD
configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
    app.kubernetes.io/name: {{ include "my-mp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "my-mp.chart" . }}
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm..
values.yaml
postgres:
  service name: my-postgres-postgresql-helm
  service port: 64000
  database name: db
  database user: admin
Is this a typo in your question's jdbc-url (jdbc:postgresql://my-postgres-postgresql-helm..)? You have mentioned that the service name is my-postgres-postgresql-helm, and hence the JDBC URL should be something like jdbc:postgresql://my-postgres-postgresql-helm.database. Note the .database appended to the service name! Since your application pod is running in a different namespace, you should append the namespace name to the service name. Had they been in the same namespace, you wouldn't need it.
Now, if that doesn't fix it, to debug the issues, this is what I would do if I were you:
Check if there are any NetworkPolicies which add restrictions at the namespace level, i.e. allowing traffic only between specific namespaces or even pods, which may prevent the traffic from your application pod from reaching your postgres pod.
Make sure your Service for the postgres pod is correct. That is, describing the Service should list the Pod's IP under Endpoints. If not, check the Service's label selector and make sure it uses the same labels as the postgres pod.
Exec into your pod and check if your application pod is able to reach the Service through nslookup using the service name, that is my-postgres-postgresql-helm.database.
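A hedged sketch of those checks, assuming the namespaces are literally named dev and database and using the service name from the question (the application pod name is a placeholder):
# 1. Any NetworkPolicies restricting traffic in the database namespace?
kubectl get networkpolicy -n database
# 2. Does the Service list the postgres pod IPs under Endpoints?
kubectl describe svc my-postgres-postgresql-helm -n database
# 3. Can the application pod resolve the cross-namespace service name? (nslookup must exist in the image)
kubectl exec -it <my-app-pod> -n dev -- nslookup my-postgres-postgresql-helm.database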
If all these tests are positive and working, then most probably it is some other configuration issue. Let me know if this fixes your issue and GL.
If I understand correctly, you have the database and the app in different namespaces and the point of namespaces is to isolate.
If you really need to access it, you can use the DNS autogenerated entry servicename.namespace.svc.cluster.local
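For example, with the service name, port and database from the question (and assuming the namespace is literally called database), the URL would look something like:
jdbc:postgresql://my-postgres-postgresql-helm.database.svc.cluster.local:64000/db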

Recommended way to add features to a 3rd party helm chart?

Currently we're adding features to 3rd party Helm charts we're deploying (for example, in Prometheus we're adding authentication support, as we use the NGINX ingress controller).
Obviously, this will cause us headaches when we want to upgrade those Helm charts: we will need to perform "diffs" with our changes.
What's the recommended way to add functionality to existing 3rd party Helm charts? Should I use umbrella charts and use Prometheus as a dependency, then import values from the chart? (https://github.com/helm/helm/blob/master/docs/charts.md#importing-child-values-via-requirementsyaml)
Or any other recommended way?
-- EDIT --
Example: as you can see, I've added 3 nginx.ingress.* annotations to support basic auth on the Prometheus Ingress resource. Of course, if I upgrade, I'll need to manually add them again, which will cause problems.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
{{- if .Values.prometheus.ingress.annotations }}
  annotations:
{{ toYaml .Values.prometheus.ingress.annotations | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.ingress.nginxBasicAuthEnabled }}
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
    nginx.ingress.kubernetes.io/auth-secret: {{ template "prometheus-operator.fullname" . }}-prometheus-basicauth
    nginx.ingress.kubernetes.io/auth-type: "basic"
{{- end }}
  name: {{ $serviceName }}
  labels:
    app: {{ template "prometheus-operator.name" . }}-prometheus
{{ include "prometheus-operator.labels" . | indent 4 }}
{{- if .Values.prometheus.ingress.labels }}
{{ toYaml .Values.prometheus.ingress.labels | indent 4 }}
{{- end }}
spec:
  rules:
  {{- range $host := .Values.prometheus.ingress.hosts }}
  - host: {{ . }}
    http:
      paths:
      - path: "{{ $routePrefix }}"
        backend:
          serviceName: {{ $serviceName }}
          servicePort: 9090
  {{- end }}
{{- if .Values.prometheus.ingress.tls }}
  tls:
{{ toYaml .Values.prometheus.ingress.tls | indent 4 }}
{{- end }}
{{- end }}
I think these might answer your question:
Subcharts and Globals
Requirements
Helm Dependencies
This led me to find the specific part I was looking for: the parent chart can override sub-chart values by specifying the chart name as a key in the parent's values.yaml.
In the application chart's requirements.yaml:
dependencies:
  - name: jenkins
    # Can be found with "helm search jenkins"
    version: '0.18.0'
    # This is the binaries repository, as documented in the GitHub repo
    repository: 'https://kubernetes-charts.storage.googleapis.com/'
Run:
helm dependency update
In the application chart's values.yaml:
# ...other normal config values
# Name matches the sub-chart
jenkins:
  # This will override "someJenkinsConfig" in the "jenkins" sub-chart
  someJenkinsConfig: value
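Applied to the Prometheus example above, a hedged sketch of the same idea: because the ingress template already renders .Values.prometheus.ingress.annotations with toYaml, an umbrella chart could inject the basic-auth annotations from its own values.yaml instead of patching the sub-chart (the sub-chart key prometheus-operator and the secret name are assumptions here):
# Parent (umbrella) chart values.yaml
prometheus-operator:
  prometheus:
    ingress:
      annotations:
        nginx.ingress.kubernetes.io/auth-type: "basic"
        nginx.ingress.kubernetes.io/auth-secret: prometheus-basicauth   # hypothetical secret name
        nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"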
I would either fork and handle integrating the changes when you upgrade/rebase, or if possible disable the ingress elements for those you want to customise via the values.yaml file. Then create your own ingress instances manually with the customisations you need in another custom chart, and provide it the references it needs from the prometheus chart as normal values.yaml inputs.
Obviously this approach has its limitations; if the customisations are too tightly coupled to the chart, it might not be possible to split them out.
Hope this helps.

Conditionally deploying a secret based on --set parameter

I have a Helm chart that I am deploying to Azure Kubernetes Service, and minikube for development purposes.
When deploying to minikube, I need to add a secret so the cluster can speak with my Azure Container Registry. This is not necessary when I'm deploying to AKS.
Is there any way I can specify whether or not to include the secret through a --set value with helm install, or do I have to set up different helm charts?
You can put anything you want inside a Go text/template conditional block, even whole Kubernetes resources.
# templates/some-secret.yaml
{{ if .Values.theSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "some.name" . }}-some-secret
  labels:
{{ template "some.labels" . | indent 4 }}
data:
  theSecret: {{ .Values.theSecret | b64enc }}
{{ end }}
Or, if you already have some shared Secret, you can make individual values conditional
data:
  someValue: {{ .Values.someValue | b64enc }}
{{- if .Values.theSecret }}
  theSecret: {{ .Values.theSecret | b64enc }}
{{- end }}
As the chart author you need to write this into the chart. If you're using a third-party chart, it's up to the chart author to provide this functionality.
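Usage would then be along these lines (release name, chart path and the secret value are placeholders):
# minikube: provide the value, so the Secret template is rendered
helm install my-release ./mychart --set theSecret=some-registry-credential
# AKS: omit the value, and the whole Secret block is skipped
helm install my-release ./mychart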

How to pull environment variables with Helm charts

I have my deployment.yaml file within the templates directory of Helm charts with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine Helm is run on, so I can hide the secrets that way.
How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set during installation.
Skip this part if you already know how to set up template fields.
As you don't want to expose the data, it's better to save it as a Secret in Kubernetes.
First of all, add these two lines to your values file, so that the two values can be set from outside:
username: root
password: password
Now, add a secret.yaml file inside your templates folder and copy this code snippet into that file:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment YAML template and make changes to the env section, like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag, you can set this using an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart
NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user
COMPUTED VALUES:
password: password
username: root-user
HOOKS:
MANIFEST:
---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
        - name: irreverant-meerkat
          image: alpine
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: irreverant-meerkat-auth
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: irreverant-meerkat-auth
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the username value in the Secret has changed to root-user.
I have added this example to a GitHub repo. There is also some discussion in the kubernetes/helm repo regarding this; you can see that issue to learn about all the other ways to use environment variables.
You can pass env key/value pairs from the values.yaml by setting up the deployment YAML as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
      {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value }}
      {{- end }}
In the values.yaml (note this needs to be a map, not a list, for the range above and for --set env.KEY=value to work):
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username and password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
For those looking to use data structures (maps) instead of lists for their env variables, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
      {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
      {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the Helm chart and pass the sensitive values via the Helm command line.
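Usage is then the same --set pattern as before, just with map keys (release name and chart path are placeholders):
helm install my-release ./mychart \
  --set env.USERNAME="app-username" \
  --set env.PASSWORD="28sin47dsk9ik"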
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the manifest to a script before it is actually sent to the Kubernetes API. Post rendering allows you to manipulate the manifest in multiple ways (e.g. use kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $<VARNAME>, which could collide with variables in the templates, such as shell scripts in liveness probes. So it is better to explicitly define the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml), use the values defined in the values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc on these strings, as they get injected into the manifest after Helm has already processed all YAML files. Instead, you can encode them in the post renderer if required.
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an environment variable inside the chart by reading the environment itself, not by passing it with --set.
For example: I have set a key "my_db_password" and want to change the value by reading an environment variable, and that is not supported.
I am not very sure about Go templates, but I guess this is disabled for the reason they explain in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller's environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is to just set the value directly. For example, in your values.yaml, you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yaml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use an environment variable, export it beforehand:
#set your env variable
export MYAPP_SERVICE=hello
#pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
You can see this information at the beginning of your YAML output, where your "hello" was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store these kinds of sensitive values in a folder ignored by your VCS, and use the Helm .Files object to read them and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host that will operate the Helm chart to set any OS-specific environment variables, and it makes the chart self-contained while not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In a secret template file, e.g. templates/secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the Chart needs is accessible from within the directory where you define everything else. And instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically, or copied from a committed template with dummy values. Helm will also fire an error early on install/update if this isn't defined, as opposed to creating your secret with username="" and password="" if your env vars haven't been defined, which only becomes obvious once your changes are applied to the cluster.
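For completeness, a sketch of how the pieces could fit together, assuming the file is named secrets/credentials.yaml under the chart root (the file name is my assumption; .Files paths are relative to the chart root):
# secrets/credentials.yaml (next to Chart.yaml, ignored by VCS)
# username: app-username
# password: 28sin47dsk9ik
#
# templates/secret.yaml then reads it:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "secrets/credentials.yaml" | indent 2 }}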