I am getting started with Helm to carry out deployments to Kubernetes, and I am stuck connecting a Node.js application to a Postgres DB. I am using Helm to carry out the deployment to K8s.
Below is my YAML file for the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "service-chart.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "service-chart.fullname" . }}-converstionrate
  template:
    metadata:
      labels:
        app: {{ template "service-chart.fullname" . }}-converstionrate
    spec:
      containers:
        - name: {{ template "service-chart.fullname" . }}-converstionrate
          image: <application_image>
          env:
            - name: DB_URL
              value: postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
---
kind: Service
apiVersion: v1
metadata:
  name: {{ template "service-chart.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "service-chart.fullname" . }}-converstionrate
  ports:
    - port: 8080
      targetPort: 3000
Below is my requirements file, where I declare the postgres dependency:
dependencies:
  - name: postgresql
    version: "8.1.2"
    repository: "https://kubernetes-charts.storage.googleapis.com/"
Below is the application code where I try to connect to the DB:
if (config.use_env_variable) {
  // sequelize = new Sequelize(process.env[config.use_env_variable], config);
  sequelize = new Sequelize(
    process.env.POSTGRES_HOST,
    process.env.POSTGRES_USER,
    process.env.POSTGRES_PASSWORD,
    process.env.POSTGRES_DIALECT
  );
} else {
  sequelize = new Sequelize(
    config.database,
    config.username,
    config.password,
    config
  );
}
What I am not able to understand is how to connect to the DB, because with the above setup I am not able to.
Can anyone please help me out here?
I am a newbie with Helm, hence not able to figure it out. I have looked into a lot of blogs, but somehow it is not clear how it needs to be done. Since the DB is running in one pod and the Node app in another, how do I wire them together? How do I set the DB environment variables in the YAML so they can be consumed?
FYI: I am using minikube to deploy as of now.
The application code is available at https://github.com/Vishesh30/Node-express-Postgress-helm
Thanks,
Vishesh.
As mentioned by @David Maze, you need to fix the variable name from DB_URL to POSTGRES_HOST, but there are some other things I could see.
I've tried to reproduce your scenario, and the following works for me.
You need to fix the service DNS name in your YAML file:
postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
to
postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
After that, you need to pass the database host, username, and password to your application. You can do this by overriding the postgresql defaults (because it's a subchart of your application), as described in the Helm documentation, and injecting them into your container using environment variables.
There are a lot of variables; see here.
Add these values to your values.yaml in order to override the postgresql defaults:
postgresql:
  postgresqlUsername: dbuser
  postgresqlPassword: secret # just for example
You can create a secret file to store the password:
service-chart/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: dbconnection
type: Opaque
stringData:
  POSTGRES_PASSWORD: {{ .Values.postgresql.postgresqlPassword }}
If you don't want to keep the password in a file, maybe you can try some automation process to rewrite the file before the deployment. You can read more about secrets here.
Now, in your deployment.yaml, add the variables. POSTGRES_USERNAME will be taken from the values.yaml file, and POSTGRES_PASSWORD from the secret:
env:
  - name: POSTGRES_HOST
    value: postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
  - name: POSTGRES_USERNAME
    value: {{ .Values.postgresql.postgresqlUsername }}
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: dbconnection
        key: POSTGRES_PASSWORD
After the deployment you can check the container's environment variables.
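For reference, a minimal sketch of how the chart's values.yaml could tie this together. The database name and credentials below are placeholders, not values from your repo, and postgresqlDatabase (if I'm reading the chart's values correctly) asks the subchart to create that database:
DbName: conversionrate              # placeholder, referenced by the DB_URL/POSTGRES_HOST templates
postgresql:
  postgresqlDatabase: conversionrate  # database the subchart should create
  postgresqlUsername: dbuser
  postgresqlPassword: secret          # just for example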
I really hope it helps you!
Related
Can environment variables passed to containers be composed from environment variables that already exist? Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env:
            - name: URL
              value: $(HOST):$(PORT)
Helm, with its variables, seems like a better way of handling that kind of use case.
In the example below you have a deployment snippet with values and variables:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "image/thomas:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: URL
          value: {{ .Values.host }}:{{ .Values.port }}
And here is one of the ways of deploying it with some custom variables:
helm upgrade --install myChart . \
  --set image.tag=v2.5.4 \
  --set host=example.com \
  --set-string port=12345
Helm also allows you to use template functions. There is the default function, which falls back to a default value when one isn't provided; the required function, which prints a message and stops the chart installation if you don't specify the value; and the include function, which lets you bring in another template and pass its result to other template functions.
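A minimal sketch of default and required together, using the same host and port values as above (the error message and the fallback "8080" are just illustrations):
env:
  # fails the install if .Values.host is missing; falls back to 8080 if .Values.port is unset
  - name: URL
    value: "{{ required "host is required" .Values.host }}:{{ .Values.port | default "8080" }}"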
Within a single Pod spec, this works with exactly the syntax you described, but the environment variables must be defined (earlier) in the same Pod spec. See Define Dependent Environment Variables in the Kubernetes documentation.
env:
  - name: HOST
    value: host.example.com
  - name: PORT
    value: '80'
  - name: URL
    value: '$(HOST):$(PORT)'
Beyond this, a Kubernetes YAML file needs to be totally standalone, and you can't use environment variables on the system running kubectl to affect the file content. Other tooling like Helm fills this need better; see @thomas's answer for an example.
These manifests are complete files, and there isn't a good way to use variables in them directly, though you can do the following:
Use the command below to substitute the placeholder and pipe the result to kubectl:
sed -e "s#%%HOST%%#http://whatever#" file.yml | kubectl apply -f -
Though I would suggest using Helm.
Read more:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
I have an application in a container which reads certain data from a ConfigMap, which goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
Now I created a secret for the password and mounted it as an environment variable when starting the container.
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config/application.yaml
          subPath: application.yaml
  volumes:
    - name: config-volume
      configMap:
        name: app-config
I tried using this environment variable inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the values.yaml (Helm) file and use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is a static file. If this YAML file were processed by something at runtime, then it might get replaced by its value.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
  echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. If you then run sh application.sh, you will see the value of the environment variable.
I hope this clears things up.
You cannot use a Secret in a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets using envs, as that creates a potential risk (read more here about why envs shouldn't be used). Applications usually dump environment variables in error reports or even write them to the app logs at startup, which could lead to exposing Secrets.
The best way would be to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
Kubernetes documentation explains well how to use and mount secrets.
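For completeness, a minimal sketch of the Secret being mounted above; the name ssh-key matches the snippet, but the key name and contents are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key                         # matches secretName in the volume above
type: Opaque
stringData:
  ssh-privatekey: <private key contents>  # placeholder; becomes a file under /var/my-app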
I'm trying to deploy an application that uses PostgreSQL as a database to my minikube. I'm using Helm as a package manager and have added the PostgreSQL dependency to my requirements.yaml. Now the question is: how do I set the postgres user, db, and password for that deployment? Here's my templates/application.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "sgm.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "sgm.fullname" . }}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "sgm.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "sgm.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "sgm.fullname" . }}
    spec:
      containers:
        - name: sgm
          image: mainserver/sgm
          env:
            - name: POSTGRES_HOST
              value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a configmap as stated in the postgres helm chart's GitHub readme, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUsername here too.
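For instance, a sketch of the same values block with the username added (sgm is just a placeholder user name):
postgresql:
  postgresqlUsername: sgm               # placeholder
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere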
In your own chart you can reference these values like any other
- name: PGUSER
  value: {{ .Values.postgresql.postgresqlUsername }}
I'm using a Kubernetes ConfigMap that contains database configuration for an app, and there is a Secret that holds the database password.
I need to use this Secret in the ConfigMap. When I add an environment variable in the ConfigMap and supply its value in the pod deployment from the Secret, I'm not able to connect to MySQL with the password, because the ConfigMap keeps the literal string of the variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          ports:
            - name: "8080"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
Note: the secret exists, and I'm able to get the "mysql-root-password" value and use it to log in to the database.
Kubernetes can't make that substitution for you, you should do it with shell in the entrypoint of the container.
This is a working example. I modify the default entrypoint to create a new variable with that substitution. After this command you should add the desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          command:
            - /bin/bash
            - -c
          args:
            - "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
          ports:
            - name: "app"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
You could do something like this in Helm:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in configmap:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
The secret should already be present at deploy time, or you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller
I would transform the whole configMap into a secret and deploy the database password directly in there.
Then you can mount the secret as a file to a volume and use it like a regular config file in the container.
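A minimal sketch of that approach; the names (app-config-secret, the /config mount path) and the password value are placeholders, not from the original post:
# Secret holding the whole config file, password included
apiVersion: v1
kind: Secret
metadata:
  name: app-config-secret
type: Opaque
stringData:
  config.yaml: |
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "changeme"   # real password goes here
---
# Deployment mounting the Secret as a regular file under /config
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          volumeMounts:
            - name: app-config
              mountPath: /config
              readOnly: true
      volumes:
        - name: app-config
          secret:
            secretName: app-config-secret
The application then reads /config/config.yaml like any other config file.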
I'm having trouble establishing an SSL connection between a web service and a remotely hosted Postgres database. With the same cert and key files being used for the web service, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from the Postgres instance in the Google Cloud Console.
Issue:
At the time of Spring Boot service start up, the following error occurs:
org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key
When I look at the Postgres server logs, the error is recorded as
LOG: could not accept SSL connection: UNEXPECTED_RECORD
Setup:
Spring Boot service running on Minikube (local) and GKE connecting to a Google Cloud SQL Postgres instance.
Actions Taken:
I have downloaded the client cert & key and created a K8s TLS Secret from them. I have also made sure the files can be read from the volume mount by running the following command in the k8s deployment config:
command: ["bin/sh", "-c", "cat /tls/tls.key"]
Here is the datasource url which is fed in via an environment variable (DATASOURCE).
"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"
Here is the k8s deployment YAML. Any idea where I'm going wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "service.name" . }}
  labels:
    release: {{ template "release.name" . }}
    chart: {{ template "chart.name" . }}
    chart-version: {{ template "chart.version" . }}
    release: {{ template "service.fullname" . }}
spec:
  replicas: {{ $.Values.image.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ template "service.name" . }}
        release: {{ template "release.name" . }}
        env: {{ $.Values.environment }}
    spec:
      imagePullSecrets:
        - name: {{ $.Values.image.pullSecretsName }}
      containers:
        - name: {{ template "service.name" . }}
          image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
          # command: ["bin/sh", "-c", "cat /tls/tls.key"]
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          volumeMounts:
            - name: tls-cert
              mountPath: "/tls"
              readOnly: true
          ports:
            - containerPort: 80
          env:
            - name: DATASOURCE_URL
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_URL
            - name: DATASOURCE_USER
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_USER
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_PASSWORD
      volumes:
        - name: tls-cert
          projected:
            sources:
              - secret:
                  name: postgres-tls
                  items:
                    - key: tls.crt
                      path: tls.crt
                    - key: tls.key
                      path: tls.key
So I figured it out; I was asking the wrong question!
Google Cloud SQL has a proxy component for the Postgres database. The problem I was trying to solve (connecting the traditional way) is resolved by implementing the proxy. Instead of dealing with whitelisting IPs, SSL certs, and such, you just spin up the proxy, point it at a GCP credential file, and then update your database URI to connect via localhost.
To set up the proxy, you can find directions here. There is a good example of a k8s deployment file here.
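Roughly, the proxy runs as a sidecar next to the application container. A minimal sketch of the pod template; the instance connection name, image tag, secret name, and database URL are placeholders, not values from my setup:
spec:
  template:
    spec:
      containers:
        - name: web-service                 # your application container (placeholder name)
          image: my-web-service:latest      # placeholder image
          env:
            - name: DATASOURCE_URL
              # with the proxy in place, the app connects to localhost and needs no SSL parameters
              value: jdbc:postgresql://127.0.0.1:5432/mydatabase
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - /cloud_sql_proxy
            - "-instances=my-project:us-central1:my-instance=tcp:5432"
            - "-credential_file=/secrets/cloudsql/credentials.json"
          volumeMounts:
            - name: cloudsql-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-credentials
          secret:
            secretName: cloudsql-credentials   # Secret holding the service account JSON key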
One situation I did come across was the GCP service account. Make sure to add Cloud SQL Client AND Cloud SQL Editor roles. I only added the Cloud SQL Client to start with and kept getting the 403 error.