I'm trying to deploy an application that uses PostgreSQL as its database to my minikube. I'm using Helm as a package manager, and have added the PostgreSQL dependency to my requirements.yaml. Now the question is: how do I set the Postgres user, database, and password for that deployment? Here's my templates/application.yaml:
apiVersion: v1
kind: Service
metadata:
name: {{ template "sgm.fullname" . }}-service
spec:
type: NodePort
selector:
app: {{ template "sgm.fullname" . }}
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "sgm.fullname" . }}-deployment
spec:
replicas: 2
selector:
matchLabels:
app: {{ template "sgm.fullname" . }}
template:
metadata:
labels:
app: {{ template "sgm.fullname" . }}
spec:
containers:
- name: sgm
image: mainserver/sgm
env:
- name: POSTGRES_HOST
value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a ConfigMap as stated in the postgres Helm chart's GitHub README, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
postgresqlDatabase: stackoverflow
postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUser here too.
In your own chart you can reference these values like any other:
- name: PGUSER
value: {{ .Values.postgresql.postgresqlUser }}
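For the password specifically, rather than templating the plain value into an environment variable, you can point at the Secret the postgresql subchart creates. A hedged sketch (the secret name {{ .Release.Name }}-postgresql and key postgresql-password match recent versions of the chart, but verify them with kubectl get secrets for your chart version):
env:
  - name: PGUSER
    value: {{ .Values.postgresql.postgresqlUser }}
  - name: PGDATABASE
    value: {{ .Values.postgresql.postgresqlDatabase }}
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Release.Name }}-postgresql
        key: postgresql-password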
Can environment variables passed to containers be composed from environment variables that already exist? Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
env:
- name: URL
value: $(HOST):$(PORT)
Helm with its variables seems like a better way of handling that kind of use case.
In the example below you have a deployment snippet with values and variables:
spec:
containers:
- name: {{ .Chart.Name }}
image: "image/thomas:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: URL
value: {{ .Values.host }}:{{ .Values.port }}
And here is one of the ways of deploying it with some custom variables:
helm upgrade --install myChart . \
--set image.tag=v2.5.4 \
--set host=example.com \
--set-string port=12345
Helm also allows you to use template functions. The default function falls back to a default value if one isn't supplied, and the required function displays a message and stops the chart installation if you don't specify the value. There is also the include function, which allows you to bring in another template and pass its results to other template functions.
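As a small sketch of what those functions could look like applied to the snippet above (the error message and the 8080 default are just examples):
env:
  - name: URL
    value: "{{ required "host is required" .Values.host }}:{{ .Values.port | default 8080 }}"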
Within a single Pod spec, this works with exactly the syntax you described, but the environment variables must be defined (earlier) in the same Pod spec. See Define Dependent Environment Variables in the Kubernetes documentation.
env:
- name: HOST
value: host.example.com
- name: PORT
value: '80'
- name: URL
value: '$(HOST):$(PORT)'
Beyond this, a Kubernetes YAML file needs to be totally standalone, and you can't use environment variables on the system running kubectl to affect the file content. Other tooling like Helm fills this need better; see thomas's answer for an example.
These manifests are complete files, so there isn't a good way to use variables in them directly, though you can substitute placeholders before applying them.
Use a command like the one below to replace the placeholder and pipe the result to kubectl:
sed -e "s#%%HOST%%#http://whatever#" file.yml | kubectl apply -f -
Though I would suggest using Helm.
Read more:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
I am attempting to create a MongoDB through GitLab CI using Helm (a DB similar to the Postgres offering by GitLab). Using only Helm charts, I am trying to enable the service. [I am deploying on my own cluster, but eventually it will be on a cluster where I do not have direct kubectl rights.]
I am adapting the setup based on this video walkthrough.
I am receiving this error:
wait.go:53: [debug] beginning wait for 7 resources with timeout of 5m0s
wait.go:216: [debug] PersistentVolumeClaim is not bound: angular-23641052-review-86-sh-mousur/review-86-sh-mousur-mongodb
This is the application.yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ template "chart.fullname" .}}-deployment
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "chart.fullname" .}}-example
template:
metadata:
labels:
app: {{ template "chart.fullname" .}}-example
spec:
containers:
- name: example
image: angular-example:1.0.1
env:
- name: MONGO_URL
value: mongodb://{{ template "mongodb.fullname" . }}.default.svc.cluster.local:27017/{{ .Values.DbName }}
Currently this is for testing, and the database is very small, based on a few JSONs.
How can I move to dynamic storage for the interim using only Helm charts, i.e. without access to kubectl?
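For reference, the kind of per-dependency override being asked about might look like this in values.yaml. This is only a sketch: the persistence.* keys are assumptions to check against the MongoDB chart's README, and standard is minikube's default dynamic StorageClass:
mongodb:
  persistence:
    enabled: true
    storageClass: standard
    size: 1Gi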
I'm trying to deploy several pods in Kubernetes using a Mongo image with an initialization script in them. I'm using Helm for the deployment. Since I'm starting from the official Mongo Docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at the beginning to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using Helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in Helm, so this will be done in all the pods of the deployment.
You can use a ConfigMap. Let's put an nginx configuration file into a container via a ConfigMap. We have a directory called nginx at the same level as values.yaml, and inside it the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config-file
labels:
app: ...
data:
nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: SomeDeployment
...
spec:
replicas:
selector:
matchLabels:
app: ...
release: ...
template:
metadata:
labels:
app: ...
release: ...
spec:
volumes:
- name: nginx-conf
configMap:
name: nginx-config-file
items:
- key: nginx.conf
path: nginx.conf
containers:
- name: ...
image: ...
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
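Adapting the same pattern to the MongoDB question above, a minimal sketch (the mongo/init.js file next to values.yaml and the ConfigMap name are hypothetical; the official image executes anything mounted under /docker-entrypoint-initdb.d on first start):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init-scripts
data:
  init.js: |-
{{ .Files.Get "mongo/init.js" | indent 4 }}
---
# In the pod spec of the Deployment or StatefulSet:
spec:
  volumes:
    - name: mongo-init
      configMap:
        name: mongo-init-scripts
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: mongo-init
          mountPath: /docker-entrypoint-initdb.d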
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
I am getting started with Helm to carry out deployments to Kubernetes, and I am stuck connecting a Node.js application to a Postgres DB. I am using Helm to carry out the deployment to Kubernetes.
Below is my YAML file for the application:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "service-chart.fullname" . }}-deployment
spec:
replicas: 2
selector:
matchLabels:
app: {{ template "service-chart.fullname" . }}-converstionrate
template:
metadata:
labels:
app: {{ template "service-chart.fullname" . }}-converstionrate
spec:
containers:
- name: {{ template "service-chart.fullname" . }}-converstionrate
image: <application_image>
env:
- name: DB_URL
value: postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
---
kind: Service
apiVersion: v1
metadata:
name: {{ template "service-chart.fullname" . }}-service
spec:
type: NodePort
selector:
app: {{ template "service-chart.fullname" . }}-converstionrate
ports:
- port: 8080
targetPort: 3000
Below is my requirements file where I am using the postgres dependency:
dependencies:
- name: postgresql
version: "8.1.2"
repository: "https://kubernetes-charts.storage.googleapis.com/"
Below is the application code where I try to connect to the DB:
if (config.use_env_variable) {
// sequelize = new Sequelize(process.env[config.use_env_variable], config);
sequelize = new Sequelize(
process.env.POSTGRES_HOST,
process.env.POSTGRES_USER,
process.env.POSTGRES_PASSWORD,
process.env.POSTGRES_DIALECT
);
} else {
sequelize = new Sequelize(
config.database,
config.username,
config.password,
config
);
}
What I am not able to understand is how to connect to the DB, since with the above I am not able to.
Can anyone please help me out here?
I am a newbie with Helm, hence not able to figure it out. I have looked into a lot of blogs, but somehow it is not clear how it needs to be done. As the DB is running in one pod and the Node app in another, how do I wire them up together? How do I set the DB's env variables in the YAML so they can be consumed?
FYI, I am using minikube to deploy as of now.
The application code is available at https://github.com/Vishesh30/Node-express-Postgress-helm
Thanks,
Vishesh.
As mentioned by David Maze, you need to fix the variable name from DB_URL to POSTGRES_HOST, but there are some other things I could see.
I've tried to reproduce your scenario, and the following works for me.
You need to fix the service DNS in your YAML file, from:
postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
to
postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
After that, you need to pass the database host, username, and password to your application. You can do this by overriding the postgresql defaults (because it's a subchart of your application), as described in the Helm documentation, and then injecting them into your container as environment variables.
There are a lot of variables; see here.
Add these values to your values.yaml in order to override the postgresql defaults:
postgresql:
postgresqlUsername: dbuser
postgresqlPassword: secret # just for example
You can create a secret file to store the password:
service-chart/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: dbconnection
type: Opaque
stringData:
POSTGRES_PASSWORD: {{ .Values.postgresql.postgresqlPassword }}
If you don't want to keep the password in a file, maybe you can try some automation process to rewrite your file before the deployment. You can read more about Secrets here.
Now, in your deployment.yaml, add the variables. POSTGRES_USERNAME will be read from the values.yaml file, and the value for POSTGRES_PASSWORD from the Secret:
env:
- name: POSTGRES_HOST
value: postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
- name: POSTGRES_USERNAME
value: {{ .Values.postgresql.postgresqlUsername }}
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dbconnection
key: POSTGRES_PASSWORD
After the deployment you can check the container's environment variables.
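For example, something like kubectl exec <pod-name> -- env | grep POSTGRES (with a real pod name substituted) should print the injected values.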
I really hope it helps you!
I'm deploying a Kubernetes stateful set and I would like to get the pod index inside the helm chart so I can configure each pod with this pod index.
For example in the following template I'm using the variable {{ .Values.podIndex }} to retrieve the pod index in order to use it to configure my app.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
template:
metadata:
labels:
app: {{ .Values.name }}
spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: Always
name: {{ .Values.name }}
command: ["launch"],
args: ["-l","{{ .Values.podIndex }}"]
ports:
- containerPort: 4000
imagePullSecrets:
- name: gitlab-registry
You can't do this in the way you're describing.
Probably the best path is to change your Deployment into a StatefulSet. Each pod launched from a StatefulSet has an identity, and each pod's hostname gets set to the name of the StatefulSet plus an index. If your launch command looks at hostname, it will see something like name-0 and know that it's the first (index 0) pod in the StatefulSet.
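A minimal sketch of that direction, reusing the values fields from your Deployment. Instead of having launch read hostname directly, this variant injects the pod name via the downward API and lets the shell strip off the ordinal; it also assumes a headless Service matching serviceName exists:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.name }}
spec:
  serviceName: {{ .Values.name }}   # StatefulSets require a (headless) Service
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          env:
            - name: POD_NAME              # e.g. myapp-0, myapp-1, ...
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command: ["sh", "-c", "exec launch -l ${POD_NAME##*-}"]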
A second path would be to create n single-replica Deployments using Go templating. This wouldn't be my preferred path, but you can:
{{ range $podIndex := until (int .Values.replicaCount) -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}-{{ $podIndex }}
spec:
replicas: 1
template:
spec:
containers:
- name: {{ .Values.name }}
command: ["launch"]
args: ["-l", "{{ $podIndex }}"]
{{ end -}}
The actual flow here is that Helm reads in all of the template files and produces a block of YAML files, then submits these to the Kubernetes API server (with no templating directives at all), and the Kubernetes machinery acts on it. You can see what's being submitted by running helm template. By the time a Deployment is creating a Pod, all of the template directives have been stripped out; you can't make fields in the pod spec dependent on things like which replica it is or which node it got scheduled on.
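For example, running helm template myrelease . from the chart directory (Helm 3 syntax; the release name is just a placeholder) prints the fully rendered manifests without installing anything.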