Expose multiple containerPorts via values.yaml - Kubernetes

Is there any way to pass an array of ports in the values.yaml file? I want to set multiple containerPorts. I tried --set "test.containerPort={8080,10102,19905}" and got the error message invalid type for io.k8s.apimachinery.pkg.util.intstr.IntOrString: got "array", expected "string".
Any examples or suggestions would be really helpful.

Helm uses the Go templating mechanism, so it actually takes your parameters from values.yaml and puts them into the templates/* files.
In other words, the way you set multiple container ports depends on the Helm chart you use.
For example, if you had a file templates/my-statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  template:
    spec:
      containers:
      - ports:
{{ toYaml .Values.ports | indent 10 }}
...
Then, you could use the following values.yaml to set multiple container ports.
ports:
  - name: my-first-port
    containerPort: 5678
  - name: my-second-port
    containerPort: 5679
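If you still want to pass the ports on the command line instead of in values.yaml, Helm's --set also accepts list-index syntax. A minimal sketch, assuming the chart reads .Values.ports as above (the release and chart names are placeholders):
helm install my-release ./my-chart \
  --set 'ports[0].name=my-first-port' \
  --set 'ports[0].containerPort=5678' \
  --set 'ports[1].name=my-second-port' \
  --set 'ports[1].containerPort=5679'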

Related

Different name required to override value in Helm subchart

I have read the Helm docs and various StackOverflow questions - this is not (I hope!) a lazy question. I'm having an issue overriding a single particular value in a Helm chart, not having trouble with the concept in general.
I'm trying to install the Gitea helm chart on a k8s cluster on Raspberry Pis (that is, on arm64 architecture). Since the default memcached dependency chart is from Bitnami, who don't support arm64, I have overridden the image appropriately (to arm64v8/memcached).
However, this new image has a different entrypoint - /entrypoint.sh instead of /run.sh. Referencing the relevant part of the template, I believed I needed to override memcached.args, but that didn't work as expected:
$ cat values.yaml
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  args:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false
$ helm template gitea-charts/gitea --values values.yaml
[...]
# Source: gitea/charts/memcached/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-memcached
  namespace: gitea
  labels: [...]
spec:
  selector:
    matchLabels: [...]
  replicas: 1
  template:
    metadata:
      labels: [...]
    spec:
      [...]
      serviceAccountName: release-name-memcached
      containers:
        - name: memcached
          image: docker.io/arm64v8/memcached:1.6.17
          imagePullPolicy: "IfNotPresent"
          args:
            - /run.sh # <----- this should be `/entrypoint.sh`
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: memcache
              containerPort: 11211
          [...]
However, when I instead overrode memcached.arguments, the expected behaviour occurred: the contents of memcached.arguments rendered in the template's args (or, if memcached.arguments was empty, no args were rendered).
Where is this mapping from arguments to args taking place?
Note in particular that the Bitnami chart docs refer to args, so this is unexpected - though note also that the Bitnami chart's values.yaml refers to arguments in the comment (this is what prompted me to try this "obviously wrong" approach!). In the "Upgrade to 5.0.0 notes", we see "arguments has been renamed to args." - but the Gitea chart is using a >5.0.0 version of the Bitnami chart.
Your reasoning is correct. And the current parameter name is definitely args (arguments is deprecated; someone just forgot to update the comment here).
Now, why does arguments work for you and args doesn't? I think you're just using the old version of the chart, from before the rename. I checked it:
The Gitea chart uses memcached version 5.9.0 from the repo https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
This corresponds to the following Helm chart: https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz
When you extract this chart archive, you can see it's the old version of the chart (with arguments not yet renamed to args).
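If you want to confirm this locally, a minimal sketch (the URL is the one quoted above; helm pull also accepts the repo/chart --version form):
helm pull https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz
tar -xzf memcached-5.9.0.tgz
grep -n 'arguments' memcached/values.yaml memcached/templates/deployment.yaml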

Can a Deployment with multiple ReplicaSets run different CMD commands?

I want to create a few Pods from the same image (I have the Dockerfile), so I want to use ReplicaSets.
But the final CMD command needs to be different for each container.
For example
(https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
Also, I would like to move the CMD value out of a hard-coded list, so that the value there is a variable (it will be in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
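For the second Deployment, a minimal sketch could look like this (the -two suffix, .Values.two.replicas, and the alternate npm script are assumptions; everything else reuses the same helpers):
# templates/deployment-two.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-two
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.two.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - start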
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
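As a sketch of that distinction, reusing the helper above (the npm arguments are just the example values from the question; assume the image's ENTRYPOINT is the wrapper script you want to keep):
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          # args: maps to the Dockerfile CMD; the image's ENTRYPOINT wrapper still runs
          args:
            - npm
            - run
            - dev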

Kubernetes cache clearing and handling

I am using Kubernetes with Helm 3.8.0, with Windows Docker Desktop configured on WSL2.
Sometimes, after running helm install and retrieving a container, the container that is created behind the scenes is an old container that was created before (even after restarting the computer).
For example: now the YAML declares password: 12345 and database: test; before, I ran the container YAML with password: 11111 and database: my_database.
Now when I do helm install mychart ./mychart --namespace test-chart --create-namespace for the current folder's chart, the container runs with password: 11111 and database: my_database, instead of the new parameters provided. There is no current YAML code with the old password, so I don't understand why the container runs with the old one.
I took several actions, such as docker system prune and restarting Windows Docker Desktop, but I still get the old container, which cannot be seen even in Windows Docker Desktop (I have enabled the option Settings -> Kubernetes -> Show system containers).
After some investigation, I realized that this may be because Kubernetes has its own garbage-collection handling of containers, and that is why I may be referring to an old container even though I didn't mean to.
In my case, I am creating a Job template (I didn't put any line that references this Job in the _helpers.tpl file - I never changed that file, and I don't know whether that may cause a problem).
Here is my job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myChart.fullname" . }}-migration
  labels:
    name: {{ include "myChart.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-300"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: {{ template "myChart.name" . }}
        release: {{ .Release.Namespace }}
    spec:
      initContainers:
        - name: wait-mysql
          image: {{ .Values.mysql.image }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "12345"
            - name: MYSQL_DATABASE
              value: test
          command:
            - /bin/sh
            - -c
            - |
              service mysql start &
              until mysql -uroot -p12345 -e 'show databases'; do
                echo `date +%H:%M:%S`' - Waiting for mysql...'
                sleep 5
              done
      containers:
        - name: migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12 }}
      restartPolicy: Never
In the Job, there is a database, which is first created and then populated with data by code.
Also, are the annotations (hooks) necessary?
After running helm install myChart ./myChart --namespace my-namespace --create-namespace, I realized that I am using a very old container, which I don't really need.
I didn't understand whether writing the metadata as in the following example (from the Garbage Collection docs) would really help, and what to put in uid if I don't know it or don't have it.
metadata:
  ...
  ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
Sometimes I really want to reference an existing Pod (or container) from several templates (use the same container, which is not stateless, such as a database container - one template for the Pod and the other for the Job). How can I do that as well?
Is there any command (on the command line, or some kind of method) that clears everything cached by garbage collection, or avoids using garbage collection at all? (What are the main benefits of Kubernetes garbage collection?)

How to use k8s Ansible module without quotes?

I am trying to use the module community.kubernetes.k8s – Manage Kubernetes (K8s) objects with variables from the role (e.g. role/sampleRole/vars file).
I am failing when it comes to integer values, e.g.:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: "{{ name }}"
                image: "{{ image }}"
                ports:
                  - containerPort: {{ containerPort }}
When I deploy with this format it obviously fails, as it cannot parse the "reference" to the var.
Sample of error:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
found unacceptable key (unhashable type: 'AnsibleMapping')
The error appears to be in 'deploy.yml': line <some line>, column <some column>, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
    ports:
      - containerPort: {{ containerPort }}
                       ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
    with_items:
      - {{ foo }}
Should be written as:
    with_items:
      - "{{ foo }}"
When I use quotes around the variable, e.g. - containerPort: "{{ containerPort }}", then I get the following error (part of it):
v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Ports: []v1.ContainerPort: v1.ContainerPort.ContainerPort: readUint32: unexpected character: \ufffd, error found in #10 byte of ...|nerPort":"80"}]}],"d|..., bigger context ...|","name":"samplegreen","ports":[{"containerPort":"80"}]}],"dnsPolicy":"ClusterFirst","restartPolicy"|...", "field": "patch"}]}, "code": 422}', "reason": "Unprocessable Entity", "status": 422}
I tried to cast the string to int by using - containerPort: "{{ containerPort | int }}", but it did not work. The problem seems to be coming from the quotes, regardless of how I define the var in my vars file, e.g. containerPort: 80 or containerPort: "80".
I found a similar question on the forum (Ansible, k8s and variables), but the user does not seem to have the same problem that I am having.
I am running with the latest version of the module:
$ python3 -m pip show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: ruamel.yaml, python-string-utils, jinja2, six, kubernetes
Is there any workaround for this problem, or is it a bug?
Update (08-01-2020): The problem is fixed in version 0.17.0.
$ python3 -m pip show k8s
Name: k8s
Version: 0.17.0
Summary: Python client library for the Kubernetes API
Home-page: https://github.com/fiaas/k8s
Author: FiaaS developers
Author-email: fiaas@googlegroups.com
License: Apache License
Location: /usr/local/lib/python3.8/dist-packages
Requires: requests, pyrfc3339, six, cachetools
You could try the following as a workaround; in this example, we're creating a text template, and then using the from_yaml filter to transform this into our desired data structure:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec: "{{ spec | from_yaml }}"
  vars:
    spec: |
      replicas: 2
      selector:
        matchLabels:
          app: "{{ app }}"
      template:
        metadata:
          labels:
            app: "{{ app }}"
        spec:
          containers:
            - name: "{{ name }}"
              image: "{{ image }}"
              ports:
                - containerPort: {{ containerPort }}
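For completeness, a minimal sketch of the role vars this task assumes (the path follows the role/sampleRole/vars layout mentioned in the question; the concrete values are illustrative, with samplegreen taken from the error output above):
# roles/sampleRole/vars/main.yml
name: samplegreen
namespace: default
app: samplegreen
image: nginx:latest
containerPort: 80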
The solution provided by larsks works perfectly, although I ran into another problem in my case, where I use templates for somewhat more complex cases (e.g. loops) and found myself having the same problem.
The only solution that I had before was to use ansible.builtin.template (Template a file out to a remote server) to copy some_file.yml.j2 over SSH to one of my master nodes and deploy it through ansible.builtin.shell (Execute shell commands on targets), e.g. kubectl apply -f some_file.yml.
Thanks to community.kubernetes.k8s (Manage Kubernetes (K8s) objects), I am able to do all this work with a single task, e.g. (example taken from the documentation):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
The only requirement that the user needs to have in advance is the kubeconfig file placed in the default location (~/.kube/config), or to use the kubeconfig parameter to point to the location of the file.
As a last step, I add delegate_to: localhost to the task, e.g.:
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
  delegate_to: localhost
The way this task works is that the user SSHes to themselves (localhost) and effectively runs kubectl apply -f some_file.yml.j2 against the LB or master node API, and the API applies the request (if the user has the permissions).
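A minimal sketch combining the pieces above (template and kubeconfig are existing parameters of the module; the kubeconfig path and the task-level vars are assumptions):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
    kubeconfig: '~/.kube/config'
  vars:
    containerPort: 80
  delegate_to: localhost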

How to connect to PostgreSQL from an app using Helm and Kubernetes?

I am really struggling with how my application, which is deployed in the dev namespace, can connect to the PostgreSQL database which I deployed independently using Helm into the database namespace. What I have done so far is below.
The database and my app are deployed in different namespaces. I just copied the names PGHOST and PGPASSWORD from some examples, but I am not sure where I should use these names and whether they have to match something in PostgreSQL.
Should I take care of anything else to connect to the database, or is there anything here that is not best practice? Should I add a namespace to the JDBC URL?
Locally we connect to the database using the parameters below, but what should it look like after we deploy our application via Helm? We are using Sequelize as a client library.
const connectionString = `postgres://${global.config.database_username}:${global.config.database_password}@${global.config.database_host}:${global.config.database_port}/${global.config.database_name}`;
postgres values
## Specify PGDATABASE
##
DBName: db
After I deployed postgres:
# of replicas: 3
service name: my-postgres-postgresql-helm
service port: 64000
database name: db
database user: admin
jdbc url: jdbc:postgresql://my-postgres-postgresql-helm:port
deployment.yaml
- name: PGHOST
  valueFrom:
    configMapKeyRef:
      name: {{ .Release.Name }}-configmap
      key: jdbc-url
- name: PGDATABASE
  value: {{ .Values.postgres.databaseName | quote }}
- name: PGPASSWORD
  value: "64000"
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "my-mp.name" . }}
      key: POSTGRES_PASSWORD
configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
    app.kubernetes.io/name: {{ include "my-mp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "my-mp.chart" . }}
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm..
values.yaml
postgres:
  serviceName: my-postgres-postgresql-helm
  servicePort: 64000
  databaseName: db
  databaseUser: admin
Is this a typo in your question, in the jdbc url jdbc:postgresql://my-postgre? You have mentioned that the service name is my-postgres-postgresql-helm, and hence the jdbc url should be something like jdbc:postgresql://my-postgres-postgresql-helm.database. Note the .database appended to the service name! Since your application pod is running in a different namespace, you should append the namespace name to the end of the service name. Had they been in the same namespace, you wouldn't need it.
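Applied to the ConfigMap above, that would look roughly like this (a sketch; the port and database name are the values you listed, 64000 and db):
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm.database:64000/db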
Now, if that doesn't fix it, to debug the issues, this is what I would do if I were you:
Check if there are any NetworkPolicies which add restrictions at the namespace level, i.e. allowing traffic only between specific namespaces or even pods, which may prevent the traffic from your application pod from reaching your postgres pod.
Make sure the Service for your postgres pod is correct. That is, describing the service should list the Pod's IP under Endpoints. If not, check the Service's label selector and make sure it uses the same labels as the postgres pod.
Exec into your pod and check if your application pod is able to resolve the service through nslookup using the service name, that is, my-postgres-postgresql-helm.database.
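A minimal sketch of those checks (my-app-pod is a placeholder for your application's pod name; the namespaces are the dev and database ones from the question):
# 1. Look for NetworkPolicies in either namespace
kubectl get networkpolicy -n database
kubectl get networkpolicy -n dev
# 2. Confirm the Service lists the postgres Pod's IP under Endpoints
kubectl describe service my-postgres-postgresql-helm -n database
kubectl get endpoints my-postgres-postgresql-helm -n database
# 3. Resolve the service name from inside the application pod
kubectl exec -n dev my-app-pod -- nslookup my-postgres-postgresql-helm.database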
If all these tests are positive and working, then most probably it is some other configuration issue. Let me know if this fixes your issue and GL.
If I understand correctly, you have the database and the app in different namespaces and the point of namespaces is to isolate.
If you really need to access it, you can use the autogenerated DNS entry servicename.namespace.svc.cluster.local.
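With the names from the question, that fully qualified entry would look roughly like this in the Sequelize connection string (a sketch; the password is whatever you configured for the admin user):
postgres://admin:<your-password>@my-postgres-postgresql-helm.database.svc.cluster.local:64000/db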