How to add extra hosts entries in helm charts - kubernetes

So I'm deploying my application stack on Kubernetes using Helm charts, and now I need to add some dependent servers' IPs and hostnames to the /etc/hosts file inside my pods. I need help with this scenario.

A Helm-templated solution to the original question. I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
  - ip: "10.0.0.1"
    hostnames:
      - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is left out of the rendered template entirely.
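For the scenario in the question, several dependent servers can go in the same values block; a sketch with placeholder IPs and hostnames:

hostAliases:
  - ip: "10.0.0.1"
    hostnames:
      - "db.internal.example.com"
  - ip: "10.0.0.2"
    hostnames:
      - "cache.internal.example.com"
      - "queue.internal.example.com"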

As stated in the documentation, you can add extra hosts entries to a Pod by using the hostAliases feature.
Example from the docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"

Kubernetes provides a DNS service that all pods can use. In turn, you can define an ExternalName service that just defines a DNS record. Once you do that, your pods can talk to that service the same way they'd talk to any other Kubernetes service, and reach the external server.
You could deploy a set of ExternalName services globally. You could do it in a Helm chart too, if you wanted, with something like:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-foo
spec:
  type: ExternalName
  externalName: {{ .Values.fooHostname }}
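A pod in the release can then reach the external host through that service's DNS name. As a minimal sketch (this Deployment fragment and the FOO_HOST variable are illustrative assumptions, not part of the original answer):

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          image: my-app:latest   # placeholder image
          env:
            # resolved by cluster DNS to whatever externalName points at
            - name: FOO_HOST
              value: {{ .Release.Name }}-{{ .Chart.Name }}-foo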
The practice I've learned is that you should avoid using /etc/hosts if at all possible.

Related

How to use Argo CD to pass environment variables to deploy in kubernetes

We are using Argo CD and Kubernetes, and I want to use environment variables in the YAML file. For example:
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
    - port: $PORT
      targetPort: $TARGET_PORT
  selector:
    app: guestbook-ui
I want to set the values of the environment variables (PORT and TARGET_PORT) when deploying it to Argo CD.
What should I do?
I'd recommend converting your raw YAML to a Helm chart and templating the relevant fields.
Argo CD has an example Helm app with a service similar to yours.
You could define a service like this:
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
  selector:
    app: guestbook-ui
And then define your port and targetPort parameters in Argo CD.
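For reference, a minimal sketch of how those parameters might be set on the Argo CD Application itself via spec.source.helm.parameters (the repoURL, path, and values below are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-ui
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  source:
    repoURL: https://github.com/example/guestbook-chart.git   # placeholder repo
    path: charts/guestbook                                    # placeholder path
    targetRevision: HEAD
    helm:
      parameters:
        - name: service.port
          value: "80"
        - name: service.targetPort
          value: "3000"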

How to configure kube-prometheus-stack helm installation to scrape a Kubernetes service?

I have installed kube-prometheus-stack as a dependency in my Helm chart, on a local Docker for Mac Kubernetes cluster v1.19.7. I can view the default Prometheus targets provided by kube-prometheus-stack.
I have a Python Flask service that provides metrics, which I can view successfully in the Kubernetes cluster using kubectl port-forward.
However, I am unable to get these metrics displayed on the Prometheus targets web interface.
The kube-prometheus-stack documentation states that the prometheus.io/scrape annotation is not supported for discovering scrape targets; instead the reader is referred to the concept of ServiceMonitors and PodMonitors.
So, I have configured my service as follows:
---
kind: Service
apiVersion: v1
metadata:
  name: flask-api-service
  labels:
    app: flask-api-service
spec:
  ports:
    - protocol: TCP
      port: 4444
      targetPort: 4444
      name: web
  selector:
    app: flask-api-service
    tier: backend
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
    - port: web
Subsequently, I set up a port forward to view the metrics:
kubectl port-forward prometheus-flaskapi-kube-prometheus-s-prometheus-0 9090
Then I visited the Prometheus web page at http://localhost:9090.
When I select the Status->Targets menu option, my flask-api-service is not displayed.
I know that the service is up and running, and I have checked that I can view the metrics for a pod of my flask-api-service using kubectl port-forward <pod name> 4444.
Looking at a similar issue, it looks as though there is a configuration value serviceMonitorSelectorNilUsesHelmValues that defaults to true. Does setting this to false make the operator look for ServiceMonitors outside its own Helm release labels?
I tried adding this to the values.yaml of my Helm chart, in addition to the extraScrapeConfigs configuration value. However, flask-api-service still does not appear as an additional target on the Prometheus web page under the Status->Targets menu option.
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
  - job_name: 'flaskapi'
    static_configs:
      - targets: ['flask-api-service:4444']
How do I get my flask-api-service recognised on the prometheus targets page at http://localhost:9090?
I am installing Kube-Prometheus-Stack as a dependency via my helm chart with default values as shown below:
Chart.yaml
apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
  - name: kube-prometheus-stack
    version: "14.4.0"
    repository: "https://prometheus-community.github.io/helm-charts"
  - name: ingress-nginx
    version: "3.25.0"
    repository: "https://kubernetes.github.io/ingress-nginx"
  - name: redis
    version: "12.9.0"
    repository: "https://charts.bitnami.com/bitnami"
Values.yaml
docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local
redis_port: "6379"
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
  - job_name: 'flaskapi'
    static_configs:
      - targets: ['flask-api-service:4444']
The Prometheus custom resource definition has a field called serviceMonitorSelector. Prometheus only listens to the ServiceMonitors matched by that selector; in the case of a Helm deployment, it matches on your release name.
release: {{ $.Release.Name | quote }}
So adding this label to your ServiceMonitor should solve the issue. Your ServiceMonitor manifest file will then be:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
  labels:
    release: <your_helm_release_name>
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
    - port: web
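If the ServiceMonitor is shipped inside your own chart rather than applied by hand, the label can be templated so it always matches the parent release; a minimal sketch (the file name is an assumption):

# templates/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
  labels:
    release: {{ .Release.Name | quote }}
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
    - port: web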

How to expose a range of ports in Kubernetes?

How can we expose a range of ports in Kubernetes?
My cloud application uses a range of ports when running (40000 ~ 42000).
How do I specify a range of exposed ports in Kubernetes service yaml file?
Kubernetes services currently do not support port ranges; see https://github.com/kubernetes/kubernetes/issues/23864
Update: as of 2021 there is a Kubernetes enhancement proposal for this requirement: https://github.com/kubernetes/enhancements/pull/2611
One option is to use Go templates to generate the YAML for all of the ports, compile it with gomplate, and then deploy the resulting YAML.
lotsofports.yaml
apiVersion: v1
kind: Service
metadata:
  name: some-service
  namespace: some-service
  labels:
    name: some-service
spec:
  ports:
  {{- range seq 0 2000 }}
  - port: {{ add 40000 . }}
    targetPort: {{ add 40000 . }}
    name: exposed-port-{{ add 40000 . }}
  {{- end }}
  selector:
    name: some-service
  sessionAffinity: None
  type: ClusterIP
Then compile the yaml:
gomplate -f lotsofports.yaml > service.yaml
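The compiled manifest can then be applied as usual, e.g. kubectl apply -f service.yaml.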
As @Thomas pointed out, this is not supported yet.
However, as a workaround you can use Helm templates: create a chart with a service template and generate the ports from the values.yaml file, as sketched below.
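A minimal sketch of that workaround, assuming value names of my own choosing (appName, firstPort, lastPort); Helm's untilStep function generates the list of port numbers:

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-many-ports
spec:
  type: ClusterIP
  selector:
    app: {{ .Values.appName }}
  ports:
    {{- range untilStep (int .Values.firstPort) (int (add .Values.lastPort 1)) 1 }}
    - name: port-{{ . }}
      port: {{ . }}
      targetPort: {{ . }}
    {{- end }}

and in values.yaml:

appName: some-service
firstPort: 40000
lastPort: 42000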

How to create a helm chart where one template relies on another

I'm trying to create a Kubernetes chart which creates an NFS server, based on the example given here:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
The problem with this is that it requires that we create a service, and then create a persistent volume which references the cluster IP of the service (which I won't know until the service has been deployed).
I was initially thinking that I could use a template in some way to call kubectl and query for the cluster IP, but as far as I can tell you can't run a CLI from within Helm templates?
If this is the case, I'm really struggling to see the usefulness of Helm, as lots of setups would require creating one resource and then referencing a dynamic property of it from a different resource. I know I could solve this by splitting the chart in two, but my understanding of Helm is that a chart should contain everything required to deploy a functional part of your app.
Here is the relevant snippet from my template:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.prefix }}-{{ .Values.appName }}-nfs
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: {{ .Values.prefix }}-{{ .Values.appName }}-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.prefix }}-{{ .Values.appName }}-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: << nfs.clusterip >>
    path: "/"
NOTE: the << nfs.clusterip >> placeholder at the end of the persistent volume is the value I won't know until the service is deployed.

helm with ingress controllers

We have an application that gets created using Helm. Every time we do a release, it creates a service with the release name in it. How do we handle this in alb-ingress if the service name keeps changing?
For example, for the ALB ingress (under kops) I have the rule below:
- host: pluto.example.com
  paths:
    - path: /
      backend:
        serviceName: pluto-service
        servicePort: 8080
With a different Helm release, pluto-service will have a new name. How do I handle the ingress?
You can also try the --reuse-values flag with the helm upgrade command. This will reuse the last release's values.
Is the ingress declared with Helm too?
If so, and if the service uses {{ .Release.Name }}-service as its name, you can also use {{ .Release.Name }}-service as the ingress's service name (see the sketch after this answer). You can also write your own tpl function (and add it to the _helpers.tpl file) to determine the service name.
If not, maybe you should ...
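For the first case, a minimal sketch of an Ingress template following the rule from the question (the apiVersion depends on your cluster version; the host and port come from the example above):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - host: pluto.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ .Release.Name }}-service
              servicePort: 8080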
You can create a service in Helm where you pass a different value as the name of the service; most likely you use the release name right now. For example, create a Helm chart for your application where you pass the name as a value:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.nameOverride }}
spec:
  type: NodePort
  ports:
    - name: http-service
      targetPort: 5000
      protocol: TCP
      port: 80
  selector:
    app: <MyApp>
And in the values.yaml of the chart you can specify the name of your service: nameOverride: MyService