Kubernetes Route to Service using Ingress hostname | Keycloak - kubernetes

What field should I add to the service / ingress YAML so that I can reach the service from another pod in the same cluster using its associated (external) hostname specified in the ingress?
I'm using microk8s with the default ingress class (nginx), and I need a solution that works on any Kubernetes platform (Azure, GKE, AKS).
I need to reach my authentication server (Keycloak) from my Node.js application using the ingress hostname. I can't use the service name, because the token validation would fail (JWT `iss` checking).
Thanks!

Based on this SO post this can be done using Helm custom values and hostAliases.
A Helm-templated solution to the original question. I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
  - ip: "10.0.0.1"
    hostnames:
      - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is omitted when the template is rendered.
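The question's constraint is that the Node.js app must reach Keycloak through the same hostname that Keycloak writes into the token's `iss` claim, because a validator configured with the in-cluster Service URL rejects a token issued under the external ingress hostname. The following is an added illustration, not code from the question or the answer above: a minimal JWT validator with an exact issuer comparison. It uses HS256 with a constructed shared secret so it runs offline (Keycloak itself signs with RS256 and publishes its keys via the realm's OIDC discovery document); the two realm URLs are constructed placeholders, and `forge_subject` is a hypothetical helper that only exists to show the signature check catching a modified payload.

```python
"""Added example: why the JWT issuer check fails when the client uses the
in-cluster Service name instead of the ingress hostname.  HS256 with a
constructed secret stands in for Keycloak's RS256 signing."""
import base64
import hashlib
import hmac
import json
import time

SECRET = b"constructed-demo-secret"  # placeholder, not a real Keycloak key

# Constructed URLs: the external ingress hostname vs. the in-cluster Service name.
EXTERNAL_ISSUER = "https://keycloak.example.com/auth/realms/demo"
SERVICE_ISSUER = "http://keycloak.default.svc.cluster.local:8080/auth/realms/demo"


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def _compact(obj: dict) -> str:
    return _b64url(json.dumps(obj, separators=(",", ":")).encode())


def mint_token(issuer: str, subject: str = "demo-user", ttl: int = 300) -> str:
    """Constructed stand-in for the token Keycloak issues under `issuer`.
    Keycloak derives `iss` from the URL the client used to reach the realm."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {"iss": issuer, "sub": subject, "exp": int(time.time()) + ttl}
    signing_input = f"{_compact(header)}.{_compact(payload)}"
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"


def forge_subject(token: str, subject: str) -> str:
    """Hypothetical tampering case: rewrite `sub` without re-signing."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    payload = json.loads(_b64url_decode(payload_b64))
    payload["sub"] = subject
    return f"{header_b64}.{_compact(payload)}.{sig_b64}"


def validate_token(token: str, expected_issuer: str) -> tuple:
    """Check structure, alg, signature, expiry and issuer; return (ok, reason)."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
        header = json.loads(_b64url_decode(header_b64))
        payload = json.loads(_b64url_decode(payload_b64))
        signature = _b64url_decode(sig_b64)
    except ValueError:  # covers bad base64, bad JSON and wrong segment count
        return False, "malformed token"
    if header.get("alg") != "HS256":  # pin the algorithm; never trust "none"
        return False, "unexpected alg"
    expected = hmac.new(SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False, "bad signature"
    if payload.get("exp", 0) < time.time():
        return False, "expired"
    # Issuer must match byte-for-byte: scheme, host, port and realm path all count,
    # so a validator configured with the Service URL rejects an ingress-issued token.
    if payload.get("iss") != expected_issuer:
        return False, f"issuer mismatch: {payload.get('iss')!r} != {expected_issuer!r}"
    return True, "ok"


def demo() -> dict:
    token = mint_token(EXTERNAL_ISSUER)  # obtained by a user via the ingress
    return {
        "via_ingress_host": validate_token(token, EXTERNAL_ISSUER),
        "via_service_name": validate_token(token, SERVICE_ISSUER),
        "tampered_subject": validate_token(forge_subject(token, "admin"), EXTERNAL_ISSUER),
        "garbage": validate_token("not.a.jwt", EXTERNAL_ISSUER),
    }


if __name__ == "__main__":
    for name, (ok, reason) in demo().items():
        print(f"{name}: {'accepted' if ok else 'rejected'} ({reason})")
```

The `via_service_name` case is the failure the question describes. The hostAliases approach fixes it by letting the pod resolve the ingress hostname itself, so the issuer string the validator derives from its configured URL is identical to the one in the token; the trade-off is that the alias pins a static IP, which has to be kept in step with the ingress controller's address.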

Related

logging issues datadog on google kubernetes engine

I am building a monitoring environment with Datadog in a GKE environment.
I have configured the Datadog helm-chart with logs set to enabled. At this time, doesn't the Datadog agent automatically get the logs of all pods on the Node?
As shown below, it seems that logs are being collected only when the annotation is attached to the Pods you want to collect logs from.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
    tags.datadoghq.com/env: "test-env"
    tags.datadoghq.com/service: "test-service"
    tags.datadoghq.com/version: 0.0.1
spec:
  ....
In this case, what if you want to automatically get the logs of all Pods floating in the cluster?
I tried allowing the Datadog agent to work and show the logs dashboard in the Datadog web UI.
Try to enable it in the Datadog values file under the logs part:
logs

How to set a service port for ingresses in helmfile?

I'm new to Kubernetes and Helm and want to create an SSO with OIDC using vouch-proxy.
I found a tutorial which explains how to do it and was able to write some helmfiles that were accepted by Kubernetes.
I added the ingress configuration to the values.yaml that I load in my helmfile.yaml.
helmfile.yaml
bases:
  - environments.yaml
---
releases:
  - name: "vouch"
    chart: "halkeye/vouch"
    version: {{ .Environment.Values.version }}
    namespace: {{ .Environment.Values.namespace }}
    values:
      - values.yaml
values.yaml
# vouch config
# bare minimum to get vouch running with OpenID Connect (such as okta)
config:
  vouch:
    some:
      other:
        values:
# important part
ingress:
  enabled: true
  hosts:
    - "vouch.minikube"
  paths:
    - /
With this configuration helmfile creates an Ingress for the correct host, but when I open the URL in my browser it returns a 404 Not Found, which makes sense, since I didn't specify the correct port (9090).
I tried some notations to add the port, but it led to either helmfile not updating the pod or 500 Internal Server Errors.
How can I add a port in the configuration? And is it the "correct" way to do it? Or should ingresses still be handled by kubectl?

using node selector helm chart to assign pods to a specific node pool

I'm trying to assign pods to a specific node as part of the helm command, so by the end the deployment YAML should look like this:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of the Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value as part of the deployment YAML, and as a result the pods are not assigned to the specific node I want. Any idea what is wrong? Should I put a placeholder as part of my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
      {{- end }}

Create kubernetes resources with helm only if custom resource definition exists

I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way, how to "tell" helm that I'd want to create this resource only if such a CRD is defined in the cluster OR to ignore errors only caused by the fact that such a CRD is missing.
Is that possible and how can I achieve that?
Helm's Capabilities object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm will treat it differently, see the docs here.
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
spec:
  ...
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/

Kubernetes Helm Redis Google Cloud MemoryStore - Service vs Endpoint

I'm looking to configure Redis for Sidekiq and Rails in k8s. Using Google Cloud Memory Store with an IP address.
I have a helm template like the following (with gcpRedisMemorystore specified separately) - My question is what does the Service object add to the system? Is it necessary or does the Endpoint provide all the needed access?
charts/app/templates/app-memorystore.service.yaml
kind: Service
apiVersion: v1
metadata:
  name: app-memorystore
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: redis
    port: {{ .Values.gcpredis.port }}
    protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: app-memorystore
subsets:
  - addresses:
      - ip: {{ .Values.gcpredis.ip }}
    ports:
      - port: {{ .Values.gcpredis.port }}
        name: redis
        protocol: TCP
Yes, you still need it.
Generally speaking, the Service is the name which is consumed by applications to connect to an Endpoint. Usually, a Service with a selector will automatically create a corresponding endpoint with the IP addresses of the Pods found by the selector.
When you define a Service without a selector you need to give the corresponding Endpoint of the same name so the Service has somewhere to go. This bit of information is in the documentation but a bit buried. At https://kubernetes.io/docs/concepts/services-networking/service/#without-selectors it is mentioned in the second bullet point for headless services without selectors:
For headless services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:
- CNAME records for ExternalName-type services.
- A records for any Endpoints that share a name with the service, for all other types.