I am trying to build a Kubernetes environment from scratch using Google's Deployment Manager and Kubernetes Engine. So far, the cluster is configured to host two apps. Each app is served by an exclusive service, which in turn receives traffic from an exclusive ingress. Both ingresses are created with the same Deployment Manager Jinja template:
- name: {{ NAME_PREFIX }}-ingress
  type: {{ CLUSTER_TYPE_BETA }}:{{ INGRESS_COLLECTION }}
  metadata:
    dependsOn:
    - {{ properties['cluster-type-v1beta1-extensions'] }}
  properties:
    apiVersion: extensions/v1beta1
    kind: Ingress
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ NAME_PREFIX }}
      labels:
        app: {{ env['name'] }}
        deployment: {{ env['deployment'] }}
    spec:
      rules:
      - host: {{ properties['host'] }}
        http:
          paths:
          - backend:
              serviceName: {{ NAME_PREFIX }}-svc
              servicePort: {{ properties['node-port'] }}
The environment deployment works fine. However, I was hoping that both ingresses would be bound to the same external address, which is not happening. How could I set up the template so that this restriction is enforced? More generally, is it considered a Kubernetes bad practice to spawn one ingress for each of the environment's host-based rules?
Each ingress will create its own HTTP(S) load balancer. If you want a single IP, define a single ingress with multiple host-based rules, one for each service.
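For illustration, a minimal sketch of the single-ingress approach (the host names, service names, and ports below are placeholders, not values from the original template):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress               # hypothetical name
spec:
  rules:
  - host: app-one.example.com        # placeholder host
    http:
      paths:
      - backend:
          serviceName: app-one-svc   # placeholder service
          servicePort: 8080          # placeholder port
  - host: app-two.example.com
    http:
      paths:
      - backend:
          serviceName: app-two-svc
          servicePort: 8080

On GKE, one Ingress object maps to one HTTP(S) load balancer, so both hosts share the same external IP.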
My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}-a
  namespace: {{ .Release.Namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  tls:
  - hosts:
    - {{ .Values.urlFormat | quote }}
    secretName: {{ .Values.name }}-cert   # <-------------- This Line
  ingressClassName: nginx-customer-wildcard
  rules:
  - host: {{ .Values.urlFormat | quote }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.name }}-a
            port:
              number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName will become customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts to use the default certificate, which is what I expect, but this also leaves the customer-tls-cert certificate hanging around in the cluster, unused. Is there a way that, when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little bit, but that's not supported anymore on the versions that I am using. I am not even sure if that would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge, where edge is the namespace where the cert resides.
Edit: This is what my certificate.yaml looks like:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  namespace: edge
  annotations:
    vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
    vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
    vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
    vault.security.banzaicloud.io/vault-namespace: {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.certificate.cert }}
  tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership which doesn't apply here (if you delete a Job, the cluster also deletes the corresponding Pod). If you have a Secret or a ConfigMap that's referenced by name, the object will still remain even if you delete the last reference to it.
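For context, ownership is expressed through metadata.ownerReferences. A Pod created by a Job carries something like the following (the names and uid here are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: cleanup-job-x7k2q                      # illustrative Pod name
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: cleanup-job                          # the owning Job
    uid: 9b2e7b1a-0000-0000-0000-000000000000  # illustrative uid
    controller: true
    blockOwnerDeletion: true

A Secret that an Ingress merely references by name has no such owner reference, so the garbage collector never removes it.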
In Helm, if a chart contains some object, and then you upgrade the chart to a newer version or values that don't include that object, then Helm will delete the object. This would require that the Secret actually be part of the chart, like
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  ...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.
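In this particular chart, one option might be to guard the secretName reference with the same certificate.enabled flag that already wraps certificate.yaml, so that a single helm upgrade removes both the reference and the Secret. A sketch of the relevant fragment of the ingress template, under that assumption:

# templates/ingress.yaml (fragment)
spec:
  tls:
  - hosts:
    - {{ .Values.urlFormat | quote }}
    {{- if .Values.certificate.enabled }}
    secretName: {{ .Values.name }}-cert
    {{- end }}

With certificate.enabled set to false, the tls entry keeps its hosts but drops secretName, the ingress falls back to the default certificate as observed, and Helm deletes the no-longer-rendered Secret on the same upgrade.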
I am following this architecture diagram from K8s. However, I cannot seem to connect to the socket.io server from within the cluster using the service name.
Current situation:
From POD B
Can connect directly to App A's pod using WS ( ws://10.10.10.1:3000 ) ✅
Can connect to App A's service using HTTP ( http://orders:8000 ) ✅
Can not connect to App A's service using WS ( ws://orders:8000 ) ❌
From outside world / Internet
Can connect to App A's service using WS ( ws://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
Can connect to App A's service using HTTP ( http://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
My current service configuration
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8000
    targetPort: 3000
  selector:
    app: orders
  clusterIP: 172.20.115.234
  type: ClusterIP
  sessionAffinity: None
My Ingress Helm chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "app.name" $ }}-backend
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
    ingress.kubernetes.io/auth-response-headers: authorization
  labels:
{{- include "api-gw.labels" $ | indent 4 }}
spec:
  rules:
  - host: {{ .Values.deploy.host | quote }}
    http:
      paths:
      - path: /socket/events
        backend:
          serviceName: orders
          servicePort: 8000
My Service Helm chart
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.name" . }}
spec:
{{ if not $isDebug -}}
  selector:
    app: {{ template "app.name" . }}
{{ end -}}
  type: NodePort
  ports:
  - name: http
    port: {{ template "app.svc.port" . }}
    targetPort: {{ template "app.port" . }}
    nodePort: {{ .Values.service.exposedPort }}
    protocol: TCP
# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
# {{ default "default" .Chart.Name }}
# {{- end -}}
# {{- define "app.port" -}}
# 3000
# {{- end -}}
# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}
The service's DNS name must be set in your container in order to access its VIP address.
Kubernetes automatically sets environment variables in all pods that have the same selector as the service.
In your case, all pods with selector A have environment variables, set when the container is deployed, that contain the service's VIP and port.
The other pod, with selector B, is not linked as an endpoint for the service; therefore, it does not contain the environment variables needed to access the service.
Here is the k8s documentation related to your problem.
To solve this, you can set up a DNS service, which k8s offers as a cluster addon.
Just follow the documentation.
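As a quick sanity check that in-cluster DNS resolves the service, you can run a throwaway pod against it. A diagnostic sketch: orders comes from the question, while the pod name and the default-namespace suffix are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: dns-check                  # arbitrary name
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    command: ["nslookup", "orders.default.svc.cluster.local"]

If the lookup succeeds, the Service is reachable by name; a ClusterIP Service forwards plain TCP, so it does not by itself distinguish WebSocket traffic from HTTP.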
I have recently deployed my .NET Core 3.1 based API into an AKS cluster and used an NGINX ingress to access it. I added an A record to my domain DNS to point to the public IP address provided by the NGINX controller. I also set up TLS in the ingress manifest then set up a DevOps pipeline to use Helm to deploy my API to AKS. When I browse my domain, Swagger loads but when I try calling the API, Swagger displays an undocumented code where the details field contains TypeError: Failed to fetch. The API is simply supposed to return an OK.
I thought it was a CORS-related issue so I enabled CORS both on the API level and on the ingress level.
Here is my ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
{{ $fullname := include "testchart.fullname" .}}
{{ $serviceport := .Values.service.port }}
{{ $ns := .Values.namespace }}
{{ with .Values.ingress }}
metadata:
  name: {{ $fullname }}-ing
  namespace: {{ $ns }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://api.mydomain.io"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - {{ .host }}
    secretName: {{ .tls.secretname }}
  rules:
  - host: {{ .host }}
    http:
      paths:
      - backend:
          serviceName: {{ $fullname }}-svc
          servicePort: {{ $serviceport }}
{{ end }}
When I try to hit the API in Postman, I get a warning saying Unable to verify the first certificate. The certificate I am using doesn't seem to have an issue as I can load the address in all browsers without getting an invalid or non-secure certificate warning.
Something I noticed is that when I changed the type of the Kubernetes service from ClusterIP to LoadBalancer and browsed the public IP of the service, I managed to get an OK after calling the same API in Swagger.
Any idea what could be wrong?
Edit:
I just removed the tls section from the ingress manifest and deployed the API. It worked. So it seems to have something to do with HTTPS.
I want to deploy a gRPC service to Azure Kubernetes Service. I have already deployed RESTful services using Helm charts, but the gRPC service is throwing a "connection timed out" error.
I have already tried everything said in the NGINX and Helm documentation, but nothing worked. The certificate is self-signed. I have tried every permutation and combination of annotations :p
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  ports:
  - port: 50051
    protocol: TCP
    targetPort: 50051
    name: grpc
  selector:
    app: {{ template "fullname" . }}
  type: NodePort
ingress.yaml
{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/grpc-backend: "true"
    nginx.org/grpc-services: {{ template "fullname" . }}
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    secretName: aks-ingress-tls
  rules:
  - http:
      proto: h2
      paths:
      - backend:
          serviceName: {{ template "fullname" . }}
          servicePort: grpc
          proto: h2
        path: /{servicename}-grpc(/|$)(.*)
{{ end }}
I also tried this, but it's still not working:
{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - secretName: aks-ingress-tls
  rules:
  - http:
      paths:
      - backend:
          serviceName: {{ template "fullname" . }}
          servicePort: 50051
        path: /servicename-grpc(/|$)(.*)
{{ end }}
It looks like you are missing an annotation on your ingress.
ingress.yaml - snippet
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # This annotation matters!
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
According to this snippet from the official Kubernetes nginx ingress documentation:
Backend Protocol
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP
By default NGINX uses HTTP.
Example:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
As an aside, there's a chance you might need to specify GRPCS instead of GRPC since it appears you are using SSL.
Another thing to call out is that the docs mention that this annotation replaces 'secure-backends' in older versions, which could be where you found the grpc-backend annotation you are currently using.
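For clarity, this is roughly what the annotation block of the second manifest would look like with that change; only the backend-protocol value differs from the question, and GRPCS is appropriate only if the pod itself serves TLS:

# ingress.yaml - annotations only; the rest stays as in the second manifest above
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"   # use GRPC if the backend listens in plaintext
  nginx.ingress.kubernetes.io/rewrite-target: /$2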
So I'm deploying my application stack on Kubernetes using Helm charts, and now I need to add some dependent servers' IPs and hostnames inside my pods' /etc/hosts file, so I need help with this scenario.
A Helm-templated solution to the original question. I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
- ip: "10.0.0.1"
  hostnames:
  - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is omitted when the template is rendered.
As stated in the documentation, you can add extra hosts to a Pod by using the hostAliases feature.
Example from docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
Kubernetes provides a DNS service that all pods get to use. In turn, you can define an ExternalName service that just defines a DNS record. Once you do that, your pods can talk to that service the same way they'd talk to any other Kubernetes service, and reach whatever external server the name points to.
You could deploy a set of ExternalName services globally. You could do it in a Helm chart too, if you wanted, something like
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-foo
spec:
  type: ExternalName
  externalName: {{ .Values.fooHostname }}
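With values along these lines (fooHostname is just the illustrative key used in the template above):

# values.yaml (sketch)
fooHostname: db.internal.example.com   # whatever external host you need to reach

The chart then renders a Service named <release>-<chart>-foo, and pods can use that in-cluster DNS name instead of an /etc/hosts entry.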
The practice I've learned is that you should avoid using /etc/hosts if at all possible.