Helm nginx-ingress 404 error on all routes on GKE - kubernetes

I am trying to deploy an app to GKE. I am using the nginx-ingress controller and I have a bunch of ingress rules, one for every route. For a reason I have not been able to pin down, I am getting a 404 error on all the routes/pages.
One of the ingress resource definitions is:
{{- $fullName := include "nginx.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}-base
  labels:
    app: oxtrust
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/app-root: "/identity"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . | quote }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: oxtrust
              servicePort: 8080
  {{- end }}
On minikube the same setup works fine, and there I also use nginx-ingress.
I really don't understand what I am doing wrong. I read somewhere that the mandatory and cloud-generic objects should be deployed as well, but I am not sure about that.
Could someone shed some light on this, please? Thank you.
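For reference, the "mandatory and cloud-generic objects" are the manifests from the ingress-nginx install guide for cloud providers such as GKE. A rough sketch of that install, assuming kubectl access to the cluster (the exact URLs and version tags have moved around over time, so check the current ingress-nginx docs before copying):
# GKE usually needs this once, so your account may create the controller's RBAC objects
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
# controller Deployment, RBAC and ConfigMaps ("mandatory"), plus the LoadBalancer Service ("cloud-generic")
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
Without a running nginx ingress controller, nothing on GKE picks up rules annotated with kubernetes.io/ingress.class: "nginx" (the default GCE ingress controller ignores them), which could explain the 404s on every route.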

Related

Issues migrating from v1beta to v1 for kubernetes ingress

In my firm our Kubernetes cluster was recently updated to 1.22+ and we are using AKS, so I had to change our ingress manifest, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.
This is the earlier manifest for the ingress file:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          #{{- range .paths }}
          #- path: {{ . }}
          #  backend:
          #    serviceName: {{ $fullName }}
          #    servicePort: {{ $svcPort }}
          #{{- end }}
          - path: /callista/?(.*)
            backend:
              serviceName: amro-amroingress
              servicePort: 8080
    {{- end }}
{{- end }}
and after my changes it looks like this:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
  name: {{ include "amroingress.fullname" . }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /callista/?(.*)
            pathType: Prefix
            backend:
              service:
                name: amro-amroingres
                port:
                  number: 8080
          {{- end }}
    {{- end }}
But after I made the changes and tried to deploy using Helm, I received this error:
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
I am not sure why this error occurs even though the ingress manifest has changed, and I have been stuck on this for a few days now. I am new to Kubernetes and ingress in general; any help will be massively appreciated.
The API resources on the control plane have been upgraded, but the ones in the Helm-stored release manifest (kept inside a Secret resource) are still the old ones.
Here is the resolution:
$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
After this, run helm upgrade again.
Hi, I had the same problem: one of our deployments started failing after we updated our ingress files from apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1, and we got the same error for the app that was deployed with Helm:
UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
Here is the solution: install the mapkubeapis plugin for Helm by running the commands below in your terminal.
helm plugin install https://github.com/hickeyma/helm-mapkubeapis
Downloading and installing helm-mapkubeapis v0.0.15 ...
https://github.com/hickeyma/helm-mapkubeapis/releases/download/v0.0.15/helm-mapkubeapis_0.0.15_darwin_amd64.tar.gz
Installed plugin: mapkubeapis
helm plugin list
NAME         VERSION  DESCRIPTION
mapkubeapis  0.0.15   Map release deprecated Kubernetes APIs in-place
$ helm mapkubeapis release-name --namespace test-namespace --dry-run
Change release-name to the name of your failing release/deployment. This command lists the stored manifests that still use the old v1beta1 APIs.
helm mapkubeapis release-name --namespace test-namespace
Finally, run the command above; it updates the stored release manifests in place and removes the deprecated APIs. Now go to your pipeline and run the deployment again, and it will work this time.
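If you want to double-check the rewrite before re-triggering the pipeline, you can inspect the manifest Helm has stored for the release (release name and namespace below are the same placeholders as above):
helm get manifest release-name --namespace test-namespace | grep "networking.k8s.io"
After mapkubeapis has run, this should report networking.k8s.io/v1 rather than v1beta1 for the Ingress objects.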
After trying a lot more things, I eventually decided to use helm uninstall to remove the deployments and the charts currently in the cluster.
I then simply installed again with the new ingress manifest mentioned in the question, and that worked: I was finally able to deploy. So the manifest I had modified did not have any issues after all.
Uninstalling and reinstalling the release worked for me.
1. helm uninstall <release>
2. helm install <release> <chart>
If you are doing the deployments through a pipeline, you will have to perform step 1 manually and then just re-trigger the pipeline.
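For completeness, a sketch of that flow, assuming a release called amro in namespace test-namespace and a chart in the local directory ./amroingress (all three names are placeholders taken from the question):
helm uninstall amro --namespace test-namespace
helm install amro ./amroingress --namespace test-namespace
Keep in mind this deletes and recreates the resources, so there is a brief outage; the mapkubeapis approach above avoids that.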

Helm upgrade error. atlassian-jira-software ingress

I'm trying to upgrade the ingress of the atlassian-jira-software Helm chart.
I have this ingress template:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "atlassian-jira-software.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "atlassian-jira-software.name" . }}
    chart: {{ template "atlassian-jira-software.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
    {{- end }}
{{- end }}
Execute this command:
helm upgrade --dry-run -n atlassian jira .
The output of this command:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "jira-atlassian-jira-software" in namespace "atlassian" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "jira"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "atlassian"
kubectl version --short
The output:
Client Version: v1.19.12
Server Version: v1.19.13-eks-8df270
Please, help me!
Was the ingress originally installed by Helm? Check out its "labels" section:
kubectl get ingress jira-atlassian-jira-software -o json
If you don't find the expected values (as described in the error messages) and you are sure you know what you are doing, you can try adding the labels and annotations yourself by editing the ingress:
kubectl edit ingress jira-atlassian-jira-software
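If you would rather not edit interactively, the same metadata can be added non-interactively; the keys and values below are exactly the ones the error message asks for:
kubectl -n atlassian label ingress jira-atlassian-jira-software app.kubernetes.io/managed-by=Helm
kubectl -n atlassian annotate ingress jira-atlassian-jira-software \
  meta.helm.sh/release-name=jira \
  meta.helm.sh/release-namespace=atlassian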
If you do this, make sure that you run a diff before you do the helm upgrade again (to ensure that you see what is going to happen in advance and that you don't blow away anything you did not intend to):
helm diff upgrade -n atlassian jira .
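Note that helm diff is provided by a plugin; if it is not installed yet, it can be added the same way as the other plugins mentioned in this thread:
helm plugin install https://github.com/databus23/helm-diff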

Kubernetes write once, deploy anywhere

Consider that I have built a complex Kubernetes deployment/workload consisting of deployments, stateful sets, services, operators, CRDs with specific configuration, etc. The workload/deployment was created by individual commands (kubectl create, helm install, …).
1) Is there a way to dynamically (not manually) generate a script or a special file that describes the deployment and that could be used to redeploy/reinstall it, without going through each command one by one again?
2) Is there a domain-specific language (DSL) or something similar through which one can describe a Kubernetes deployment independently of the final target cluster, whether GKE, AWS, Azure, or on premises, kind of write once, deploy anywhere?
Thanks.
I think Kustomize and Helm are your best bets. You can write the Helm chart as a set of templates and decide what gets rendered with Go templating and conditions.
For example, look at the configuration file below, which has a few conditions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      annotations:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.hasSecretVolume }}
          volumeMounts:
            - name: {{ .Values.appName }}-volume-sec
              mountPath: {{ .Values.secretVolumeMountPath }}
          {{- end}}
          {{- if or .Values.env.configMap .Values.env.secrets }}
          envFrom:
            {{- if .Values.env.configMap }}
            - configMapRef:
                name: {{ .Values.appName }}-env-configmap
            {{- end }}
            {{- if .Values.env.secrets }}
            - secretRef:
                name: {{ .Values.appName }}-env-secret
            {{- end }}
          {{- end }}
          ports:
            - containerPort: {{ .Values.containerPort }}
              protocol: TCP
          {{- if .Values.railsContainerHealthChecks}}
{{ toYaml .Values.railsContainerHealthChecks | indent 8 }}
          {{- end}}
      {{- if .Values.hasSecretVolume }}
      volumes:
        - name: {{ .Values.appName }}-volume-sec
          secret:
            secretName: {{ .Values.appName }}-volume-sec
      {{- end}}
      {{- if .Values.imageCredentials}}
      imagePullSecrets:
        - name: {{.Values.imageCredentials.secretName}}
      {{- end}}
For instance, this condition checks whether a secret volume is configured and mounts it:
{{- if .Values.hasSecretVolume }}
In case you are interested in helm and generic templating, you can refer to this medium blog: https://medium.com/srendevops/helm-generic-spring-boot-templates-c9d9800ddfee
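For reference, a minimal values.yaml that could drive the template above might look like this (every name and value here is purely illustrative):
appName: my-app
namespace: default
replicaCount: 2
image:
  repository: registry.example.com/my-app
  tag: "1.0.0"
  pullPolicy: IfNotPresent
containerPort: 8080
hasSecretVolume: false
secretVolumeMountPath: /etc/secrets
env:
  configMap: true
  secrets: false
# railsContainerHealthChecks and imageCredentials are optional and only rendered when set
Moving an app between environments or providers then becomes a matter of supplying a different values file to the same chart.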

Trouble setting up secure connection for ingress behind ingress - bad gateway

I'm trying to set up a connection to a kubernetes cluster and I'm getting a 502 bad gateway error.
The cluster has an nginx ingress and a service (listening on both http and https). In addition the ingress is behind an nginx ingress service (I have the nginx helm chart installed) with a static IP address.
I can see in the description of the cluster ingress that it knows the service's endpoints.
I see that the pods communicate successfully with each other (there are 3 pods), but I can't ping the external nginx from within a shell.
These are the cluster's ingress values in values.yaml:
ingress:
  # If `true`, an Ingress is created
  enabled: true
  # The Service port targeted by the Ingress
  servicePort: http
  # Ingress annotations
  annotations:
    kubernetes.io/ingress.class: "nginx"
  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: my-app.com
      # Paths for the host
      paths:
        - /
  # TLS configuration
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls
When I go to my-app.com I see in the browser that I'm on a secure connection (the lock icon next to the URL), but like I said I get a 502 bad gateway error. If I change the servicePort from http to https I get a '400 bad request' error instead.
How should I set up both ingresses to allow a secured connection to my app?
I tried all sorts of annotations, but always got the errors above, for example:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Thank you!
What you have shared looks like only the values for the ingress, not the ingress template itself. An ingress definition should look something like the one below.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
This gets executed if your values have ingress.enabled=true:
service:
  type: ClusterIP
  port: 8080
ingress:
  enabled: true
In my case the missing annotation was nginx.org/ssl-services, which accepts the list of services that require a secure (TLS) connection to the backend.
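A sketch of where that annotation ends up, with my-app-service standing in for the real service name (note the nginx.org/ prefix belongs to the NGINX Inc. kubernetes-ingress controller, as opposed to the nginx.ingress.kubernetes.io/ annotations of the community controller):
ingress:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # comma-separated list of services whose upstream connections should use TLS
    nginx.org/ssl-services: "my-app-service"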

Multiple resources using single HELM template

We had been using a single (public) ingress per application by default, but with a recent requirement we also need to expose a (private) endpoint for some of the apps. So far we have had a single template that looks like this:
templates/ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
{{ include "app.labels" . | indent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: http
          {{- end }}
    {{- end }}
{{- end }}
templates/cert.yaml
{{- if .Values.ingress.tls -}}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ .Values.ingress.namespace }}
spec:
  {{- range .Values.ingress.tls }}
  secretName: {{ .secretName }}
  duration: 24h
  renewBefore: 12h
  issuerRef:
    name: {{ .issuerRef.name }}
    kind: {{ .issuerRef.kind }}
  dnsNames:
    {{- range .hosts }}
    - {{ . | quote }}
    {{- end }}
  {{- end -}}
{{- end -}}
And the values.yaml looks like this:
ingress:
  enabled: true
  name: apps-ingress
  namespace: app1-namespace
  annotations:
    kubernetes.io/ingress.class: hybrid-external
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  hosts:
    - host: apps.test.cluster
      paths:
        - /
  tls:
    - secretName: app1-tls
      issuerRef:
        name: vault-issuer
        kind: ClusterIssuer
      hosts:
        - "apps.test.cluster"
So, to accommodate the new setup, I have added the block below to the values.yaml file.
ingress-private:
  enabled: true
  name: apps-ingress-private
  namespace: app1-namespace
  annotations:
    kubernetes.io/ingress.class: hybrid-internal
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  hosts:
    - host: apps.internal.test.cluster
      paths:
        - /
  tls:
    - secretName: app1-tls
      issuerRef:
        name: vault-issuer
        kind: ClusterIssuer
      hosts:
        - "apps.internal.test.cluster"
I also duplicated both templates, i.e. templates/ingress-private.yaml and templates/certs-private.yaml, and this is working fine. But my question is: is there a way to use a single template for each of the ingress and certs and create the resources conditionally?
As I mentioned above, some apps need an internal ingress and some don't. What I want to do is make the public ingress/certs the default and the private ones optional. I have been using the {{- if .Values.ingress.enabled -}} option to check whether an ingress is required, but in two different files.
Also, in the values.yaml file, rather than having two different blocks, is there a way to use a list when multiple resources are required?
There are a couple of ways to approach this problem.
The way you have it now, with one file per resource but some duplication of logic, is a reasonably common pattern. It's very clear exactly what resources are being created, and there's less logic involved. The Go templating language is a little bit specialized, so this can be more approachable to other people working on your project.
If you do want to combine things together there are a couple of options. As #Matt notes in their comment, you can put multiple Kubernetes resources in the same file so long as they're separated by the YAML --- document separator.
{{/* ... arbitrary templating logic ... */}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
...
{{/* ... more logic ... */}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}-private
...
The only thing that matters here is that the output of the template is a valid multi-document YAML file. You can use the helm template command to see what comes out without actually sending it to the cluster.
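For example, assuming the chart lives in the current directory (release name and values file are placeholders):
helm template my-release . --values values.yaml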
This approach pairs well with having a list of configuration rules in your YAML file
ingresses:
  - name: apps-ingress
    annotations:
      kubernetes.io/ingress.class: hybrid-external
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  - name: apps-ingress-private
    annotations:
      kubernetes.io/ingress.class: hybrid-internal
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
You can use the Go template range construct to loop over all of these. Note that inside the loop the . special variable is rebound, so if you refer to arbitrary other things in .Values you need to save away the current value of it first.
{{- $top := . -}}
{{- range $ingress := .Values.ingresses -}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $ingress.name }}
  annotations: {{- toYaml $ingress.annotations | nindent 4 }}
...
{{ end }}
Thanks David for getting us started, but it did not quite work for me as written. I used the following to generate multiple resources in a single YAML file.
{{- if .Values.nfs }}
{{- $top := . -}}
{{- range $index, $pvc := .Values.nfs }}
{{- if $pvc.persistentClaim }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "{{ $top.Chart.Name }}-{{ $top.Release.Name }}-nfs-{{ $index }}"
namespace: {{ $top.Release.Namespace }}
....
resources:
requests:
storage: {{ $pvc.persistentClaim.storageRequest }}
{{- end }}
{{- end }}
{{- end }}
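For what it's worth, the values block that drives this loop would look something like the following (the sizes are made up):
nfs:
  - persistentClaim:
      storageRequest: 10Gi
  - persistentClaim:
      storageRequest: 5Gi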
The Helm docs show how to define and use range:
https://helm.sh/docs/chart_template_guide/variables/