Kubernetes GKE Ingress: 502 Server Error

I'm currently setting up a web app on Google Cloud Platform, but I ran into an issue while deploying with the GKE ingress controller.
I'm unable to access the app using my subdomain name. The page shows this message:
502 Server Error
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
Despite having configured a health check, the ingress still does not seem to respond properly.
Meanwhile, my SSL certificate is working properly for my subdomain.
This is my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myapp-static-addr"
    networking.gke.io/v1beta1.FrontendConfig: "frontend-ingress-config"
spec:
  rules:
  - host: test.myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 3333
  tls:
  - secretName: stage-ssl
and this is my service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: default
  annotations:
    cloud.google.com/backend-config: '{"default": "frontend-config"}'
spec:
  type: NodePort
  selector:
    app: frontend-server
  ports:
  - port: 3333
    protocol: TCP
    targetPort: 3000
and finally my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-server
  template:
    metadata:
      labels:
        app: frontend-server
    spec:
      containers:
      - name: client-ssr
        image: eu.gcr.io/myapp-test/client-ssr
        ports:
        - name: client-port
          containerPort: 3000
        resources:
          requests:
            cpu: "0.35"
          limits:
            cpu: "0.55"
        env:
        - name: CONFIG_ENV
          value: "STAGE"
        imagePullPolicy: Always
        volumeMounts:
        - name: certificate
          mountPath: "/etc/certificate"
          readOnly: true
      volumes:
      - name: certificate
        secret:
          secretName: stage-ssl
And this is the ingress describe output:
Name: frontend-ingress
Namespace: default
Address: 34.106.6.15
Default backend: default-http-backend:80 (10.26.31.88:8080)
TLS:
stage-ssl terminates
Rules:
Host Path Backends
---- ---- --------
test.myapp.com
/ frontend-service:3333 (10.22.46.111:3000)
Annotations:
kubernetes.io/ingress.global-static-ip-name: myapp-static-addr
ingress.kubernetes.io/backends: {"k8s-be-32171--41df3ab30d90ff92":"HEALTHY","k8s1-41df3ab3-default-frontend-service-3333-5186e808":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/redirect-url-map: k8s2-rm-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/ssl-cert: k8s2-cr-opm63ww1-f42czv69pq6f2emd-5bd4c7395be5bd4e
ingress.kubernetes.io/https-target-proxy: k8s2-ts-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/target-proxy: k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/url-map: k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.global-static-ip-name":"myapp-static-addr","networking.gke.io/v1beta1.FrontendConfig":"frontend-ingress-config"},"name":"frontend-ingress","namespace":"default"},"spec":{"rules":[{"host":"test.myapp.com","http":{"paths":[{"backend":{"serviceName":"frontend-service","servicePort":3333},"path":"/"}]}}],"tls":[{"secretName":"stage-ssl"}]}}
networking.gke.io/v1beta1.FrontendConfig: frontend-ingress-config
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 18m loadbalancer-controller UrlMap "k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller TargetProxy "k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller ForwardingRule "k8s2-fr-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller TargetProxy "k8s2-ts-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal IPChanged 18m loadbalancer-controller IP is now 34.106.6.15
Normal Sync 18m loadbalancer-controller ForwardingRule "k8s2-fs-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 16m loadbalancer-controller UrlMap "k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p" updated
Normal Sync 16m loadbalancer-controller TargetProxy "k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p" updated
Normal Sync 2m18s (x8 over 19m) loadbalancer-controller Scheduled for sync
How can I make my web app work?
Thank you in advance.

I believe a GCE Ingress apparently must have a defaultBackend; see my question: Is defaultBackend mandatory for Ingress (gce)?
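A minimal sketch of that idea, reusing the names from the question (untested; the GCE controller also accepts a spec-level default backend in the v1beta1 API):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myapp-static-addr"
    networking.gke.io/v1beta1.FrontendConfig: "frontend-ingress-config"
spec:
  # Explicit default backend so unmatched traffic (and the GCE controller's
  # default health checking) has somewhere to go.
  backend:
    serviceName: frontend-service
    servicePort: 3333
  rules:
  - host: test.myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 3333
  tls:
  - secretName: stage-ssl
```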

Related

How to create ingress-nginx for my kubernetes deployment and service?

I am able to access my django app deployment using the LoadBalancer service type, but when I try to switch to the ClusterIP service type with ingress-nginx, I get 503 Service Temporarily Unavailable when I try to access the site via the host URL. Describing the ingress also shows error: endpoints "django-service" not found and error: endpoints "default-http-backend" not found. What am I doing wrong?
This is my service and ingress yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - django.example.com
  rules:
  - host: django.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-service
            port:
              number: 80
  ingressClassName: nginx
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/django-app-5bdd8ffff9-79xzj 1/1 Running 0 7m44s
pod/postgres-58fffbb5cc-247x9 1/1 Running 0 7m44s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/django-service ClusterIP 10.233.29.58 <none> 80/TCP 7m44s
service/pg-service ClusterIP 10.233.14.137 <none> 5432/TCP 7m44s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/django-app 1/1 1 1 7m44s
deployment.apps/postgres 1/1 1 1 7m44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/django-app-5bdd8ffff9 1 1 1 7m44s
replicaset.apps/postgres-58fffbb5cc 1 1 1 7m44s
$ kubectl describe ing django-ingress
Name: django-ingress
Labels: <none>
Namespace: django
Address: 10.10.30.50
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes django.example.com
Rules:
Host Path Backends
---- ---- --------
django.example.com
/ django-service:80 (<error: endpoints "django-service" not found>)
Annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync
Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync
I think you forgot to link your service to your deployment with a selector.
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8000
  selector:
    app: your-deployment-name
Your label must be set in your deployment as well:
spec:
  selector:
    matchLabels:
      app: your-deployment-name
  template:
    metadata:
      labels:
        app: your-deployment-name

Cert Manager Challenge Pending Kubernetes

I have this working on one site I already set up, and now I am trying to replicate exactly the same thing for a different site/domain in another namespace.
So staging.correct.com is my working HTTPS domain,
and staging.example.com is my non-working HTTPS domain (HTTP works, just not HTTPS).
When I run the following, it shows 3 certs: the working one for correct, and then 2 for example.com, when there should only be one for example:
kubectl get -A certificate
correct staging-correct-com True staging-correct-com-tls 10d
example staging-example-com False staging-example-com-tls 16h
example staging-example-website-com False staging-example-com-tls 17h
When I do:
kubectl get -A certificaterequests
It shows 2 certificate requests for the example
example staging-example-com-nl46v False 15h
example staging-example-website-com-plhqb False 15h
When I do:
kubectl get ingressroute -A
NAMESPACE NAME AGE
correct correct-ingress-route 10d
correct correct-secure-ingress-route 6d22h
kube-system traefik-dashboard 26d
example example-website-ingress-route 15h
example example-website-secure-ingress-route 15h
routing dashboard 29d
routing traefik-dashboard 6d21h
When I do:
kubectl get secrets -A (just showing the relevant ones)
correct default-token-bphcm kubernetes.io/service-account-token
correct staging-correct-com-tls kubernetes.io/tls
example default-token-wx9tx kubernetes.io/service-account-token
example staging-example-com-tls Opaque
example staging-example-com-wf224 Opaque
example staging-example-website-com-rzrvw Opaque
Logs from cert manager pod:
1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="staging.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-bqjsj" "related_resource_namespace"="example" "related_resource_version"="v1beta1" "resource_kind"="Challenge" "resource_name"="staging-example-com-ltjl6-1661100417-771202110" "resource_namespace"="example" "resource_version"="v1" "type"="HTTP-01"
When I do:
kubectl get challenge -A
example staging-example-com-nl46v-1661100417-2848337980 staging.example.com 15h
example staging-example-website-com-plhqb-26564845-3987262508 pending staging.example.com
When I do: kubectl get order -A
NAMESPACE NAME STATE AGE
example staging-example-com-nl46v-1661100417 pending 17h
example staging-example-website-com-plhqb-26564845 pending 17h
My yml files:
My ingress route:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: example
  name: example-website-ingress-route
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/issuer: example-issuer-staging
    traefik.ingress.kubernetes.io/router.entrypoints: web
    traefik.frontend.redirect.entryPoint: https
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`staging.example.com`)
    middlewares:
    - name: https-only
    kind: Rule
    services:
    - name: example-website
      namespace: example
      port: 80
my issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer-staging
  namespace: example
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: example#example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: staging-example-com-tls
    # Enable the HTTP-01 challenge provider
    solvers:
    # An empty 'selector' means that this solver matches all domains
    - http01:
        ingress:
          class: traefik
my middleware:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-only
  namespace: example
spec:
  redirectScheme:
    scheme: https
    permanent: true
my secure ingress route:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: example
  name: example-website-secure-ingress-route
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/issuer: example-issuer-staging
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.frontend.redirect.entryPoint: https
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`staging.example.com`)
    kind: Rule
    services:
    - name: example-website
      namespace: example
      port: 80
  tls:
    domains:
    - main: staging.example.com
    options:
      namespace: example
    secretName: staging-example-com-tls
my service:
apiVersion: v1
kind: Service
metadata:
  namespace: example
  name: 'example-website'
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
  - protocol: TCP
    name: https
    port: 443
    targetPort: 80
  selector:
    app: 'example-website'
my solver:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-example-com
  namespace: example
spec:
  secretName: staging-example-com-tls
  issuerRef:
    name: example-issuer-staging
    kind: Issuer
  commonName: staging.example.com
  dnsNames:
  - staging.example.com
my app:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: example
  name: 'example-website'
  labels:
    app: 'example-website'
    tier: 'frontend'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 'example-website'
  template:
    metadata:
      labels:
        app: 'example-website'
    spec:
      containers:
      - name: example-website-container
        image: richarvey/nginx-php-fpm:1.10.3
        imagePullPolicy: Always
        env:
        - name: SSH_KEY
          value: 'secret'
        - name: GIT_REPO
          value: 'url of source code for site'
        - name: GIT_EMAIL
          value: 'example#example.com'
        - name: GIT_NAME
          value: 'example'
        ports:
        - containerPort: 80
How can I delete all these secrets, orders, certificates and so on in the example namespace and try again? Does cert-manager let you do this without continuously recreating them?
EDIT:
I deleted the namespace and redeployed, then:
kubectl describe certificates staging-example-com -n example
Spec:
Common Name: staging.example.com
Dns Names:
staging.example.com
Issuer Ref:
Kind: Issuer
Name: example-issuer-staging
Secret Name: staging-example-com-tls
Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:06Z
Message: Issuing certificate as Secret does not contain a certificate
Reason: MissingData
Status: False
Type: Ready
Last Transition Time: 2020-09-26T21:25:07Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: staging-example-com-gnbl4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 3m10s cert-manager Issuing certificate as Secret does not exist
Normal Reused 3m10s cert-manager Reusing private key stored in existing Secret resource "staging-example-com-tls"
Normal Requested 3m9s cert-manager Created new CertificateRequest resource "staging-example-com-qrtfx"
So then I did:
kubectl describe certificaterequest staging-example-com-qrtfx -n example
Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:10Z
Message: Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: "pending"
Reason: Pending
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 8m17s cert-manager Created Order resource example/staging-example-com-qrtfx-1661100417
Normal OrderPending 8m17s cert-manager Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: ""
So I did:
kubectl describe challenges staging-example-com-qrtfx-1661100417 -n example
Status:
Presented: true
Processing: true
Reason: Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 11m cert-manager Challenge scheduled for processing
Normal Presented 11m cert-manager Presented challenge using HTTP-01 challenge mechanism
I figured it out. The issue seems to be that IngressRoute (which is used by Traefik) does not work with cert-manager. I just deployed this file, then the HTTP check was confirmed, and then I could delete it again. Hope this helps others with the same issue.
It seems cert-manager is supposed to support Traefik's IngressRoute? I opened an issue here, so let's see what they say: https://github.com/jetstack/cert-manager/issues/3325
kubectl apply -f example-ingress.yml
File:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: example
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/issuer: example-issuer-staging
spec:
  rules:
  - host: staging.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-website
          servicePort: 80
  tls:
  - hosts:
    - staging.example.com
    secretName: staging-example-com-tls
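Note that networking.k8s.io/v1beta1 was removed in Kubernetes 1.22; on newer clusters, an equivalent manifest would need the networking.k8s.io/v1 schema. A sketch of the same file in the v1 API (untested, same names as above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: example
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/issuer: example-issuer-staging
spec:
  rules:
  - host: staging.example.com
    http:
      paths:
      - path: /
        pathType: Prefix          # required in the v1 API
        backend:
          service:                # v1 nests the service name and port
            name: example-website
            port:
              number: 80
  tls:
  - hosts:
    - staging.example.com
    secretName: staging-example-com-tls
```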

Docker for Mac: port 80 does not work with k8s + ingress, how can I fix it?

I used this config file to deploy an app, but I can't visit the URL (I had added this domain to my /etc/hosts):
http://gogs.app responds with a 307 redirect to https://gogs.app.
I am using Docker for Mac 18.09.2,
and k8s was installed by Docker for Mac.
☁ gogs-k8s kubectl get pods
NAME READY STATUS RESTARTS AGE
gogs-64cbd8c6f7-9qjbh 1/1 Running 0 32m
gogs-64cbd8c6f7-prcnf 1/1 Running 0 32m
gogs-64cbd8c6f7-rgcpn 1/1 Running 0 32m
☁ gogs-k8s kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.110.146.125 localhost 80:31255/TCP,443:30359/TCP 10d
☁ gogs-k8s kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gogs-services LoadBalancer 10.109.151.183 <pending> 80:31053/TCP,2222:30591/TCP 36m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
this is my gogs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gogs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: gogs
  template:
    metadata:
      labels:
        app: gogs
    spec:
      containers:
      - name: gogs
        image: gogs/gogs:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "200Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
          name: http
        - containerPort: 22
          name: ssh
        volumeMounts:
        - mountPath: /data
          name: gogs-repo
      volumes:
      - name: gogs-repo
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: gogs
  name: gogs-services
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: ssh
    port: 2222
    targetPort: 22
  selector:
    app: gogs
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
spec:
  rules:
  - host: gogs.app
    http:
      paths:
      - path: /
        backend:
          serviceName: gogs
          servicePort: 80
I want to be able to visit the URL http://gogs.app correctly.
If you don't want the redirect to HTTPS, then try this manifest for the ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: gogs.app
    http:
      paths:
      - path: /
        backend:
          serviceName: gogs
          servicePort: 80
Since you deployed your k8s with Docker for Mac, I assume you are attempting to get your application working on a local machine. The problem is that you have defined host: gogs.app in your nginx ingress. You should change your ingress definition to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: gogs
          servicePort: 80
With Docker for Mac, your ingress-nginx appears to be exposed only on the localhost interface, so it is not reachable externally. If you wish to use your domain name, consider deploying this in a cloud environment with a static public IP address and creating the appropriate DNS records for it.

I can't access https urls served on google cloud kubernetes

All,
I followed this tutorial: https://github.com/ahmetb/gke-letsencrypt.
I have an ingress setup for kubernetes in Google Cloud, I have a static IP address and the secrets are created.
This is my ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  annotations:
    kubernetes.io/ingress.global-static-ip-name: helloweb-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
  labels:
    app: hello
spec:
  backend:
    serviceName: helloweb-backend
    servicePort: 8080
  tls:
  - secretName: dogs-com-tls
    hosts:
    - app-solidair-vlaanderen.com
I can access http://app-solidair-vlaanderen.com, but not the https url.
If I call describe ingress I get this output:
Name: helloweb
Namespace: default
Address: 35.190.68.173
Default backend: helloweb-backend:8080 (10.32.0.17:8080)
TLS:
dogs-com-tls terminates app-solidair-vlaanderen.com
Rules:
Host Path Backends
---- ---- --------
app-solidair-vlaanderen.com
/.well-known/acme-challenge/Q8kcFSZ0ZUJO58xZyVbK6s-cJIWu-EgwPcDd8NFyoXQ cm-acme-http-solver-mhqnf:8089 (<none>)
Annotations:
url-map: k8s-um-default-helloweb--17a833239f9491d9
backends: {"k8s-be-30819--17a833239f9491d9":"Unknown","k8s-be-32482--17a833239f9491d9":"HEALTHY"}
forwarding-rule: k8s-fw-default-helloweb--17a833239f9491d9
target-proxy: k8s-tp-default-helloweb--17a833239f9491d9
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 45m loadbalancer-controller default/helloweb
Normal CREATE 44m loadbalancer-controller ip: 35.190.68.173
Warning Sync 7m (x22 over 28m) loadbalancer-controller Error during sync: error while evaluating the ingress spec: could not find service "default/cm-acme-http-solver-mhqnf"
Does someone know what I'm missing?
Your ingress definition is a bit mixed up: why is hosts nested under tls?
Here is an example that is working for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingressName }}-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.staticIpName }}-static-ip
    kubernetes.io/ingress.allow-http: "false"
  labels:
    ...
spec:
  tls:
  - secretName: sslcerts
  rules:
  - host: {{ .Values.restApiHost }}
    http:
      paths:
      - backend:
          serviceName: rest-api-internal-service
          servicePort: 80
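Applied to the question's manifest, that structure would put the host under rules with an explicit path, keeping only the secret under tls. A sketch (untested, reusing the names from the question):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  annotations:
    kubernetes.io/ingress.global-static-ip-name: helloweb-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
  labels:
    app: hello
spec:
  tls:
  - secretName: dogs-com-tls
  rules:
  - host: app-solidair-vlaanderen.com
    http:
      paths:
      - backend:
          serviceName: helloweb-backend
          servicePort: 8080
```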

GKE Ingress - Simple nginx example yet getting error "Could not find nodeport for backend"

I'm trying to create a simple nginx service on GKE, but I'm running into strange problems.
Nginx runs on port 80 inside the Pod, and the service is accessible on port 8080. (This works: I can curl myservice:8080 from inside the pod and see the nginx home screen.)
But when I try to make it publicly accessible using an ingress, I run into trouble. Here are my deployment, service and ingress files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 32111
    targetPort: 80
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      # The * is needed so that all traffic gets redirected to nginx
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
After a while, this is what my ingress status looks like:
$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
I don't understand why it says it can't find the nodeport: the service has nodePort defined and it is of type NodePort as well. Going to the actual IP results in default backend - 404.
Any ideas why?
The configuration is missing a health check endpoint that the GKE load balancer can use to determine whether the backend is healthy. The containers section for nginx should also specify:
livenessProbe:
  httpGet:
    path: /
    port: 80
The GET / on port 80 is the default health check configuration, and it can be changed.
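As a further hedged sketch: on GKE, the ingress controller's load balancer health check is commonly derived from the pod's readinessProbe rather than the livenessProbe, so it may help to declare both. A possible container spec, reusing the names from the question (untested):

```yaml
containers:
- name: nginx
  image: nginx:1.7.9
  ports:
  - containerPort: 80
  # GKE ingress typically derives the LB health check from the readinessProbe.
  readinessProbe:
    httpGet:
      path: /
      port: 80
  livenessProbe:
    httpGet:
      path: /
      port: 80
```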