I'm trying to use an Ingress to load-balance two services on Google Kubernetes Engine.
Here is the Ingress config for this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: basic-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: web
servicePort: 8080
- path: /v2/keys
backend:
serviceName: etcd-np
servicePort: 2379
where web is an example service from the Google samples:
apiVersion: v1
kind: Service
metadata:
name: web
namespace: default
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
run: web
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
namespace: default
spec:
selector:
matchLabels:
run: web
template:
metadata:
labels:
run: web
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: web
ports:
- containerPort: 8080
protocol: TCP
But the second service is an etcd cluster behind a NodePort service:
---
apiVersion: v1
kind: Service
metadata:
name: etcd-np
spec:
ports:
- port: 2379
targetPort: 2379
selector:
app: etcd
type: NodePort
But only the first Ingress rule works properly. In the logs I see:
ingress.kubernetes.io/backends: {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
etcd-np itself works properly; this is not an etcd problem. I think the issue is that the etcd server answers GET / with a 404, and some health check at the Ingress level therefore refuses to use it.
That's why I have two questions:
1) How can I provide health-check URLs for each backend path on the Ingress?
2) How can I debug such issues? What I see now is:
kubectl describe ingress basic-ingress
Name: basic-ingress
Namespace: default
Address: 4.4.4.4
Default backend: default-http-backend:80 (10.52.6.2:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* web:8080 (10.52.8.10:8080)
/v2/keys etcd-np:2379 (10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379)
Annotations: ingress.kubernetes.io/backends:
{"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-basic-ingress--ebfd7339a961462d
ingress.kubernetes.io/target-proxy: k8s-tp-default-basic-ingress--ebfd7339a961462d
ingress.kubernetes.io/url-map: k8s-um-default-basic-ingress--ebfd7339a961462d
Events: <none>
But it does not give me any info about this incident.
UPDATE:
kubectl describe svc etcd-np
Name: etcd-np
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=etcd
Type: NodePort
IP: 10.4.7.20
Port: <unset> 2379/TCP
TargetPort: 2379/TCP
NodePort: <unset> 30195/TCP
Endpoints: 10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
According to the docs:
A Service exposed through an Ingress must respond to health checks
from the load balancer. Any container that is the final destination of
load-balanced traffic must do one of the following to indicate that it
is healthy:
Serve a response with an HTTP 200 status to GET requests on the / path.
Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness
probe. The Service exposed through an Ingress must point to the same
container port on which the readiness probe is enabled.
For example, suppose a container specifies this readiness probe:
...
readinessProbe:
httpGet:
path: /healthy
Then if the handler for the container's /healthy path returns an
HTTP 200 status, the load balancer considers the container to be
alive and healthy.
Since etcd has a health endpoint at /health, the readiness probe will look like:
...
readinessProbe:
httpGet:
path: /health
This becomes a bit tricky if mTLS is enabled in etcd; to handle that case, check the etcd docs.
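For completeness, here is a minimal sketch of how that probe might be wired into the etcd container spec, assuming the standard client port 2379 and no mTLS (the image tag and probe timings are illustrative, not taken from the original setup):
containers:
- name: etcd
  image: quay.io/coreos/etcd:v3.4.13   # illustrative image/tag
  ports:
  - name: client
    containerPort: 2379                # the same port the etcd-np Service targets
  readinessProbe:
    httpGet:
      path: /health                    # etcd's built-in health endpoint
      port: 2379
    initialDelaySeconds: 10
    periodSeconds: 10
Per the docs quoted above, the Service exposed through the Ingress must point at the same container port (2379) on which this readiness probe is enabled.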
Related
I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name:
http://jupyter-lab.local
It's deployed to a 3-node bare-metal k8s cluster:
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the network controller.
I've installed nginx ingress for bare metal like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod is on node2 and the NodePort service responds correctly at http://node2.local:30004 (see below).
I'm expecting that the ingress-nginx controller will expose the ClusterIP service by its DNS name. That's what I need; is that wrong?
This is the ClusterIP service, defined with symmetrical ports (8888 to 8888) to keep it as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
name: jupyter-lab-cip
namespace: default
spec:
type: ClusterIP
ports:
- port: 8888
targetPort: 8888
selector:
app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but the request times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host.
firewall-cmd --list-all shows that port 80 is open on each node.
This is the Ingress definition for HTTP into the cluster (any node) on port 80 (is that wrong?):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jupyter-lab-ingress
annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io: /
spec:
rules:
- host: jupyter-lab.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: jupyter-lab-cip
port:
number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jupyter-lab-dpt
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: jupyter-lab
template:
metadata:
labels:
app: jupyter-lab
spec:
volumes:
- name: jupyter-lab-home
persistentVolumeClaim:
claimName: jupyter-lab-pvc
containers:
- name: jupyter-lab
image: docker.io/jupyter/tensorflow-notebook
ports:
- containerPort: 8888
volumeMounts:
- name: jupyter-lab-home
mountPath: /var/jupyter-lab_home
env:
- name: "JUPYTER_ENABLE_LAB"
value: "yes"
I can successfully access jupyter-lab by its NodePort http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
name: jupyter-lab-nodeport
namespace: default
spec:
type: NodePort
ports:
- port: 10003
targetPort: 8888
nodePort: 30004
selector:
app: jupyter-lab
How can I get Ingress access to my jupyter-lab at http://jupyter-lab.local?
The command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my "selector: appname" definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details
I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
This command kubectl delete validatingwebhookconfigurations --all-namespaces
removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster
Your Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jupyter-lab-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: jupyter-lab.local
http: # <- removed the -
paths:
- path: /
pathType: Prefix
backend:
service:
# name: jupyter-lab-cip
name: jupyter-lab-nodeport
port:
number: 8888
---
apiVersion: v1
kind: Service
metadata:
name: jupyter-lab-cip
namespace: default
spec:
type: ClusterIP
ports:
- port: 8888
targetPort: 8888
selector:
app: jupyter-lab
If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which NodePorts are used by the nginx ingress service:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.240.73 <none> 80:30816/TCP,443:31475/TCP 3h30m
In my case that is port 30816 (for http) and 31475 (for https).
With a NodePort-type Service you can only use ports in the 30000-32767 range (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change this range with the kube-apiserver flag --service-node-port-range, setting it to e.g. 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 0.44.0
helm.sh/chart: ingress-nginx-3.23.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
nodePort: 80 # <- HERE
- name: https
port: 443
protocol: TCP
targetPort: https
nodePort: 443 # <- HERE
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: NodePort
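For reference, if your control plane was set up with kubeadm, the flag change itself might look like the excerpt below; the file path and surrounding flags are assumptions about your setup:
# excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml on the control-plane node
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... keep the existing flags ...
    - --service-node-port-range=80-32767   # widened NodePort range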
That said, changing service-node-port-range is generally not advised, since you may run into issues if you use ports that are already open on nodes (e.g. port 10250, which the kubelet opens on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
- name: http
containerPort: 80
hostPort: 80 # <- add this line
protocol: TCP
- name: https
containerPort: 443
hostPort: 443 # <- add this line
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj 1/1 Running 0 97s 172.17.0.6 <node_name> <none> <none>
Notice the <node_name> in the NODE column. This is the name of the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
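For example, assuming the controller pod landed on a node whose IP is 192.168.1.102 (an illustrative address, not taken from your setup), the /etc/hosts entry would be:
# /etc/hosts on the machine you browse from
192.168.1.102   jupyter-lab.local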
It should work now (go to http://jupyter-lab.local to check), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It's also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
If you need a failure-tolerant solution, use MetalLB and create a Service of type LoadBalancer for the nginx ingress controller.
I haven't tested it but the following should do the job, assuming that you correctly configured MetalLB:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
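For reference, "correctly configured MetalLB" in Layer 2 mode might look roughly like the sketch below. It assumes the older ConfigMap-based configuration and an address range that is free on your LAN; both are assumptions, not values from your cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # illustrative range; use unused IPs from your network
With that in place, the LoadBalancer Service created by the kubectl expose command above should get an IP from the pool, and you can point jupyter-lab.local at that IP.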
I have a web service running on my local network, exposed on port 6003. I also have a Kubernetes cluster running on a different machine on the same network that uses an Nginx Ingress to proxy to all the services in the cluster. How can I set up an Ingress to proxy to that machine? I had a setup that worked, but now I am either getting DNS errors in the nginx pod, or the response times out in the browser and nothing happens.
Here is the manifest I have been using.
apiVersion: v1
kind: Service
metadata:
name: myservice-service
spec:
type: ExternalName
externalName: 192.xxx.xx.x
ports:
- name: myservice
port: 80
protocol: TCP
targetPort: 6003
---
apiVersion: v1
kind: Endpoints
metadata:
name: myservice-ip
subsets:
- addresses:
# list all external ips for this service
- ip: 192.xxx.xx.x
ports:
- name: myservice
port: 6003
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service.example.com
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: service.example.com
http:
paths:
- backend:
serviceName: myservice-service
servicePort: 80
path: /
tls:
- secretName: secret-prod-tls
hosts:
- service.example.com
Edit with more information:
This manifest does work. What I realized is that you must specify https in the URL even though the Ingress has a tls block. The nginx-ingress pod is still showing Lua DNS errors, though.
You MUST specify nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" in your Ingress resource if the upstream is listening for HTTPS requests. So this is related to the backend, not to the Ingress itself.
The TLS configuration is for the Ingress (frontend), not for the backend application.
You don't need ExternalName here. A usual headless service will do the job:
apiVersion: v1
kind: Service
metadata:
name: external-ip
spec:
ports:
- name: http
port: 80
clusterIP: None
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
name: external-ip
subsets:
- addresses:
- ip: 172.17.0.5
ports:
- name: http
port: 80
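Tying it together, the Ingress would then point at that headless service. This is a sketch that reuses the host and secret names from your manifest; the backend-protocol annotation is only needed because your upstream serves HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service.example.com
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # only if the upstream listens for HTTPS
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: external-ip    # the headless service above
          servicePort: 80
  tls:
  - hosts:
    - service.example.com
    secretName: secret-prod-tls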
I'm working on a Kubernetes cluster where I am directing traffic from a GCloud Ingress to my Services. One of the service endpoints fails the health check as HTTP but passes it as TCP.
When I change the health-check options inside GCloud to TCP, the health checks pass and my endpoint works, but after a few minutes the health check on GCloud resets for that port back to HTTP and the health checks fail again, giving me a 502 response on my endpoint.
I don't know if it's a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I have pasted my YAML configuration here:
namespace
apiVersion: v1
kind: Namespace
metadata:
name: parity
labels:
name: parity
storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: classic-ssd
namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
zones: us-central1-a
reclaimPolicy: Retain
secret
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: ingress-nginx
data:
tls.crt: ./config/redacted.crt
tls.key: ./config/redacted.key
statefulset
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: parity
namespace: parity
labels:
app: parity
spec:
replicas: 3
selector:
matchLabels:
app: parity
serviceName: parity
template:
metadata:
name: parity
labels:
app: parity
spec:
containers:
- name: parity
image: "etccoop/parity:latest"
imagePullPolicy: Always
args:
- "--chain=classic"
- "--jsonrpc-port=8545"
- "--jsonrpc-interface=0.0.0.0"
- "--jsonrpc-apis=web3,eth,net"
- "--jsonrpc-hosts=all"
ports:
- containerPort: 8545
protocol: TCP
name: rpc-port
- containerPort: 443
protocol: TCP
name: https
readinessProbe:
tcpSocket:
port: 8545
initialDelaySeconds: 650
livenessProbe:
tcpSocket:
port: 8545
initialDelaySeconds: 650
volumeMounts:
- name: parity-config
mountPath: /parity-config
readOnly: true
- name: parity-data
mountPath: /parity-data
volumes:
- name: parity-config
secret:
secretName: parity-config
volumeClaimTemplates:
- metadata:
name: parity-data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: "classic-ssd"
resources:
requests:
storage: 50Gi
service
apiVersion: v1
kind: Service
metadata:
labels:
app: parity
name: parity
namespace: parity
annotations:
cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
selector:
app: parity
ports:
- name: default
protocol: TCP
port: 80
targetPort: 80
- name: rpc-endpoint
port: 8545
protocol: TCP
targetPort: 8545
- name: https
port: 443
protocol: TCP
targetPort: 443
type: LoadBalancer
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-parity
namespace: parity
annotations:
#nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
tls:
- secretName: tls-classic
  hosts:
  - www.redacted.com
rules:
- host: www.redacted.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8080
- path: /rpc
backend:
serviceName: parity
servicePort: 8545
Issue
I've redacted hostnames and such, but this is my basic configuration. I've also run a hello-app container from this documentation for debugging: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
That is what the Ingress endpoint on / points to, on port 8080 of the hello-app service. It works fine and isn't the issue; it's just mentioned here for clarification.
So, the issue is this: after creating my cluster with GKE and my Ingress LoadBalancer on Google Cloud (the cluster-1 global static IP name in the Ingress file), and then creating the Kubernetes configuration in the files above, the health check fails for the /rpc endpoint when I go to Google Compute Engine -> Health Check -> the specific health check for the /rpc endpoint.
When I edit that health check to use the TCP protocol instead of HTTP, the health checks pass for the /rpc endpoint and I can curl it just fine afterwards; it returns the correct response.
The issue is that a few minutes later the same health check goes back to the HTTP protocol even though I edited it to be TCP, and then the health checks fail and I get a 502 response when I curl it again.
I am not sure if there's a way to attach the Google Cloud health-check configuration to my Kubernetes Ingress before creating the Ingress in Kubernetes. I'm also not sure why it's being reset; I can't tell if it's a bug on Google Cloud or something I'm doing wrong in Kubernetes. If you look at my StatefulSet, I have specified livenessProbe and readinessProbe to use TCP to check port 8545.
The 650-second delay was due to this ticket, which was solved by increasing the delay to more than 600 seconds (to avoid the race conditions mentioned there): https://github.com/kubernetes/ingress-gce/issues/34
I really am not sure why the Google Cloud health-check is resetting back to HTTP after I've specified it to be TCP. Any help would be appreciated.
I found a solution: I added a new container to my StatefulSet that serves a /healthz endpoint, and configured the Ingress health check to hit that endpoint, on the 8080 port assigned by Kubernetes, as an HTTP health check. That made it work.
It's not immediately obvious why the reset happens when it's TCP.
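A rough sketch of that approach is below. The sidecar image and names are placeholders (anything that returns HTTP 200 on /healthz would do), not details from the original setup:
# added alongside the parity container in the StatefulSet's containers list
- name: healthz
  image: example/healthz-server:latest   # hypothetical image serving HTTP 200 on /healthz
  ports:
  - name: healthz
    containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
The Service and Ingress then also expose port 8080, so the load balancer's health check derived from this readiness probe stays an HTTP check against /healthz instead of the default GET / check.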
I have a Kubernetes service that exposes two ports as follows
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
And here is the ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: f5
virtual-server.f5.com/http-port: "80"
virtual-server.f5.com/ip: controller-default
virtual-server.f5.com/round-robin: round-robin
creationTimestamp: 2018-10-05T18:54:45Z
generation: 2
name: m-ingress
namespace: m-ns
resourceVersion: "39557812"
selfLink: /apis/extensions/v1beta1/namespaces/m-ns
uid: 20241db9-c8d0-11e8-9fac-0050568d4d4a
spec:
rules:
- host: www.myhost.com
http:
paths:
- backend:
serviceName: m-svc
servicePort: 8080
path: /first/path
- backend:
serviceName: m-svc
servicePort: 8080
path: /second/path
status:
loadBalancer:
ingress:
- ip: 172.31.74.89
But when I go to www.myhost.com/first/path I end up at the service that is listening on port 8888 of m-svc. What might be going on?
Another piece of information: I am sharing a service between two Ingresses that point to different ports on the same service. Is that a problem? There is a different Ingress for port 8888 on this service, and it works fine.
Also, I am using an F5 controller.
After a lot of time investigating this, it looks like the root cause is in the F5s: because the name of the backend (Kubernetes service) is the same, it only creates one entry in the pool and routes requests to that backend and the one port that gets registered in the F5 policy. Is there a fix for this? A workaround is to create a unique service for each port, but I don't want to make that change; is this possible at the F5 level?
From what I see, you don't have a Selector field in your service. Without it, the service will not forward to any backend or pod. What makes you think that traffic is going to port 8888? What's strange is that you have Endpoints in your service. Did you create them manually?
The service would have to be something like this:
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
Then in your deployment definition:
selector:
matchLabels:
app: my-application
Or in a pod:
apiVersion: v1
kind: Pod
metadata:
annotations: { ... }
labels:
app: my-application
You should also be able to describe your Endpoints:
$ kubectl describe endpoints m-svc
Name: m-svc
Namespace: default
Labels: app=my-application
Annotations: <none>
Subsets:
Addresses: x.x.x.x
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
first 8080 TCP
second 8081 TCP
Events: <none>
Your Service appears to be what is called a headless service: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services. This would explain why the Endpoints object was created automatically.
Something is amiss, because it should be impossible for your HTTP requests to arrive at your pods without .spec.selector populated.
I suggest deleting the Service you created, deleting the Endpoints object with the same name, and then recreating the Service with type: ClusterIP and spec.selector properly populated.
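A recreated Service along those lines might look like the sketch below, using the ports from your describe output; the selector value is an assumption and must match the labels on your pods:
apiVersion: v1
kind: Service
metadata:
  name: m-svc
  namespace: m-ns
spec:
  type: ClusterIP
  selector:
    app: my-application   # must match the pod template labels
  ports:
  - name: first
    port: 8080
    targetPort: 8080
  - name: second
    port: 8888
    targetPort: 8888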
I'm trying to create a simple nginx service on GKE, but I'm running into strange problems.
Nginx runs on port 80 inside the Pod. The service is accessible on port 8080. (This works, I can do curl myservice:8080 inside of the pod and see the nginx home screen)
But when I try to make it publicly accessible using an ingress, I'm running into trouble. Here are my deployment, service and ingress files.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 8080
nodePort: 32111
targetPort: 80
type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- http:
paths:
# The * is needed so that all traffic gets redirected to nginx
- path: /*
backend:
serviceName: my-service
servicePort: 80
After a while, this is what my ingress status looks like:
$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
I don't understand why it's saying that it can't find the nodeport: the service has nodePort defined and is of type NodePort as well. Going to the actual IP results in default backend - 404.
Any ideas why?
The configuration is missing a health-check endpoint, which the GKE load balancer needs in order to know whether the backend is healthy. The containers section for nginx should also specify:
livenessProbe:
httpGet:
path: /
port: 80
The GET / on port 80 is the default configuration, and can be changed.
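In the context of the Deployment above, the nginx container spec would then look something like this (a sketch; only the probe is new):
containers:
- name: nginx
  image: nginx:1.7.9
  ports:
  - containerPort: 80
  livenessProbe:
    httpGet:
      path: /      # must return HTTP 200 for the health check to pass
      port: 80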