Hope you are all well,
I am currently trying to roll out the awx-operator onto a Kubernetes cluster, and I am running into a few issues reaching the service from outside of the cluster.
Currently I have the following services set up:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx NodePort 10.102.30.6 <none> 8080:32155/TCP 110m
awx-operator NodePort 10.110.147.152 <none> 80:31867/TCP 125m
awx-operator-metrics ClusterIP 10.105.190.155 <none> 8383/TCP,8686/TCP 3h17m
awx-postgres ClusterIP None <none> 5432/TCP 3h16m
awx-service ClusterIP 10.102.86.14 <none> 80/TCP 121m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
I set up a NodePort service called awx-operator, and I attempted to create an ingress to the application, which you can see below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
spec:
  rules:
  - host: awx.mycompany.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: awx
            port:
              number: 80
When I create the ingress, and then run kubectl describe ingress, I get the following output:
Name: awx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
awx.mycompany.com
/ awx:80 (10.244.1.8:8080)
Annotations: <none>
Events: <none>
Now I am not too sure whether the default-http-backend:80 error is a red herring, as I have seen it in a number of places where people don't seem too worried about it, but please correct me if I am wrong.
Please let me know whether there is anything else I can do to troubleshoot this, and I will get back to you as soon as I can.
You are right: the blank address is the issue here. In traditional cloud environments, where network load balancers are available on demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller for external clients and, indirectly, to any application running inside the cluster.
Bare-metal environments, on the other hand, lack this option and require a slightly different setup to offer the same kind of access to external consumers.
This means you have to do some additional gymnastics to make the ingress work, and you basically have two main options (both well described here):
A pure software solution: MetalLB
Via a NodePort service.
What is happening here is that you are basically creating a Service of type NodePort with a selector that matches your ingress controller pod; the controller then routes the traffic according to your ingress object:
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.30.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.46.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
The full nginx deployment that contains this service can be found here.
If you wish to skip the ingress, you can just use the NodePort service awx and reach it directly.
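For example, from any machine that can reach a cluster node (the node IP is a placeholder; 32155 is the nodePort shown in your service list above):

curl http://<node-ip>:32155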
I am using Kubernetes 1.22 and the operator version 0.14.0.
I have a bare-metal Kubernetes installation and I have to use ingress. The ingress provided with the operator is not compatible with the version of Kubernetes I am using, so I had to define it myself.
I am using Ansible but you could work out the values for the variables :)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ awx_deployment_name }}-ingress-unmanaged
  namespace: {{ awx_namespace }}
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: {{ awx_host }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ awx_deployment_name }}-service
            port:
              number: 80
  tls:
  - hosts:
    - {{ awx_host }}
    secretName: {{ awx_tls_secret }}
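For illustration, a plausible set of values for those variables might look like this (these names are hypothetical; substitute your own):

awx_deployment_name: awx
awx_namespace: awx
awx_host: awx.mycompany.com
awx_tls_secret: awx-tls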
You can simply expose the deployment with a Service of type LoadBalancer. The following command creates such a service:
kubectl expose deployment awx-demo --port=80 --target-port=8052 --name=awx-lb --type=LoadBalancer
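If you prefer a manifest, the command above is roughly equivalent to the following sketch (the selector is an assumption: kubectl expose copies it from the awx-demo deployment, so verify the actual pod labels):

apiVersion: v1
kind: Service
metadata:
  name: awx-lb
spec:
  type: LoadBalancer
  selector:
    app: awx-demo   # assumed: the deployment's pod label
  ports:
  - port: 80
    targetPort: 8052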
Related
I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name:
http://jupyter-lab.local
It's deployed to a 3 node bare metal k8s cluster
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the Network controller
I've installed nginx ingress for bare metal like this
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod is on node2, and the NodePort service responds correctly at http://node2.local:30004 (see below).
I'm expecting that the ingress-nginx controller will expose the ClusterIP service by its DNS name ... that's what I need; is that wrong?
This is the ClusterIP service, defined with symmetrical ports (8888 to 8888) to be as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host.
firewall-cmd --list-all shows that port 80 is open on each node
This is the ingress definition for http into the cluster (any node) on port 80. (is that wrong ?)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
  - host: jupyter-lab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip
            port:
              number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
      - name: jupyter-lab-home
        persistentVolumeClaim:
          claimName: jupyter-lab-pvc
      containers:
      - name: jupyter-lab
        image: docker.io/jupyter/tensorflow-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - name: jupyter-lab-home
          mountPath: /var/jupyter-lab_home
        env:
        - name: "JUPYTER_ENABLE_LAB"
          value: "yes"
I can successfully access jupyter-lab by its NodePort http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 10003
    targetPort: 8888
    nodePort: 30004
  selector:
    app: jupyter-lab
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
The command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my "selector:appname" definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details
I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
The command kubectl delete validatingwebhookconfigurations --all-namespaces removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster
Your Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jupyter-lab.local
    http:                    # <- removed the -
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # name: jupyter-lab-cip
            name: jupyter-lab-nodeport
            port:
              number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which nodePort is used by the nginx ingress service:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.240.73 <none> 80:30816/TCP,443:31475/TCP 3h30m
In my case that is port 30816 (for http) and 31475 (for https).
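To confirm the controller itself is reachable, you can send a request to that node port with the expected Host header (node1.local is one of your nodes; substitute the port the previous command printed):

curl -H 'Host: jupyter-lab.local' http://node1.local:30816/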
With the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change this using the kube-apiserver flag --service-node-port-range, setting it to e.g. 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80 # <- HERE
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443 # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
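For reference, on a kubeadm-based cluster that flag is set in the kube-apiserver static pod manifest on the control-plane node; a minimal sketch, assuming the default kubeadm paths:

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # added line; the kubelet restarts the apiserver automatically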
That said, changing the service-node-port-range is generally not advised, since you may encounter issues if you use ports that are already open on the nodes (e.g. port 10250, which kubelet opens on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
- name: http
  containerPort: 80
  hostPort: 80 # <- add this line
  protocol: TCP
- name: https
  containerPort: 443
  hostPort: 443 # <- add this line
  protocol: TCP
- name: webhook
  containerPort: 8443
  protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj 1/1 Running 0 97s 172.17.0.6 <node_name> <none> <none>
Notice the <node_name> in the NODE column. This is the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
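For example, if the pod landed on a node with IP 192.168.1.32 (a placeholder; use your node's real IP), the entry would be:

# /etc/hosts
192.168.1.32 jupyter-lab.local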
It should work now (go to http://jupyter-lab.local to check), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It is also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
If you need a failure-tolerant solution, use MetalLB and create a service of type LoadBalancer for the nginx ingress controller.
I haven't tested it, but the following should do the job, assuming that you have correctly configured MetalLB:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
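For the MetalLB part, here is a minimal layer 2 configuration sketch; MetalLB versions of that era were configured through a ConfigMap, and the address range below is a placeholder you must adapt to your network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder: free IPs on your LAN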
Cluster information:
Installation Method: kubeadm
Kubernetes: 1.19.2
Master & Nodes: Ubuntu 20.04.1 (Oracle VirtualBox)
Docker: 19.03.12
Calico: 3.16.1
Ingress : Bare-metal - 0.40.1
I want to access the Kubernetes dashboard from my laptop using ingress, without kubectl proxy.
Can anyone help me with the steps? (I tried multiple ways with the help of the internet ... not sure what I am missing.)
Note: As per discussion forums, I have added hostNetwork: true under the deployment section in the ingress YAML to resolve "not working without host parameter", and commented out type: NodePort.
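For reference, hostNetwork: true belongs under the pod template spec of the controller deployment; a minimal sketch of the relevant fragment:

spec:
  template:
    spec:
      hostNetwork: true   # the controller pods bind directly to the node's network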
Updated info:
I have created the ingress controller as a DaemonSet instead of a Deployment/pod - this helps in accessing it directly via the worker IPs. (This is what I am expecting, but I am still unable to access the Kubernetes dashboard, as it is in a different namespace.)
Ingress YAML (this is running in the default namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kdash-in-ns
            port:
              number: 443
kdash-in-ns YAML - svc with ExternalName:
kind: Service
apiVersion: v1
metadata:
  name: kdash-in-ns
  namespace: default
spec:
  type: ExternalName
  externalName: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
  ports:
  - name: https
    port: 443
Below are details about the kdash-in-ns svc with ExternalName:
dockeras@ubuntu3:~/simplek8s/kubernetes/yamls/ingress-demo$ kubectl describe svc kdash-in-ns
Name: kdash-in-ns
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP:
External Name: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
kubectl describe for the updated ingress route: in this I have nginx, which is working fine (I guess because both the ingress and nginx are in the same namespace); I am getting errors for the dashboard, as it is in a different namespace (kubernetes-dashboard):
dockeras@ubuntu3:~$ kubectl describe ing nginx-ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: nginx-ingress
Namespace: default
Address: 192.168.1.31,192.168.1.32
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/nginx nginx-deploy-main:80 )
/foo kubernetes-dashboard:443 (<error: endpoints "kubernetes-dashboard" not found>)
/dashboard kdash-in-ns:443 (<error: endpoints "kdash-in-ns" not found>)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 4m40s nginx-ingress-controller Ingress default/nginx-ingress
When I tried the same URLs in a browser, below are the responses (one of my worker IPs is 192.168.1.31):
192.168.1.31/nginx - responds with nginx default page (pod - nginx-deploy-main)
192.168.1.31/foo - error page - 503 service temporarily Unavailable (default nginx)
192.168.1.31/dashboard - 504 Gateway Time-out (default nginx)
Running svc and pods: [screenshot: all pods and svcs]
If I understand correctly, you want to access a Kubernetes service (the dashboard) from outside the cluster. You may deploy the MetalLB LoadBalancer and manage a pool of IPs from the external cluster network, assigned to your cluster.
So, you can assign an IP and a LoadBalancer through which you will access your service. Below is an example for an mssql server, but you can easily adapt it to the dashboard:
apiVersion: v1
kind: Service
metadata:
  name: sql-server-lb
  namespace: database-server
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  selector:
    app: sql-server
  ports:
  - port: 1433
    targetPort: 1433
  type: LoadBalancer
https://metallb.universe.tf/
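Adapted to the dashboard, a sketch could look like this (the k8s-app selector and target port 8443 are assumptions based on the stock kubernetes-dashboard deployment; verify them against your install):

apiVersion: v1
kind: Service
metadata:
  name: dashboard-lb
  namespace: kubernetes-dashboard
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  selector:
    k8s-app: kubernetes-dashboard   # assumed: label of the dashboard pods
  ports:
  - port: 443
    targetPort: 8443                # assumed: HTTPS port of the dashboard container
  type: LoadBalancer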
FYI:
I run Kubernetes on Docker Desktop for Mac.
The websites are based on the Nginx image.
I run 2 simple website deployments on Kubernetes and use NodePort services. Then I want to route to the websites using an ingress. When I open the browser and access a website, I get a 503 error. So, how do I fix this error?
### Service
apiVersion: v1
kind: Service
metadata:
  name: app-svc
  labels:
    app: app1
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2-svc
  labels:
    app: app2
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: app2
### Ingress-Rules
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app-svc
          servicePort: 30092
      - path: /app2
        backend:
          serviceName: app2-svc
          servicePort: 30936
Yes, I ended up with the same error. Once I changed the service type to ClusterIP, it worked fine for me.
Found this page after searching for a solution to nginx continually returning 503 responses despite the service names all being configured correctly. The issue for me was that I had configured the Kubernetes service in a specific namespace but did not update the ingress component to be in the same namespace. Despite being such a simple fix, it was not at all obvious!
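In other words, the namespace in the Ingress metadata has to match the namespace of the Service it routes to; a hypothetical fragment (my-app is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: my-app   # must be the same namespace as the backend Service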
I advise you to use service type ClusterIP.
Take a look at this useful article: services-kubernetes.
If you use Ingress, you have to know that an Ingress isn't a type of Service, but rather an object that acts as a reverse proxy and single entry point to your cluster, routing requests to different services. The most basic Ingress is the NGINX Ingress Controller, where NGINX takes on the role of reverse proxy while also handling SSL. [drawing: workflow between the components involved]
Ingress is exposed to the outside of the cluster via ClusterIP and the Kubernetes proxy, NodePort, or LoadBalancer, and routes incoming traffic according to the configured rules.
Example of service definition:
---
apiVersion: v1
kind: Service
metadata:
  name: app-svc
  labels:
    app: app1
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2-svc
  labels:
    app: app2
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: app2
Let me know if it helps.
First, you need to change the service type of your app services to ClusterIP, because the Ingress object is going to access these pods (services) from inside the cluster (a ClusterIP service is used when you want to allow access to a pod from within the cluster).
Second, make sure the services are running by running kubectl get services, and check the running service names against the names in the backend section of the Ingress routing rules.
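For example, assuming the service names from the question, the check could be (the second command confirms each service actually has endpoints behind it):

kubectl get services app-svc app2-svc
kubectl get endpoints app-svc app2-svc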
A little late to this journey, but here is my comment on the issue.
I was having the same issue in the same environment (Docker Desktop-based Kubernetes with WSL2).
A couple of items can probably help.
Add the host entry in the rules section; the value will be kubernetes.docker.internal, like below:
rules:
- host: kubernetes.docker.internal
  http:
    paths:
    - path....
Check the endpoints using kubectl get services to confirm that the port in your ingress rule definition matches each of those backend services:
backend:
  service:
    name: my-apple-service
    port:
      number: 30101
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-apple-service ClusterIP 10.106.121.95 <none> 30101/TCP 9h
my-banada-service ClusterIP 10.99.192.112 <none> 30101/TCP 9h
Notes
I am trying to deploy a service and ingress for a demo service (from 'Kubernetes in Action') to an AWS EKS cluster in which the traefik ingress controller has been Helm installed.
I am able to access the traefik dashboard from the traefik.example.com hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file.
If I describe the service and ingress of the traefik-dashboard:
$ kubectl describe svc -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.52.6
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.164.81
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 172.31.27.70:8080
Session Affinity: None
Events: <none>
$ kubectl describe ing -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
traefik.example.com
traefik-dashboard:80 (172.31.27.70:8080)
Annotations:
Events: <none>
The service and ingress controller seem to be using the running traefik-575cc584fb-v4mfn pod in the kube-system namespace.
Given this info and looking at the traefik docs, I try to expose a demo service through its ingress with the following YAML:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
---
apiVersion: v1
kind: Service
metadata:
  name: kubia
  namespace: default
spec:
  selector:
    app: traefik
    release: traefik
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
After applying this, I am unable to access the kubia service from the kubia.int hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file. Instead, I get a Service Unavailable response. Describing the created resources shows some differing info.
$ kubectl describe svc kubia
Name: kubia
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"ports":[{"name":"web","por...
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.142.243
Port: web 80/TCP
TargetPort: 8080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
$ kubectl describe ing kubia
Name: kubia
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
kubia.int
/ kubia:web (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"rules":[{"host":"kubia.int","http":{"paths":[{"backend":{"serviceName":"kubia","servicePort":"web"},"path":"/"}]}}]}}
Events: <none>
I also notice that the demo kubia service has no endpoints, and the corresponding ingress shows no available backends.
Another thing I notice is that the demo kubia service and ingress is in the default namespace, while the traefik-dashboard service and ingress are in the kube-system namespace.
Does anything jump out to anyone? Any suggestions on the best way to diagnose it?
Many thanks in advance!
It would seem that you are missing the kubernetes.io/ingress.class: traefik annotation that tells your Traefik ingress controller to serve that Ingress definition.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
If you look at the examples in the docs, you can see that the only Ingress without the annotation is traefik-web-ui, which points to the Traefik Web UI.
I'm trying to create a simple nginx service on GKE, but I'm running into strange problems.
Nginx runs on port 80 inside the Pod. The service is accessible on port 8080. (This works, I can do curl myservice:8080 inside of the pod and see the nginx home screen)
But when I try to make it publicly accessible using an ingress, I'm running into trouble. Here are my deployment, service and ingress files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 32111
    targetPort: 80
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      # The * is needed so that all traffic gets redirected to nginx
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
After a while, this is what my ingress status looks like:
$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
I don't understand why it's saying that it can't find nodeport - the service has nodePort defined and it is of type NodePort as well. Going to the actual IP results in default backend - 404.
Any ideas why?
The configuration is missing a health check endpoint for the GKE load balancer to know whether the backend is healthy. The containers section for nginx should also specify:
livenessProbe:
  httpGet:
    path: /
    port: 80
The GET / on port 80 is the default configuration, and can be changed.
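For instance, to probe a custom path instead, a sketch could look like this (the /healthz path is purely illustrative; it must be a path your nginx actually serves):

livenessProbe:
  httpGet:
    path: /healthz   # hypothetical custom health endpoint
    port: 80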