Should Kubernetes host nodes be able to access services running in the cluster? - kubernetes

I am running:
Kubernetes v1.19.7 (On-premise, VMs. Provisioned via Kubespray)
MetalLB
Calico
nginx-ingress
Summary: Services are refusing to respond when queried from the host nodes. Is this even supposed to work? If not I can stop banging my head against this particular wall...
I am able to access service.foo.com from anywhere on my local network, however if I try to use something like cURL to make a request to service.foo.com from any of the host nodes I get "Connection refused" errors (but I can ping the service with no issue). I get the same behavior from within any pod running on the k8s cluster.
This is making things particularly difficult since I'm trying to set up an OIDC provider to use for gating access to the k8s dashboard, and the host nodes need to be able to query the provider.
Network Setup:
kube service addresses: 10.233.0.0/18
pods cidr: 10.233.64.0/18
MetalLB config:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.31.75-172.16.31.79
Ingress Controller Service described
Name: foo-com-ic-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=foo-com-ic
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=foo-com-ic-nginx-ingress
helm.sh/chart=nginx-ingress-0.8.0
Annotations: <none>
Selector: app=foo-com-ic-nginx-ingress
Type: LoadBalancer
IP Families: <none>
IP: 10.233.48.18
IPs: <none>
IP: 172.16.31.76
LoadBalancer Ingress: 172.16.31.76
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31445/TCP
Endpoints: 10.233.105.18:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31173/TCP
Endpoints: 10.233.105.18:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30406
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 9m4s (x4 over 43m) metallb-speaker announcing from node "node4"
Service Ingress described
Name: my-service
Namespace: default
Address: 172.16.31.76
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes service.foo.com
Rules:
Host Path Backends
---- ---- --------
service.foo.com / my-service:80 (10.233.96.27:80)
Annotations: kubernetes.io/ingress.class: service.com
meta.helm.sh/release-name: my-service
meta.helm.sh/release-namespace: default
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 46m (x2 over 46m) nginx-ingress-controller Configuration for default/my-service was added or updated

Just in case someone comes across this issue while researching their own, I was eventually able to work around this. By chance I noticed that I could cURL the service from the node that the ingress controller pod was running on.
My work-around then was to change my ingress controller's installation kind from "deployment" to "daemonset". Now that the ingress controller pod runs on every node I am able to access the service from every node in the cluster.
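This behavior is consistent with the service's External Traffic Policy being "Local" (shown in the describe output above): with that policy, traffic to the LoadBalancer IP is only answered on nodes hosting a ready ingress controller pod. For reference, with the nginxinc nginx-ingress Helm chart used here (nginx-ingress-0.8.0), switching the controller to a daemonset should be a single chart value; a minimal sketch, assuming the chart's controller.kind value and the nginx-stable repo alias:

# values-override.yaml - run the ingress controller on every node
controller:
  kind: daemonset

helm upgrade foo-com-ic nginx-stable/nginx-ingress -f values-override.yaml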

Related

Manually created EndpointSlice resource not associated with Service

I'm trying to create a Service on cluster A that points to the IP address of cluster B. I do not have a domain name for cluster B, so I can't use ExternalName. The way I'm trying to do this is by creating a Service without a selector on cluster A and manually creating an EndpointSlice resource for that Service which points to cluster B. According to the Kubernetes documentation, I need to "link an EndpointSlice to a Service by setting the kubernetes.io/service-name label on that EndpointSlice." But even after doing so, my Service apparently has no endpoints.
Code
endpointslice.yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: hack-svc-1
  labels:
    kubernetes.io/service-name: hack-svc
    kubernetes.io/managed-by: manual
addressType: IPv4
ports:
  - port: 80
endpoints:
  - addresses:
      - "cluster B's IPv4 address here"
    conditions:
      ready: true
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hack-svc
spec:
  ports:
    - port: 80
After kubectl describe service hack-svc:
Name: hack-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: <IPv4 address here>
IPs: <IPv4 address here>
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: <none> <-- No endpoints??
Session Affinity: None
Events: <none>
How can I associate the EndpointSlice with my Service?
The EndpointSlice API is a scalable and extensible alternative to the Endpoints API. EndpointSlices gather information such as IP addresses, ports, readiness, and topology from the pods of a service. Follow this tutorial and verify whether there are any mismatches in how the EndpointSlices are configured for your clusters; it helped in my case.
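For a concrete check (names taken from the question above), you can list the slices that carry the service-name label the control plane matches on:

# Should list hack-svc-1 if the label is being picked up
kubectl get endpointslices -l kubernetes.io/service-name=hack-svc

# Inspect the slice's addressType, ports, and endpoint conditions
kubectl describe endpointslice hack-svc-1

Also note that, depending on the kubectl version, kubectl describe service may read the legacy Endpoints object rather than EndpointSlices, so a manually created slice can be routing traffic even while describe still reports <none>.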

Access kubernetes-dashboard using ingress (404 Not Found)

I'm relatively new to k8s and was following a tutorial to get familiar with it. There was an example on exposing kubernetes-dashboard via ingress, and I tried it out.
I configured kubernetes-dashboard by running the following, as per its documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
But unlike in the tutorial, kubernetes-dashboard was exposed via port 443:
service/dashboard-metrics-scraper ClusterIP 10.108.119.138 <none> 8000/TCP 50m
service/kubernetes-dashboard ClusterIP 10.100.58.17 <none> 443/TCP 50m
So I changed the ingress configuration yaml accordingly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: ingress-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
    - host: k8s-dashboard.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
Then I described the ingress, got the IP, and added an entry for it in /etc/hosts:
kubectl describe ingress ingress-dashboard -n kubernetes-dashboard
Name: ingress-dashboard
Labels: <none>
Namespace: kubernetes-dashboard
Address: 192.168.49.2
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
k8s-dashboard.com
/ kubernetes-dashboard:443 (172.17.0.6:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 24m (x2 over 25m) nginx-ingress-controller Scheduled for sync
/etc/hosts change
192.168.49.2 k8s-dashbaord.com
When I tried to access k8s-dashbaord.com, I got a 404 Not Found from nginx. So it seems like the ingress is running but cannot reach the service.
The IP mapped to the ingress rule seems wrong, though (172.17.0.6:8443), because that is not the IP of the service.
What am I doing wrong here?
P.S
If I just do a proxy (kubectl proxy) and access the dashboard, it works fine.
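Worth noting: the backend shown by kubectl describe ingress is resolved to pod endpoints, not to the Service's ClusterIP, so 172.17.0.6:8443 pointing at the dashboard pod rather than at 10.100.58.17 is expected behavior. A quick way to confirm, using the names from the question:

# The ingress backend should match one of these pod addresses
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard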

Why can't load balancer connect to service in GKE?

I am deploying an application on a GKE cluster and am trying to deploy a load balancer so that clients can call this application.
My application spec is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      serviceAccountName: docker-sa
      containers:
        - name: api
          image: zhaoyi0113/rancher-go-api
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    name: api
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
The application listens on port 8080, and the service opens port 80 and uses targetPort 8080 to connect to it.
And I have an ingress spec:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
spec:
  defaultBackend:
    service:
      name: api
      port:
        number: 80
After deploying, I can see the IP address from kubectl get ingress, but when I send a request to that IP I get a 502 error.
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sidecar <none> * 107.178.245.193 80 28m
$ kubectl describe ingress sidecar
Name: sidecar
Labels: <none>
Namespace: default
Address: 107.178.245.193
Default backend: api:80 (10.0.1.14:8080)
Rules:
Host Path Backends
---- ---- --------
* * api:80 (10.0.1.14:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s1-5ae02eec-default-api-80-28d7bbec":"Unknown"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-krllp0c9-default-sidecar-9a9n4r5m
ingress.kubernetes.io/target-proxy: k8s2-tp-krllp0c9-default-sidecar-9a9n4r5m
ingress.kubernetes.io/url-map: k8s2-um-krllp0c9-default-sidecar-9a9n4r5m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 29m loadbalancer-controller UrlMap "k8s2-um-krllp0c9-default-sidecar-9a9n4r5m" created
Normal Sync 28m loadbalancer-controller TargetProxy "k8s2-tp-krllp0c9-default-sidecar-9a9n4r5m" created
Normal Sync 28m loadbalancer-controller ForwardingRule "k8s2-fr-krllp0c9-default-sidecar-9a9n4r5m" created
Normal IPChanged 28m loadbalancer-controller IP is now 107.178.245.193
Normal Sync 3m51s (x7 over 29m) loadbalancer-controller Scheduled for sync
Below is the curl error response:
$ curl -i http://107.178.245.193/health
HTTP/1.1 502 Bad Gateway
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 332
Date: Tue, 16 Aug 2022 10:40:31 GMT
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
When I describe the service api, I got below error:
$ kubectl describe service api
Name: api
Namespace: default
Labels: <none>
Annotations: cloud.google.com/neg: {"ingress": true}
cloud.google.com/neg-status: {"network_endpoint_groups":{"80":"k8s1-29362abf-default-api-80-f2f1248a"},"zones":["australia-southeast2-a"]}
field.cattle.io/publicEndpoints: [{"port":30084,"protocol":"TCP","serviceName":"default:api","allNodes":true}]
Selector: name=api
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.3.253.54
IPs: 10.3.253.54
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30084/TCP
Endpoints: 10.0.1.17:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning AttachFailed 7s neg-controller Failed to Attach 2 network endpoint(s) (NEG "k8s1-29362abf-default-api-80-f2f1248a" in zone "australia-southeast2-a"): googleapi: Error 400: Invalid value for field 'resource.ipAddress': '10.0.1.18'. Specified IP address 10.0.1.18 doesn't belong to the (sub)network default or to the instance gke-gcp-cqrs-gcp-cqrs-node-pool-6b30ca5c-41q8., invalid
Warning RetryFailed 7s neg-controller Failed to retry NEG sync for "default/api-k8s1-29362abf-default-api-80-f2f1248a--/80-8080-GCE_VM_IP_PORT-L7": maximum retry exceeded
Does anyone know what could be the root cause?
I created a new GKE cluster and tried setting up the same resources you are configuring. However, I used the following image for the container: gcr.io/google-samples/hello-app:1.0. Everything else remains the same - I'm leaving the gcp-setup.yaml file I used below for reference.
gcp-setup.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      containers:
        - name: api
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    name: api
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
spec:
  defaultBackend:
    service:
      name: api
      port:
        number: 80
There is also a small thing I had to change in your configuration, which is the annotations block - when I first tried to apply your configuration, I got the error below, and had to adjust the annotation entry to be annotations.
> kubectl apply -f gcp-setup.yaml
deployment.apps/api created
error: error validating "gcp-setup.yaml": error validating data: ValidationError(Service.metadata): unknown field "annotation" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
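In other words, the Service metadata needs the plural key; a minimal sketch of just the corrected block from the question's Service:

metadata:
  name: api
  annotations:                 # plural; "annotation" fails validation
    cloud.google.com/neg: '{"ingress": true}'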
Afterwards, I was able to successfully provision all of the resources, and your configuration worked perfectly fine. It took around 3 minutes I believe for the Ingress resource to get an IP address assigned (masked as XX.XXX.XXX.XX below).
> kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7d6fdd9845-8dwqc 1/1 Running 0 7m13s
> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api NodePort 10.36.4.150 <none> 80:30142/TCP 7m1s
kubernetes ClusterIP 10.36.0.1 <none> 443/TCP 12m
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sidecar <none> * XX.XXX.XXX.XX 80 7m18s
> kubectl describe ingress
Name: sidecar
Namespace: default
Address: XX.XXX.XXX.XX
Default backend: api:80 (10.32.0.10:8080)
Rules:
Host Path Backends
---- ---- --------
* * api:80 (10.32.0.10:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s1-05f3ce8b-default-api-80-82dd4d72":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-9k4w4ytx-default-sidecar-9m5g4dex
ingress.kubernetes.io/target-proxy: k8s2-tp-9k4w4ytx-default-sidecar-9m5g4dex
ingress.kubernetes.io/url-map: k8s2-um-9k4w4ytx-default-sidecar-9m5g4dex
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7m13s loadbalancer-controller UrlMap "k8s2-um-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal Sync 7m10s loadbalancer-controller TargetProxy "k8s2-tp-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal Sync 6m59s loadbalancer-controller ForwardingRule "k8s2-fr-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal IPChanged 6m59s loadbalancer-controller IP is now XX.XXX.XXX.XX
Normal Sync 28s (x6 over 8m3s) loadbalancer-controller Scheduled for sync
After the Ingress resource became healthy, I was able to navigate in my browser to the assigned IP address XX.XXX.XXX.XX and got a successful response back from the workload I deployed (gcr.io/google-samples/hello-app:1.0).
Browser Output
Hello, world!
Version: 1.0.0
Hostname: api-7d6fdd9845-8dwqc
As a conclusion, make sure to update your Service definition from metadata.annotation to metadata.annotations. It was the only change I had to make for your configuration to work. Furthermore, I recommend turning resource definition validation on to make sure that you catch such errors when defining new resources.
If the error still persists, I would recommend running kubectl describe ingress sidecar and analyze the output, assuming it is related to the Ingress resource definition.
EDIT1
To make sure that this is not a zone-related issue, I provisioned a VPC-native, Public cluster in the same zone that you are using (australia-southeast2-a). I then applied the same configuration, and it was successful, thus ruling out a zone-related issue.
Based on the additional information you included in the post, my best guesses for potential root causes of the Service error you're getting when running kubectl describe service would be:
Your GKE cluster is not VPC-native - I see this is a core requirement to be able to leverage NEGs
Your GKE cluster has been provisioned as a Private cluster, and as a consequence NEG tries to assign an IP address from the available private subnet ranges. This would explain the 10.0.1.18 IP address that NEG tries to assign to the resource definition.
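A quick way to check both guesses (the cluster name below is a placeholder, not from the original post) is to inspect the cluster's IP-allocation and private-cluster settings:

# Empty useIpAliases output suggests the cluster is not VPC-native
gcloud container clusters describe CLUSTER_NAME \
  --zone australia-southeast2-a \
  --format='value(ipAllocationPolicy.useIpAliases, privateClusterConfig.enablePrivateNodes)'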

Kubernetes service URL not responding to API call

I've been following multiple tutorials on how to deploy my (Spring Boot) API on Minikube. I already got it (user-service running on 8081) working in a Docker container with an API gateway (port 8080) and Eureka (port 8087), but for starters I just want it to run without those. Steps I took:
Push the Docker container or image (?) to Docker Hub, I don't know the proper term.
Create a deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kwetter-service
spec:
  type: LoadBalancer
  selector:
    app: kwetter
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8081
      nodePort: 30070
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kwetter-deployment
  labels:
    app: kwetter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kwetter
  template:
    metadata:
      labels:
        app: kwetter
    spec:
      containers:
        - name: user-api
          image: cazhero/s6-kwetter-backend_user:latest
          ports:
            - containerPort: 8081 # the port it runs on when I manually start it up
kubectl apply -f deployment.yaml
minikube service kwetter-service
It takes me to an empty page at http://192.168.49.2:30070, which I thought I could use to make API calls to, but apparently not. How do I make API calls to my application running on Minikube?
Get svc returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d4h
kwetter-service LoadBalancer 10.106.42.56 <pending> 8080:30070/TCP 4d
describe svc kwetter-service:
Name: kwetter-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kwetter
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.42.56
IPs: 10.106.42.56
Port: <unset> 8080/TCP
TargetPort: 8081/TCP
NodePort: <unset> 30070/TCP
Endpoints: 172.17.0.4:8081
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 6s service-controller LoadBalancer -> NodePort
I made an Ingress in the YAML and used kubectl get ing:
NAME CLASS HOSTS ADDRESS PORTS AGE
kwetter-ingress <none> * 80 49m
To make some things clear:
You need to have pushed your Docker image cazhero/s6-kwetter-backend_user:latest to Docker Hub; check for it at https://hub.docker.com/ in your personal repository.
What's the output of minikube service kwetter-service, does it print the URL http://192.168.49.2:30070?
Make sure your pod is running correctly by the following minikube command:
# check pod status
minikube kubectl -- get pods
# if the pod is running, check its container logs
minikube kubectl -- logs po/kwetter-deployment-xxxx-xxxx
I see that you are using a LoadBalancer service. A LoadBalancer service is the standard way to expose a service to the internet; with this method, each service gets its own IP address.
Check external IP
kubectl get svc
Use the external IP and the port number in this format to access the application:
http://REPLACE_WITH_EXTERNAL_IP:8080
If you want to access the application using the NodePort (30070), use a NodePort service instead of a LoadBalancer service.
Refer to this documentation for more information on accessing applications through Nodeport and LoadBalancer services.
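One Minikube-specific note: the EXTERNAL-IP of a LoadBalancer service stays <pending> there (as in the kubectl get svc output above) unless a tunnel is running to hand out external IPs; a minimal sketch:

# Run in a separate terminal; keeps routes open while it runs
minikube tunnel

# EXTERNAL-IP should now be populated
kubectl get svc kwetter-service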

Set static external IP for my load balancer on GKE

I am trying to set up a static external IP for my load balancer on GKE but having no luck. Here is my Kubernetes service config file:
kind: Service
apiVersion: v1
metadata:
  name: myAppService
spec:
  selector:
    app: myApp
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
  type: LoadBalancer
  loadBalancerIP: *********
This doesn't work. I expect to see my external IP as ********* but it just says pending:
➜ git:(master) kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ********* <none> 443/TCP 5m
myAppService ********* <pending> 3001:30126/TCP 5m
More details:
➜ git:(master) kubectl describe services
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: *********
Port: https 443/TCP
Endpoints: *********
Session Affinity: ClientIP
Events: <none>
Name: myAppService
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myApp
Type: LoadBalancer
IP: *********
Port: <unset> 3001/TCP
NodePort: <unset> 30126/TCP
Endpoints:
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 20s 7 service-controller Normal CreatingLoadBalancer Creating load balancer
5m 19s 7 service-controller Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Failed to create load balancer for service default/myAppService: Cannot EnsureLoadBalancer() with no hosts
Any ideas?
I've encountered the same problem, but after reading the docs carefully, it turned out that I was just reserving the static IP incorrectly.
A service of type LoadBalancer creates a network load balancer, which is regional. Therefore, the static IP address you reserve also needs to be regional (in the region of your cluster).
When I changed to this solution, everything worked fine for me...
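For reference, reserving a regional address might look like the following (the address name and region below are placeholders, not from the original post):

# Reserve a REGIONAL static IP in the same region as the cluster
gcloud compute addresses create my-app-ip --region=us-central1

# Print the reserved address to paste into spec.loadBalancerIP
gcloud compute addresses describe my-app-ip --region=us-central1 --format='value(address)'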
This got me stuck as well, I hope someone finds this helpful.
In addition to what Dirk said, if you happen to reserve a global static IP address as opposed to a regional one, you need to use Ingress as described in the documentation: Configuring Domain Names with Static IP Addresses, specifically step 2b.
So basically you reserve the static IP:
gcloud compute addresses create helloweb-ip --global
and add an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  # this is where you add your reserved IP
  annotations:
    kubernetes.io/ingress.global-static-ip-name: helloweb-ip
  labels:
    app: hello
spec:
  backend:
    serviceName: helloweb-backend
    servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: helloweb-backend
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
The doc also describes how to assign a static IP if you choose type "LoadBalancer", under step 2a.