Ingress points to the wrong port on a service - Kubernetes

I have a Kubernetes service that exposes two ports as follows:
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
And here is the ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: f5
    virtual-server.f5.com/http-port: "80"
    virtual-server.f5.com/ip: controller-default
    virtual-server.f5.com/round-robin: round-robin
  creationTimestamp: 2018-10-05T18:54:45Z
  generation: 2
  name: m-ingress
  namespace: m-ns
  resourceVersion: "39557812"
  selfLink: /apis/extensions/v1beta1/namespaces/m-ns
  uid: 20241db9-c8d0-11e8-9fac-0050568d4d4a
spec:
  rules:
  - host: www.myhost.com
    http:
      paths:
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /first/path
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /second/path
status:
  loadBalancer:
    ingress:
    - ip: 172.31.74.89
But when I go to www.myhost.com/first/path I end up at the backend listening on port 8888 of m-svc. What might be going on?
Another piece of information: I am sharing this service between two Ingresses that point to different ports on the same service. Is that a problem? There is a different Ingress pointing at port 8888 of this service, and it works fine.
Also, I am using an F5 controller.
After a lot of time investigating this, it looks like the root cause is in the F5s: because the name of the backend (the Kubernetes service) is the same, the controller creates only one entry in the pool and routes all requests to that backend on the one port that gets registered in the F5 policy. Is there a fix for this? A workaround is to create a unique service for each port, but I don't want to make that change. Is this possible at the F5 level?
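For reference, the workaround would look roughly like this: one Service per port, both selecting the same pods, so each Ingress backend gets a uniquely named pool entry on the F5. This is only a sketch; the service names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: m-svc-first      # placeholder name
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: first
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: m-svc-second     # placeholder name
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: second
    port: 8888
    targetPort: 8888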

From what I see, you don't have a Selector field in your service. Without it, the Service will not forward traffic to any backend or pod. What makes you think it's going to port 8888? What's strange is that you have Endpoints in your service. Did you create them manually?
The service would have to be something like this:
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
Then in your deployment definition:
selector:
  matchLabels:
    app: my-application
Or in a pod:
apiVersion: v1
kind: Pod
metadata:
  annotations: { ... }
  labels:
    app: my-application
You should also be able to describe your Endpoints:
$ kubectl describe endpoints m-svc
Name: m-svc
Namespace: default
Labels: app=my-application
Annotations: <none>
Subsets:
Addresses: x.x.x.x
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
first 8080 TCP
second 8888 TCP
Events: <none>

Your Service appears to be what is called a headless service: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services. This would explain why the Endpoints object was created automatically.
Something is amiss, because it should be impossible for your HTTP request to arrive at your pods without .spec.selector populated.
I suggest deleting the Service you created, deleting the Endpoints object with the same name, and then recreating the Service with type=ClusterIP and spec.selector properly populated.
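A minimal sketch of the recreated Service, assuming the pods carry the app=my-application label shown in the question:
apiVersion: v1
kind: Service
metadata:
  name: m-svc
  namespace: m-ns
spec:
  type: ClusterIP
  selector:
    app: my-application   # must match the pod labels
  ports:
  - name: first
    port: 8080
    targetPort: 8080
  - name: second
    port: 8888
    targetPort: 8888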

Related

AKS load balancer service accessible on wrong port

I have a simple ASP.NET core Web API. It works locally. I deployed it in Azure AKS using the following yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: sa-be
        image: francotiveron/anapi:latest
        resources:
          limits:
            memory: "64Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
sa-be-s   LoadBalancer   10.0.157.200   20.53.188.247   8080:32533/TCP   4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/; instead it is reachable only at http://20.53.188.247:8080/.
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?
Yes, this is expected.
Explained: Kubernetes Service Ports - please read the full article to understand what is going on in the background.
The LoadBalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our
example) and targetPort is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloud’s APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.
Now the main part:
Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually like this.
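To make that glue concrete, here is a sketch of the same Service with the node port pinned explicitly. The value 30080 is an arbitrary choice from the 30000-32767 range; omit the field to let Kubernetes pick one.
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080        # what the cloud load balancer listens on
    targetPort: 80    # what the container listens on
    nodePort: 30080   # the node port gluing the load balancer to the cluster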

Unable to forward traffic using NodePort

I have an application running inside a minikube K8s cluster. It's a simple REST endpoint. The issue is that after deployment I am unable to access that application from my local computer using the http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at: 127.0.0.1:8000 from my local desktop.
Is this the right way? I believe it isn't, as I am forwarding traffic directly to the pod, and this port forwarding won't survive once the pod is deleted.
What am I missing here, and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this, but it doesn't seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
  - port: 8000
  selector:
    app: rest-api
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
      - image: oneImage:latest
        name: rest-api
You are having issues because your Service is placed in the default namespace while your Deployment is in the rest-api-namespace namespace.
I have deployed your yaml files, and when I describe the service there are no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The solution is to create the Service in the same namespace as the Deployment (a manifest sketch follows the output below). Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
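A sketch of the Service manifest with the namespace set explicitly; everything else is unchanged from the question:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace   # must match the Deployment's namespace
spec:
  type: NodePort
  ports:
  - port: 8000
  selector:
    app: rest-api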
An alternative is to add the Endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # note: the Endpoints object and the Service need to have the same name
subsets:
- addresses:
  - ip: 192.0.2.42 # IP of the pod
  ports:
  - port: 8000
Since port forwarding works, the REST API pod is properly connected to your Deployment; you just need to reach it through the Service. In that case the service can be resolved the following way.
First find out the minikube IP:
minikube ip
Then the node port of your service:
kubectl get service rest-api-np -n rest-api-namespace
Once you have these two details, just open http://(minikube-ip):(node-port)
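As a sketch, assuming the Service ends up in rest-api-namespace, the lookups can also be scripted, or minikube can print the URL directly:
# print just the node port
kubectl get service rest-api-np -n rest-api-namespace -o jsonpath='{.spec.ports[0].nodePort}'
# or let minikube resolve the full URL
minikube service rest-api-np -n rest-api-namespace --url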

Ingress on GKE: specify a health check per service

I am trying to use an Ingress to load balance 2 services on Google Kubernetes Engine.
Here is the Ingress config for this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/keys
        backend:
          serviceName: etcd-np
          servicePort: 2379
where web is an example service from the Google samples:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
But the second service is an etcd cluster exposed with a NodePort Service:
---
apiVersion: v1
kind: Service
metadata:
  name: etcd-np
spec:
  ports:
  - port: 2379
    targetPort: 2379
  selector:
    app: etcd
  type: NodePort
But only the first Ingress rule works properly. I see in the logs:
ingress.kubernetes.io/backends: {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
etcd-np itself works properly; it is not a problem with etcd. I think the problem is that the etcd server answers with 404 to a GET / request, and a health check at the Ingress level therefore refuses to use it.
That's why I have 2 questions:
1) How can I provide health check URLs for each backend path on the Ingress?
2) How can I debug such issues? What I see now is:
kubectl describe ingress basic-ingress
Name: basic-ingress
Namespace: default
Address: 4.4.4.4
Default backend: default-http-backend:80 (10.52.6.2:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* web:8080 (10.52.8.10:8080)
/v2/keys etcd-np:2379 (10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379)
Annotations: ingress.kubernetes.io/backends:
{"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-basic-ingress--ebfd7339a961462d
ingress.kubernetes.io/target-proxy: k8s-tp-default-basic-ingress--ebfd7339a961462d
ingress.kubernetes.io/url-map: k8s-um-default-basic-ingress--ebfd7339a961462d
Events: <none>
But it does not give me any info about this incident.
UPDATE:
kubectl describe svc etcd-np
Name: etcd-np
Namespace: default
Labels: <none>
Annotations: Selector: app=etcd
Type: NodePort
IP: 10.4.7.20
Port: <unset> 2379/TCP
TargetPort: 2379/TCP
NodePort: <unset> 30195/TCP
Endpoints: 10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
According to the docs:
A Service exposed through an Ingress must respond to health checks from the load balancer. Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
- Serve a response with an HTTP 200 status to GET requests on the / path.
- Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
For example, suppose a container specifies this readiness probe:
...
readinessProbe:
  httpGet:
    path: /healthy
Then if the handler for the container's /healthy path returns an
HTTP 200 status, the load balancer considers the container to be
alive and healthy.
Now, since etcd has a health endpoint at /health, the readiness probe will look like:
...
readinessProbe:
  httpGet:
    path: /health
This becomes a bit tricky if mTLS is enabled in etcd; to handle that case, check the docs.
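A sketch of how this could look in the etcd pod spec, assuming plain HTTP on the client port 2379 (container name and image are illustrative):
containers:
- name: etcd                             # illustrative container name
  image: quay.io/coreos/etcd:v3.4.13     # illustrative image/tag
  ports:
  - containerPort: 2379
  readinessProbe:
    httpGet:
      path: /health                      # etcd's health endpoint
      port: 2379
    initialDelaySeconds: 5
    periodSeconds: 10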

How to deploy custom nginx app on kubernetes?

I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: privateRepo/my-nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without putting a proxy server in front or changing iptables rules (not a good idea). The NodePort service type always exposes the service in the port range 30000-32767.
So at any point, your service will be reachable at ipaddress:some_higher_port.
You can run a proxy in front that redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well. Note that this proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
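For the cloud case, a sketch of the same Service with the type switched to LoadBalancer (ports unchanged from the question). On a bare-metal Raspberry Pi cluster this would stay in Pending unless something like MetalLB provides the external IP.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: LoadBalancer    # was NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx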

TLS in Ingress setup timeout

I deployed a basic service in my kubernetes cluster. I handle routing using this Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: owncloud
          servicePort: 80
I generated a Kubernetes secret in the default namespace from a generated key and certificate. I named it example-tls and added it to the Ingress config under spec:
  tls:
  - secretName: example-tls
    hosts:
    - example.com
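For clarity, merged into the full manifest this would look roughly like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud
spec:
  tls:
  - secretName: example-tls   # the secret created in the default namespace
    hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: owncloud
          servicePort: 80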
When I GET the service using https (curl -k https://example.com), it times out:
curl: (7) Failed to connect to example.com port 443: Connection timed out
It works over plain HTTP.
What could be wrong here?
Here is the describe output of the Ingress in question:
Name: owncloud
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
example.com
owncloud:80 (10.40.0.4:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"owncloud","namespace":"default"},"spec":{"rules":[{"host":"example.com","http":{"paths":[{"backend":{"serviceName":"owncloud","servicePort":80}}]}}]}}
Events: <none>
My Ingress Controller service:
$ kubectl describe service traefik-ingress-service -n kube-system
Name: traefik-ingress-service
Namespace: kube-system
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"port...
Selector: k8s-app=traefik-ingress-lb
Type: NodePort
IP: 10.100.230.143
Port: web 80/TCP
TargetPort: 80/TCP
NodePort: web 32001/TCP
Endpoints: 10.46.0.1:80
Port: admin 8080/TCP
TargetPort: 8080/TCP
NodePort: admin 30480/TCP
Endpoints: 10.46.0.1:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>