I deployed a basic service in my Kubernetes cluster. I handle routing using this Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: owncloud
          servicePort: 80
I generated a Kubernetes secret in the default namespace from a generated key and certificate, named it example-tls, and added it to the Ingress config under spec:
  tls:
  - secretName: example-tls
    hosts:
    - example.com
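That is, the combined spec now reads:

spec:
  tls:
  - secretName: example-tls
    hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: owncloud
          servicePort: 80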
When I GET the service over HTTPS (curl -k https://example.com), it times out:
curl: (7) Failed to connect to example.com port 443: Connection timed out
It works over HTTP. What could be wrong here?
Here is the describe output for the Ingress in question:
Name:             owncloud
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host         Path  Backends
  ----         ----  --------
  example.com
                     owncloud:80 (10.40.0.4:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"owncloud","namespace":"default"},"spec":{"rules":[{"host":"example.com","http":{"paths":[{"backend":{"serviceName":"owncloud","servicePort":80}}]}}]}}
Events:  <none>
My Ingress Controller service:
$ kubectl describe service traefik-ingress-service -n kube-system
Name:                     traefik-ingress-service
Namespace:                kube-system
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"port...
Selector:                 k8s-app=traefik-ingress-lb
Type:                     NodePort
IP:                       10.100.230.143
Port:                     web  80/TCP
TargetPort:               80/TCP
NodePort:                 web  32001/TCP
Endpoints:                10.46.0.1:80
Port:                     admin  8080/TCP
TargetPort:               8080/TCP
NodePort:                 admin  30480/TCP
Endpoints:                10.46.0.1:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
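Looking at the controller Service above: it only exposes ports 80 (web) and 8080 (admin), and nothing listens on 443, which on its own would explain the HTTPS connection timeout. A minimal sketch of the missing piece, assuming a Traefik 1.x-style setup (the port numbers mirror the existing entries; the websecure name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  type: NodePort
  ports:
  - name: web
    port: 80
    targetPort: 80
  - name: websecure   # illustrative name for an HTTPS entrypoint
    port: 443
    targetPort: 443
  - name: admin
    port: 8080
    targetPort: 8080

Traefik itself also needs an entrypoint bound to :443 (for example --entryPoints='Name:https Address::443 TLS' in Traefik 1.x) before this port will accept connections.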
I have a Kubernetes cluster v1.19.16 set up on a bare-metal Ubuntu 18.04 server, and I want to reach the cluster's Jenkins service at http://jenkins.company.com. The HAProxy frontend and backend have already been configured.
My service.yaml content is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  namespace: jenkins
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
The ingress-resource.yaml content is as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "jenkins.company.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: jenkins-svc
          servicePort: 8080
# kubectl get service -n jenkins
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
jenkins-svc   ClusterIP   10.96.136.255   <none>        8080/TCP   20m
# kubectl get ing jenkins-ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME              CLASS    HOSTS                 ADDRESS   PORTS   AGE
jenkins-ingress   <none>   jenkins.company.com             80      5h42m
# kubectl describe ingress -n jenkins
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name:             jenkins-ingress
Namespace:        jenkins
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host             Path  Backends
  ----             ----  --------
  jenkins.dpi.com
                   /     jenkins-svc:8080 (10.244.0.16:80)
Annotations:       ingress.kubernetes.io/rewrite-target: /
                   kubernetes.io/ingress.class: nginx
Events:            <none>
When I try to access http://jenkins.company.com, the browser shows an error message. What am I missing here?
The issue is that the service port and container port are swapped. Jenkins' default port is 8080, so the targetPort must be 8080; I assume you want the Service itself on port 80:
ports:
- protocol: TCP
  port: 80
  targetPort: 8080
and the Ingress should be:
spec:
  rules:
  - host: "jenkins.company.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: jenkins-svc
          servicePort: 80
port: the port of this Service.
targetPort: the target port on the pod(s) to forward traffic to.
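A minimal sketch of how the two fields relate, assuming a pod whose container listens on 8080 (the names here are illustrative, not from the question):

apiVersion: v1
kind: Service
metadata:
  name: example-svc        # illustrative name
spec:
  selector:
    app: example           # illustrative label
  ports:
  - protocol: TCP
    port: 80               # clients reach the Service at example-svc:80
    targetPort: 8080       # the Service forwards to containerPort 8080 on the pods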
See also: Difference between targetPort and port in Kubernetes Service definition
I would like to be able to reach the main page located at /usr/share/nginx/html/index.html in my pod, using the URL http://myexternalclusterIP/web. Instead of serving the main page, the request tries to find the /web path inside the pod. If I use the DNS hostname, it all works fine; why doesn't it work with the IP address?
My ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-my
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /web
        backend:
          service:
            name: newsite
            port:
              number: 1856
My svc config:
Name:              newsite
Namespace:         default
Labels:            app=newsite
Annotations:       <none>
Selector:          app=newsite
Type:              ClusterIP
IP:                10.108.204.71
Port:              <unset>  1856/TCP
TargetPort:        80/TCP
Endpoints:         10.244.0.126:80
Session Affinity:  None
Events:            <none>
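One thing worth checking, offered as a sketch rather than a confirmed diagnosis: on recent ingress-nginx versions, stripping a path prefix is done with a capture group rather than a bare rewrite-target: /, per the ingress-nginx rewrite documentation. Keeping the service name and port from the question, that would look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-my
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2 is the part after /web
spec:
  rules:
  - http:
      paths:
      - pathType: ImplementationSpecific   # regex paths need this pathType
        path: /web(/|$)(.*)
        backend:
          service:
            name: newsite
            port:
              number: 1856

Without the rewrite actually applying, a request for /web is forwarded as /web to the pod, which matches the behavior described.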
I can't access my services via the Traefik Ingress. When I request 192.168.1.2/elastisearch on the host machine, I receive a 404 response from Traefik.
When I inspect the ingress, I get elasticsearch-api-clusterip:9200 (<error: endpoints "elasticsearch-api-clusterip" not found>), but the elasticsearch-api-clusterip endpoint exists:
Name:              elasticsearch-api-clusterip
Namespace:         elastic
Labels:            app=elasticsearch
Annotations:       <none>
Selector:          app=elasticsearch
Type:              ClusterIP
IP:                10.108.147.198
Port:              <unset>  9200/TCP
TargetPort:        9200/TCP
Endpoints:         10.244.0.44:9200,10.244.2.13:9200,10.244.4.15:9200
Session Affinity:  None
Events:            <none>
This is my Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: "v-ingress"
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /elasticsearch
        backend:
          serviceName: elasticsearch-api-clusterip
          servicePort: 9200
      - path: /traefik-ui
        backend:
          serviceName: traefik-web-ui
          servicePort: web
A request made directly to the endpoint gives me a response from the service.
Thank you for your help.
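One likely culprit, judging from the output above: the Ingress is in the kube-system namespace while the Service is in elastic. An Ingress can only reference Services in its own namespace, which is consistent with the endpoints "elasticsearch-api-clusterip" not found error. A sketch of the fix is to move the Ingress into the Service's namespace:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: "v-ingress"
  namespace: elastic
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /elasticsearch
        backend:
          serviceName: elasticsearch-api-clusterip
          servicePort: 9200

The /traefik-ui rule would then need its own Ingress alongside the traefik-web-ui Service in that Service's namespace.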
I'm trying to use an Ingress to load-balance two services on Google Kubernetes Engine. Here is the Ingress config:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/keys
        backend:
          serviceName: etcd-np
          servicePort: 2379
where web is an example service from the Google samples:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
The second service is an etcd cluster exposed via a NodePort Service:
---
apiVersion: v1
kind: Service
metadata:
  name: etcd-np
spec:
  ports:
  - port: 2379
    targetPort: 2379
  selector:
    app: etcd
  type: NodePort
But only the first ingress rule works properly. I see in the backends annotation:
ingress.kubernetes.io/backends: {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
etcd-np itself works properly, so this is not an etcd problem. I think the issue is that the etcd server answers GET / with a 404, and some health check at the ingress level refuses to use the backend. That's why I have two questions:
1) How can I provide health-check URLs for each backend path on the ingress?
2) How can I debug such issues? What I see now is:
kubectl describe ingress basic-ingress
Name:             basic-ingress
Namespace:        default
Address:          4.4.4.4
Default backend:  default-http-backend:80 (10.52.6.2:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /*        web:8080 (10.52.8.10:8080)
        /v2/keys  etcd-np:2379 (10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379)
Annotations:
  ingress.kubernetes.io/backends: {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule: k8s-fw-default-basic-ingress--ebfd7339a961462d
  ingress.kubernetes.io/target-proxy: k8s-tp-default-basic-ingress--ebfd7339a961462d
  ingress.kubernetes.io/url-map: k8s-um-default-basic-ingress--ebfd7339a961462d
Events:  <none>
But it does not give me any information about this incident.
UPDATE:
kubectl describe svc etcd-np
Name:                     etcd-np
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=etcd
Type:                     NodePort
IP:                       10.4.7.20
Port:                     <unset>  2379/TCP
TargetPort:               2379/TCP
NodePort:                 <unset>  30195/TCP
Endpoints:                10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
According to the docs:
A Service exposed through an Ingress must respond to health checks from the load balancer. Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:

- Serve a response with an HTTP 200 status to GET requests on the / path.
- Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.

For example, suppose a container specifies this readiness probe:

...
readinessProbe:
  httpGet:
    path: /healthy

Then, if the handler for the container's /healthy path returns an HTTP 200 status, the load balancer considers the container to be alive and healthy.
Now, since etcd has a health endpoint at /health, the readiness probe will look like:

...
readinessProbe:
  httpGet:
    path: /health
This becomes a bit tricky if mTLS is enabled in etcd; see the etcd docs for that case.
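To make the elided snippet concrete, here is a minimal sketch of how the probe could sit in the etcd pod spec, assuming the standard etcd client port 2379 and plain HTTP (the container name and image tag are illustrative):

containers:
- name: etcd                           # illustrative
  image: quay.io/coreos/etcd:v3.4.13   # illustrative tag
  ports:
  - containerPort: 2379
  readinessProbe:
    httpGet:
      path: /health                    # etcd's built-in health endpoint
      port: 2379                       # must match the port the Service targets
    initialDelaySeconds: 5
    periodSeconds: 10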
I have a Kubernetes service that exposes two ports as follows:
Name:              m-svc
Namespace:         m-ns
Labels:
Annotations:       <none>
Selector:          app=my-application
Type:              ClusterIP
IP:                10.233.43.40
Port:              first  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.233.115.178:8080,10.233.122.166:8080
Port:              second  8888/TCP
TargetPort:        8888/TCP
Endpoints:         10.233.115.178:8888,10.233.122.166:8888
Session Affinity:  None
Events:            <none>
And here is the ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: f5
    virtual-server.f5.com/http-port: "80"
    virtual-server.f5.com/ip: controller-default
    virtual-server.f5.com/round-robin: round-robin
  creationTimestamp: 2018-10-05T18:54:45Z
  generation: 2
  name: m-ingress
  namespace: m-ns
  resourceVersion: "39557812"
  selfLink: /apis/extensions/v1beta1/namespaces/m-ns
  uid: 20241db9-c8d0-11e8-9fac-0050568d4d4a
spec:
  rules:
  - host: www.myhost.com
    http:
      paths:
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /first/path
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /second/path
status:
  loadBalancer:
    ingress:
    - ip: 172.31.74.89
But when I go to www.myhost.com/first/path, I end up at the service that is listening on port 8888 of m-svc. What might be going on?
Another piece of information: I am sharing this service between two Ingresses that point to different ports on the same service; could that be a problem? The other Ingress points to port 8888 on this service, and that one works fine.
Also, I am using an F5 controller.
After a lot of time investigating this, it looks like the root cause is in the F5s: because the name of the backend (the Kubernetes service) is the same, only one entry is created in the pool, and requests are routed to that backend and the one port that gets registered in the F5 policy. Is there a fix for this? A workaround is to create a unique service for each port, but I don't want to make that change; is this possible at the F5 level?
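For reference, the workaround mentioned above would look roughly like this: two Services selecting the same pods, each exposing one port (the Service names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: m-svc-first      # illustrative name
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: first
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: m-svc-second     # illustrative name
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: second
    port: 8888
    targetPort: 8888

Each Ingress would then reference its own Service, giving the F5 controller two distinct backend names to register.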
From what I see, you don't have a Selector field in your service. Without it, the service will not forward to any backend or pod. What makes you think traffic is going to port 8888? What's strange is that your service does have Endpoints; did you create them manually?
The service would have to be something like this:
Name:              m-svc
Namespace:         m-ns
Labels:
Annotations:       <none>
Selector:          app=my-application
Type:              ClusterIP
IP:                10.233.43.40
Port:              first  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.233.115.178:8080,10.233.122.166:8080
Port:              second  8888/TCP
TargetPort:        8888/TCP
Endpoints:         10.233.115.178:8888,10.233.122.166:8888
Session Affinity:  None
Events:            <none>
Then in your deployment definition:
selector:
  matchLabels:
    app: my-application
Or in a pod:
apiVersion: v1
kind: Pod
metadata:
  annotations: { ... }
  labels:
    app: my-application
You should also be able to describe your Endpoints:
$ kubectl describe endpoints m-svc
Name:         m-svc
Namespace:    default
Labels:       app=my-application
Annotations:  <none>
Subsets:
  Addresses:          x.x.x.x
  NotReadyAddresses:  <none>
  Ports:
    Name    Port  Protocol
    ----    ----  --------
    first   8080  TCP
    second  8888  TCP
Events:  <none>
Your Service appears to be what is called a headless service: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services. This would explain why the Endpoints object was created automatically.
Something is amiss, because it should be impossible for your HTTP request to arrive at your pods without .spec.selector populated.
I suggest deleting the Service you created, deleting the Endpoints object with the same name, and then recreating the Service with type=ClusterIP and .spec.selector properly populated.
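A minimal sketch of that recreated Service, using the ports and selector from the describe output above:

apiVersion: v1
kind: Service
metadata:
  name: m-svc
  namespace: m-ns
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - name: first
    port: 8080
    targetPort: 8080
  - name: second
    port: 8888
    targetPort: 8888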