UPDATE:
The issue persists, but I worked around it by using a subdomain instead of a path:
ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
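With this in place, the dashboard is served on its own subdomain, so no path rewriting into the application is needed. Assuming dashboard.XXXX resolves to the ingress controller (via DNS or /etc/hosts), a quick check looks like this (-k because the dashboard's certificate may be self-signed):

ubuntu@df1:~$ curl -k https://dashboard.XXXX/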
This error has been bothering me for some time, and I hope with your help I can get to the bottom of it.
I have one K8S cluster (a single node so far, to avoid any network-related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I installed the nginx ingress controller.
Here is the ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is referenced in the above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I do a curl to the cluster ip of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I do a curl to the hostname with the path to the service, I get a 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname is resolved to the cluster ip of the nginx ingress service (nodeport):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana and that path does not exist in the Grafana application, hence the 404. You need to configure Grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: you are not adding the /grafana route. But most likely / is already taken by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the context root explicitly, as shown below.
You may put your grafana.ini in a ConfigMap, mount it over the original grafana.ini location, and recreate the deployment (see the sketch after the ini snippet).
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
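A minimal sketch of that ConfigMap approach, assuming the stock Grafana image's default config path /etc/grafana/grafana.ini (the ConfigMap name grafana-ini is made up; with the Helm chart you can alternatively set these keys under the chart's grafana.ini values):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini   # hypothetical name
  namespace: default
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/

and in the Grafana Deployment's pod spec:

      volumes:
      - name: grafana-ini
        configMap:
          name: grafana-ini
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-ini
          mountPath: /etc/grafana/grafana.ini  # default ini location in the Grafana image; verify for your setup
          subPath: grafana.ini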
I can see there is no ingressClassName specified for your ingress. It should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...
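To see which ingress classes actually exist in the cluster (the NAME column is what goes into spec.ingressClassName), you can run:

$ kubectl get ingressclass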
Related
I have Keycloak installed on my Kubernetes cluster.
The default ingress which Keycloak creates looks like this.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    route.openshift.io/termination: passthrough
  creationTimestamp: "2022-11-09T13:08:00Z"
  generation: 1
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
  name: keycloak-kc-ingress
  namespace: default
  ownerReferences:
  - apiVersion: k8s.keycloak.org/v2alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Keycloak
    name: keycloak-kc
    uid: 67a18d00-4bee-4587-b330-cdaf21b39084
  resourceVersion: "155002"
  uid: 87c2aff4-1489-4ba9-bdf6-9fe1a288c800
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.3
After installing ingress-nginx and adding kubernetes.io/ingress.class=nginx annotation, everything works.
For certain reasons, however, I need to use nginx-ingress.
My new ingress looks like this.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # route.openshift.io/termination: passthrough
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
    # target: keycloak-kc-service
  name: keycloak-kc-ingress
  namespace: default
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: accounts.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - accounts.example.com
    secretName: keycloak-tls-secret
Unfortunately, this ingress returns a "502 Bad Gateway" error.
We can't figure it out. Please help.
Information for debugging
kubectl get deployments -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default keycloak-operator 2/2 2 2 141m
kube-system cilium-operator 1/1 1 1 148m
kube-system coredns 2/2 2 2 148m
kube-system konnectivity-agent 2/2 2 2 148m
kube-system metrics-server 2/2 2 2 148m
kubernetes-dashboard dashboard-metrics-scraper 2/2 2 2 148m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress 1/1 1 1 127m
olm catalog-operator 1/1 1 1 142m
olm olm-operator 1/1 1 1 142m
olm packageserver 2/2 2 2 142m
kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default keycloak-kc-discovery ClusterIP None <none> 7800/TCP 114m
default keycloak-kc-service ClusterIP 10.240.18.67 <none> 8443/TCP 114m
default keycloak-operator ClusterIP 10.240.24.103 <none> 80/TCP 141m
default kubernetes ClusterIP 10.240.16.1 <none> 443/TCP 149m
default postgres-db ClusterIP 10.240.18.157 <none> 5432/TCP 140m
kube-system hcloud-csi-controller-metrics ClusterIP 10.240.30.190 <none> 9189/TCP 149m
kube-system hcloud-csi-node-metrics ClusterIP 10.240.26.123 <none> 9189/TCP 149m
kube-system kube-dns ClusterIP 10.240.16.10 <none> 53/TCP,53/UDP 149m
kube-system metrics-server ClusterIP 10.240.31.184 <none> 443/TCP 149m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.240.25.29 <none> 8000/TCP 149m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress LoadBalancer 10.240.26.173 10.0.0.3,167.235.123.123,2a01:4f8:1c1f:6484::1 80:31670/TCP,443:30557/TCP 128m
olm operatorhubio-catalog ClusterIP 10.240.22.30 <none> 50051/TCP 142m
olm packageserver-service ClusterIP 10.240.23.246 <none>
I am trying to access kubernetes dashboard on my local PC through Ingress. The steps I've done so far are:
Install Nginx Ingress by:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
PS D:\dev\kubernetes-dashboard-ingress> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-7rzdl 0/1 Completed 0 148m
pod/ingress-nginx-admission-patch-295pf 0/1 Completed 0 148m
pod/ingress-nginx-controller-7fc74cf778-jz6ts 1/1 Running 0 148m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.106.183.115 localhost 80:30673/TCP,443:32591/TCP 148m
service/ingress-nginx-controller-admission ClusterIP 10.103.188.122 <none> 443/TCP 148m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 148m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-7fc74cf778 1 1 1 148m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 16s 148m
job.batch/ingress-nginx-admission-patch 1/1 16s 148m
Install kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
When I inspect the kubernetes-dashboard namespace, I notice that the service is running on port 443:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
So I created an Ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
and after applying this rule:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get ingress -n kubernetes-dashboard -o wide
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard-ingress <none> my-dashboard.com localhost 80 121m
I just added the following entry to my Windows hosts file:
127.0.0.1 my-dashboard.com
However, I got nothing when I tried to access the dashboard through my browser (http://my-dashboard.com). Have I missed anything?
I was following the tutorial here: https://www.youtube.com/watch?v=X48VuDVv0do. The tutorial was done using minikube, so the dashboard there was available on port 80, whereas the one I installed directly from GitHub above is available on port 443. Do I need to configure some certificate / secret? I noticed that a few secrets were created by kubernetes-dashboard:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get secret -n kubernetes-dashboard -o wide
NAME TYPE DATA AGE
default-token-97skl kubernetes.io/service-account-token 3 140m
kubernetes-dashboard-certs Opaque 0 140m
kubernetes-dashboard-csrf Opaque 1 140m
kubernetes-dashboard-key-holder Opaque 2 140m
kubernetes-dashboard-token-rwgs4 kubernetes.io/service-account-token 3 140m
and if I describe the Ingress:
PS D:\dev\kubernetes-dashboard-ingress> kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: dashboard-ingress
Namespace: kubernetes-dashboard
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
my-dashboard.com
/ kubernetes-dashboard:443 (10.1.0.106:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7m4s (x10 over 144m) nginx-ingress-controller Scheduled for sync
I know I can access the dashboard using kubectl proxy - but I would like to test out Ingress (learning it). Thank you in advance!
I'm running the following:
Docker Desktop 3.2.2 (61853)
Engine: 20.10.5
Compose: 1.28.5
Kubernetes: v1.19.7
Your service name seems to be wrong:
You listed your services:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
In your ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-dashboard # <<< This line should be kubernetes-dashboard
            port:
              number: 443
OK, figured out the issue. My request (in Chrome) went through the corporate proxy, which did not forward the request on to my Kubernetes cluster. After adding 'my-dashboard.com' to the no-proxy list, I can access it through the browser.
Thank you Thomas for the pointer!
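For reference, and as an assumption about how such a setup is usually configured: command-line tools that honor the NO_PROXY environment variable can be exempted on Windows like this, while browsers typically need the host added to the system proxy's bypass/exceptions list instead:

C:\> setx NO_PROXY "my-dashboard.com,localhost,127.0.0.1"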
Hey, it's been quite a few days struggling to make the sample book app run. I am new to Istio and trying to understand it. I followed this demo of another way of setting up bookinfo. I am using minikube in a VirtualBox machine with Docker as the driver. I set up MetalLB as the load balancer for the ingress gateway; here is the ConfigMap I used for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: custom-ip-space
      protocol: layer2
      addresses:
      - 192.168.49.2/28
192.168.49.2 is the result of the command minikube ip.
The ingress gateway YAML file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
and the output of kubectl get svc -n istio-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.111.105.179 <none> 3000/TCP 34m
istio-citadel ClusterIP 10.100.38.218 <none> 8060/TCP,15014/TCP 34m
istio-egressgateway ClusterIP 10.101.66.207 <none> 80/TCP,443/TCP,15443/TCP 34m
istio-galley ClusterIP 10.103.112.155 <none> 443/TCP,15014/TCP,9901/TCP 34m
istio-ingressgateway LoadBalancer 10.97.23.39 192.168.49.0 15020:32717/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32199/TCP,15030:30010/TCP,15031:30189/TCP,15032:31134/TCP,15443:30748/TCP 34m
istio-pilot ClusterIP 10.108.133.31 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 34m
istio-policy ClusterIP 10.100.74.207 <none> 9091/TCP,15004/TCP,15014/TCP 34m
istio-sidecar-injector ClusterIP 10.97.224.99 <none> 443/TCP,15014/TCP 34m
istio-telemetry ClusterIP 10.101.165.139 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 34m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 34m
jaeger-collector ClusterIP 10.111.188.83 <none> 14267/TCP,14268/TCP,14250/TCP 34m
jaeger-query ClusterIP 10.103.148.144 <none> 16686/TCP 34m
kiali ClusterIP 10.111.57.222 <none> 20001/TCP 34m
prometheus ClusterIP 10.107.204.95 <none> 9090/TCP 34m
tracing ClusterIP 10.104.88.173 <none> 80/TCP 34m
zipkin ClusterIP 10.111.162.93 <none> 9411/TCP 34m
and when trying to curl 192.168.49.0:80/productpage I am getting:
* Trying 192.168.49.0...
* TCP_NODELAY set
* Immediate connect fail for 192.168.49.0: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
myhost@k8s:~$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
and before setting up MetalLB, I was getting connection refused!
Any solution for this, please? It's been 5 days struggling to fix it.
I followed the steps here and all steps are ok!
In my opinion, this is a problem with the MetalLB configuration.
You are trying to give MetalLB control over IPs from the 192.168.49.2/28 network.
We can calculate for 192.168.49.2/28 network: HostMin=192.168.49.1 and HostMax=192.168.49.14.
As we can see, your istio-ingressgateway LoadBalancer Service is assigned the address 192.168.49.0 and I think that is the cause of the problem.
I recommend changing from 192.168.49.2/28 to a range, such as 192.168.49.10-192.168.49.20.
I've created an example to illustrate you how your configuration can be changed.
As you can see, at the beginning I had the configuration exactly like you (I also couldn't connect to the server using the curl command):
$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.109.75.19 192.168.49.0
$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
First, I modified the config ConfigMap:
NOTE: I changed 192.168.49.2/28 to 192.168.49.10-192.168.49.20
$ kubectl edit cm config -n metallb-system
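After the edit, the data section of the ConfigMap looks like this (everything else is unchanged):

    address-pools:
    - name: custom-ip-space
      protocol: layer2
      addresses:
      - 192.168.49.10-192.168.49.20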
Then I restarted all the controller and speaker Pods to force MetalLB to use the new config (see: MetalLB ConfigMap update).
$ kubectl delete pod -n metallb-system --all
pod "controller-65db86ddc6-gf49h" deleted
pod "speaker-7l66v" deleted
After some time, we should see a new EXTERNAL-IP assigned to the istio-ingressgateway Service:
$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP AGE
istio-ingressgateway LoadBalancer 10.106.170.227 192.168.49.10
Finally, we can check if it works as expected:
$ curl 192.168.49.10:80/productpage
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
...
I created an Ingress following the guide in the 'Kubernetes in Action' book on GKE, but the Ingress doesn't work: it can't be accessed from the public IP address of the Ingress.
Create the ReplicaSet to create the pods.
Create the Service (followed the NodePort method in 'Kubernetes in Action').
Create the Ingress.
The ReplicaSet, Service, and Ingress are created successfully, the NodePort can be accessed from the public IP address, and there is no UNHEALTHY in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet:
curl 34.98.92.110 gets some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
root@kubia-lrt9x:/#
Does anybody know how to debug this?
The NodePort has been added to the firewall; otherwise the NodePort would not be accessible. The Ingress IP doesn't seem to need to be added to the firewall.
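(For reference, a NodePort is usually opened on GKE with a firewall rule along these lines; the rule name here is made up:)

gcloud compute firewall-rules create allow-kubia-nodeport --allow=tcp:30123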
Try to expose the ReplicaSet to be able to connect from the outside:
$ kubectl expose rs hello-world --type=NodePort --name=my-service
Remember to first delete the service kubia-nodeport, update the backend service section of the Ingress configuration file accordingly, and then apply the changes using the kubectl apply command (a sketch of the updated Ingress follows after the links below).
More information you can find here: exposing-externalip.
Useful doc: kubectl-expose.
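A rough sketch of how the Ingress backend could be re-pointed, assuming the new Service is named my-service as above and exposes the application's port 8080 (the port is an assumption; check what kubectl expose actually assigned):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service # the NodePort service created by kubectl expose
          servicePort: 8080 # assumed; verify with kubectl get svc my-service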
In my Kubernetes kubeadm cluster I deployed my app with:
kubectl run gmt-dpl --image=192.168.56.33:5000/img:gmt --port 8181
Expose the app with:
kubectl expose deployment gmt-dpl --name=gmt-svc --type=NodePort --port=443 --target-port=8181
My gmt-svc:
Name: gmt-svc
Namespace: default
Labels: run=gmt-dpl
Annotations: <none>
Selector: run=gmt-dpl
Type: NodePort
IP: 10.96.74.133
Port: <unset> 443/TCP
TargetPort: 8181/TCP
NodePort: <unset> 30723/TCP
Endpoints: 10.44.0.17:8181
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I'm able to access my app at https://nodeip:30723
Following the GitHub doc, I managed to install the nginx ingress controller:
NAMESPACE NAME READY STATUS RESTARTS AGE
default gmt-dpl-84fb9cfd8d-vz2bw 1/1 Running 0 1h
ingress-nginx default-http-backend-55c6c69b88-c4k6c 1/1 Running 5 2d
ingress-nginx nginx-ingress-controller-9c7b694-szgjd 1/1 Running 8 2d
kube-system etcd-k8s-master 1/1 Running 5 2d
kube-system kube-apiserver-k8s-master 1/1 Running 1 4h
kube-system kube-controller-manager-k8s-master 1/1 Running 7 2d
kube-system kube-dns-6f4fd4bdf-px5f8 3/3 Running 12 2d
kube-system kube-proxy-jswfx 1/1 Running 8 2d
kube-system kube-proxy-n2chh 1/1 Running 4 2d
kube-system kube-scheduler-k8s-master 1/1 Running 7 2d
kube-system weave-net-4k8sp 2/2 Running 10 1d
kube-system weave-net-bqjzb 2/2 Running 19 1d
I exposed the nginx-controller:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-svc
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: http
  - port: 443
    nodePort: 30000
    name: https
  - port: 18080
    nodePort: 30002
    name: status
  selector:
    app: ingress-nginx
And I created an ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  name: test-ingress
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /*
        backend:
          serviceName: gmt-svc
          servicePort: 443
Now, while trying to access my app through the nginx controller at https://ip_master:30000, I get "default backend - 404", and the same error with http://ip_master:30001, even though my app is only accessible over HTTPS.
In the ingress resource I tried servicePort: 8181, but I got the same error.
Any ideas?