Google Kubernetes Engine Ingress doesn't work - kubernetes

I created an Ingress on GKE following the 'Kubernetes in Action' book, but the Ingress doesn't work: it can't be accessed from the Ingress's public IP address.
Create the ReplicaSet to create the pods.
Create the Service (following the NodePort method from 'Kubernetes in Action').
Create the Ingress.
The ReplicaSet, Service, and Ingress are all created successfully, the NodePort can be accessed from the nodes' public IP addresses, and there is no UNHEALTHY backend in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
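As an aside, extensions/v1beta1 is deprecated; on newer clusters the same Ingress would be written against networking.k8s.io/v1, roughly like this (a sketch following the v1 schema, behavior unchanged):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-nodeport
            port:
              number: 80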
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet:
curl 34.98.92.110 gets a result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
root@kubia-lrt9x:/#
Does anybody know how to debug this?
The NodePort had to be added to the firewall, or else the NodePort is not accessible. The Ingress IP doesn't seem to need to be added to the firewall.
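For reference, one way to probe the Ingress from outside without relying on the hosts file is to send the Host header explicitly against the Ingress IP, and to inspect the load balancer objects the GCE ingress controller created (a sketch; the IP and names are taken from the describe output above):
# Request the Ingress IP while presenting the expected Host header
curl -v -H "Host: kubia.example.com" http://34.98.92.110/
# List the forwarding rule and backend services GKE created for the Ingress
gcloud compute forwarding-rules list
gcloud compute backend-services list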

Try exposing the ReplicaSet directly to be able to connect from the outside:
$ kubectl expose rs kubia --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport Service, update the Ingress configuration file so its backend section references the new Service, and then apply the changes using kubectl apply.
You can find more information here: exposing-externalip.
Useful doc: kubectl-expose.
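A minimal sketch of the reworked Ingress after that change, assuming my-service ends up exposing port 8080 (the app's container port); the names are taken from the steps above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 8080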

Related

Application not accessible using ingress but works with LoadBalancer GKE

I am trying to configure a hello-world application using Ingress in GKE. I have been referring to the official GCP documentation for deploying an application using Ingress:
Deploying an app using ingress
But this does not work. I have tried several documents, but none of them work. I have installed the ingress controller in my Kubernetes cluster.
kubectl get svc -n ingress-nginx returns the output below:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.125.177.232 35.232.139.102 80:31835/TCP,443:31583/TCP 7h24m
kubectl get pods -n ingress-nginx returns:
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-jj72r 0/1 Completed 0 7h24m
ingress-nginx-admission-patch-pktz6 0/1 Completed 0 7h24m
ingress-nginx-controller-5cb8d9c6dd-vptkh 1/1 Running 0 7h24m
kubectl get ingress returns the output below:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-resource <none> 35.232.139.102.nip.io 34.69.2.173 80 7h48m
kubectl get pods returns the output below:
NAME READY STATUS RESTARTS AGE
hello-app-6d7bb985fd-x5qpn 1/1 Running 0 43m
kubectl get svc returns the output below:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app ClusterIP 10.125.187.239 <none> 8080/TCP 43m
Ingress resource YAML file used:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: 35.232.139.102.nip.io
    http:
      paths:
      - pathType: Prefix
        path: "/hello"
        backend:
          service:
            name: hello-app
            port:
              number: 8080
Can someone tell me what I am doing wrong? When I try to reach the application, it's not working.
So I have installed the ingress controller and used the ingress controller IP as the host in my Ingress file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "35.232.139.102.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/hello"
        backend:
          service:
            name: hello-app
            port:
              number: 8080
The issue here was that I forgot to add the IP from which I was accessing the application. When you create a GKE cluster, there is a firewall rule named <cluster-name>-all; you need to add to it the IP address of the machine from which you are trying to access the application. Also ensure the port number is exposed; in my case both were missing, which is why it was failing.
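A sketch of that fix with gcloud, assuming a cluster named my-cluster and a workstation IP of 203.0.113.5 (both placeholders; the actual rule name will contain a hash):
# Find the auto-created "-all" firewall rule for the cluster
gcloud compute firewall-rules list --filter="name~my-cluster"
# Allow your workstation IP on the required ports
gcloud compute firewall-rules update <rule-name> \
    --source-ranges=203.0.113.5/32 \
    --allow=tcp:80,tcp:443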

cannot connect to minikube ip and NodePort service port - windows

I am trying to run an application locally on k8s, but I am not able to reach it.
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: listings
  labels:
    app: listings
spec:
  replicas: 2
  selector:
    matchLabels:
      app: listings
  template:
    metadata:
      labels:
        app: listings
    spec:
      containers:
      - image: mydockerhub/listings:latest
        name: listings
        envFrom:
        - secretRef:
            name: listings-secret
        - configMapRef:
            name: listings-config
        ports:
        - containerPort: 8000
          name: django-port
and here is my Service:
apiVersion: v1
kind: Service
metadata:
  name: listings
  labels:
    app: listings
spec:
  type: NodePort
  selector:
    app: listings
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    nodePort: 30036
    protocol: TCP
At this stage, I don't want to use other methods like Ingress, ClusterIP, or LoadBalancer. I want to make NodePort work, because I am trying to learn.
When I run kubectl get svc -o wide I see
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
listings NodePort 10.107.77.231 <none> 8000:30036/TCP 28s app=listings
When I run kubectl get node -o wide I see
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 85d v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.10.16.3-microsoft-standard-WSL2 docker://20.10.12
and when I run minikube ip it shows 192.168.49.2
When I try to open http://192.168.49.2:30036/health it does not load: This site can't be reached.
How should I expose my application externally?
Note that I have created the required ConfigMap and Secret objects. Also note that this is a simple Django RESTful application: if you hit the /health endpoint it returns success, and that's it, so there is no problem with the application itself.
That is because your local machine and minikube are not in the same network segment; you must do something more to access a minikube service on Windows.
First:
$ minikube service list
That will show your services' details, including name, URL, nodePort, and targetPort.
Then:
$ minikube service --url listings
It will open a port listening on your Windows machine that forwards traffic to the minikube NodePort.
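On the Docker driver this prints a localhost URL with a random port and keeps the tunnel open for as long as the command runs, something like the following (the port here is illustrative):
$ minikube service --url listings
http://127.0.0.1:54321
# Leave this running, then browse http://127.0.0.1:54321/health in another window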
Or you can use the kubectl port-forward command to expose the service on a host port, like:
kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
Then try http://localhost:30036/health

Kubernetes service URL not responding to API call

I've been following multiple tutorials on how to deploy my (Spring Boot) API on Minikube. I already got it (user-service running on 8081) working in a Docker container with an API gateway (port 8080) and Eureka (port 8087), but for starters I just want it to run without those. Steps I took:
Push the Docker container or image (?) to Docker Hub; I don't know the proper term.
Create a deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kwetter-service
spec:
  type: LoadBalancer
  selector:
    app: kwetter
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8081
    nodePort: 30070
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kwetter-deployment
  labels:
    app: kwetter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kwetter
  template:
    metadata:
      labels:
        app: kwetter
    spec:
      containers:
      - name: user-api
        image: cazhero/s6-kwetter-backend_user:latest
        ports:
        - containerPort: 8081 # is the port it runs on when I manually start it up
kubectl apply -f deployment.yaml
minikube service kwetter-service
It takes me to an empty site at http://192.168.49.2:30070, which I thought I could use to make API calls to, but apparently not. How do I make API calls to my application running on minikube?
kubectl get svc returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d4h
kwetter-service LoadBalancer 10.106.42.56 <pending> 8080:30070/TCP 4d
kubectl describe svc kwetter-service:
Name: kwetter-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kwetter
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.42.56
IPs: 10.106.42.56
Port: <unset> 8080/TCP
TargetPort: 8081/TCP
NodePort: <unset> 30070/TCP
Endpoints: 172.17.0.4:8081
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 6s service-controller LoadBalancer -> NodePort
I also made an Ingress in the YAML and used kubectl get ing:
NAME CLASS HOSTS ADDRESS PORTS AGE
kwetter-ingress <none> * 80 49m
To make some things clear:
You need to have pushed your Docker image cazhero/s6-kwetter-backend_user:latest to Docker Hub; check that at https://hub.docker.com/, in your personal repository.
What's the output of minikube service kwetter-service? Does it print the URL http://192.168.49.2:30070?
Make sure your pod is running correctly with the following minikube commands:
# check pod status
minikube kubectl -- get pods
# if the pod is running, check its container logs
minikube kubectl -- logs kwetter-deployment-xxxx-xxxx
I see that you are using a LoadBalancer Service. A LoadBalancer Service is the standard way to expose a service to the internet; with this method, each service gets its own IP address.
Check the external IP:
kubectl get svc
Use the external IP and the port number in this format to access the application:
http://REPLACE_WITH_EXTERNAL_IP:8080
If you want to access the application using the NodePort (30070), use a NodePort Service instead of a LoadBalancer Service.
Refer to this documentation for more information on accessing applications through NodePort and LoadBalancer Services.
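Also worth noting for the output above: on minikube, a LoadBalancer Service keeps EXTERNAL-IP as <pending> unless a tunnel is running. A sketch of that approach:
# Run in a separate terminal; it keeps running and may ask for elevated privileges
minikube tunnel
# In another terminal, EXTERNAL-IP should now be populated
kubectl get svc kwetter-service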

istio unable to access kubernetes dashboard

I am trying to access the Kubernetes Dashboard through an Istio Gateway + Virtual Service.
However, all I get is 404 page not found when I try to access the dashboard with a browser. Accessing the Dashboard through the k8s NodePort or k8s LoadBalancer service works just as expected. The pod, however, complains in the logs about http: TLS handshake error from 127.0.0.6:52483: remote error: tls: bad certificate.
Running httpbin through Istio (as given in their documentation) works as expected, so Istio seems to be working fine as well.
I am using the official Kubernetes Dashboard YAMLs. I am giving the Service below (with type: LoadBalancer added; it doesn't seem to make a difference for Istio, but it allows me to access the Dashboard through a separate IP).
Just for the record, my k8s cluster consists of VirtualBox machines running MetalLB.
kubectl get services --all-namespaces returns the following:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
httpbin httpbin ClusterIP 10.100.186.188 <none> 8000/TCP 47h
istio-system istio-egressgateway ClusterIP 10.109.231.163 <none> 80/TCP,443/TCP 5d3h
istio-system istio-ingressgateway LoadBalancer 10.111.188.94 192.168.56.46 15021:31440/TCP,80:31647/TCP,443:32715/TCP 5d3h
istio-system istiod ClusterIP 10.104.236.247 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 5d3h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.101.131.136 <none> 8000/TCP 43h
kubernetes-dashboard kubernetes-dashboard-service LoadBalancer 10.103.130.244 192.168.56.47 443:30041/TCP 43h
kubernetes-dashboard kubernetes-dashboard-service-np NodePort 10.100.49.224 <none> 8443:30002/TCP 43h
If I try to access the LoadBalancer directly via the IP from above and through a browser, I get the usual Kubernetes Dashboard login page. The browser URL is https://192.168.56.47.
YAMLs:
istio-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubernetes-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
istio-virtual-service.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard-virtual-service
  namespace: kubernetes-dashboard
spec:
  hosts:
  - "*"
  gateways:
  - kubernetes-dashboard-gateway
  tls:
  - match:
    - sniHosts: ["*"]
    route:
    - destination:
        host: kubernetes-dashboard-service
        port:
          number: 443
dashboard-service.yaml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-service
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    targetPort: 8443
  # - port: 8000
  #   targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
User suren has mentioned:
your gateway is listening on 443, not 80
Yes, this could be the problem. You are trying to reach port 80, but you are exposing only port 443. Try changing your configuration, or change the port in your request.
See also the documentation about Deploy and Access the Kubernetes Dashboard.
Hm, I got it working with the configuration as above, by explicitly specifying a host in all the places where I had previously put a "*". I had to add that host to /etc/hosts to be able to access it in a browser.
It seems that this last part was key, as was specifying the sniHost in the VirtualService. The other problems were mostly configuration issues with TLS. Setting it to PASSTHROUGH seems to work because it forces Istio to forward the HTTPS request as-is to the Kubernetes Dashboard, which is responsible for decrypting it etc.
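A sketch of that host-specific variant, assuming the hypothetical hostname dashboard.example.internal (added to /etc/hosts pointing at the ingress gateway IP 192.168.56.46); only the fields that change from the YAMLs above are shown:
# Gateway: under servers, replace the wildcard hosts entry
    hosts:
    - "dashboard.example.internal"
# VirtualService: pin both hosts and sniHosts
spec:
  hosts:
  - "dashboard.example.internal"
  tls:
  - match:
    - sniHosts: ["dashboard.example.internal"]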

What is URL for Kibana UI

I have followed the docs and installed Kibana. When I used the Service with type: LoadBalancer, the Service wasn't coming up, so I deleted the type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note: I don't have AWS.)
But I am not sure how to access the UI. I tried these URLs, but they are not working:
http://grs-preprodkubemaster01:5601/kibana
http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana
Any ideas how to access the Kibana UI? I checked the service and deployment and everything has a green check.
Another thing I tried is this URL, which I got from the command kubectl cluster-info:
https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
However, this is showing me this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kibana-logging\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "kibana-logging",
    "kind": "services"
  },
  "code": 403
}
So, as another try, I used the Kibana Service as a NodePort, but that didn't work either.
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  selector:
    k8s-app: kibana-logging
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy, and I would like to have it work without it.
Use the NodePort you defined in your service:
https://10.123.24.107:30887
The most common way to expose an internal service outside the cluster is an Ingress.
First, you need to have an Ingress controller running in your Kubernetes cluster.
There are two types of maintained Ingress controllers: GCE and nginx.
Then, you need to create a YAML file as shown below and change it according to your needs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
When you create it using kubectl create -f, you should see something like this:
$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
In this example, 1.2.3.4 is the IP allocated by the Ingress controller.
When you have all things in place, you'll be able to access your application (Kibana) via the IP 1.2.3.4.
Please find more examples and use cases in the Ingress documentation.
You can also expose a Kubernetes service without using the Ingress resource:
Service.Type=LoadBalancer
Service.Type=NodePort
Port Proxy
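For the last option, a minimal port-forward sketch using the kibana-logging Service from above (this is port forwarding, not kubectl proxy):
kubectl -n kube-system port-forward service/kibana-logging 5601:5601
# Then open http://localhost:5601 in a browser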
I got it to work with these changes in the Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
  rules:
  - host: HOSTNAME_OF_MASTER
    http:
      paths:
      - path: /kube-ui/
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana/
        backend:
          serviceName: kibana-logging
          servicePort: 5601
and my Kibana Service is set up as a NodePort:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
and the dashboard is also configured like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Once you have the svc running, you can access Kibana using the NodePort from any node. Example: http://node01_ip:31325/app/kibana
$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb