Communication Between Two Services in Kubernetes Cluster Using Ingress as API Gateway - kubernetes

I am having problems getting communication between two services in a Kubernetes cluster. We are using a Kong Ingress object as an 'API gateway' to route HTTP calls from a simple Angular frontend to a .NET Core 3.1 API controller backend.
In front of these two ClusterIP services sits an ingress controller that takes external http(s) calls into our Kubernetes cluster and launches the frontend service. This ingress is shown here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.***.*******.com   # << Obfuscated
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 80
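(Side note: the extensions/v1beta1 Ingress API is deprecated and removed on newer clusters. A roughly equivalent manifest in networking.k8s.io/v1, keeping the same obfuscated host and service names, would be a sketch like this:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx          # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: app.***.*******.com      # << Obfuscated, unchanged from above
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80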
The first service is called 'frontend-service', a simple Angular 9 frontend that allows me to type in http strings and submit those strings to the backend.
The manifest yaml file for this is shown below. Note that the image name is obfuscated for various reasons.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: kong
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: frontend
        image: ***********/*******************:****   # << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: kong
  name: frontend-service
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
The second service is a simple .NET Core 3.1 API interface that prints back some text when the controller is reached. The backend service is called 'dataapi' and has one simple Controller in it called ValuesController.
The manifest yaml file for this is shown below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dataapi
  template:
    metadata:
      labels:
        app: dataapi
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: dataapi
        image: ***********/*******************:****   # << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  selector:
    app: dataapi
We are using a kong ingress as a proxy to redirect incoming http calls to the dataapi service. This manifest file is shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80
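(One thing worth checking with this routing setup: depending on how the dataapi controller routes are defined, the backend may expect /api/values rather than /dataapi/api/values. The Kong ingress controller supports a strip-path annotation for exactly that; a sketch, assuming the annotation is available in your Kong version, would be:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
  annotations:
    konghq.com/strip-path: "true"   # strip the /dataapi prefix before proxying to the service
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80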
Performing a 'kubectl get all' produces the following output:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dataapi-dbc8bbb69-mzmdc 1/1 Running 0 2d2h
pod/frontend-5d5ffcdfb7-kqxq9 1/1 Running 0 65m
pod/ingress-kong-56f8f44fd5-rwr9j 2/2 Running 0 6d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dataapi ClusterIP 10.128.72.137 <none> 80/TCP,443/TCP 2d2h
service/frontend-service ClusterIP 10.128.44.109 <none> 80/TCP 2d
service/kong-proxy LoadBalancer 10.128.246.165 XX.XX.XX.XX 80:31289/TCP,443:31202/TCP 6d
service/kong-validation-webhook ClusterIP 10.128.138.44 <none> 443/TCP 6d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dataapi 1/1 1 1 2d2h
deployment.apps/frontend 1/1 1 1 2d
deployment.apps/ingress-kong 1/1 1 1 6d
NAME DESIRED CURRENT READY AGE
replicaset.apps/dataapi-dbc8bbb69 1 1 1 2d2h
replicaset.apps/frontend-59bf9c75dc 0 0 0 25h
replicaset.apps/ingress-kong-56f8f44fd5 1 1 1 6d
and 'kubectl get ingresses' gives:
NAME            CLASS    HOSTS (Obfuscated)                                                      ADDRESS        PORTS   AGE
ingress-nginx   <none>   ***.******.com,**.********.com,**.****.com,**.******.com + 1 more...    xx.xx.xxx.xx   80      6d
kong-gateway    kong     *                                                                       xx.xx.xxx.xx   80      2d2h
From the frontend, the expectation is that constructing the http string:
http://kong-proxy/dataapi/api/values
will enter our 'values' controller in the backend and return the text string from that controller.
Both services are running on the same kubernetes cluster, here using Linode. Our thinking is that it is a 'within cluster' communication between two services both of type ClusterIP.
The error reported in the Chrome console is:
zone-evergreen.js:2828 GET http://kong-proxy/dataapi/api/values net::ERR_NAME_NOT_RESOLVED
Note that we found a StackOverflow issue similar to ours, and the suggestion there was to add 'default.svc.cluster.local' to the URL as follows:
http://kong-proxy.default.svc.cluster.local/dataapi/api/values
This did not work. We also substituted kong, which is the namespace of the service, for default like this:
http://kong-proxy.kong.svc.cluster.local/dataapi/api/values
yielding the same errors as above.
Is there a critical step I am missing? Any advice is greatly appreciated!
*************** UPDATE From Eric Gagnon's Response(s) **************
Again, thank you Eric for responding. Here is what my colleague and I have tried per your suggestions:
Pod dns misconfiguration: check if the pod's first nameserver equals the 'kube-dns' svc ip and if search starts with kong.svc.cluster.local:
kubectl exec -i -t -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- cat /etc/resolv.conf
nameserver 10.128.0.10
search kong.svc.cluster.local svc.cluster.local cluster.local members.linode.com
options ndots:5
kubectl get -n kube-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.128.0.10 <none> 53/UDP,53/TCP,9153/TCP 55d
kubectl describe -n kube-system svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: lke.linode.com/caplke-version: v1.19.9-001
prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.128.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.2.4.10:9153,10.2.4.14:9153
Session Affinity: None
Events: <none>
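(The nameserver and search path above look correct. As an additional check, sketched here on the assumption that the frontend image ships nslookup or getent, which a minimal image may not, DNS resolution of the backend service can be tested from inside the pod:)

# resolve the dataapi service from inside the frontend pod
kubectl exec -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- \
  nslookup dataapi.kong.svc.cluster.local

# or, if nslookup is not available in the image:
kubectl exec -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- \
  getent hosts dataapi.kong.svc.cluster.local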
App Not using pod dns: in Node, output dns.getServers() to console
I do not understand where and how to do this. We tried to add DNS handling directly inside our Angular frontend app, but we found out that is not possible.
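(For reference, this suggestion only applies to code running in Node on the pod itself, not to an Angular app executing in the browser. If there were a Node-based process in a pod, a one-liner sketch would be the following, where the pod name is a hypothetical placeholder:)

kubectl exec -n kong <some-node-based-pod> -- node -e "console.log(require('dns').getServers())"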
Kong-proxy doesn't like something: set logging debug, hit the app a bunch of times, and grep logs.
We have tried two tests here. First, we checked whether our kong-proxy service is reachable through an ingress controller. Note that this is not our simple frontend app; it is nothing more than a proxy that passes an HTTP request to a public gateway we have set up. This does work. We have exposed it as:
http://gateway.cwg.stratbore.com/test/api/test
["Successfully pinged Test controller!!"]
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
So this works.
But when we try to do it from a simple frontend interface running in the same cluster as our backend, it does not work with the text shown in the text box (screenshot omitted from the original post). This command does not add anything new:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
The front end comes back with an error.
But if we add this other http text instead (screenshot omitted from the original post), the kong-ingress pod is hit:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
10.2.4.11 - - [17/Apr/2021:16:55:50 +0000] "GET /test/api/test HTTP/1.1" 200 52 "http://app-basic.cwg.stratbore.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
but the frontend gets an error back.
So at this point, we have tried a lot of things to get our frontend app to successfully send an HTTP request to our backend and get a response back, without success. I have also tried various configurations of the nginx.conf file that is packaged with our frontend app, but no luck there either.
I am about to package all of this up in a github project. Thanks.

Chris,
I haven't used linode or kong and don't know what your frontend actually does, so I'll just point out what I can see:
The simplest dns check is to curl (or ping, dig, etc.):
http://[dataapi's pod ip]:80 from a host node
http://[kong-proxy svc's internal ip]/dataapi/api/values from a host node (or another pod - see below)
default path matching on nginx ingress controller is pathPrefix, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites to /. This may not be an issue if you properly specify all your ingresses so they take priority over "/".
you said 'using a kong ingress as a proxy to redirect incoming', just want to make sure you're proxying (not redirecting the client).
Is chrome just relaying its upstream error from frontend-service? An external client shouldn't be able to resolve the cluster's urls (unless you've joined your local machine to the cluster's network or done some other fancy trick). By default, dns only works within the cluster.
cluster dns generally follows [service name].[namespace name].svc.cluster.local (see the name-resolution sketch at the end of this answer). If cluster dns is working, then using curl, ping, wget, etc. from a pod in the cluster and pointing it to that svc will send it to the cluster svc ip, not an external ip.
is your dataapi service configured to respond to /dataapi/api/values or does it not care what the uri is?
If you don't have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace, and curl the service dns and the pod ip's directly:
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: kong
spec:
  containers:
  - name: curl-test
    image: buildpack-deps
    imagePullPolicy: Always
    command:
    - "curl"
    - "-v"
    - "http://dataapi:80/dataapi/api/values"
  #nodeSelector:
  #  kubernetes.io/hostname: [a more different node's hostname]
The pod should attempt dns resolution from the cluster, so it should find dataapi's svc ip and curl port 80 path /dataapi/api/values. Service IPs are virtual, so they aren't actually 'reachable'. Instead, iptables routes them to the pod ip, which has an actual network endpoint and IS addressable.
Once it completes, just check the logs: kubectl logs -n kong curl-test, and then delete it.
If this fails, the nature of the failure in the logs should tell you if it's a dns or link issue. If it works, then you probably don't have a cluster dns issue. But it's possible you have an inter-node communication issue. To test this, you can run the same manifest as above, but uncomment the node selector field to force it to run on a different node than your kong-proxy pod. It's a manual method, but it's quick for troubleshooting. Just rinse and repeat as needed for other nodes.
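(A few companion checks go well with the test pod above: confirming the service actually has endpoints behind it, and listing node hostnames to plug into the nodeSelector. Sketched with standard kubectl commands:)

# does the dataapi service have pod endpoints behind it?
kubectl get endpoints -n kong dataapi

# pod IPs and the nodes they run on
kubectl get pods -n kong -o wide

# node hostnames, for the nodeSelector in the test pod above
kubectl get nodes -o wide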
Of course, it may not be any of this, but hopefully this helps troubleshoot.
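(As a quick name-resolution sketch, assuming a throwaway busybox pod is acceptable: the short, namespaced, and fully qualified forms of the service name should all resolve to the same ClusterIP from inside the cluster:)

kubectl run dns-check -n kong --rm -it --restart=Never --image=busybox:1.36 -- \
  sh -c 'nslookup dataapi; nslookup dataapi.kong; nslookup dataapi.kong.svc.cluster.local'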

After a lot of help from Eric G (thank you!) on this, and reading this previous StackOverflow, I finally solved the issue. As the answer in this link illustrates, our frontend pod was serving up our application in a web browser which knows NOTHING about Kubernetes clusters.
As the link suggests, we added another rule in our nginx ingress to successfully route our http requests to the proper service
- host: gateway.*******.com
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: gateway-service
          port:
            number: 80
Then from our Angular frontend, we sent our HTTP requests as follows:
...
http.get<string>("http://gateway.*******.com/api/name_of_controller");
...
And we were finally able to communicate with our backend service the way we wanted. Both frontend and backend in the same Kubernetes Cluster.

Related

cannot connect to minikube ip and NodePort service port - windows

I am trying to run an application locally on k8s, but I am not able to reach it.
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: listings
  labels:
    app: listings
spec:
  replicas: 2
  selector:
    matchLabels:
      app: listings
  template:
    metadata:
      labels:
        app: listings
    spec:
      containers:
      - image: mydockerhub/listings:latest
        name: listings
        envFrom:
        - secretRef:
            name: listings-secret
        - configMapRef:
            name: listings-config
        ports:
        - containerPort: 8000
          name: django-port
and this is my service:
apiVersion: v1
kind: Service
metadata:
  name: listings
  labels:
    app: listings
spec:
  type: NodePort
  selector:
    app: listings
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    nodePort: 30036
    protocol: TCP
At this stage, I don't want to use other methods like ingress or ClusterIP, or load balancer. I want to make nodePort work because I am trying to learn.
When I run kubectl get svc -o wide I see
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
listings NodePort 10.107.77.231 <none> 8000:30036/TCP 28s app=listings
When I run kubectl get node -o wide I see
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 85d v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.10.16.3-microsoft-standard-WSL2 docker://20.10.12
and when I run minikube ip it shows 192.168.49.2
I try to open http://192.168.49.2:30036/health, but it does not open ("This site can't be reached").
How should I expose my application externally?
Note that I have created the required ConfigMap and Secret objects. Also note that this is a simple Django RESTful application: if you hit the /health endpoint, it returns success, and that's it, so there is no problem with the application itself.
That is because your local machine and minikube are not in the same network segment; you must do something more to access a minikube service on Windows.
First
$ minikube service list
That will show your service detail which include name, url, nodePort, targetPort.
Then
$ minikube service --url listings
It will open a port listening on your Windows machine that forwards traffic to the minikube node port.
Or you can use the kubectl port-forward command to expose the service on a host port, like:
kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
Then try with http://localhost:30036/health
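(If neither approach responds, it is also worth confirming, as an extra check, that the service actually has endpoints, i.e. that its selector matches running pods:)

kubectl get endpoints listings
kubectl get pods -l app=listings -o wide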

istio unable to access kubernetes dashboard

I am trying to access the Kubernetes Dashboard through an Istio Gateway + Virtual Service.
However, all I get is 404 page not found when I try to access the dashboard with a browser. Accessing the Dashboard through a k8s NodePort or k8s LoadBalancer service works just as expected. The pod, however, complains in the logs about http: TLS handshake error from 127.0.0.6:52483: remote error: tls: bad certificate.
Running httpbin through Istio (as given in their documentation) works as expected, so Istio seem to be working fine as well.
I am using the official Kubernetes Dashboard YAML-s. I am giving the service below (with type: LoadBalancer added; it doesn't seem to make a difference for Istio, but it does allow me to access the Dashboard through a separate IP).
Just for the record, my k8s cluster is comprised of VirtualBox machines running MetalLB.
kubectl get services --all-namespaces returns the following:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
httpbin httpbin ClusterIP 10.100.186.188 <none> 8000/TCP 47h
istio-system istio-egressgateway ClusterIP 10.109.231.163 <none> 80/TCP,443/TCP 5d3h
istio-system istio-ingressgateway LoadBalancer 10.111.188.94 192.168.56.46 15021:31440/TCP,80:31647/TCP,443:32715/TCP 5d3h
istio-system istiod ClusterIP 10.104.236.247 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 5d3h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.101.131.136 <none> 8000/TCP 43h
kubernetes-dashboard kubernetes-dashboard-service LoadBalancer 10.103.130.244 192.168.56.47 443:30041/TCP 43h
kubernetes-dashboard kubernetes-dashboard-service-np NodePort 10.100.49.224 <none> 8443:30002/TCP 43h
If I try to access the LoadBalancer directly via the IP from above and through browser, I get the usual Kubernetes Dashboard login page. The browser url is https://192.168.56.47.
YAML-s:
istio-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubernetes-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
istio-virtual-service.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard-virtual-service
  namespace: kubernetes-dashboard
spec:
  hosts:
  - "*"
  gateways:
  - kubernetes-dashboard-gateway
  tls:
  - match:
    - sniHosts: ["*"]
    route:
    - destination:
        host: kubernetes-dashboard-service
        port:
          number: 443
dashboard-service.yaml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-service
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    targetPort: 8443
  # - port: 8000
  #   targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
User suren has mentioned:
your gateway is listening on 443, not 80
Yes, this could be a problem. You are trying to reach port 80, but you are exposing only port 443. Try to change your configuration or change the port in your request.
See also the documentation about Deploy and Access the Kubernetes Dashboard.
Hm, I got it working with the configuration above, but with a host explicitly specified in all places where I had previously placed a "*". I had to add that host to /etc/hosts to be able to access it in a browser.
It seems that this last part was key, as was specifying the sniHost in the VirtualService. The other problems were mostly TLS configuration issues; setting the mode to PASSTHROUGH works because it makes Istio forward the HTTPS request unchanged to the Kubernetes Dashboard, which handles the TLS termination itself.
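(A minimal sketch of what that looks like, assuming a made-up hostname dashboard.example.local; the real hostname is whatever was added to /etc/hosts:)

# Gateway: replace the wildcard with the explicit host
  hosts:
  - "dashboard.example.local"

# VirtualService: same host, and match it as the SNI host
  hosts:
  - "dashboard.example.local"
  tls:
  - match:
    - sniHosts: ["dashboard.example.local"]
    route:
    - destination:
        host: kubernetes-dashboard-service
        port:
          number: 443

# /etc/hosts on the client machine (IP of the istio-ingressgateway LoadBalancer from the output above)
192.168.56.46   dashboard.example.local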

Kubernetes Ingress Flask Application

I have a simple demo Flask application that is deployed to kubernetes using minikube. I am able to access the app using the Services. But I am not able to connect using ingress.
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: services-app-service
spec:
  selector:
    app: services-app
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 5000       # External connection
    targetPort: 5000 # Internal connection
D:Path>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
db ClusterIP None <none> 3306/TCP 120m
kubernetes ClusterIP 10.20.30.1 <none> 443/TCP 3h38m
services-app-service ClusterIP 10.20.30.40 <none> 5000/TCP 18m
I am able to access the app using minikube.
D:Path>minikube service services-app-service --url
* service default/services-app-service has no node port
* Starting tunnel for service services-app-service.
|-----------|----------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------------|-------------|------------------------|
| default | services-app-service | | http://127.0.0.1:50759 |
|-----------|----------------------|-------------|------------------------|
http://127.0.0.1:50759
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydemo.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: services-app-service
            port:
              number: 5000
D:Path>kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
services-ingress <none> mydemo.info 192.168.40.1 80 15m
Is any additional configuration required to access the app via ingress?
The ingress and ingress-dns addons are currently only supported on Linux; they are not supported on Windows.
More info
Not supported on Windows as of:
minikube version: v1.16.0
minikube version: v1.17.1
The issue is that you need to access it with a Host header of mydemo.info for that Ingress spec to work. You also need to confirm you have an Ingress Controller installed, usually ingress-nginx for new users, but there are many options. Then you would look for the Ingress Controller's NodePort or LoadBalancer service and access the app through that.
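(A sketch of that check, assuming the ingress-nginx controller is installed in the ingress-nginx namespace; the actual node port is a placeholder and will differ on your cluster:)

# find the ingress controller's NodePort for HTTP
kubectl get svc -n ingress-nginx

# send the request with the Host header the Ingress rule expects
curl -H "Host: mydemo.info" http://$(minikube ip):<http-nodeport>/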
I've been searching for ages. I can confirm this doesn't work on macOS either.
Using minikube tunnel is the only way I found.
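(For reference, a rough sketch of that approach with the docker driver; it assumes the ingress addon is enabled and that /etc/hosts maps the test host to 127.0.0.1, which is where the tunnel typically exposes it on macOS/Windows:)

minikube addons enable ingress
minikube tunnel            # keeps running; routes traffic to cluster services

# in another terminal, with "127.0.0.1 mydemo.info" added to /etc/hosts
curl http://mydemo.info/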

Expose service on local kubernetes

I'm running a local kubernetes bundled with docker on Mac OS.
How can I expose a service, so that I can access the service via a browser on my Mac?
I've created:
a) deployment including apache httpd.
b) service via yaml:
apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  selector:
    app: web
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
  externalIPs:
  - 192.168.1.10 # Network IP of my Mac
My service looks like:
$ kubectl get service apaches
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apaches NodePort 10.102.106.158 192.168.1.10 80:31137/TCP 14m
I can locally access the service in my kubernetes cluster by wget $CLUSTER-IP
I tried to call http://192.168.1.10/ on my Mac, but it doesn't work.
This question deals with a similar issue, but the solution does not help because I do not know which IP I can use.
Update
Thanks to Michael Hausenblas I worked out a solution using Ingress.
Nevertheless there are still some open questions:
What is the meaning of a service's externalIP? Why do I need an externalIP when I do not access the service directly from outside?
What is the meaning of the service port 31137?
The Kubernetes docs describe a method to publish a service in minikube via NodePort. Is this also possible with Kubernetes bundled with Docker?
There are several solutions to expose services in kubernetes:
http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/
Here are my solutions according to alesnosek for a local kubernetes bundled with docker:
1. hostNetwork
hostNetwork: true
Dirty (the host network should not be shared for security reasons) => I did not check this solution.
2. hostPort
hostPort: 8086
Does not apply to services => I did not check this solution.
3. NodePort
Expose the service by defining a nodePort:
apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: apache
4. LoadBalancer
EDIT
@MathObsessed posted the solution in his answer.
5. Ingress
a. Install Ingress Controller
git clone https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes.git
kubectl apply -f nginx-ingress/namespaces/nginx-ingress.yaml -Rf nginx-ingress
b. Configure Ingress
kubectl apply -f apache-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apache-ingress
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: apaches
          servicePort: 80
Now I can access my apache deployed with kubernetes by calling http://localhost/
Remarks for using local-dev-with-docker-for-mac-kubernetes
The repo simplifies the deployment of the official ingress-nginx controller.
For production use I would follow the official guide.
The repo ships with a tiny, full-featured ingress example. Very useful for quickly getting a working example application.
Further documentation
https://kubernetes.io/docs/concepts/services-networking/ingress
For those still looking for an answer: I've managed to achieve this by adding another Kubernetes service just to expose my app to localhost calls (via browser or Postman):
kind: Service
apiVersion: v1
metadata:
  name: apaches-published
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 80
    protocol: TCP
  selector:
    app: web
  type: LoadBalancer
Try it now on: http://localhost:8080
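(This works because Kubernetes bundled with Docker Desktop binds LoadBalancer services to the local machine; a quick sketch of how to verify, assuming that behaviour:)

kubectl get svc apaches-published   # EXTERNAL-IP typically shows localhost on Docker Desktop
curl -i http://localhost:8080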
Really simple example
METHOD1
$ kubectl create deployment nginx-dep --image=nginx --replicas=2
Get the pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dep-5c5477cb4-76t9q 1/1 Running 0 7h5m
nginx-dep-5c5477cb4-9g84j 1/1 Running 0 7h5m
Access the pod using kubectl port-forward
$ kubectl port-forward nginx-dep-5c5477cb4-9g84j 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80
Now do a curl to the localhost:8888
$ curl -v http://localhost:8888
METHOD2
You can expose port 80 of the deployment (where the application is running, i.e. the nginx port) via a NodePort
$ kubectl expose deployment nginx-dep --name=nginx-dep-svc --type=NodePort --port=80
Get the service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 31d
nginx-dep-svc NodePort 10.110.80.21 <none> 80:31239/TCP 21m
Access the deployment using the NodePort
$ curl http://localhost:31239
As already mentioned in Matthias M's answer, there are several ways.
As the official Kubernetes documentation specifically describes using a Service of type NodePort, I wanted to describe the workflow.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.
Setup a Service with a type of NodePort
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  clusterIP: 10.0.171.239
  type: NodePort
Then you can check which port the Service is exposed on via
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service NodePort 10.103.218.215 <none> 9376:31040/TCP 52s
and access it via localhost using the exposed port. E.g.
curl http://localhost:31040

How to verify working Traefik installation?

I'm in the process of setting up Traefik on a Kubernetes cluster, but I can't get it to work, so I need some troubleshooting help. The first thing I would like to verify is that the basic installation is successful.
The guide I'm following is this one:
https://docs.traefik.io/user-guide/kubernetes/
But, I'm installing on a 3-machine cluster (Master + 2x Nodes).
I have set up RBAC and created a Deployment / Service for Traefik. The Pod is up and running:
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
traefik-ingress-controller-7cf98d69cf-n2trx 1/1 Running 0 1h
This is the Service:
$ kubectl get services --namespace kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik-ingress-service NodePort 10.107.17.76 <none> 80:30820/TCP,8080:31362/TCP 1h
Should I be able to access the Traefik Web UI now?
I tried to access "http://192.168.1.11:31362" from a web browser and it behaves a bit strangely. I get a "404 page not found" error in the browser window, but the address bar in the browser changes to "http://192.168.1.11:31362/dashboard/". That tells me that something is responding at that address/port.
This is the result of a Curl to the same address:
$ curl http://192.168.1.11:31362/
Found.
Is this normal behaviour at this step in the process?
I have also tried to test with a Service / Ingress like this:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: homeautomationweb
  ports:
  - port: 80
    targetPort: 31047
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: test.no
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 80
I have a working web application running in the cluster, exposed on a node port and accessible outside the cluster at http://192.168.1.11:31047/.
The DNS name "test.no" is defined in /etc/hosts as 192.168.1.11
But, when I try to access http://test.no, I get:
"test.no refused to connect"
The details of what I'm doing and the exact content of the Kubernetes Yaml files can be found at the end of this article:
https://github.com/olavt/KubernetesRaspberryPI