port forwarding on microk8s on mac m1 - kubernetes

I'm new to microk8s, and I'm trying things out by deploying a simple apache2 pod to see things working on my Mac M1:
◼ ~ $ microk8s kubectl run apache --image=ubuntu/apache2:2.4-22.04_beta --port=80
pod/apache created
◼ ~ $ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 5m37s
◼ ~ $ microk8s kubectl port-forward pod/apache 3000:80
Forwarding from 127.0.0.1:3000 -> 80
but:
◼ ~ $ curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000 after 5 ms: Connection refused
I've also tried to use a service:
◼ ~ $ microk8s kubectl expose pod apache --type=NodePort --port=4000 --target-port=80
service/apache exposed
◼ ~ $ curl http://localhost:4000
curl: (7) Failed to connect to localhost port 4000 after 3 ms: Connection refused
I guess I'm doing something wrong?

For some reason I haven't figured out, if I port-forward from within the VM (by opening a shell via multipass), it does work. Then you simply have to point at the VM's IP:
within a VM's shell:
ubuntu@microk8s-vm:~$ sudo microk8s kubectl port-forward service/hellopg 8080:8080 --address="0.0.0.0"
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080
ubuntu@microk8s-vm:~$ ifconfig enp0s1
enp0s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.64.2 netmask 255.255.255.0 broadcast 192.168.64.255
inet6 fde3:1a04:ba31:1209:5054:ff:fea9:9cf4 prefixlen 64 scopeid 0x0<global>
from the host:
curl http://192.168.64.2:8080/hello
{"status": "how you doing?", "env_var":"¡hola mundo!"}
it works. I guess the command run via the microk8s wrapper is not executed properly within the VM? If anybody can explain this I'll update the question.
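For what it's worth, you can read the VM's IP from the host without opening a shell (assuming the default multipass VM name, microk8s-vm):
multipass info microk8s-vm | grep IPv4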

MicroK8s acts the same as Kubernetes. So, it's better to create a Service with type NodePort. This will expose your apache pod.
apiVersion: v1
kind: Service
metadata:
  name: my-apache
spec:
  type: NodePort
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
Change the selector as per your requirement; note that a pod created with kubectl run apache gets the label run=apache, so in that case the selector should be run: apache. For more detailed information on creating a NodePort Service, refer to the official documentation.
You can use an Ingress as well, but in your case, just for testing, you can go with a NodePort.
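To verify, apply the manifest and curl the NodePort. On a Mac the MicroK8s node is the multipass VM, so use the VM's IP rather than localhost (a sketch, reusing the VM IP 192.168.64.2 from the question and assuming the manifest is saved as my-apache.yaml):
microk8s kubectl apply -f my-apache.yaml
curl http://192.168.64.2:30004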

I think the easiest way for you to test it would be adding externalIPs to your service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 192.168.56.100 # your cluster IP
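A quick test after applying (assuming the manifest is saved as nginx-service.yaml and that 192.168.56.100 is an address actually owned by one of your nodes):
kubectl apply -f nginx-service.yaml
curl http://192.168.56.100:80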
Happy coding!

Related

Why sessionAffinity doesn't work on a headless service

I have the following headless service in my kubernetes cluster :
apiVersion: v1
kind: Service
metadata:
  labels:
    app: foobar
  name: foobar
spec:
  clusterIP: None
  clusterIPs:
    - None
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: foobar
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
Behind it run a couple of pods managed by a StatefulSet.
Let's try to reach my pods individually:
Running an alpine pod to contact my pods:
> kubectl run alpine -it --tty --image=alpine -- sh
Adding curl to fetch the webpage:
alpine#> apk add curl
I can curl into each of my pods:
alpine#> curl -s pod-1.foobar
hello from pod 1
alpine#> curl -s pod-2.foobar
hello from pod 2
It works just as expected.
Now I want to have a service that will load-balance between my pods.
Let's try to use that same foobar service:
alpine#> curl -s foobar
hello from pod 1
alpine#> curl -s foobar
hello from pod 2
It works just as well. At least almost: in my headless service, I have specified sessionAffinity, so as soon as I curl a pod once, I should stick to it.
I've tried the exact same test with a normal service (not headless) and this time it works as expected: it load-balances between pods on the first run BUT then sticks to the same pod afterwards.
Why doesn't sessionAffinity work on a headless service?
The affinity capability is provided by kube-proxy; only connections established through the proxy can have the client IP "stick" to a particular pod for a period of time. In the headless case, your client is given a list of pod IPs and it is up to your client application to select which IP to connect to. Because the order of the IPs in the list is not always the same, a typical application that always picks the first IP will end up connecting to a backend pod at random.
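You can observe this from the alpine pod: a headless Service resolves to one A record per pod, and, as noted above, the order of the records is not stable between queries. A sketch using bind-tools (the namespace and pod IPs here are hypothetical):
alpine#> apk add bind-tools
alpine#> dig +short foobar.default.svc.cluster.local
10.10.80.101
10.10.80.102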

Strange behavior of the Istio Gateway port

I have a hard time understanding how exactly the Istio Gateway port is used. I am referring to the port number (8169) in the example below
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 8169
        name: http-test1
        protocol: HTTP
      hosts:
        - '*'
From the Istio documentation:
"The Port on which the proxy should listen for incoming connections."
So indeed, if you apply the above yaml file and check the istio-ingressgateway pod for listening TCP ports, you will find that port 8169 is actually used (see the output below):
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
tcp LISTEN 0 4096 0.0.0.0:8169 0.0.0.0:*
But here comes the tricky part. If, before you apply the Gateway, you change the istio-ingressgateway service as follows:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
...
  - name: http5
    nodePort: 31169
    port: 8169
    protocol: TCP
    targetPort: 8069
...
and then apply the Gateway, the actual port used is not 8169 but 8069. It seems that the Gateway resource first checks for a matching port in the istio-ingressgateway service and uses the service's targetPort instead:
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
<empty result>
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8069
tcp LISTEN 0 4096 0.0.0.0:8069 0.0.0.0:*
Can anybody explain why?
Thank you in advance for any help
You encountered an interesting aspect of Istio - how to configure Istio to expose a service outside of the service mesh using an Istio Gateway.
First of all, please note that the gateway configuration will be applied to the proxy running on a Pod (in your example on a Pod with labels istio: ingressgateway). Istio is responsible for configuring the proxy to listen on these ports; however, it is the user's responsibility to ensure that external traffic to these ports is allowed into the mesh.
Let me show you with an example. What you encountered is expected behaviour, because that is exactly how Istio works.
First, I created a simple Gateway configuration (for the sake of simplicity I omit Virtual Service and Destination Rule configurations) like below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 9091
        name: http-test-1
        protocol: HTTP
      hosts:
        - '*'
Then:
$ kubectl apply -f gw.yaml
gateway.networking.istio.io/gateway created
Let's check if our proxy is listening on port 9091. We can check it directly from the istio-ingressgateway-* pod or we can use the istioctl proxy-config listener command to retrieve information about listener configuration for the Envoy instance in the specified Pod:
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
Exposing this port on the pod doesn't mean that we are able to reach it from the outside world, but it is possible to reach this port internally from another pod:
$ kubectl get pod -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP
istio-ingressgateway-8c48d875-lzsng 1/1 Running 0 43m 10.4.0.4
$ kubectl exec -it test -- curl 10.4.0.4:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
To make it accessible externally we need to expose this port on the istio-ingressgateway Service:
...
  ports:
    - name: http-test-1
      nodePort: 30017
      port: 9091
      protocol: TCP
      targetPort: 9091
...
After this modification, we can reach port 9091 from the outside world:
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Please note that nothing has changed from the Pod's perspective:
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
Now let's change the targetPort: 9091 to targetPort: 9092 in the istio-ingressgateway Service configuration and see what happens:
...
  ports:
    - name: http-test-1
      nodePort: 30017
      port: 9091
      protocol: TCP
      targetPort: 9092 # <--- "9091" to "9092"
...
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
As you can see, it seems that nothing has changed from the Pod's perspective so far, but we also need to re-apply the Gateway configuration:
$ kubectl delete -f gw.yaml && kubectl apply -f gw.yaml
gateway.networking.istio.io "gateway" deleted
gateway.networking.istio.io/gateway created
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9092
tcp LISTEN 0 1024 0.0.0.0:9092 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9092 ALL Route: http.9092
Our proxy is now listening on port 9092 (targetPort), but we can still reach port 9091 from the outside as long as our Gateway specifies this port and it is open on the istio-ingressgateway Service.
$ kubectl describe gw gateway -n istio-system | grep -A 4 "Port"
Port:
  Name:      http-test-1
  Number:    9091
  Protocol:  HTTP
$ kubectl get svc -n istio-system -oyaml | grep -C 2 9091
    - name: http-test-1
      nodePort: 30017
      port: 9091
      protocol: TCP
      targetPort: 9092
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
As far as I know, the gateways are virtual: you can define multiple gateways and expose different ports. However, these ports still need to be open on the istio-ingressgateway.
So when you manually change the port config on the actual ingressgateway Service, it makes sense that only that specific port is open after applying it. You are checking the open port on the ingressgateway and not on the virtual gateway.
Also, I don't think it is encouraged to directly edit the istio-ingressgateway Service. If you want to customize the ingressgateway you can define an IstioOperator and apply it when installing Istio.
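A minimal sketch of such an IstioOperator overlay adding the extra port to the ingressgateway Service (the port values are taken from the example above; depending on your Istio version you may need to list the default ports as well, since the override can replace the whole list). Apply it with istioctl install -f:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: http-test-1
                port: 9091
                protocol: TCP
                targetPort: 9091
                nodePort: 30017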

Access kubernetes external IP from the internet

I am currently setting up a Kubernetes cluster (bare Ubuntu servers). I deployed MetalLB and ingress-nginx to handle the IP and service routing. This seems to work fine: I get a response from nginx when I wget the externalIP of the ingress-nginx-controller service (works on every node). But this only works inside the cluster network. How do I access my services (the ingress-nginx-controller, because it does the routing) from the internet, through a node/master server's IP? I tried to set up routing with iptables, but it doesn't seem to work. What am I doing wrong, and is this best practice?
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth0 -p tcp -d <Servers IP> --dport 80 -j DNAT --to <ExternalIP of nginx>:80
iptables -A FORWARD -p tcp -d <ExternalIP of nginx> --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -F
Here are some more information:
kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.219.111 198.51.100.1 80:31872/TCP,443:31897/TCP 41h
ingress-nginx-controller-admission ClusterIP 10.108.194.136 <none> 443/TCP 41h
Please share some thoughts
Jonas
Bare-metal clusters are a bit tricky to set up because you need to create and manage the point of contact to your services yourself. In cloud environments these are available on demand.
I followed this doc and can assume that your load balancer seems to be working fine, as you are able to curl this IP address. However, you are trying to get a response when calling a domain. For this you need some app running inside your cluster, exposed to a hostname via an Ingress component.
I'll take you through steps to achieve that.
First, create a Deployment to run a webservice; I'm gonna use a simple nginx example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Second, create a Service of type LoadBalancer to be able to access it externally. You can do that by simply running this command:
kubectl expose deployment nginx-deployment --type=LoadBalancer --name=<service_name>
If your software load balancer is set up correctly, this should give an external IP address to the Service exposing the Deployment you created before.
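You can check whether an address was actually assigned; if the EXTERNAL-IP column stays <pending>, MetalLB has no matching address pool:
kubectl get service <service_name>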
Last but not least, create an Ingress resource which will manage external access and name-based virtual hosting. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress_name>
spec:
  rules:
    - host: <your_domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <service_name>
                port:
                  number: 80
Now, you should be able to use your domain name as an external access to your cluster.
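Until your DNS record points at the cluster, you can test the Ingress rule by forcing the Host header (substitute your own domain and the ingress controller's external IP):
curl -H "Host: <your_domain>" http://<EXTERNAL_IP>/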
I ended up installing HAProxy on the machine I want to resolve my domain to. HAProxy listens on ports 80 and 443 and forwards all traffic to the external IP of my ingress controller. You can also do this on multiple machines and use DNS failover for high availability.
My haproxy.cfg
frontend unsecure
    bind 0.0.0.0:80
    default_backend unsecure_pass

backend unsecure_pass
    server unsecure_pass 198.51.100.0:80

frontend secure
    bind 0.0.0.0:443
    default_backend secure_pass

backend secure_pass
    server secure_pass 198.51.100.0:443
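After editing the config you can validate it and restart the service (standard HAProxy and systemd commands):
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy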

Connection Refused between Kubernetes pods in the same cluster

I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service running has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying the cluster DNS about the FQDN of the Service named testpod, and not about the FQDN of the Pod. Judging by the fact that it's being resolved successfully, such a Service already exists in your cluster but most probably is misconfigured. The fact that you're getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved (otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've successfully reached your testpod Service (otherwise, i.e. if it existed but nothing was listening on the port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the testpod Service)
but once the Pod was reached, you tried to connect to an incorrect port, and that's why the connection was refused by the server
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via the ClusterIP Service by specifying only the --port value, e.g. by:
kubectl expose pod testpod --port=8080
In such a case both --port (port of the Service) and --targetPort (port of the Pod) get the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --targetPort=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target http server (running in a Pod) refuses the connection on port 8080 (most probably because it isn't listening on it). You didn't specify what image you're using, whether it's a standard nginx webserver or something based on your custom image, but if it's nginx and wasn't configured differently, it listens on port 80.
For further debugging, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario) run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container listens on.
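If you can't install packages inside the container, you can also check from the outside which targetPort the Service actually points at (standard kubectl commands):
kubectl describe service testpod -n mynamespace | grep -i targetport
kubectl get endpoints testpod -n mynamespace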
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.

Enable remote access to kubernetes pod on local installation

I'm also trying to expose a mysql server instance on a local kubernetes installation (1 master and one node, both on Oracle Linux) but I am not able to access the pod.
The pod configuration is this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: docker.io/mariadb
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456" # env values must be strings, so quote the number
      ports:
        - containerPort: 3306
          name: mysql
And the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
  selector:
    name: mysql
I can see that the pod is running:
# kubectl get pod mysql
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 3d
And the service is connected to an endpoint:
# kubectl describe service mysql
Name: mysql
Namespace: default
Labels: name=mysql
Selector: name=mysql
Type: NodePort
IP: 10.254.200.20
Port: <unset> 3306/TCP
NodePort: <unset> 30306/TCP
Endpoints: 11.0.14.2:3306
Session Affinity: None
No events.
I can see on netstat that kube-proxy is listening on port 30306 for all incoming connections.
tcp6 6 0 :::30306 :::* LISTEN 53039/kube-proxy
But somehow I don't get a response from mysql even on the localhost.
# telnet localhost 30306
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Whereas a normal mysql installation responds with something like the following:
$ telnet [REDACTED] 3306
Trying [REDACTED]...
Connected to [REDACTED].
Escape character is '^]'.
N
[REDACTED]-log�gw&TS(gS�X]G/Q,(#uIJwmysql_native_password^]
Notice the mysql part in the last line.
On a final note there is this kubectl output:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 9d
mysql 10.254.200.20 nodes 3306/TCP 1h
But I don't understand what "nodes" means in the EXTERNAL-IP column.
So what I want is to open access to the mysql service through the master IP (preferably). How do I do that and what am I doing wrong?
I'm still not sure how to make clients connect to a single server that transparently routes all connections to the minions.
-> To do this you need a load balancer, which unfortunately is not a default Kubernetes building block.
You need to set up a reverse proxy that will send the traffic to the minion, like an nginx pod and a service using hostPort: <port>, which binds the port on the host. That means the pod needs to stay on that node; to do that you could, for example, use a DaemonSet, or select the node by name (see the sketch below).
Obviously, this is not very fault tolerant, so you can setup multiple reverse proxies and use DNS round robin resolution to forward traffic to one of the proxy pods.
Somewhere, at some point, you need a fixed IP to talk to your service over the internet, so you need to ensure there is a static pod somewhere to handle that.
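A minimal sketch of such a proxy DaemonSet (the name and labels are illustrative; the nginx instance would still need a config that forwards traffic to your service):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              hostPort: 80 # bind port 80 directly on every node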
The NodePort is exposed on each Node in your cluster via the kube-proxy service. To connect, use the IP of one of the nodes (e.g. Node01):
telnet [IpOfNode] 30306
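You can list the node IPs (the INTERNAL-IP / EXTERNAL-IP columns) with:
kubectl get nodes -o wide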