Strange behavior of the Istio Gateway port - kubernetes

I have a hard time understanding how exactly the Istio Gateway port is used. I am referring to the server port (number: 8169) in the example below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8169
      name: http-test1
      protocol: HTTP
    hosts:
    - '*'
From the Istio documentation:
The Port on which the proxy should listen for incoming connections.
And indeed, if you apply the above YAML file and check the istio-ingressgateway pod for listening TCP ports, you will find that port 8169 is actually used (see the output below):
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
tcp LISTEN 0 4096 0.0.0.0:8169 0.0.0.0:*
But here comes the tricky part. If, before applying the Gateway, you change the istio-ingressgateway Service as follows:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
...
  - name: http5
    nodePort: 31169
    port: 8169
    protocol: TCP
    targetPort: 8069
...
and then apply the Gateway, the port actually used is not 8169 but 8069. It seems that the Gateway resource first checks for a matching port in the istio-ingressgateway Service and uses that Service's targetPort instead:
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
<empty result>
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8069
tcp LISTEN 0 4096 0.0.0.0:8069 0.0.0.0:*
Can anybody explain why?
Thank you in advance for any help

You have encountered an interesting aspect of Istio: how to configure Istio to expose a service outside of the service mesh using an Istio Gateway.
First of all, please note that the Gateway configuration will be applied to the proxy running on a Pod (in your example, on a Pod with the label istio: ingressgateway). Istio is responsible for configuring the proxy to listen on these ports; however, it is the user's responsibility to ensure that external traffic to these ports is allowed into the mesh.
What you encountered is expected behaviour, because that is exactly how Istio works. Let me show you with an example.
First, I created a simple Gateway configuration (for the sake of simplicity I omit Virtual Service and Destination Rule configurations) like below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9091
      name: http-test-1
      protocol: HTTP
    hosts:
    - '*'
Then:
$ kubectl apply -f gw.yaml
gateway.networking.istio.io/gateway created
Let's check if our proxy is listening on port 9091. We can check it directly from the istio-ingressgateway-* pod or we can use the istioctl proxy-config listener command to retrieve information about listener configuration for the Envoy instance in the specified Pod:
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
Exposing this port on the pod doesn't mean that we are able to reach it from the outside world, but it is possible to reach this port internally from another pod:
$ kubectl get pod -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP
istio-ingressgateway-8c48d875-lzsng 1/1 Running 0 43m 10.4.0.4
$ kubectl exec -it test -- curl 10.4.0.4:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
To make it accessible externally, we need to expose this port on the istio-ingressgateway Service:
...
ports:
- name: http-test-1
  nodePort: 30017
  port: 9091
  protocol: TCP
  targetPort: 9091
...
After this modification, we can reach port 9091 from the outside world:
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Please note that nothing has changed from the Pod's perspective:
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
Now let's change the targetPort: 9091 to targetPort: 9092 in the istio-ingressgateway Service configuration and see what happens:
...
ports:
- name: http-test-1
  nodePort: 30017
  port: 9091
  protocol: TCP
  targetPort: 9092 <--- "9091" to "9092"
...
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
As you can see, it seems that nothing has changed from the Pod's perspective so far, but we also need to re-apply the Gateway configuration:
$ kubectl delete -f gw.yaml && kubectl apply -f gw.yaml
gateway.networking.istio.io "gateway" deleted
gateway.networking.istio.io/gateway created
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9092
tcp LISTEN 0 1024 0.0.0.0:9092 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9092 ALL Route: http.9092
Our proxy is now listening on port 9092 (the targetPort), but we can still reach port 9091 from the outside, as long as our Gateway specifies this port and it is open on the istio-ingressgateway Service.
$ kubectl describe gw gateway -n istio-system | grep -A 4 "Port"
Port:
Name: http-test-1
Number: 9091
Protocol: HTTP
$ kubectl get svc -n istio-system -oyaml | grep -C 2 9091
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9092
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

As far as I know, the Gateways are virtual: you can define multiple Gateways and expose different ports. However, these ports still need to be open on the istio-ingressgateway.
So when you manually change the port configuration on the actual ingress gateway, it makes sense that only that specific port is open after applying it. You are checking the open ports on the ingress gateway Pod, not on the virtual Gateway.
Also, I don't think it is encouraged to edit the istio-ingressgateway Service directly. If you want to customize the ingress gateway, you can define an IstioOperator and apply it when installing Istio.
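For example, a minimal IstioOperator sketch that adds a custom port to the ingress gateway Service might look like the following (the port name and numbers are placeholders taken from the question, not a tested configuration):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # placeholder entry; depending on the Istio version you may need
          # to repeat the default ports (15021, 80, 443) here as well
          - name: http-test1
            port: 8169
            targetPort: 8169
            nodePort: 31169
You would then apply it with istioctl install -f <file> instead of editing the Service in place.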

Related

port forwarding on microk8s on mac m1

I'm new to microk8s, and I'm trying things out by deploying a simple apache2 to see things working on my Mac M1:
◼ ~ $ microk8s kubectl run apache --image=ubuntu/apache2:2.4-22.04_beta --port=80
pod/apache created
◼ ~ $ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 5m37s
◼ ~ $ microk8s kubectl port-forward pod/apache 3000:80
Forwarding from 127.0.0.1:3000 -> 80
but:
◼ ~ $ curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000 after 5 ms: Connection refused
I've also tried to use a service:
◼ ~ $ microk8s kubectl expose pod apache --type=NodePort --port=4000 --target-port=80
service/apache exposed
◼ ~ $ curl http://localhost:4000
curl: (7) Failed to connect to localhost port 4000 after 3 ms: Connection refused
I guess I'm doing something wrong?
For some reason I haven't figured out yet, if I port-forward directly within the VM (by opening a shell via multipass), it does work. Then you simply have to point to the VM's IP:
within a VM's shell:
ubuntu@microk8s-vm:~$ sudo microk8s kubectl port-forward service/hellopg 8080:8080 --address="0.0.0.0"
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080
ubuntu@microk8s-vm:~$ ifconfig enp0s1
enp0s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.64.2 netmask 255.255.255.0 broadcast 192.168.64.255
inet6 fde3:1a04:ba31:1209:5054:ff:fea9:9cf4 prefixlen 64 scopeid 0x0<global>
from the host:
curl http://192.168.64.2:8080/hello
{"status": "how you doing?", "env_var":"¡hola mundo!"}
It works. I guess the command run via microk8s is not executed properly within the machine? If anybody can explain this, I'll update the question.
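A workaround that might avoid opening a shell in the VM (an untested sketch, assuming microk8s kubectl on macOS really runs inside the multipass VM) is to bind the port-forward to all addresses and then point at the VM's IP from the host:
◼ ~ $ microk8s kubectl port-forward pod/apache 3000:80 --address 0.0.0.0
# then, from another terminal on the host, use the VM IP shown by ifconfig above
◼ ~ $ curl http://192.168.64.2:3000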
MicroK8s acts the same as Kubernetes, so it's better to create a Service of type NodePort. This would expose your apache.
apiVersion: v1
kind: Service
metadata:
  name: my-apache
spec:
  type: NodePort
  selector:
    app: apache
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
Change the selector as per your requirements. For more detailed information on creating a NodePort Service, refer to this official document.
You can use an Ingress as well, but in your case, for testing only, you can go with NodePort.
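If you later want to try the Ingress route, a minimal sketch (assuming the my-apache Service above and that an ingress controller is enabled, e.g. via microk8s enable ingress) could look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-apache
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-apache
            port:
              number: 80
With that in place you would reach apache on port 80 of the ingress controller instead of a high NodePort.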
I think the easiest way for you to test it would be adding externalIPs to your Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 192.168.56.100 # your cluster IP
Happy coding!

Access kubernetes external IP from the internet

I am currently setting up a Kubernetes cluster (bare Ubuntu servers). I deployed MetalLB and ingress-nginx to handle the IP and service routing. This seems to work fine: I get a response from nginx when I wget the external IP of the ingress-nginx-controller Service (it works on every node). But this only works inside the cluster network.
How do I access my services (the ingress-nginx-controller, because it does the routing) from the internet, through a node/master server's IP? I tried to set up routing with iptables, but it doesn't seem to work. What am I doing wrong, and is this the best practice?
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth0 -p tcp -d <Servers IP> --dport 80 -j DNAT --to <ExternalIP of nginx>:80
iptables -A FORWARD -p tcp -d <ExternalIP of nginx> --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -F
Here is some more information:
kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.219.111 198.51.100.1 80:31872/TCP,443:31897/TCP 41h
ingress-nginx-controller-admission ClusterIP 10.108.194.136 <none> 443/TCP 41h
Please share some thoughts
Jonas
Bare-metal clusters are a bit tricky to set up because you need to create and manage the point of contact to your services yourself. In cloud environments these are available on demand.
I followed this doc and assume that your load balancer is working fine, since you are able to curl its IP address. However, you are trying to get a response when calling a domain. For this you need some app running inside your cluster, exposed to a hostname via an Ingress resource.
I'll take you through the steps to achieve that.
First, create a Deployment to run a web service. I'm going to use a simple nginx example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Second, create a Service of type LoadBalancer to be able to access it externally. You can do that by simply running this command:
kubectl expose deployment nginx-deployment --type=LoadBalancer --name=<service_name>
If your software load balancer is set up correctly, this should assign an external IP address to the Service exposing the Deployment you created before.
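For reference, the equivalent declarative manifest for that command would be roughly this sketch (with <service_name> left as a placeholder and the selector matching the Deployment labels above):
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80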
Last but not least, create an Ingress resource, which will manage external access and name-based virtual hosting. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress_name>
spec:
  rules:
  - host: <your_domain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <service_name>
            port:
              number: 80
Now you should be able to use your domain name for external access to your cluster.
I ended up installing HAProxy on the machine I want to resolve my domain to. HAProxy listens on ports 80 and 443 and forwards all traffic to the external IP of my ingress controller. You can also do this on multiple machines and use DNS failover for high availability.
My haproxy.cfg:
frontend unsecure
    bind 0.0.0.0:80
    default_backend unsecure_pass
backend unsecure_pass
    server unsecure_pass 198.51.100.0:80

frontend secure
    bind 0.0.0.0:443
    default_backend secure_pass
backend secure_pass
    server secure_pass 198.51.100.0:443

Expose a web app on port 80 using Kubernetes and minikube

I'm studying Kubernetes on my laptop (no cloud). I started with minikube by following the docs at kubernetes.io. I'm not sure what I'm missing, since I can only access my web app on the high TCP port 32744 and not on the standard port 80. I want to be able to access my web app from a web browser by visiting http://ipaddress.
Here is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodeapp
        image: localregistry/nodeapp:1.0
        ports:
        - containerPort: 3000
$ minikube service webapp-deployment
|-----------|-------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------------|-------------|---------------------------|
| default | webapp-deployment | 3000 | http://192.168.64.2:32744 |
|-----------|-------------------|-------------|---------------------------|
This is just how Kubernetes works.
3000 is the port in your container and 32744 is the NodePort where your application got exposed. There are multiple reasons for this. One of them is that ports 80 and 443 are standard reserved ports 🔒 for web services, and Kubernetes needs to be able to run many containers and services. Another is that ports below 1024 are restricted to root only on *nix systems 🚫.
If you really would like to serve it on port 80 on your local machine, I would just set up something like Nginx and proxy the traffic to http://192.168.64.2:32744:
# nginx.conf
...
location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://192.168.64.2:32744;
}
...
Or you can do a port-forward from a non-restricted local port, as the other answer here suggested.
✌️
You can use kubectl port-forward for this.
$ kubectl port-forward -n <namespace> pod/mypod 8888:5000
What does it do?
It listens on port 8888 locally, forwarding to port 5000 in the Pod.
NB: You can also use port-forward with Kubernetes Services, using kubectl port-forward svc/<service-name> LOCAL_PORT:SERVICE_PORT.
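For the minikube example above, assuming the Service is the webapp-deployment one with service port 3000, that could look like:
$ kubectl port-forward svc/webapp-deployment 8080:3000
$ curl http://localhost:8080
Binding local port 80 itself would require root, e.g. running the same port-forward with sudo (provided root can read your kubeconfig).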

configmap port forward doesn't work in kubernetes multicontainer pod

Below is the ConfigMap for a pod containing multiple containers.
Port 80 is exposed to the external world and should then redirect to port 5000 of another container running in the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      sendfile on;
      keepalive_timeout 65;
      upstream webapp {
        server 127.0.0.1:5000;
      }
      server {
        listen 80;
        location / {
          proxy_pass http://webapp;
          proxy_redirect off;
        }
      }
    }
$ kubectl apply -f configmap.yaml
The pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-proxy-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-proxy-config
    configMap:
      name: mc3-nginx-conf
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Step 4. Identify port on the node that is forwarded to the Pod:
$ kubectl describe service mc3
Name: mc3
Namespace: default
Labels: app=mc3
Annotations: <none>
Selector: app=mc3
Type: NodePort
IP: 100.68.152.108
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32636/TCP
Endpoints: 100.96.2.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I am unable to curl it:
$ curl 100.96.2.3:80
$ curl http://100.96.2.3:80
$ curl http://100.96.2.3:32636
So I want to know why this redirection doesn't work.
Source: https://www.mirantis.co.jp/blog/multi-container-pods-and-container-communication-in-kubernetes/
It's written on the page that we can access it using the URL
http://myhost:
Now, what is myhost here?
And I understood that the exposed port is 32636, but I am not able to access it from a browser or with curl/wget.
From what I see, you're having trouble connecting to your application over the NodePort.
In the comments you posted "I am executing on google cloud shell", so I assume you are running on GKE.
You also posted in the comments:
XXXXX@cloudshell:~ (pubsub-quickstart-XXXXX)$ curl -v 10.59.242.245:31357
* Rebuilt URL to: 10.59.242.245:31357
* Trying 10.59.242.245...
* TCP_NODELAY set
* connect to 10.59.242.245 port 31357 failed: Connection timed out
* Failed to connect to 10.59.242.245 port 31357: Connection timed out
* Closing connection 0
curl: (7)
So I see you are trying to curl the private IP address of your cluster node from Cloud Shell, and that will not work.
It is impossible to connect to a node over private addresses from Cloud Shell, as these instances are in different networks (separated from each other).
To connect to your application from an external network, you need to use the EXTERNAL-IP of one of your nodes, which can be found by running kubectl get no -owide.
The second thing (very important) is to create a firewall rule to allow ingress traffic to this port, e.g. using the gcloud CLI:
gcloud compute firewall-rules create test-node-port --allow tcp:[NODE_PORT]
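Putting it together, a rough sketch of the flow (the node name and external IP below are placeholders; 31357 is the NodePort from your comment):
$ kubectl get no -owide
NAME            STATUS   ...   INTERNAL-IP   EXTERNAL-IP
gke-node-xxxxx  Ready    ...   10.128.0.2    203.0.113.10
$ gcloud compute firewall-rules create test-node-port --allow tcp:31357
$ curl http://203.0.113.10:31357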
More information on exposing applications on GKE can be found in the GKE documentation here.
Let me know if that helped.

Istio (1.0) intra ReplicaSet routing - support traffic between pods in a Kubernetes Deployment

How does Istio support IP-based routing between pods in the same Service (or ReplicaSet, to be more specific)?
We would like to deploy a Tomcat application with replicas > 1 within an Istio mesh. The app runs Infinispan, which uses JGroups to sort out communication and clustering. JGroups needs to identify its cluster members, and for that purpose there is KUBE_PING (the Kubernetes discovery protocol for JGroups). It consults the Kubernetes API with a lookup comparable to kubectl get pods. The cluster members can be both pods in other services and pods within the same Service/Deployment.
Despite our issue being driven by rather specific needs, the topic is generic: how do we enable pods to communicate with each other within a ReplicaSet?
Example: as a showcase we deploy the demo application https://github.com/jgroups-extras/jgroups-kubernetes. The relevant stuff is:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: ispn-perf-test
    namespace: my-non-istio-namespace
  spec:
    replicas: 3
< -- edited for brevity -- >
Running without Istio, the three pods will find each other and form the cluster. Deploying the same with Istio in my-istio-namespace and adding a basic Service definition:
kind: Service
apiVersion: v1
metadata:
  name: ispn-perf-test-service
  namespace: my-istio-namespace
spec:
  selector:
    run: ispn-perf-test
  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "one"
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "three"
Note that output below is wide - you might need to scroll right to get the IPs
kubectl get pods -n my-istio-namespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE
ispn-perf-test-558666c5c6-g9jb5 2/2 Running 0 1d 10.44.4.63 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lbvqf 2/2 Running 0 1d 10.44.4.64 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lhrpb 2/2 Running 0 1d 10.44.3.22 gke-main-pool-4cpu-15gb-98b104f4-x8ln
kubectl get service ispn-perf-test-service -n my-istio-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ispn-perf-test-service ClusterIP 10.41.13.74 <none> 7800/TCP,7900/TCP,9000/TCP 1d
Guided by https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration, let's peek into the resulting Envoy conf of one of the pods:
istioctl proxy-config listeners ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace
ADDRESS PORT TYPE
10.44.4.63 7900 TCP
10.44.4.63 7800 TCP
10.44.4.63 9000 TCP
10.41.13.74 7900 TCP
10.41.13.74 9000 TCP
10.41.13.74 7800 TCP
< -- edited for brevity -- >
The Istio doc describes the listeners above as
Receives outbound non-HTTP traffic for relevant IP:PORT pair from
listener 0.0.0.0_15001
and this all makes sense. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the service via 10.41.13.74. But... what if the pod sends packets to 10.44.4.64 or 10.44.3.22? Those IPs are not present among the listeners, so as far as I know the two "sibling" pods are unreachable for ispn-perf-test-558666c5c6-g9jb5.
Can Istio support this today, and if so, how?
You are right that HTTP routing only supports local access or remote access by service name or service VIP.
That said, for your particular example above, where the service ports are named "one", "two" and "three", the routing will be plain TCP, as described here. Therefore, your example should work: the pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the other pods at 10.44.4.64 and 10.44.3.22.
If you rename the ports to "http-one", "http-two", and "http-three" then HTTP routing will kick in and the RDS config will restrict the remote calls to ones using recognized service domains.
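For illustration, that rename in the Service from your question would only change the port names, e.g.:
ports:
- protocol: TCP
  port: 7800
  targetPort: 7800
  name: "http-one"
- protocol: TCP
  port: 7900
  targetPort: 7900
  name: "http-two"
- protocol: TCP
  port: 9000
  targetPort: 9000
  name: "http-three"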
To see the difference in the RDS config, look at the output of the following command when the port is named "one", and again when it is changed to "http-one".
istioctl proxy-config routes ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace --name 7800 -o json
With the port named "one" it will return no routes, so TCP routing will apply, but in the "http-one" case, the routes will be restricted.
I don't know if there is a way to add additional remote pod IP addresses to the RDS domains in the HTTP case. I would suggest opening an Istio issue, to see if it's possible.