I have the following network policy for restricting access to a frontend service page:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: allow-frontend-access-from-external-ip
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
My question is: can I enforce HTTPS with my egress rule (the port restriction to 443), and if so, how does this work? Assuming a client connects to the frontend-service, the client chooses a random ephemeral port on its machine for this connection. How does Kubernetes know about that port? Or is there some kind of port mapping in the cluster, so that the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
You might have a wrong understanding of the NetworkPolicy (NP).
This is how you should interpret this section:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
  ports:
  - protocol: TCP
    port: 443
Allow outgoing TCP traffic on port 443 from the selected pods to any destination within the 0.0.0.0/0 CIDR.
The thing you are asking
how does Kubernetes know about that port, or is there a kind of port
mapping in the cluster so the traffic back to the client is on port
443 and gets mapped back to the clients original port when leaving the
cluster?
is handled by the node's NAT (SNAT) rules in the following way:
For traffic that goes from a pod to external addresses, Kubernetes simply uses SNAT: it replaces the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, it rewrites the destination back to the pod's IP:port and sends it on to the original pod. The whole process is transparent to the original pod, which is unaware of the address translation.
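As an illustration, on a typical node this SNAT is implemented with an iptables masquerade rule roughly like the one below (a minimal sketch; the pod CIDR 10.244.0.0/16 is an assumption, not something from the question):
# SNAT (masquerade) pod-sourced traffic that leaves the cluster network
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
The client's ephemeral port is handled by connection tracking on the node, so return traffic is mapped back automatically; your egress rule on 443 only restricts the destination port of outgoing connections.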
Take a look at Kubernetes networking basics for a better understanding.
I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: hedgehog
  labels:
    run: hedgehog
spec:
  ports:
  - port: 3000
    protocol: TCP
    name: restful
  - port: 8982
    protocol: TCP
    name: websocket
  selector:
    run: hedgehog
  externalIPs:
  - 1.2.4.120
In which I have specified an externalIP.
I'm also seeing this IP under EXTERNAL-IP when running kubectl get services.
However, when I do curl http://1.2.4.120:3000 I get a timeout. The app should give me a response, because the jar running inside the deployment's container does respond to localhost:3000 requests when run locally.
If you look at the type of your service, it is probably ClusterIP; try changing the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  clusterIP: 172.30.163.110
  externalIPs:
  - 192.168.132.253
  externalTrafficPolicy: Cluster
  ports:
  - name: highport
    nodePort: 31903
    port: 30102
    protocol: TCP
    targetPort: 30102
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
Something like this, where type: LoadBalancer is set.
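If the service already exists, a quick way to try this is to patch its type (the name hedgehog is taken from the question):
kubectl patch service hedgehog -p '{"spec": {"type": "LoadBalancer"}}'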
First of all, you have to understand that you cannot place any random address in your externalIPs field. Those addresses are not managed by Kubernetes and are the responsibility of the cluster administrator, i.e. you. External IP addresses specified with externalIPs are different from the external IP address assigned to a service of type LoadBalancer by a cloud provider.
I checked the address that you mentioned in the question and it does not look like it belongs to you. That is why I suspect that you placed a random one there.
The same address appears in this article about externalIP. As you can see there, the addresses in that case are the IP addresses of the nodes that Kubernetes runs on.
This is a potential issue in your case.
Another thing is to verify whether your application is listening on localhost or 0.0.0.0. If it's really localhost, then this might be another problem for you. You can change where the server process is listening: you do this by listening on 0.0.0.0, which means "listen on all interfaces".
Lastly, please verify that the selector and ports of your service are correct and that you have at least one endpoint that backs your service.
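For example, a minimal check (the service name and label come from the question):
kubectl get endpoints hedgehog
kubectl get pods -l run=hedgehog -o wide
If the endpoints list is empty, the selector does not match any running pods.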
What do I have working
I have a Kubernetes cluster as follows:
Single control plane (but I plan to extend to 3 control planes for HA)
2 worker nodes
On this cluster I deployed (following this doc from Traefik: https://docs.traefik.io/user-guides/crd-acme/):
A deployment that creates two pods:
traefik itself: in charge of routing, with ports 80 and 8080 exposed
whoami: a simple HTTP server that responds to HTTP requests
Two services:
traefik service:
whoami service:
One Traefik IngressRoute:
What do I want
I have multiple services running in the cluster and I want to expose them to the outside using Ingress.
More precisely, I want to use the new Traefik 2.x CRD ingress methods.
My ultimate goal is to use the new Traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using IngressRoute custom resource definitions.
What's the problem
If I understand correctly, classic ingress controllers allow exposing any ports we want to the outside world (including 80, 8080 and 443).
But with the new Traefik CRD ingress approach on its own, it does not expose anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy just to be able to expose ports 80 and 443...
Also, I've seen in the doc of the new ingress CRD (https://docs.traefik.io/user-guides/crd-acme/) that:
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default
is required, and I understand that now. You need to map the host port to the service port.
But mapping the ports that way feels clunky and counter-intuitive. I don't want to have part of the service description in a YAML file and at the same time have to remember that I need to map ports with kubectl.
I'm pretty sure there is a neat and simple solution to this problem, but I can't figure out how to keep things simple. Do you have any experience in Kubernetes with the new Traefik 2.x CRD config?
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 8000
  - protocol: TCP
    name: admin
    port: 8080
    targetPort: 8080
  - protocol: TCP
    name: websecure
    port: 443
    targetPort: 4443
  selector:
    app: traefik
Have you tried to use targetPort, so that every request coming in on 80 is redirected to 8000? Also, when you use port-forward you need to always target the service instead of the pod.
You can try to use the LoadBalancer service type to expose the Traefik service on ports 80, 443 and 8080. I've tested the YAML from the link you provided on GKE, and it works.
You need to change the ports on the 'traefik' service and set 'LoadBalancer' as the service type:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80          # Port to receive HTTP connections
  - protocol: TCP
    name: admin
    port: 8080        # Administration port
  - protocol: TCP
    name: websecure
    port: 443         # Port to receive HTTPS connections
  selector:
    app: traefik
  type: LoadBalancer  # Define the LoadBalancer type
Kubernetes will create a LoadBalancer for your service, and you can access your application using ports 80 and 443:
$ curl https://35.111.XXX.XX/tls -k
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /tls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
$ curl http://35.111.XXX.XX/notls
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /notls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
Well, after some time I've decided to put an HAProxy in front of the Kubernetes cluster. It seems to be the only solution at the moment.
Considering a very simple service.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: gateway-service
spec:
  type: NodePort
  selector:
    app: gateway-app
  ports:
  - name: gateway-service
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080
We know that the service will route all requests to the pods with the label app=gateway-app at port 8080 (a.k.a. targetPort). There is another port field in the service definition, which is 80 in this case. What is this port used for? When should we use it?
From the documentation, there is also this line:
By default the targetPort will be set to the same value as the port field.
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
In other words, when should we keep targetPort and port the same and when not?
In a nodePort service you can have 3 types of ports defined:
TargetPort:
As you mentioned in your question, this is the port on your pod, and essentially the containerPort you have defined in your replica manifest.
Port (servicePort):
This defines the port that other local resources can refer to. Quoting from the Kubernetes docs:
this Service will be visible [locally] as .spec.clusterIP:spec.ports[*].port
Meaning, this is not accessible publicly; however, you can refer to your service through this port from other resources within the cluster. An example is when you are creating an Ingress for this service. In your Ingress you will be required to present this port in the servicePort field:
...
backend:
  serviceName: test
  servicePort: 80
NodePort:
This is the port on your node which publicly exposes your service. Again quoting from the docs:
this Service will be visible [publicly] as [NodeIP]:spec.ports[*].nodePort
Port is what clients will connect to. TargetPort is what the container is listening on. One use case where they are not equal is when you run the container as a non-root user and cannot normally bind to a port below 1024. In this case you can listen on 8080, but clients will still connect to 80, which might be simpler for them.
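For example, with the gateway-service from the question, the three port fields line up like this (the node IP 10.0.0.5 is an illustrative assumption):
# inside the cluster: service name (or clusterIP) plus port (80)
curl http://gateway-service:80
# from outside the cluster: any node IP plus nodePort (30080)
curl http://10.0.0.5:30080
# both paths end up on the pod's targetPort (8080)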
Service: This directs the traffic to a pod.
TargetPort: This is the actual port on which your application is running in the container.
Port: Sometimes your application inside the container serves different services on different ports. For example, the actual application can run on 8080 and health checks for this application can run on port 8089 of the container.
So if you hit the service without a port mapping, it doesn't know to which port of the container it should redirect the request. The service needs to have a mapping so that it can hit the specific port of the container.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    nodePort: 32111
    port: 8089
    protocol: TCP
    targetPort: 8080
  - name: metrics
    nodePort: 32353
    port: 5555
    protocol: TCP
    targetPort: 5555
  - name: health
    nodePort: 31002
    port: 8443
    protocol: TCP
    targetPort: 8085
If you hit my-service:8089, the traffic is routed to port 8080 of the container (targetPort). Similarly, if you hit my-service:8443, it is redirected to port 8085 of the container (targetPort).
But my-service:8089 is internal to the Kubernetes cluster and can be used when one application wants to communicate with another application. So to hit the service from outside the cluster, someone needs to expose a port on the host machine on which Kubernetes is running,
so that the traffic is redirected to a port of the container. For that you can use nodePort.
From the above example, you can hit the service from outside the cluster (Postman or any REST client) via host_ip:nodePort.
Say your host machine IP is 10.10.20.20; you can hit the http, metrics and health services via 10.10.20.20:32111, 10.10.20.20:32353 and 10.10.20.20:31002.
Let's take an example and try to understand with the help of a diagram.
Consider a cluster having 2 nodes and one service. Each node has 2 pods, and each pod has 2 containers, say an app container and a web container.
NodePort: 3001 (cluster-level exposed port for each node)
Port: 80 (service port)
targetPort: 8080 (app container port; the same port should be exposed by the container)
targetPort: 80 (web container port; the same port should be exposed by the container)
Now the diagram in the reference below should help us understand it better.
reference: https://theithollow.com/2019/02/05/kubernetes-service-publishing/
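Since the diagram itself is not reproduced here, here is a minimal Service sketch matching the numbers above (only the app-container mapping is shown; the names are illustrative, and 30001 is used because the default NodePort range is 30000-32767, while the description above says 3001):
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example-app
  ports:
  - name: app
    nodePort: 30001     # cluster-level exposed port on each node
    port: 80            # service port
    targetPort: 8080    # app container port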
I'm trying to make a bare metal k8s cluster to provide some services, and I need to be able to provide them on TCP port 80 and UDP port 69 (accessible from outside the k8s cluster). I've set the cluster up using kubeadm and it's running Ubuntu 16.04. How do I access the services externally? I've been trying to use load balancers and ingress but am having no luck, since I'm not using an external load balancer (local rather than AWS etc.).
An example of what I'm trying to do can be found here but it's using GCE.
Thanks
Service with NodePort
Create a service with type NodePort; the Service can listen on a TCP/UDP port in the 30000-32767 range on every node. By default, you cannot simply choose to expose a Service on port 80 on your nodes.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 31000
  - protocol: UDP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 32000
  type: NodePort
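With a Service like this, the workloads become reachable on every node's address at the chosen nodePorts. A quick check (the node IP 203.0.113.10 is an illustrative assumption, and the {SERVICE_PORT}/{POD_PORT} placeholders still need real values):
# TCP service via its nodePort
curl http://203.0.113.10:31000
# the UDP service is reachable the same way on nodePort 32000,
# e.g. by pointing a TFTP client at 203.0.113.10:32000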
The container image gcr.io/google_containers/proxy-to-service:v2 is a very small container that will do port-forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.
apiVersion: v1
kind: Pod
metadata:
  name: dns-proxy
spec:
  containers:
  - name: proxy-udp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "udp", "53", "kube-dns.default", "1" ]
    ports:
    - name: udp
      protocol: UDP
      containerPort: 53
      hostPort: 53
  - name: proxy-tcp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "tcp", "53", "kube-dns.default" ]
    ports:
    - name: tcp
      protocol: TCP
      containerPort: 53
      hostPort: 53
Ingress
If there are multiple services sharing the same TCP port with different hosts/paths, deploy the NGINX Ingress Controller, which listens on HTTP 80 and HTTPS 443.
Create an Ingress that forwards the traffic to the specified services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
If I was going to do this on my home network I'd do it like this:
Configure 2 port forwarding rules on my router, to redirect traffic to an nginx box acting as a L4 load balancer.
So if my router IP was 1.2.3.4 and my custom L4 nginx LB was 192.168.1.200
Then I'd tell my router to port forward:
1.2.3.4:80 --> 192.168.1.200:80
1.2.3.4:443 --> 192.168.1.200:443
I'd follow this https://kubernetes.github.io/ingress-nginx/deploy/
and deploy most of what's in the generic cloud ingress controller manifest. That should create an ingress controller pod, an L7 nginx LB deployment and service in the cluster, and expose them on NodePorts, so you'd have NodePort 32080 and 32443 (note they would actually be random, but this is easier to follow). (Since you're working on bare metal, I don't believe it'd be able to auto-spawn and configure the L4 load balancer for you.)
I'd then manually configure the L4 load balancer to balance traffic coming in on port 80 ---> NodePort 32080
and port 443 ---> NodePort 32443
So between that big picture of what to do and the following link you should be good: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
(Btw this will let you continue to configure your ingress with the ingress controller)
Note: I plan to set up a bare metal cluster in my home closet in a few months, so let me know how it goes!
If you have just one node, deploy the ingress controller as a DaemonSet with host port 80. Do not deploy a service for it.
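A minimal sketch of that single-node approach, assuming the ingress-nginx controller (the image tag and labels are illustrative, and the RBAC, args and other settings from the official manifests are omitted):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # illustrative tag
        ports:
        - containerPort: 80
          hostPort: 80        # binds port 80 on the node itself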
If you have multiple nodes: with cloud providers, a load balancer is a construct outside the cluster that's basically an HA proxy to each node running pods of your service on some port(s). You could do this kind of thing manually: for any service you want to expose, set the type to NodePort with some port in the allowed range (somewhere in the 30000s) and spin up another VM with a TCP balancer (such as nginx) pointing to all your nodes on that port. You'll be limited to running as many pods of that service as you have nodes.
I'm running a bare metal Kubernetes cluster and trying to use a Load Balancer to expose my services. I know typically that the Load Balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and an InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
This is something that is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It allows for external IP addresses in a bare-metal cluster using either ARP or BGP. It has worked great for us and allows you to simply request a LoadBalancer service like you would in the cloud.
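For reference, a minimal MetalLB layer-2 configuration sketch (the address range is an assumption; this uses the legacy ConfigMap format, while newer MetalLB releases use IPAddressPool custom resources instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
With something like that in place, any Service of type LoadBalancer gets an address from the pool and is announced on the local network.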