I'm following the instructions here to spin up a single-node master Kubernetes install, and then planning to make a website hosted within it available via an nginx ingress controller exposed directly on the internet (on a physical server, not GCE, AWS or another cloud).
The set-up works as expected: I can hit the load balancer, flow through the ingress to the target echoheaders instance, and get my output. Everything looks great. Good stuff.
The trouble comes when I portscan the server's public internet IP and see all these open ports besides the ingress port (80).
Open TCP Port: 80 http
Open TCP Port: 4194
Open TCP Port: 6443
Open TCP Port: 8081
Open TCP Port: 10250
Open TCP Port: 10251
Open TCP Port: 10252
Open TCP Port: 10255
Open TCP Port: 38654
Open TCP Port: 38700
Open TCP Port: 39055
Open TCP Port: 39056
Open TCP Port: 44667
All of the extra ports correspond to cAdvisor, SkyDNS and the various echoheaders and nginx instances, which for security reasons should not be bound to the public IP address of the server. All of these are being injected into the host's KUBE-PORTALS-HOST iptables chain with bindings to the server's public IP by kube-proxy.
How can I get hyperkube to tell kube-proxy to only bind to the Docker IP (172.x) or private cluster IP (10.x) addresses?
You should be able to set the bind address on kube-proxy (http://kubernetes.io/docs/admin/kube-proxy/):
--bind-address=0.0.0.0: The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
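A minimal sketch of what that might look like, assuming kube-proxy is started via hyperkube and that 10.0.0.1 stands in for your private cluster-facing address (adjust both to your setup):

# Hedged example: bind kube-proxy only to the private address
hyperkube proxy --master=https://10.0.0.1:6443 --bind-address=10.0.0.1

After restarting kube-proxy with that flag, re-run the port scan against the public IP to confirm the extra ports are no longer reachable.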
Related
I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully (curl 10.0.0.20:30080) and I get "Welcome to nginx!...". However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and the HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The HAProxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
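To double-check that the port was added, you can list what http_port_t now allows (this just reads the local SELinux policy):

# 30080 should now appear alongside the default HTTP ports
sudo semanage port -l | grep http_port_t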
I deployed a Kubernetes cluster with 5 nodes: master, worker1, worker2, worker3, worker4.
I created a Deployment with 1 replica; it was scheduled on worker4 and exposes port 7777.
Then I created a Service:
apiVersion: v1
kind: Service
metadata:
  name: service-test
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 7777
      nodePort: 31000
After creating the Service, I send a request to worker4:31000/test and it responds immediately.
But when I request other nodes on 31000, such as master:31000/test or worker1:31000/test, there is no response; sometimes it does respond, but only after a very long time.
When I use lsof to check the port usage, the output differs:
[root@worker4 ~]# lsof -i:31000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
wrapper 5251 root 5u IPv4 33957 0t0 TCP localhost:32000->localhost:31000 (ESTABLISHED)
java 5355 root 13u IPv6 35851 0t0 TCP localhost:31000->localhost:32000 (ESTABLISHED)
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-prox 9679 root 13u IPv6 3746350 0t0 TCP *:31000 (LISTEN)
So how can I reach the NodePort service on the other nodes?
NodePort traffic goes through an extra network hop and uses iptables load balancing at L4, provided by kube-proxy. So some extra latency is expected, particularly if you access a pod from a node where it is not scheduled. Also, kube-proxy needs to be running on every node from which you want to reach a pod via a NodePort service.
I would suggest using a reverse proxy such as nginx as an ingress or L7 load balancer for better performance.
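As a quick sanity check for the kube-proxy point above (a sketch; pod names and namespaces depend on how your cluster was installed, this assumes the usual kube-system placement):

# Confirm a kube-proxy pod is running on every node
kubectl get pods -n kube-system -o wide | grep kube-proxy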
I have the following network policy for restricting access to a frontend service page:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: allow-frontend-access-from-external-ip
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
My question is: can I enforce HTTPS with my egress rule (the port restriction to 443), and if so, how does this work? Assuming a client connects to the frontend-service, the client chooses a random port on their machine for this connection. How does Kubernetes know about that port, or is there a kind of port mapping in the cluster, so that the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
You might have a wrong understanding of the NetworkPolicy (NP).
This is how you should interpret this section:
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
    ports:
      - protocol: TCP
        port: 443
Allow outgoing traffic on TCP port 443 from the selected pods to any destination within the 0.0.0.0/0 CIDR.
The thing you are asking:
"how does Kubernetes know about that port, or is there a kind of port mapping in the cluster so the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?"
is managed by kube-proxy in the following way:
For traffic that goes from a pod to external addresses, Kubernetes simply uses SNAT. What it does is replace the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, it rewrites the pod's IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, which doesn't know about the address translation at all.
Take a look at Kubernetes networking basics for a better understanding.
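Purely as an illustration of what that SNAT looks like on a node, the NAT table usually ends up with a masquerade rule along these lines; the pod CIDR 10.244.0.0/16 is an assumed example, and the real chain names and layout vary by CNI plugin and kube-proxy mode:

# Rewrite the pod source IP:port to the node's IP:port for traffic leaving the pod network
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE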
Considering a very simple service.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: gateway-service
spec:
  type: NodePort
  selector:
    app: gateway-app
  ports:
    - name: gateway-service
      protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080
We know that the Service will route all requests to the pods with the label app=gateway-app on port 8080 (a.k.a. the targetPort). There is another port field in the Service definition, which is 80 in this case. What is this port used for? When should we use it?
From the documentation, there is also this line:
By default the targetPort will be set to the same value as the port field.
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
In other words, when should we keep targetPort and port the same and when not?
In a NodePort service you can have 3 types of ports defined:
TargetPort:
As you mentioned in your question, this is the port your pod listens on, and essentially corresponds to the containerPort you have defined in your replica manifest.
Port (servicePort):
This defines the port that other local resources can refer to. Quoting from the Kubernetes docs:
this Service will be visible [locally] as .spec.clusterIP:spec.ports[*].port
Meaning, this is not accessible publicly; however, other resources within the cluster can refer to your service on this port. An example is when you are creating an ingress for this service: in the ingress you will be required to present this port in the servicePort field:
...
backend:
  serviceName: test
  servicePort: 80
NodePort:
This is the port on your node which publicly exposes your service. Again quoting from the docs:
this Service will be visible [publicly] as [NodeIP]:spec.ports[*].nodePort
Port is what clients connect to. TargetPort is what the container is listening on. One use case where they are not equal is when you run the container as a non-root user and cannot normally bind to a port below 1024. In this case the container can listen on 8080 while clients still connect to 80, which might be simpler for them.
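As a small sketch of that mapping (the deployment name myapp is hypothetical), kubectl expose can create it directly:

# Service listens on 80 and forwards to the container's 8080
kubectl expose deployment myapp --port=80 --target-port=8080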
Service: This directs the traffic to a pod.
TargetPort: This is the actual port on which your application is running inside the container.
Port: Sometimes your application inside the container serves different services on different ports. For example, the actual application can run on 8080 and health checks for this application can run on port 8089 of the container.
So if you hit the service without a port mapping, it doesn't know which port of the container it should redirect the request to. The service needs to have a mapping so that it can hit a specific port of the container.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      nodePort: 32111
      port: 8089
      protocol: TCP
      targetPort: 8080
    - name: metrics
      nodePort: 32353
      port: 5555
      protocol: TCP
      targetPort: 5555
    - name: health
      nodePort: 31002
      port: 8443
      protocol: TCP
      targetPort: 8085
If you hit my-service:8089, the traffic is routed to port 8080 of the container (the targetPort). Similarly, if you hit my-service:8443, it is redirected to port 8085 of the container.
But my-service:8089 is internal to the Kubernetes cluster and is used when one application wants to communicate with another application. To hit the service from outside the cluster, someone needs to expose a port on the host machine on which Kubernetes is running, so that the traffic is redirected to a port of the container. For that you can use nodePort.
From the above example, you can hit the service from outside the cluster (with Postman or any REST client) via host_ip:nodePort.
Say your host machine IP is 10.10.20.20; you can hit the http, metrics and health services at 10.10.20.20:32111, 10.10.20.20:32353 and 10.10.20.20:31002.
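For example, from outside the cluster (assuming plain HTTP is served on each of these ports):

# Hit the http, metrics and health NodePorts on the node IP from the example
curl http://10.10.20.20:32111/
curl http://10.10.20.20:32353/
curl http://10.10.20.20:31002/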
Let's take an example and try to understand it with the help of a diagram.
Consider a cluster with 2 nodes and one service. Each node has 2 pods, and each pod has 2 containers: an app container and a web container.
NodePort: 3001 (cluster-level port exposed on each node)
Port: 80 (service port)
targetPort: 8080 (app container port; the same port should be exposed in the Dockerfile EXPOSE)
targetPort: 80 (web container port; the same port should be exposed in the Dockerfile EXPOSE)
The diagram at the reference below should help us understand it better.
reference: https://theithollow.com/2019/02/05/kubernetes-service-publishing/
I used Kubernetes on Google Cloud Platform to run the ThingsBoard service, following these steps: https://thingsboard.io/docs/user-guide/install/kubernetes/#tbyaml-file.
The problem is that TB cannot receive data sent from the NB-IoT shield (BC95) via the CoAP protocol on port 5683. I checked the Kubernetes configuration YAML of the tb service and found that port 5683 is defined with the TCP protocol.
clusterIP: 10.23.242.112
externalTrafficPolicy: Cluster
ports:
  - name: ui
    nodePort: 31146
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: mqtt
    nodePort: 32758
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: coap
    nodePort: 32343
    port: 5683
    protocol: TCP
    targetPort: 5683
The question is: should the protocol for CoAP be UDP instead?
CoAP, by itself, can be run both over TCP and UDP (indicated by coap+tcp:// or coap:// URIs, respectively). As the BC95 only supports UDP as far as I can tell, you're using the latter.
As shown by an example in a Kubernetes issue, you may want to try setting the protocol family to UDP. There are use cases for both, which may be why there's a "TCP" in your setup (odd, though; the current example in the docs does not have any "protocol: TCP" in it), but with this client you're probably using UDP.
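A hedged sketch of the change (the service name tb is an assumption; use whatever your ThingsBoard install created, and note that some Kubernetes versions will not change a port's protocol in place, in which case you have to recreate the service):

# Open the service for editing and change the coap entry's "protocol: TCP" to "protocol: UDP"
kubectl edit service tb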