Trying to make a bare-metal k8s cluster to provide some services, and I need to be able to provide them on TCP port 80 and UDP port 69 (accessible from outside the k8s cluster). I've set the cluster up using kubeadm and it's running Ubuntu 16.04. How do I access the services externally? I've been trying to use load balancers and ingress but am having no luck, since I'm not using an external load balancer (local rather than AWS etc.).
An example of what I'm trying to do can be found here, but it's using GCE.
Thanks
Service with NodePort
Create a Service with type NodePort. The Service can listen on a TCP/UDP port in the range 30000-32767 on every node; by default, you cannot simply choose to expose a Service on port 80 on your nodes.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 31000
  - protocol: UDP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 32000
  type: NodePort
The image gcr.io/google_containers/proxy-to-service:v2 is a very small container that will do port forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.
apiVersion: v1
kind: Pod
metadata:
  name: dns-proxy
spec:
  containers:
  - name: proxy-udp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "udp", "53", "kube-dns.default", "1" ]
    ports:
    - name: udp
      protocol: UDP
      containerPort: 53
      hostPort: 53
  - name: proxy-tcp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "tcp", "53", "kube-dns.default" ]
    ports:
    - name: tcp
      protocol: TCP
      containerPort: 53
      hostPort: 53
Ingress
If there are multiple services sharing the same TCP port with different hosts/paths, deploy the NGINX Ingress Controller, which listens on HTTP 80 and HTTPS 443.
Create an Ingress that forwards the traffic to the specified services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
If I was going to do this on my home network I'd do it like this:
Configure 2 port forwarding rules on my router to redirect traffic to an nginx box acting as an L4 load balancer.
So if my router IP was 1.2.3.4 and my custom L4 nginx LB was 192.168.1.200
Then I'd tell my router to port forward:
1.2.3.4:80 --> 192.168.1.200:80
1.2.3.4:443 --> 192.168.1.200:443
I'd follow this https://kubernetes.github.io/ingress-nginx/deploy/
and deploy most of what's in the generic cloud ingress controller setup. That should create an ingress controller pod plus an L7 nginx LB deployment and service in the cluster, and expose it on NodePorts, so you'd have NodePort 32080 and 32443 (note: they would actually be random, but this is easier to follow). Since you're working on bare metal, I don't believe it would be able to auto-spawn and configure the L4 load balancer for you.
I'd then manually configure the L4 load balancer to balance traffic coming in on:
port 80 ---> NodePort 32080
port 443 ---> NodePort 32443
So between that big picture of what to do and the following link, you should be good: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
(Btw, this will let you continue to configure your ingress with the ingress controller.)
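For illustration, a NodePort Service for the ingress controller pinned to those example ports might look like the sketch below. The namespace, labels, and fixed nodePort values are assumptions for this sketch; the real manifests from the link above may use different names and random ports.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32080     # pinned for the example; normally auto-assigned
  - name: https
    port: 443
    targetPort: 443
    nodePort: 32443     # pinned for the example; normally auto-assigned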
Note: I plan to setup a bare metal cluster in my home closet in a few months so let me know how it goes!
If you have just one node, deploy the ingress controller as a DaemonSet with host port 80. Do not deploy a Service for it.
If you have multiple nodes: with cloud providers, a load balancer is a construct outside the cluster that is basically an HA proxy to each node running pods of your service on some port(s). You can do this kind of thing manually: for any service you want to expose, set its type to NodePort with some port in the allowed range (somewhere in the 30000s), and spin up another VM with a TCP balancer (such as nginx) pointing at all your nodes on that port. You'll be limited to running as many pods as you have nodes for that service.
Related
I am trying to get a decent solution of exposing my services from a Kubernetes cluster hosted on local LXC containers.
The setup is as follows:
Host: Ubuntu 18.04 running an LXC cluster.
Inside LXC there is a Charmed Distribution of Kubernetes running my apps, and another container running an NGINX reverse proxy.
I've also set up a MetalLB load balancer inside Kubernetes, and all k8s services that need to be exposed to the internet use type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  namespace: blazedesk
  name: blazedesk-sdeweb-server
  labels:
    app: blazedesk
spec:
  ports:
  - port: 80
    targetPort: 80
    name: "http"
  - port: 443
    targetPort: 443
    name: "https"
  selector:
    app: blazedesk
    tier: sdeweb-server
  type: LoadBalancer
What I've done so far is redirect all HTTP and HTTPS traffic coming to the main host to the NGINX reverse proxy:
lxc config device add proxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 proxy_protocol=true
lxc config device add proxy myport443 proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443 proxy_protocol=true
NGINX is then configured to redirect traffic matching DNS names to the k8s services' external IPs:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/blazedesk-sdeweb-server LoadBalancer 10.152.183.215 10.190.26.240 80:31476/TCP,443:31055/TCP 17d
proxy_pass https://10.190.26.240;
As you can imagine, this setup involves lots of manual work, especially if the k8s services are restarted and new IPs are allocated by the MetalLB load balancer.
Is there a simpler way to redirect the traffic from the host directly to a Kubernetes ingress, somehow bypassing the LXC layer?
I actually made it work with an NGINX ingress controller exposed as a LoadBalancer service, redirecting HTTP and HTTPS traffic from the host to the ingress external IP using iptables.
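A minimal sketch of that setup, assuming the usual ingress-nginx namespace and controller labels (the names here are assumptions, not copied from my actual manifests); MetalLB assigns the external IP to this Service:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer                        # MetalLB allocates the external IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443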
Considering a very simple service.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: gateway-service
spec:
  type: NodePort
  selector:
    app: gateway-app
  ports:
  - name: gateway-service
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080
We know that the service will route all requests to the pods with the label app=gateway-app at port 8080 (a.k.a. targetPort). There is another port field in the service definition, which is 80 in this case. What is this port used for? When should we use it?
From the documentation, there is also this line:
By default the targetPort will be set to the same value as the port field.
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
In other words, when should we keep targetPort and port the same and when not?
In a NodePort service you can have three types of ports defined:
targetPort:
As you mentioned in your question, this is the port on your pod, essentially the containerPort you have defined in your replica manifest.
port (servicePort):
This defines the port that other resources inside the cluster can refer to. Quoting from the Kubernetes docs:
this Service will be visible [locally] as .spec.clusterIP:spec.ports[*].port
Meaning, this is not accessible publicly; however, you can refer to your service through other resources (within the cluster) on this port. An example is when you are creating an ingress for this service. In your ingress you will be required to present this port in the servicePort field:
...
  backend:
    serviceName: test
    servicePort: 80
nodePort:
This is the port on your node which publicly exposes your service. Again, quoting from the docs:
this Service will be visible [publicly] as [NodeIP]:spec.ports[*].nodePort
port is what clients connect to; targetPort is what the container is listening on. One use case where they are not equal is when you run the container as a non-root user and cannot normally bind to ports below 1024. In that case you can listen on 8080, but clients will still connect to 80, which might be simpler for them.
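A minimal sketch of that case (the service name and label are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80          # what clients connect to
    targetPort: 8080  # what the non-root container actually listens on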
Service: This directs the traffic to a pod.
TargetPort: This is the actual port on which your application is running in the container.
Port: Sometimes your application inside the container serves different services on different ports. For example, the actual application can run on 8080 and health checks for this application can run on port 8089 of the container.
So if you hit the service without a port mapping, it doesn't know which port of the container it should redirect the request to. The service needs a mapping so that it can hit the specific port of the container.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort        # required for the nodePort fields below to be honoured
  selector:
    app: MyApp
  ports:
  - name: http
    nodePort: 32111
    port: 8089
    protocol: TCP
    targetPort: 8080
  - name: metrics
    nodePort: 32353
    port: 5555
    protocol: TCP
    targetPort: 5555
  - name: health
    nodePort: 31002
    port: 8443
    protocol: TCP
    targetPort: 8085
If you hit my-service:8089, the traffic is routed to port 8080 of the container (targetPort). Similarly, if you hit my-service:8443, it is redirected to port 8085 of the container (targetPort).
But my-service:8089 is internal to the Kubernetes cluster and can be used when one application wants to communicate with another. To hit the service from outside the cluster, someone needs to expose a port on the host machine on which Kubernetes is running so that the traffic is redirected to a port of the container. For that you can use nodePort.
From the above example, you can hit the service from outside the cluster (Postman or any REST client) at host_ip:nodePort.
Say your host machine IP is 10.10.20.20; you can hit the http, metrics, and health services at 10.10.20.20:32111, 10.10.20.20:32353, and 10.10.20.20:31002.
Let's take an example and try to understand with the help of a diagram.
Consider a cluster with 2 nodes and one service. Each node has 2 pods, and each pod has 2 containers, say an app container and a web container.
NodePort: 3001 (cluster-level exposed port on each node)
Port: 80 (service port)
targetPort: 8080 (app container port; the same port should be exposed by the container)
targetPort: 80 (web container port; the same port should be exposed by the container)
The diagram at the reference below should help us understand it better.
reference: https://theithollow.com/2019/02/05/kubernetes-service-publishing/
Is it possible to expose Kubernetes service using port 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - The default port range is 30000-32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns etc.
3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we use Kubernetes on-premise, we cannot use this option.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose service in Kubernetes using port 443/80 on-premise?
I'm looking for a "Kubernetes solution"; Not using external cluster reverse proxy.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress.
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
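As a sketch of the per-app Ingress mentioned above (the hostname and backend service name are made up for illustration):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com      # DNS record pointing at the cluster nodes
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp     # illustrative ClusterIP service
          servicePort: 80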
Use an ingress controller as an entrypoint to the services in a Kubernetes cluster. Run the ingress controller on port 80 or 443.
You need to define ingress rules for each backend service that you want to access from outside. The ingress controller allows clients to access the services based on the paths defined in the ingress rules.
If you need to allow access over HTTPS, you need to obtain TLS certificates, load them into secrets, and reference those secrets in the ingress rules, as in the sketch below.
The most popular one is the NGINX ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.
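A minimal sketch of an Ingress with TLS as described above, assuming the certificate and key have been loaded into a secret (the hostname, secret name, and backend service are illustrative):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: secure-app
spec:
  tls:
  - hosts:
    - app.example.com            # illustrative hostname
    secretName: app-tls          # secret holding tls.crt and tls.key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app       # illustrative backend service
          servicePort: 80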
The idea of a hostNetwork proxy is actually not bad; OpenShift Router uses that, for example. You define two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork; see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
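Per that page, a hostNetwork pod keeps cluster DNS resolution if dnsPolicy is set to ClusterFirstWithHostNet; a minimal sketch (the pod name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-proxy
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # keeps kube-dns resolution despite host networking
  containers:
  - name: proxy
    image: nginx:1.15                  # illustrative proxy image
    ports:
    - containerPort: 80
      hostPort: 80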
You are probably running a kubeadm on-premise Kubernetes setup with an nginx ingress controller on Unix/Linux hosts and can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or use an existing load balancer if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you will be able to set the load balancers to forward your 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.
I use Google Container Engine with Kubernetes.
I have created an https load balancer which terminates ssl and forwards traffic to k8s cluster nodes. The problem is I see no option to whitelist/filter incoming ip addresses. Is there any?
It sounds like you've set up a load balancer outside of Kubernetes. You may want to consider using a Kubernetes Service set to type: LoadBalancer. That type of service will give you an external IP that load balances to all of your Pods and can be easily restricted to whitelist IPs using the loadBalancerSourceRanges setting. Here is the example from the docs at https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerIP: 79.78.77.76
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
If you're using the GCE controller this is not yet possible [1]; only the nginx controller [2] accepts an IP whitelist.
[1] https://github.com/kubernetes/ingress/issues/566
[2] https://github.com/kubernetes/ingress/blob/188c64aaac17ef29400e0f143b9aed7770e32fee/controllers/nginx/configuration.md#whitelist-source-range
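With the nginx controller, the whitelist can also be set per Ingress via an annotation; a sketch (the annotation prefix differs between controller versions, and the CIDR and names are illustrative):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelisted-app
  annotations:
    ingress.kubernetes.io/whitelist-source-range: "130.211.204.0/24"   # allowed client CIDRs
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp     # illustrative backend service
          servicePort: 80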
I'm running a bare-metal Kubernetes cluster and trying to use a load balancer to expose my services. I know that typically the load balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
This is something that is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It allows for externalIP addresses in a bare-metal cluster using either ARP or BGP. It has worked great for us and lets you simply request a LoadBalancer service like you would in the cloud.
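For reference, the MetalLB releases of that era were configured with a ConfigMap; a minimal layer 2 (ARP) sketch looks roughly like this (newer releases use CRDs instead, and the address range is an example to adapt to your own subnet):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example pool on the local LAN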