MetalLB Kubernetes LoadBalancer fails without the port - kubernetes

I'm using MetalLB for a bare-metal Kubernetes cluster.
I've modified the ConfigMap as below:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    #- name: default
    #  protocol: layer2
    #  addresses:
    #  - 192.168.1.240-192.168.1.250
    - addresses:
      - <Public_IP_01>/32
      - <Public_IP_02>/32
      - <Public_IP_03>/32
      name: prod
      protocol: layer2
However, even though the right IP is assigned, the service is not reachable at http://<Public_IP_01>; it is only reachable through http://<Public_IP_01>:31158. It feels like it's working as a NodePort, not a LoadBalancer:
k get svc -A
NAMESPACE   NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
default     nginx-service   LoadBalancer   10.98.4.122   <Public_IP_01>   80:31158/TCP   7m3s
Any ideas how to enforce traffic on port 80?
Thanks.

Related

Configure metallb with kubernetes running on different vps servers

I have a running Kubernetes cluster consisting of 3 nodes and one master, each running on a VPS server. Each node and the master has its own public IP, and a floating IP is also assigned; all of these IPs are different from one another.
I am trying to configure MetalLB as a load balancer for my Kubernetes cluster, but I don't know how to set the MetalLB IP range in the configuration file.
Here are example IPs of my servers:
115.203.150.255
94.217.238.58
46.12.5.65
76.47.79.44
As you can see, each IP is different, so how can I set the IP range in the MetalLB ConfigMap?
Here is an example of a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - PUBLIC_IP-PUBLIC_IP
The MetalLB documentation mentions that you can request certain IPs using the metallb.universe.tf/address-pool annotation. See here:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
The production-public-ips pool must be configured as shown here.
To configure MetalLB, you should create a ConfigMap with your IPs. Since you don't have a contiguous range, you can use a /32 subnet for each IP, as in the example below.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 115.203.150.255/32
      - 94.217.238.58/32
      - 46.12.5.65/32
      - 76.47.79.44/32
It should work for your scenario.
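To sanity-check this, apply the ConfigMap and watch the Service pick up one of the pool addresses (a minimal sketch; metallb-config.yaml is just an assumed file name for the ConfigMap above):
kubectl apply -f metallb-config.yaml
# EXTERNAL-IP should now show one of the pool IPs instead of <pending>
kubectl get svc nginx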
I have the same problem with VPSes in different countries, but the NGINX ingress controller bare-metal considerations don't allow using the node IPs as /32 ranges in "addresses".
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

MetalLB External IP to Internet

I can't access the public IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster on Contabo: 1 master and 2 workers, each with its own public IP.
I set it up with kubeadm + flannel, and later installed MetalLB to use load balancing.
I used this manifest for installing nginx:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
It works, and the pods are running. I see the external IP address after:
kubectl get services
From each node/host I can curl that IP and port and I get nginx's:
<h1>Welcome to nginx!</h1>
So far, so good. BUT:
What I still miss is access to that service (nginx) from my computer.
I can try to access each node (master + 2 workers) by IP:PORT and nothing happens. The final goal is to have a domain that reaches that service, but I can't guess which IP I should use.
What am I missing?
Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server, such as a reverse proxy?
I'm asking this here because all the articles/tutorials on bare metal/VPS (non-AWS, non-GKE, etc.) do this with a cluster on localhost and miss this basic issue.
Thanks.
I have the very same hardware layout:
a 3-node Kubernetes cluster, here with the 3 IPs:
123.223.149.27
22.36.211.68
192.77.11.164
running on (different) VPS providers (joined into one running cluster, of course).
Target: "expose" nginx via MetalLB, so I can access my web app from outside the cluster via browser, through the IP of one of my VPSes.
Problem: I do not have a "range of IPs" I could declare for MetalLB.
Steps done:
create one .yaml file for the LoadBalancer, kindservicetypeloadbalancer.yaml
create one .yaml file for the ConfigMap, containing the IPs of the 3 nodes, kindconfigmap.yaml
### start of the kindservicetypeloadbalancer.yaml
### for ensuring a unique name: loadbalancer name nginxloady
apiVersion: v1
kind: Service
metadata:
  name: nginxloady
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Below, the second .yaml file to be added to the cluster:
# start of the kindconfigmap.yaml
## info: the "production-public-ips" can be found
## within the annotations section of the kind: Service type: LoadBalancer / the kindservicetypeloadbalancer.yaml
## as well... ...namespace: metallb-system & protocol: layer2
## note: as you can see, I added a /32 after each of my node IPs
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
      - 192.77.11.164/32
Add the LoadBalancer:
kubectl apply -f kindservicetypeloadbalancer.yaml
Add the ConfigMap:
kubectl apply -f kindconfigmap.yaml
Check the status of the pods in the namespace ("-n") metallb-system:
kubectl describe pods -n metallb-system
PS:
actually it is all there:
https://metallb.universe.tf/installation/
and here:
https://metallb.universe.tf/usage/#requesting-specific-ips
What you are missing is a routing policy.
Your external IP addresses must belong to the same network as your nodes; alternatively, you can add a route to your external address at your default gateway and use a static NAT for each address.
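As a minimal sketch of the static-NAT part on a Linux gateway (every address here is an assumption: 203.0.113.10 stands for the public address, 192.168.1.240 for the LoadBalancer IP MetalLB assigned on the node network):
# forward traffic arriving for the public address to the MetalLB-assigned IP
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.1.240
# rewrite replies so they leave with the public address as their source
iptables -t nat -A POSTROUTING -s 192.168.1.240 -j SNAT --to-source 203.0.113.10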

Kubernetes loadbalancer with dedicated servers

I have a problem with setting up a Kubernetes load balancer/ingress (under port 80, for example).
I don't use it with any cloud, just VPS servers with only one IP per server.
I was trying to install Traefik, but I don't get an external IP; it's stuck on pending.
I have read that I need something simulating a load balancer, so I installed MetalLB, but it is more suited to a local network than to VPS servers, and it didn't work for me, or I can't configure it.
My config-map for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: default
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - node1_ip
      - node2_ip
      - node3_ip
What should I do on that cluster to be able to expose websites under a normal port like 80, or can I use a reverse proxy like Traefik?
You should not put node IP addresses in the MetalLB config file. You need to modify it to match the IP scheme of the network you are connected to, with a subnet; LoadBalancer IP addresses will be distributed from this range.
Something like below:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: metallb-system
      protocol: layer2
      addresses:
      - 192.168.1.240/28
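If a Service needs one specific address out of that pool, you can request it explicitly via spec.loadBalancerIP; a minimal sketch (my-app and 192.168.1.241 are assumptions, the address only has to fall inside the pool above):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.241  # must lie inside the MetalLB pool
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-app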

Can I use ingress-nginx to simply route traffic?

I really like the Kubernetes Ingress schematics. I currently run ingress-nginx controllers to route traffic into my Kubernetes pods.
I would like to use this to also route traffic to 'normal' machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName service, in which you define an FQDN for an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx rule, as sketched below.
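A minimal sketch of such a rule (the host name is an assumption, and it presumes the external server answers HTTP on port 80):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: prod
spec:
  rules:
  - host: external.example.com  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service  # the ExternalName service above
            port:
              number: 80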
You can create a static Service and corresponding Endpoints for external services which are not in k8s, and then use the k8s Service in an Ingress to route traffic.
Also see the ingress docs to enable custom upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs:
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
- addresses:
  - ip: x.x.x.x
  - ip: x.x.x.x
  - ip: x.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
I don't think it's possible, since ingress-nginx gets pod info by watching Namespace, Service, Endpoints, and Ingress resources, and then redirects traffic to pods. Without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs that need load balancing. ingress-nginx also has no health-check method of its own; it's up to the Kubernetes built-in mechanism to check the health of the running pods.

Kubernetes node port can't expose successfully

I installed a Kubernetes cluster on my 3 VirtualBox VMs. All 3 VMs run Ubuntu 14.04 with ufw disabled. The Kubernetes version is 1.6. Here are my config files for creating the pod and service.
Pod pod.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      imagePullSecrets:
      - name: regsecret
      containers:
      - name: frontend
        image: hub.allinmoney.com/kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
Service service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 31000
    nodePort: 31000
  selector:
    name: frontend
I create the service with type NodePort. When I run the command kubectl create -f service.yaml, it outputs the message below, and I can't find the exposed port 31000 on any kube node:
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31000) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
Could anyone tell me how to solve this or give me any tips?
As the message says, you need to set up firewall rules for your nodes to accept traffic on the node ports (default range: 30000-32767).
Firewall rule example
Name: [firewall-rule-name]
Targets: [node-target-name, node-target2-name]
Source filters: IP ranges: 0.0.0.0/0
Protocols / ports: tcp:80,443,30000-32767
Action: Allow
Priority: 1000
Network: default
Your targetPort is also incorrect: it needs to point to the corresponding port of the Pod (port 80).
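A corrected service.yaml, applying that fix while keeping the same NodePort:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80  # must match the containerPort in the Pod template
    nodePort: 31000
  selector:
    name: frontend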