apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
  labels:
    app: metallb
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.250
Hello, I am using MetalLB in minikube (VirtualBox). When configuring MetalLB's external IP address pool, you are supposed to set it according to the minikube IP range. But why does it work fine even out of that range?
This behavior is expected and is due to the Layer 2 configuration of your MetalLB:
Layer 2 mode is the simplest to configure: in many cases, you don’t
need any protocol-specific configuration, only IP addresses.
Layer 2 mode does not require the IPs to be bound to the network
interfaces of your worker nodes. It works by responding to ARP
requests on your local network directly, to give the machine’s MAC
address to clients.
In your ConfigMap we see:
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.99.100-192.168.99.250
It gives MetalLB control over the IPs from 192.168.99.100 to 192.168.99.250 and configures Layer 2 mode. Notice that your minikube IP, which is 192.168.99.102, is in the range defined above, and thus you can access it via your browser.
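To see this in practice, you can compare the minikube IP with the pool and watch MetalLB hand out an address from it (a minimal sketch; the hello deployment is only an illustrative example, not something from your cluster):
minikube ip                      # e.g. 192.168.99.102, same subnet as the pool
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer
kubectl get svc hello            # EXTERNAL-IP should be an address between .100 and .250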
This mechanic is well explained in the MetalLB Layer 2 Configuration section of this guide:
The MetalLB configuration contains two pieces of information: a protocol and a range of IP
addresses. In this configuration MetalLB is instructed to hand out
addresses from 192.168.99.95-192.168.99.105, our predefined range chosen with
respect to the node IP. In our case, to get the IP of our minikube we use the
minikube ip command and set the range accordingly in the config file.
I recommend going through the whole guide to get a better understanding of the whole minikube + MetalLB concept.
Related
I am running
kubectl get nodes -o yaml | grep ExternalIP -C 1
But I am not finding any ExternalIP. There are various comments showing up about problems with non-cloud setups.
I am following this doc https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
with microk8s on a desktop.
If you set up a k8s cluster in the cloud, Kubernetes will detect the ExternalIP for you; it will be a load balancer IP address. But if you set it up on premises or on your desktop, you can provide an external IP address by deploying your own load balancer, such as MetalLB.
You can get it here
In short:
From my answer Kubernetes Ingress nginx on Minikube fails.
By default, solutions like minikube do not provide you with a
LoadBalancer. Cloud solutions like EKS, Google Cloud, and Azure do it for
you automatically by spinning up a separate LB in the background. That's why
you see the Pending status.
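For example, on a local or bare-metal cluster without any load-balancer implementation, a LoadBalancer Service typically sits like this (illustrative output; the names and addresses are made up):
kubectl get svc my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.152.183.7   <pending>     80:30711/TCP   1m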
In your case the right decision is most probably to look into the MicroK8s add-ons. There is an add-on for MetalLB:
Thanks to #Matt for his MetalLB external load balancer on docker-desktop community edition on Windows 10 single-node Kubernetes Infrastructure answer and the researched info.
MetalLB Loadbalancer is a network LB implementation that tries to
“just work” on bare metal clusters.
When you enable this add on you will be asked for an IP address pool
that MetalLB will hand out IPs from:
microk8s enable metallb
For load balancing in a MicroK8s cluster, MetalLB can make use of
Ingress to properly balance across the cluster (make sure you have
also enabled ingress in MicroK8s first, with microk8s enable ingress).
To do this, it requires a service. A suitable ingress service is
defined here:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
You can save this file as ingress-service.yaml and then apply it with:
microk8s kubectl apply -f ingress-service.yaml
Now there is a load-balancer which listens on an arbitrary IP and
directs traffic towards one of the listening ingress controllers.
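You can then confirm that MetalLB actually handed an address to that service (a quick check; the pool is whatever you entered during microk8s enable metallb):
microk8s kubectl get svc -n ingress ingress
# EXTERNAL-IP should now show an address from the MetalLB pool instead of <pending>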
Can someone please explain how pod-to-pod communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired pod.
But I have been told that I must use a ClusterIP service to bind all the relevant pods together.
So what is the real flow? Or have I missed something? Below are a couple of questions to make it clearer.
Questions:
How do pods on the same node talk to each other? What is the flow?
How do pods on different nodes in a cluster talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services. So first we need to understand
why we need a Service: a Service resolves the DNS name and gives us the exact IP that we need to connect to a specific pod. Now, since you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster, which is why we use it when we only want pod-to-pod communication.
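As a minimal sketch, a ClusterIP Service fronting pods labelled app: my-test could look like this (the names here are illustrative, not taken from your cluster):
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  type: ClusterIP          # the default; reachable only from inside the cluster
  selector:
    app: my-test           # traffic is forwarded to pods carrying this label
  ports:
  - port: 80               # port exposed on the cluster-internal IP
    targetPort: 80         # port the container listens on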
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Every Service is backed by iptables rules, and kube-proxy manages these rules for every Service. So yes, kube-proxy is the most vital point of the network setup in our k8s cluster.
How the Kubernetes network model works:
all Pods can communicate with all other Pods without using network address translation (NAT).
all Nodes can communicate with all Pods without NAT.
the IP that a Pod sees itself as is the same IP that others see it as.
With those points in place, Kubernetes networking covers:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
It handles the transmission of packets between pods, and also with the outside world. It acts like a network proxy and load balancer for pods running on the node by implementing load balancing with NAT in iptables.
The kube-proxy process stands in between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes Service object, the kube-proxy instance is responsible for translating that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object to all of the pod IPs mapped by the Service.
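If you want to see those rules yourself, you can inspect the NAT table on a worker node (a rough sketch; the exact chain names vary with the kube-proxy mode and version):
# list the Service chains kube-proxy programs into the nat table
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service
# each matching KUBE-SVC-... chain fans out to KUBE-SEP-... chains,
# one per backend pod IP selected by the Service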
I hope that clears up your idea of kube-proxy.
Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now, if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve that name to the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can simply use this DNS name.
The template is my_pod_name.my_Service_Name.my_Namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; the shorter my_pod_name.my_Service_Name.my_Namespace.svc form works fine.
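For example, from any pod in the cluster you can check that this name reaches the my-sts-0 pod (a quick check using the manifests above; busybox is just a convenient throwaway image):
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://my-sts-0.my-service.default.svc:80
# prints the nginx welcome page served by the my-sts-0 pod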
ref
It seems to me that we cannot use IPv6 in the Kubernetes service loadBalancerSourceRanges. I reduced the repro to a very simple configuration like the one below:
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  type: "LoadBalancer"
  loadBalancerSourceRanges:
  - "2600:1700:aaaa:aaaa::aa/32"
  ports:
  - port: 5678 # Default port for image
Deploying it on GKE, I got the following failure when I run "kubectl describe service apple-service":
Warning KubeProxyIncorrectIPVersion 13m (x11 over 62m) kube-proxy, gke-xxxx
2600:1700:aaaa:aaaa::aa/32 in loadBalancerSourceRanges has incorrect IP version
Normal EnsuringLoadBalancer 51s (x18 over 62m) service-controller
Ensuring load balancer
Warning SyncLoadBalancerFailed 46s (x17 over 62m) service-controller
Error syncing load balancer: failed to ensure load balancer: googleapi: Error 400: Invalid
value for field 'resource.sourceRanges[1]': '2600:1700::/32'. Must be a valid IPV4 CIDR address range., invalid
Just want to confirm my conclusion (i.e. that this is not supported in k8s), or, if my conclusion is not correct, what is the fix. Maybe there is a way for the whole cluster to be on IPv6 so that this will work?
Thank you very much!
You are seeing this error because IPv6 cannot be used alongside IPv4 in k8s (you could run k8s in IPv6-only mode, but this would not work on GCP since GCP does not allow using IPv6 addresses for internal communication).
GCP VPC docs:
VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network; VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. However, it is possible to create an IPv6 address for a global load balancer.
K8s 1.16+ provides a dual-stack feature that is in an early development (alpha) stage; it allows for IPv6 and can be enabled with feature gates. But since you are using GKE, the control plane is managed by GCP, so you can't enable it (and since it is an alpha feature, you probably shouldn't want to).
You can find a bit more about this dual stack feature here:
dual-stack
and here:
validate-dual-stack
Here is the latest pull request I have found on github relating this feature: https://github.com/kubernetes/kubernetes/pull/91824
I think we can expect the beta version to appear soon in one of the upcoming k8s releases, but since GKE is about two versions behind the latest release, I infer that it will take some time before we can use IPv6 with GKE.
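For reference, on a cluster where dual-stack is actually available, the Service side of it looks roughly like this (a sketch based on the dual-stack docs and later Kubernetes releases; not something GKE supported at the time of writing):
apiVersion: v1
kind: Service
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack   # request both address families where supported
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 5678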
I created a regional static IP in the same region of the cluster and I'm trying to use it with a LoadBalancer:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    service: ambassador
  loadBalancerIP: "x.x.x.x"
However, I don't know why I am getting this error:
Error creating load balancer (will retry): failed to ensure load balancer for service default/ambassador: requested ip "x.x.x.x" is neither static nor assigned to the LB
Edit: Problem solved, but...
When I created the static IP address, I used:
gcloud compute addresses create regional-ip --region europe-west1
I used this address with the Service.
It didn't work, as I said.
However, when I created an external static regional IP using the web console, the IP worked fine with my Service and it was attached without problems.
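If you hit the same error, it is worth double-checking how the reserved address looks before pointing the Service at it (a quick check; the address name and region are the ones used above):
gcloud compute addresses describe regional-ip \
  --region europe-west1 \
  --format="value(address, status, addressType)"
# status should be RESERVED and addressType EXTERNAL for use with a LoadBalancer Service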
My bet is that the source IP service is not exposed then. As the official docs say:
As of Kubernetes 1.5, packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for loadbalanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
Try this command to expose the source IP service to the loadbalancer:
kubectl expose deployment <source-ip-app> --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
On this page, you will find more guidance and a number of diagnostic commands for a sanity check.
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
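Following that tutorial, what actually makes the client IP visible to the backend is switching the Service to externalTrafficPolicy: Local (a sketch matching the tutorial's loadbalancer Service name):
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# after this, nodes without a local endpoint stop proxying traffic,
# so the pod sees the real client source IP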
The IP whitelisting/blacklisting example explained here https://kubernetes.io/docs/tutorials/services/source-ip/ uses the source.ip attribute. However, in Kubernetes (a cluster running on docker-for-desktop) source.ip returns the IP of kube-proxy. A suggested workaround is to use request.headers["X-Real-IP"], but it doesn't seem to work and still returns the kube-proxy IP in docker-for-desktop on Mac.
https://github.com/istio/istio/issues/7328 mentions this issue and states:
With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.
With a packet forwarder, such that requests from the client sent to the loadbalancer VIP end up at the node with the source IP of the client, not an intermediate proxy.
Loadbalancers in the first category must use an agreed upon protocol between the loadbalancer and backend to communicate the true client IP such as the HTTP X-FORWARDED-FOR header, or the proxy protocol.
Can someone please help how can we define a protocol to get the client IP from the loadbalancer?
Maybe you are confusing kube-proxy and istio: by default Kubernetes uses kube-proxy, but you can install istio, which injects a new proxy per pod to control the traffic in both directions to the services inside the pod.
With that said, you can install istio on your cluster, enable it only for the services you need, and apply a blacklist using the istio mechanisms:
https://istio.io/docs/tasks/policy-enforcement/denial-and-list/
To make a blacklist using the source IP, we have to let istio manage how the source IP address is fetched, using some configuration like this, taken from the docs:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.57.0.0/16"] # overrides provide a static list
    blacklist: false
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
  - handler: whitelistip
    instances: [ sourceip ]
---
You can use the providerUrl param to maintain an external list.
Also check using externalTrafficPolicy: Local on the ingress-gateway service of istio.
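On a default istio install, that is a one-line patch on the gateway Service (the service and namespace names assume the standard istio-system deployment):
kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'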
As per the comments, my last advice is to use a different ingress controller to avoid relying on kube-proxy; my recommendation is to use the nginx controller:
https://github.com/kubernetes/ingress-nginx
You can configure this ingress controller as a regular nginx acting as a proxy.
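For example, the ingress-nginx controller reads a ConfigMap where you can turn on forwarded headers or the PROXY protocol, depending on what your load balancer sends upstream (a sketch; the ConfigMap name and namespace below match the standard ingress-nginx deployment manifests and may differ in your install):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"   # trust X-Forwarded-For set by an upstream proxy
  # use-proxy-protocol: "true"    # enable instead if your LB speaks the PROXY protocol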