I am trying to set up access to an external MySQL database that runs as a three-node MariaDB PXC cluster.
Let's say my external database nodes have these IP addresses:
172.16.10.100
172.16.10.101
172.16.10.102
If one of those nodes goes down, I want Kubernetes to automatically route traffic only to the two nodes that are still available.
If I create a simple Service and Endpoints in Kubernetes (shown below), does it automatically fail over?
#
# Service
#
kind: Service
apiVersion: v1
metadata:
  name: mariadb-service
spec:
  clusterIP: None
  sessionAffinity: None
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
#
# Endpoints
#
kind: Endpoints
apiVersion: v1
metadata:
  name: mariadb-service
subsets:
- addresses:
  - ip: 172.16.10.100
  - ip: 172.16.10.101
  - ip: 172.16.10.102
  ports:
  - port: 3306
    protocol: TCP
Please note this:
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
I found an option for having a LoadBalancer with an on-premises Kubernetes cluster - that is MetalLB.
MetalLB aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.
See the requirements and choose whichever installation option you prefer.
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter. MetalLB also supports requesting a specific address pool, if you want a certain kind of address but don’t care which one exactly. To request assignment from a specific pool, add the metallb.universe.tf/address-pool annotation to your service, with the name of the address pool as the annotation value. For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
And the address pool referenced by that annotation is defined in MetalLB's configuration, something like this:
address-pools:
- name: production-public-ips
  protocol: layer2
  addresses:
  - 172.16.10.100-172.16.10.102
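For MetalLB versions that are configured through a ConfigMap (newer releases use CRDs instead), that pool definition would typically live in a ConfigMap named config in the metallb-system namespace; a sketch based on the pool above:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 172.16.10.100-172.16.10.102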
Find full example usage here.
However, there is no health check implemented, and that is the main thing you need.
There is a good example issue on GitHub and this thread explaining why health checks are not implemented for Kubernetes CRDs.
If we jump into Kubernetes concepts, this use case is not doable out of the box, so you would have to look for some custom endpoint controller:
readinessProbe: Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
Please find more information in the official documentation.
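To illustrate how that readiness mechanism looks for in-cluster workloads (it does not apply to your external MariaDB nodes, which have no Pods behind the Service), here is a minimal sketch of a Pod with a TCP readiness probe on port 3306; the image tag and password value are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
spec:
  containers:
  - name: mariadb
    image: mariadb:10.6
    env:
    - name: MARIADB_ROOT_PASSWORD
      value: "changeme"   # placeholder, set your own secret
    ports:
    - containerPort: 3306
    readinessProbe:
      # While this check fails, the endpoints controller removes the Pod
      # from the endpoints of every Service that selects it.
      tcpSocket:
        port: 3306
      initialDelaySeconds: 10
      periodSeconds: 5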
Related
I am using HAProxy as the ingress controller in my GKE clusters, and I expose the HAProxy service as an internal LoadBalancer service.
Recently I experienced an issue where the HAProxy service changed its EXTERNAL-IP and traffic stopped routing to HAProxy. This issue occurred multiple times on different days (it has stopped now). I had to manually add the new external IP to the frontend of that load balancer to allow traffic to HAProxy.
There were two pods running for HAProxy, both had been running for days, and there was nothing in their logs. I assume it was something related to the Service or the GCP LB and not HAProxy itself.
I am afraid that I don't have any logs related to that.
I still don't know what caused the service IP to change. There were no recent changes, the cluster and all services had been running properly for many days, and then this suddenly occurred.
Has anyone faced a similar issue before? What can I do to avoid such an issue in the future?
What could have caused the IP to change?
This is how my service is configured:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
    cloud.google.com/network-tier: "Premium"
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024
Found some logs:
Warning SyncLoadBalancerFailed 30m (x3570 over 13d) service-controller Error syncing load balancer: failed to ensure load balancer: googleapi: Error 409: IP_IN_USE_BY_ANOTHER_RESOURCE - IP '10.17.129.17' is already being used by another resource.
Normal EnsuringLoadBalancer 3m33s (x3576 over 13d) service-controller Ensuring load balancer
The short answer is: the external IP of the service is ephemeral.
Because the HAProxy service was created without a reserved address, it got an ephemeral IP, which can change when the service or its load balancer is recreated.
To avoid this issue, I would recommend using a static IP that you can reference in the loadBalancerIP field.
This can be done with the following steps:
Reserve a static IP. (link)
Use this IP to create the service. (link)
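Since the service in question is an internal load balancer, the reserved address would be a regional internal one. A rough sketch, where the address name, region, subnet, and IP are placeholders:
# Reserve a static internal IP in the subnet used by the cluster
gcloud compute addresses create haproxy-ingress-ip \
    --region=us-central1 \
    --subnet=my-subnet \
    --addresses=10.17.129.20

# Verify the reservation
gcloud compute addresses describe haproxy-ingress-ip --region=us-central1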
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    app: hello
    tier: web
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Unfortunately, without logs it's hard to say anything for sure. You should check the audit logs that GKE ships to Cloud Logging, as those might give you some idea of what happened. One possibility is that GCP "oops"'d the GLB and GKE recreated it, thus giving it a new IP; I've never heard of that happening with LBs, though (it happens fairly often with nodes, but not LBs). A more common case would be that you ran some kubectl command that inadvertently removed the Service object, and it was then recreated by some management layer you have set up (Argo, Flux, Helm Operator, whatever); delete+recreate again means a new LB with a new IP. The latter case should be visible in the audit logs, so check those out for sure.
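If you want to dig through those audit logs from the command line, a rough sketch of a query (the filter fields are indicative only; adjust the resource name and time window to your cluster):
# List recent audit-log entries that touched the haproxy-ingress Service
gcloud logging read \
  'resource.type="k8s_cluster" AND protoPayload.resourceName:"services/haproxy-ingress"' \
  --limit=20 --freshness=30d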
Suppose I have a 3-node Kafka cluster set up. How do I expose it outside the cloud using a LoadBalancer service? I have read the reference material but have a few doubts.
Say, for example, below is a service for one broker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 9092
    name: outside
    targetPort: 9092
  selector:
    app: kafka
    kafka-pod-id: "0"
1) What are port and targetPort?
2) Do I set up a LoadBalancer service for each of the brokers?
3) Do these multiple brokers get mapped to a single public IP address of the cloud LB?
4) How does a service outside k8s/the cloud access an individual broker? By using public-ip:port, or by using kafka-<pod-id>.kafka.my.company.com:port? Also, which port is used here - port or targetPort?
5) How do I specify this configuration in the Kafka broker's advertised.listeners property? The port can be different for services inside the k8s cluster and outside it.
Please help.
Based on the information you provided, I will try to give you some answers and, where possible, some advice.
1) port is the port number which makes a service visible to other services running within the same K8s cluster. In other words, if a service wants to invoke another service running within the same Kubernetes cluster, it will be able to do so using the port specified under port in the service spec file.
targetPort is the port on the Pod where the application is running. Your application needs to be listening for network requests on this port for the service to work.
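As a small illustration of the difference (all names here are made up):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 80          # other workloads in the cluster call example-service:80
    targetPort: 8080  # traffic is forwarded to port 8080 on the selected Pods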
2/3) Each broker should be exposed with its own LoadBalancer service, and the brokers should also be behind a headless service for internal communication. There should be one additional LoadBalancer with an external IP for external connections.
Example of Service
apiVersion: v1
kind: Service
metadata:
name: kafka-0
annotations: dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
ports:
- port: 9092
name: kafka-port
protocol: TCP
selector:
pod-name: kafka-0
type: LoadBalancer
4) You have to use kafka-<pod-id>.kafka.my.company.com:port (the service port).
5) It should be set to the external address so that clients can connect to it. This article might help with understanding.
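As a rough sketch of what that could look like in the broker's server.properties (the listener names, ports, and the headless-service DNS name are assumptions, not taken from your setup):
# Internal listener for traffic inside the cluster, external listener
# advertised with the per-broker LoadBalancer DNS name from the Service above.
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,EXTERNAL://kafka-0.kafka.my.company.com:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL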
A similar case was discussed on GitHub; it might also help you - https://github.com/kow3ns/kubernetes-kafka/issues/3
In addition, you could also think about using an Ingress - https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/
I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in the 10.x.x.x range (private IPs). I can expose my services in minikube and visit them in my browser, but I want to use the private IPs instead of exposing them.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the control plane running on that same node.
It provides a way to learn how Kubernetes works with minimal hardware, and it's ideal for testing purposes and for running on a laptop. Minikube still ships with the mature network stack from Kubernetes, which means ports are exposed through services and services communicate with pods.
To understand what communicates with what, let me explain what ClusterIP does - it exposes the service on an internal IP in the cluster. This type makes the service reachable only from within the cluster.
You can get the cluster IP with the command:
kubectl get services test-service
So, after you create a new service, you'd like to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and does simple TCP and UDP streaming to a backend, or to a set of backends in more advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables that specify the ports opened by the service proxy. There is an optional add-on that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.
The example shows how we can use selectors to define a connection to port 50000 on a ClusterIP service - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, and if so, find the service port from the ClusterIP configuration.
kubectl get svc | grep test-service
Let's assume the service test-service works on port 5555, so to do port forwarding run the command:
kubectl port-forward service/test-service 5555:5555
After that, your service will be available at localhost:5555.
3/ If you are familiar with the concept of pod networking, you can declare container ports in the pod's manifest file. A user can connect to the pod network by defining a manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When the container starts with a manifest like the one above, containerPort: 8080 only declares the TCP port the container listens on; to actually publish it on the node you would add hostPort next to it, although putting a Service in front of the Deployment is usually preferable.
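If you really do want the node itself to listen on that port, a minimal sketch of the relevant ports section with hostPort added (generally a Service is preferred over hostPort):
ports:
- containerPort: 8080
  hostPort: 8080  # also binds port 8080 on the node the Pod is scheduled to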
Please keep in mind that ClusterIP is used by many services that are required for the cluster to work properly. I think it is not good practice to treat ClusterIPs as regular network services - in the worst case, it can break the cluster by leaving its internal network connections in an invalid state.
I use Google Container Engine with Kubernetes.
I have created an HTTPS load balancer which terminates SSL and forwards traffic to the k8s cluster nodes. The problem is that I see no option to whitelist/filter incoming IP addresses. Is there any?
It sounds like you've set up a load balancer outside of Kubernetes. You may want to consider using a Kubernetes Service set to type: LoadBalancer. That type of service will give you an external IP that load balances to all of your Pods and can be easily restricted to whitelist IPs using the loadBalancerSourceRanges setting. Here is the example from the docs at https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerIP: 79.78.77.76
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
If you're using an Ingress, this is not yet possible with the GCE controller [1]; only the nginx controller [2] accepts a whitelist of source IPs.
[1] https://github.com/kubernetes/ingress/issues/566
[2] https://github.com/kubernetes/ingress/blob/188c64aaac17ef29400e0f143b9aed7770e32fee/controllers/nginx/configuration.md#whitelist-source-range
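For the nginx ingress controller, the whitelist is set per Ingress through an annotation; a minimal sketch (the annotation prefix depends on the controller version, and the host and backend names here are made up):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "130.211.204.1/32,130.211.204.2/32"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 8765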
I have a k8s cluster in Google Cloud with frontend/api containers in it.
In addition to that, I have an ElasticSearch cluster (es1, es2) in Google Cloud, but not under k8s.
I can access the api container by the name 'api' from the 'frontend' container.
I can access es1 from es2 by the name 'es1'.
What is the right way to access the es cluster by names like es1, es2 from the api/frontend containers?
Thanks!
If you want to refer to resources outside of your Kubernetes cluster as if they are inside the cluster, the simplest option is probably to assign each ElasticSearch instance a static IP, and then specify a Kubernetes Service with manual Endpoints:
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
You can add an Endpoint entry for each es instance you have. Now your internal cluster DNS will resolve and load-balance for your es instances as if they were pods in the cluster.
You create endpoints like this (note the lack of selector on the Service):
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 1.2.3.4
  ports:
  - port: 9376
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Now you can access the service using my-service.my-namespace.svc.cluster.local as normal.
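Applied to your case, you would create one selector-less Service plus matching Endpoints per ElasticSearch node, so that the names es1 and es2 resolve inside the cluster. A sketch, assuming the nodes listen on 9200 and have static IPs 10.1.0.11 and 10.1.0.12 (both placeholders):
kind: Service
apiVersion: v1
metadata:
  name: es1
spec:
  ports:
  - protocol: TCP
    port: 9200
    targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: es1
subsets:
- addresses:
  - ip: 10.1.0.11
  ports:
  - port: 9200
---
kind: Service
apiVersion: v1
metadata:
  name: es2
spec:
  ports:
  - protocol: TCP
    port: 9200
    targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: es2
subsets:
- addresses:
  - ip: 10.1.0.12
  ports:
  - port: 9200
Your api/frontend containers can then reach the nodes at es1:9200 and es2:9200.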
Create the Kubernetes cluster (GKE) in the same network/subnetwork as your instances; then you can connect to the internal IPs of your GCE instances from your k8s pods.
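A rough sketch of that, with placeholder names for the cluster, zone, network, and subnetwork:
# Create the GKE cluster in the same VPC network/subnetwork as the ES instances
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --network=my-network \
    --subnetwork=my-subnet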