We need to learn Envoy well enough to create a service mesh. The Envoy documentation talks about "clusters" without defining the term. Is it referring to Kubernetes clusters, or does this term have a specific meaning when configuring Envoy (i.e., a cluster of servers)?
You can find the definition in the terminology documentation:
Cluster: A cluster is a group of logically similar upstream hosts that Envoy connects to. Envoy discovers the members of a cluster via service discovery. It optionally determines the health of cluster members via active health checking. The cluster member that Envoy routes a request to is determined by the load balancing policy.
Only the first sentence ("A cluster is a group of logically similar upstream hosts that Envoy connects to.") is needed to understand what a cluster is. It has nothing to do with Kubernetes; "cluster" is an Envoy term.
Let's say you have two hosts running the same service and you want Envoy to connect to one of them (load-balancing the traffic); you would then define a cluster containing these two hosts:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service
  clusters:
  - name: service
    connect_timeout: 15s
    type: LOGICAL_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 10.0.0.43
                port_value: 80
        - endpoint:
            address:
              socket_address:
                address: 10.0.0.44
                port_value: 80
In this example, a request made by a client to Envoy on port 8080 will be forwarded to one of the cluster hosts (10.0.0.43:80 or 10.0.0.44:80).
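The terminology quote above also mentions active health checking. As a hedged sketch (the fields come from Envoy's v3 HealthCheck API; the /healthz path is an assumption about what the upstream hosts expose), the cluster could be extended so Envoy stops routing to a member that fails its checks:
clusters:
- name: service
  # ... same cluster definition as in the example above ...
  health_checks:
  - timeout: 1s
    interval: 5s
    unhealthy_threshold: 3
    healthy_threshold: 2
    http_health_check:
      path: /healthz   # assumed health endpoint served by 10.0.0.43 and 10.0.0.44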
You can find more documentation about clusters here: https://www.envoyproxy.io/docs/envoy/v1.21.1/intro/arch_overview/upstream/upstream.
I am trying to set up access to an external MySQL database that is set up as a MariaDB PXC three-node cluster.
Let's say my external database nodes have these IP addresses:
172.16.10.100
172.16.10.101
172.16.10.102
In case one of those nodes goes down, I want Kubernetes to automatically route traffic only to the two available nodes.
If I create a simple Service and Endpoints in Kubernetes (shown below), does it automatically fail over?
#
# Service
#
kind: Service
apiVersion: v1
metadata:
  name: mariadb-service
spec:
  clusterIP: None
  sessionAffinity: None
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
#
# Endpoints
#
kind: Endpoints
apiVersion: v1
metadata:
  name: mariadb-service
subsets:
- addresses:
  - ip: 172.16.10.100
  - ip: 172.16.10.101
  - ip: 172.16.10.102
  ports:
  - port: 3306
    protocol: TCP
Please note this:
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
I found an option for having a LoadBalancer with an on-premises Kubernetes cluster: MetalLB.
MetalLB aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.
See the requirements and choose the installation option you prefer.
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter. MetalLB also supports requesting a specific address pool, if you want a certain kind of address but don’t care which one exactly. To request assignment from a specific pool, add the metallb.universe.tf/address-pool annotation to your service, with the name of the address pool as the annotation value. For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
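If you want one specific address rather than just a pool (as mentioned above, MetalLB respects spec.loadBalancerIP), a minimal sketch could look like this, assuming 172.16.10.100 belongs to a pool that MetalLB manages:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 172.16.10.100   # illustrative; must come from a MetalLB-managed range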
And the address pool referenced by the annotation can be defined in MetalLB's configuration like this (protocol layer2 or bgp, depending on your setup; the port, 3306, stays on the Service, not on the pool):
address-pools:
- name: production-public-ips
  protocol: layer2
  addresses:
  - 172.16.10.100/32
  - 172.16.10.101/32
  - 172.16.10.102/32
Find full example usage here.
However, no health checking is implemented there, and that is the main feature you need.
There is a good example issue on GitHub and a thread explaining why health checks are not implemented for Kubernetes CRDs.
In terms of Kubernetes concepts alone, this use case is not doable out of the box; you would need to look for some custom endpoints controller:
readinessProbe: Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
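For completeness, this is how such a readiness probe looks on a pod that backs a Service; a minimal, hypothetical sketch (pod name, label, and image are placeholders), not something that applies to the external MariaDB nodes themselves:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-pod            # hypothetical
  labels:
    app: mariadb               # label a Service selector would match
spec:
  containers:
  - name: mariadb
    image: mariadb:10.6        # hypothetical image tag
    ports:
    - containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306             # while this check fails, the endpoints controller removes the pod IP from the Service
      initialDelaySeconds: 10
      periodSeconds: 5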
Please find more information in the official documentation.
I installed the bitnami/kafka cluster with Helm.
I want producers and consumers that are not in the k8s cluster. This is my Helm install values YAML file:
replicaCount: 3
service:
  type: LoadBalancer
  loadBalancerIP: 192.168.99.110
  nodePorts:
    client: 25100
    external: 25101
externalAccess:
  enabled: true
  service:
    type: LoadBalancer
    port: 9094
    nodePorts:
    - 25100
    - 25101
    loadBalancerIPs:
    - 192.168.99.120
    - 192.168.99.121
I expected each broker to advertise its own address, but they are advertising Kubernetes-internal domain addresses like kf-kafka-1.kf-kafka-headless.default.svc.cluster.local:9092.
Please help me figure out what I miss.
I tried to connect to the ports under externalAccess.service.nodePorts,
but I should just use {externalAccess.service.loadBalancerIPs[n]}:9094.
Thanks.
I want to load-balance, per request, mesh-internal HTTP/2 traffic coming to my ClusterIP Service over all its available replicas, using Istio; the first iteration is intended to work between two deployments within a single namespace, but I can't quite get there. I need to load-balance on a non-standard port; I'm using the standard port as a control group.
I was able to configure Istio so that requests from one long-lived connection to the service FQDN on the standard port 80 are round-robined correctly, but a long-lived connection to a non-standard port such as 13080 will not round-robin; instead a single pod gets all the requests (the behaviour looks like the K8s "iptables random" approach used by Services, which only balances per connection, not per request).
Here's my most successful VirtualService definition yet:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: example
spec:
  gateways:
  - mesh
  hosts:
  - "*.example.com"
  http:
  - match:
    - authority:
        regex: "(.*.)?pods.example.com(:80)?"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 80
  - match:
    - authority:
        regex: "(.*.)?pods.example.com:13080"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 13080
Ports are defined in the Service like this:
- name: http2
  port: 80
  protocol: TCP
  targetPort: 80
- name: http2-nonstd
  port: 13080
  protocol: TCP
  targetPort: 13080
Using Istio 1.6.2. What am I missing?
EDIT: The original question had a typo in the VirtualService definition authority match for the port 13080 - there was exact instead of regex. Nothing changed, however. This supports the hypothesis that for some reason Istio ignores the non-standard port.
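For reference, the load-balancing policy a sidecar applies to a destination is normally configured with a DestinationRule; the following is only a hedged sketch for the host above (the resource name is made up), not a confirmed fix for the non-standard-port behaviour described here:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: pods-dr                # hypothetical name
  namespace: example
spec:
  host: pods.example.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 13080
      loadBalancer:
        simple: ROUND_ROBIN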
I'm trying to enable hairpin connections on my Kubernetes service, on GKE.
I've tried to follow the instructions here: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ to reconfigure my kubelet and enable hairpin mode, but it looks like my changes are never saved, even though the edit command returns without error.
Here is what I try to set when I edit the node:
spec:
  podCIDR: 10.4.1.0/24
  providerID: gce://staging/us-east4-b/gke-cluster-staging-highmem-f36fb529-cfnv
  configSource:
    configMap:
      name: my-node-config-4kbd7d944d
      namespace: kube-system
      kubeletConfigKey: kubelet
Here is my node config when I describe it:
Name: my-node-config-4kbd7d944d
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
kubelet_config:
----
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "hairpinMode": "hairpin-veth"
}
I've tried both "edit node" and "patch"; the result is the same: nothing is saved, and patch returns "no changes made."
Here is the patch command from the tutorial:
kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
I also can't find any resource on where the "hairpinMode" attribute is supposed to be set.
Any help is appreciated!
------------------- edit ----------------
Here is why I think hairpinning isn't working:
root@668cb9686f-dzcx8:/app# nslookup tasks-staging.[my-domain].com
Server:   10.0.32.10
Address:  10.0.32.10#53
Non-authoritative answer:
Name:     tasks-staging.[my-domain].com
Address:  34.102.170.43
root@668cb9686f-dzcx8:/app# curl https://[my-domain].com/python/healthz
hello
root@668cb9686f-dzcx8:/app# nslookup my-service.default
Server:   10.0.32.10
Address:  10.0.32.10#53
Name:     my-service.default.svc.cluster.local
Address:  10.0.38.76
root@668cb9686f-dzcx8:/app# curl https://my-service.default.svc.cluster.local/python/healthz
curl: (7) Failed to connect to my-service.default.svc.cluster.local port 443: Connection timed out
Also, if I issue a request to localhost from my service (not curl), I get a "connection refused." Issuing requests to the external domain, which should get routed to the same pod, works fine though.
I only have one service, one node, one pod, and two listening ports at the moment.
--------------------- including deployment yaml -----------------
Deployment
spec:
  replicas: 1
  spec:
    containers:
    - name: my-app
      ports:
      - containerPort: 8080
      - containerPort: 50001
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTPS
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  backend:
    serviceName: my-service
    servicePort: 60000
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 60000
      - path: /python/*
        backend:
          serviceName: my-service
          servicePort: 60001
service
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: port
    port: 60000
    targetPort: 8080
  - name: python-port
    port: 60001
    targetPort: 50001
  type: NodePort
I'm trying to set up a multi-port application where the main program triggers a script by issuing a request to the local machine on a different port. (I need to run something in Python, but the main app is in Golang.)
It's a simple script, and I'd like to avoid exposing the Python endpoints on the external domain so I don't have to worry about authentication, etc.
-------------- requests sent from my-service in golang -------------
https://[my-domain]/health: success
https://[my-domain]/python/healthz: success
http://my-service.default:60000/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default:60001/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://localhost:50001/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
http://localhost:50001/python/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
Kubelet reconfiguration in GKE
You should not reconfigure the kubelet in cloud-managed Kubernetes clusters like GKE. It's not supported and can lead to errors and failures.
Hairpinning in GKE
Hairpinning is enabled by default in GKE clusters. You can check whether it's enabled by running the command below on one of the GKE nodes:
ifconfig cbr0 | grep PROMISC
The output should look like this:
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
The PROMISC flag indicates that hairpinning is enabled.
Please refer to the official documentation about debugging services: Kubernetes.io: Debug service: a pod fails to reach itself via the service IP
Workload
Based only on the Service definition you provided, you should be able to reach your Python application, running in the pod on port 50001, via:
localhost:50001
ClusterIP:60001
my-service:60001
NodeIP:nodeport-port (check $ kubectl get svc my-service for this port)
I tried to create the Ingress resource you posted and it failed (as posted, it has no metadata, such as a name). Please check what an Ingress definition should look like; there is a minimal sketch below.
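A minimal sketch, keeping your service names, ports, and path syntax, with the missing metadata added (the name is made up):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress             # the posted resource had no name, which makes creation fail
spec:
  backend:
    serviceName: my-service
    servicePort: 60000
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 60000
      - path: /python/*
        backend:
          serviceName: my-service
          servicePort: 60001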
Please take a look at the official documentation, where the whole deployment process is explained with examples:
Kubernetes.io: Connect applications service
Cloud.google.com: Kubernetes engine: Ingress
Cloud.google.com: Kubernetes engine: Load balance ingress
Additionally, please check other Stack Overflow answers like:
Stackoverflow.com: Kubernetes how to access service if nodeport is random - it describes how you can access the application in your pod
Stackoverflow.com: What is the purpose of kubectl proxy - it describes what happens when you create your Service object.
Please let me know if you have any questions about this.
I have the following network policy for restricting access to a frontend service page:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: allow-frontend-access-from-external-ip
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
My question is: can I enforce HTTPS with my egress rule (port restriction to 443), and if so, how does this work? Assuming a client connects to the frontend-service, the client chooses a random port on its machine for this connection; how does Kubernetes know about that port, or is there a kind of port mapping in the cluster so that the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
You might be misunderstanding the network policy (NP).
This is how you should interpret this section:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
  ports:
  - protocol: TCP
    port: 443
Open port 443 for outgoing traffic from the pods selected by the policy to any address within the 0.0.0.0/0 CIDR.
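To make the port semantics concrete, here is the same rule again, annotated with comments only:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0          # any destination address
  ports:
  - protocol: TCP
    port: 443                  # matches the destination port of the outgoing connection,
                               # not the client's ephemeral source port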
The thing you are asking about
how does Kubernetes know about that port, or is there a kind of port mapping in the cluster so the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
is managed by kube-proxy in the following way:
For the traffic that goes from pod to external addresses, Kubernetes simply uses SNAT. What it does is replace the pod’s internal source IP:port with the host’s IP:port. When the return packet comes back to the host, it rewrites the pod’s IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, who doesn’t know the address translation at all.
Take a look at Kubernetes networking basics for a better understanding.