Can anybody explain Istio subsets and destination rules in a simple manner, and explain the problem they are trying to solve by introducing subsets?
A DestinationRule is a resource that adds routing policies which are applied after traffic has already been routed to a Service. For example, say you have the following Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  selector:
    app: my-service
  ports:
  - name: http
    protocol: TCP
    port: 80
This Service can route to multiple resources: it picks up any pod that carries the label app: my-service, which means you can have, for example, different versions of the same service running in parallel, using one Deployment for each.
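For illustration, a minimal sketch of one such version (the Deployment name, image tag and version label are hypothetical, not from the original question) could look like this; a second version would be an identical Deployment with a different version label and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-v3        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
      version: v3
  template:
    metadata:
      labels:
        app: my-service      # picked up by the Service selector
        version: v3          # distinguishes this version for subsets
    spec:
      containers:
      - name: my-service
        image: my-service:v3 # hypothetical image tag
        ports:
        - containerPort: 80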
Now, with a DestinationRule you can add routing policies on top of that. A subset is a part of your pods that you can identify through labels, for example:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-ab
spec:
  host: my-service.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: a-test
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
This DestinationRule uses a round-robin load balancing policy for all traffic going to a subset named a-test, which is composed of the endpoints (e.g., pods) that carry the label version: v3. This can be useful for scenarios such as A/B testing, or for keeping multiple versions of your service running in parallel.
Also, you can specify a custom trafficPolicy for a subset, which will override the trafficPolicy defined at the Service level.
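Subsets on their own only name groups of endpoints; to actually steer requests to them you typically reference the subset from a VirtualService. A minimal sketch (the weights are made up for illustration) that sends 10% of traffic to the a-test subset defined above and the rest to the Service as a whole might look like:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: my-service.default.svc.cluster.local
        subset: a-test       # the subset from the DestinationRule above
      weight: 10
    - destination:
        host: my-service.default.svc.cluster.local
      weight: 90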
I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine; however, when I try to set up an IngressRoute for the service itself, it is not accessible and does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object, and when doing that it did appear in the dashboard, but I ran into some issues with middleware, and for ease of use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine; they both appear in the GKE dashboard and are running normally. No Ingress is created, however I'm unsure whether a custom CRD IngressRoute is supposed to create one or not.
Some information about the configuration:
I'm using Kustomize to handle overlays and general data
I have a setting through kustomize to apply the namespace users to everything
Below are the config files I'm using; the CRDs and RBAC are defined by applying:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
      - name: users-service
        image: ${IMAGE}
        imagePullPolicy: IfNotPresent
        ports:
        - name: web
          containerPort: ${HTTP_PORT}
        readinessProbe:
          httpGet:
            path: /ready
            port: web
          initialDelaySeconds: 10
          periodSeconds: 2
        envFrom:
        - secretRef:
            name: users-service-env-secrets
service.yml
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: web
  selector:
    app: users-service
ingress.yml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: users-stripprefix
spec:
  stripPrefix:
    prefixes:
    - /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
  - service-port
  routes:
  - kind: Rule
    match: PathPrefix(`/userssrv`)
    services:
    - name: users-service
      namespace: users
      port: service-port
    middlewares:
    - name: users-stripprefix
If any more information is needed, just lmk. Thanks!
A default Traefik installation on Kubernetes creates two entrypoints:
web for http access, and
websecure for https access
But you have in your IngressRoute configuration:
entryPoints:
- service-port
Unless you have explicitly configured Traefik with an entrypoint named "service-port", this is probably your problem. You want to remove the entryPoints section, or specify something like:
entryPoints:
- web
If you omit the entryPoints configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via http:// and not via https://).
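For example, the IngressRoute from the question switched to the default web entrypoint would look roughly like this (the services port is also shown as web, the named port from the question's service.yml, since it needs to reference a port that actually exists on the Kubernetes Service):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
  - web                      # default Traefik HTTP entrypoint
  routes:
  - kind: Rule
    match: PathPrefix(`/userssrv`)
    services:
    - name: users-service
      namespace: users
      port: web              # named port on the users-service Service
    middlewares:
    - name: users-stripprefix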
Not directly related to your problem, but if you're using Kustomize, consider the following (a sketch follows below):
Drop the app: users-service label from the deployment, the service selector, etc., and instead set it in your kustomization.yaml using the commonLabels directive.
Drop the explicit namespace from the service specification in your IngressRoute, and instead use Kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your kustomization.yaml).
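A rough kustomization.yaml sketch applying both suggestions (file names taken from the question, everything else illustrative) might be:

# commonLabels adds app: users-service to every resource and to the
# matching selectors; namespace places everything in the users namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: users
commonLabels:
  app: users-service
resources:
- deployment.yml
- service.yml
- ingress.yml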
I've put together a deployable example with all the changes mentioned in this answer here.
CONTEXT:
I'm in the middle of planning a migration of Kubernetes services from one cluster to another. The clusters are in separate GCP projects but need to be able to communicate across clusters until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).
We run Anthos service mesh (v1.12) in GKE clusters.
PROBLEM:
I need to find a way to do the following:
PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'.
Running in the same cluster this resolves fine, as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal Kubernetes FQDN).
However, when I run PodA on the new cluster, I need serviceA's hostname to resolve back to the internal load balancer on the other cluster, and not to its local cluster (and namespace), since serviceA is still running on the old cluster.
I'm using an istio ServiceEntry resource to try and achieve this, as follows:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: GRPC
  resolution: STATIC
  endpoints:
  - address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: resources
  namespace: default
spec:
  hosts:
  - 'serviceA.default.svc.cluster.local'
  gateways:
  - mesh
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
This doesn't appear to work and I'm getting Error: 14 UNAVAILABLE: upstream request timeout errors on PodA running in the new cluster.
I can confirm that running telnet to the hostname from another pod on the mesh appears to work (i.e. I don't get connection timeout or connection refused).
Is there a limitation on what you can use in the hosts field of a ServiceEntry? Does it have to be a .com or .org address?
The only way I've got this to work properly is to use hostAliases in PodA to add a hosts-file entry for the hostname, but I really want to avoid doing this, as it means making the same change in lots of files. I would rather use Istio's ServiceEntry to achieve this.
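For reference, the hostAliases workaround being avoided looks roughly like this in each pod spec (the IP is hypothetical, standing in for the redacted internal load balancer address):

apiVersion: v1
kind: Pod
metadata:
  name: pod-a                # hypothetical
spec:
  hostAliases:
  - ip: "10.0.0.10"          # internal load balancer IP (hypothetical)
    hostnames:
    - "serviceA.default.svc.cluster.local"
    - "serviceA"
  containers:
  - name: pod-a
    image: pod-a:latest      # hypothetical image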
Any ideas/comments appreciated, thanks.
Fortunately I came across someone with a similar (but not identical) issue, and the answer in that Stack Overflow post gave me the outline of what Kubernetes (and Istio) resources I needed to create.
I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.
The end result was this:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: default
spec:
  type: ExternalName
  externalName: serviceA.example.com
  ports:
  - name: grpc
    protocol: TCP
    port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
    rewrite:
      authority: serviceA.example.com
I have two pods (deployments) running on minikube. Each pod exposes the same port (say 8081), but they use different images. Now I want to configure things so that I can access either of the pods using the same external URL, in a load-balanced way. So what I tried to do is put the same matching label on both pods, map them to the same service, and then expose it through NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended: it refuses to connect on IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case I want to do it using two images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services using the same URL. Which service receives the request depends on the path you type: URL for the first Pod and URL/v2 for the second Pod. Of course, you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below; you can read more about it here:
minikube addons enable ingress
As the next step, you need to create an Ingress using a YAML file. Here is an example of how to do it step by step.
The Ingress YAML file looks as below.
As you can see in this configuration, you can access the first Pod using URL, and traffic will be forwarded to the first service, attached to the first Pod. For the second Pod, URL/v2 will forward traffic to the second service, attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
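Note that the example above assumes each Deployment is exposed by its own Service (web and web2), each selecting a distinct label rather than the shared one from the question. A rough sketch of those two Services (names, labels and port numbers are assumptions, chosen to line up with the Ingress above and the containerPort 8081 from the question) could be:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
  - port: 8080               # port referenced by the Ingress
    targetPort: 8081         # containerPort of the first pod
  selector:
    app: deployment1
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  ports:
  - port: 8080
    targetPort: 8081
  selector:
    app: deployment2         # assumes the second Deployment gets its own label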
I'm setting up active/passive routing for an application in Kubernetes that's outside of the typical K8s use case. I'm trying to find configuration related to routing or load balancing in headless services with multiple backends. So far I have managed to route traffic to my backends; however, I need to ensure that the traffic is routed correctly. The application requires TCP connections, and the primary/secondary instances have differing configuration (requiring different Deployment objects). If fail-over occurs, routing is expected to return to the primary once it is restored.
The routing consistently behaves as desired, but no documentation or configuration indicates that it should. I've found documentation stating that it should be round-robin or random because of the order of the DNS entries. The crux of the question is: can I depend on this behavior? Since this is undocumented and not explicitly configured, I'm concerned that it will change in future versions or deployments.
I'm using Rancher with the canal networking layer.
I've read through both the calico and flannel docs.
Neither Endpoints/EndpointSlices nor DNS entries indicate any order for routing.
Currently the setup has two Deployments that are selected by a headless service. The deployed pods have a hostname of input-primary in deployment 1 and input-secondary in deployment 2. I can access either of them by DNS as input-primary.myservice or input-secondary.myservice.
The ingress controller tcp-services config map has an entry for my service:
25252: default/myservice:9999
and an abridged version of the k8s config:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  clusterIP: None
  ports:
  - name: input
    port: 9999
  selector:
    app: myapp
  type: ClusterIP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: myapp
  name: input-primary
spec:
  hostname: input-primary
  containers:
  - ports:
    - containerPort: 9999
      name: input
      protocol: TCP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: myapp
  name: input-secondary
spec:
  hostname: input-secondary
  containers:
  - ports:
    - containerPort: 9999
      name: input
      protocol: TCP
I'm setting up a namespace in my Kubernetes cluster to deny any outgoing network calls, like http://company.com, but to allow inter-pod communication within my namespace, like http://my-nginx, where my-nginx is a Kubernetes service pointing to my nginx pod.
How do I achieve this using a network policy? The network policy below helps in blocking all outgoing network calls:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
How do I whitelist only the inter-pod calls?
Using Network Policies you can whitelist all pods in a namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-sample
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: sample
As you probably already know, pods with at least one Network Policy applied to them can only communicate to targets allowed by any Network Policy applied to them.
Names don't actually matter. Selectors (namespaceSelector and podSelector in this case) only care about labels. Labels are key-value pairs associated with resources. The above example assumes the namespace called sample has a label of name=sample.
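For completeness, giving the sample namespace that label could be done with a manifest like the one below (or, equivalently, with kubectl label namespace sample name=sample):

apiVersion: v1
kind: Namespace
metadata:
  name: sample
  labels:
    name: sample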
If you want to whitelist the namespace that my-nginx runs in, first you need to add a label to that namespace (if it doesn't already have one). name is a good key IMO, and the value can be the name of the service, my-nginx in this particular case (note that : and / cannot appear in a label value, so the bare name is used rather than the http:// URL). Then, just using this in your Network Policies will allow you to target the namespace (either for ingress or egress):
- namespaceSelector:
    matchLabels:
      name: my-nginx
If you want to allow communication to a service called my-nginx, the service's name doesn't really matter. You need to select the target pods using podSelector, which should be done with the same label that the service uses to know which pods belong to it. Check your service to get the label, and use that key: value pair in the Network Policy. For example, for a key=value pair of role=nginx you would use:
- podSelector:
    matchLabels:
      role: nginx
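Putting those selectors together, a sketch of an egress policy that only allows traffic from pods in the namespace to the nginx pods (assuming the role=nginx label above; the policy name is hypothetical) might look like:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-nginx   # hypothetical name
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}               # applies to all pods in the namespace
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: nginx           # the label the nginx Service selects on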
This can be done using the following combination of network policies:
# The same as yours
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
---
# allows connections to all pods in your namespace from all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-egress
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels: {}
---
# allows connections from all pods in your namespace to all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-internal
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
assuming your network policy implementation implements the full spec.
I am not sure whether you can do it using a Kubernetes NetworkPolicy, but you can achieve this with Istio-enabled pods.
Note: first make sure that Istio is installed on your cluster. For installation, see the Istio docs.
See this quote from Istio's documentation about egress traffic:
By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations.
Also, you can whitelist domains outside of the cluster by adding a ServiceEntry and VirtualService to your cluster; there is an example in Configuring the external services in the Istio documentation.
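A minimal sketch of such a ServiceEntry, using company.com from the question as the hypothetical external host, might look like:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: company-com            # hypothetical name
spec:
  hosts:
  - company.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS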
I hope it can be useful for you.