I am totally new to Istio, and the pitch looks very exciting. However, I can't make it work, which probably means I am not using it properly.
My goal is to implement session affinity between two services, which is why I originally ended up with Istio. However, even a very basic test does not seem to work:
I have a Kubernetes demo app with a front service, a stateful service, and a stateless service. From a browser, I access the front service, which dispatches the request to either the stateful or the stateless service, using the K8s service names as URLs: http://api-stateful or http://api-stateless.
I want to declare a VirtualService to intercept requests sent from the front service to the stateful service. I don't declare it as a Gateway, as I understood a Gateway sits at the external border of the K8s cluster.
I use Docker on Windows with Istio 1.6.
I copy my YAML file below. The basic test I want to do: reroute traffic meant for api-stateful to api-stateless, to validate that the VirtualService is taken into account. And it does not work. Do you see what is wrong? Is this a wrong usage of VirtualService? My Kiali console does not detect any problem in the setup.
####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateful-backend
        tier: backend
    spec:
      containers:
        - name: pocbackend
          image: pocbackend:2.0
          ports:
            - name: http
              containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateful
spec:
  selector:
    app: stateful-backend
    tier: backend
  ports:
    - protocol: TCP
      port: 3002
      targetPort: http
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateless-backend
        tier: backend
    spec:
      containers:
        - name: pocbackend
          image: pocbackend:2.0
          ports:
            - name: http
              containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateless
spec:
  selector:
    app: stateless-backend
    tier: backend
  ports:
    - protocol: TCP
      port: 3001
      targetPort: http
---
#############################################################
######################### FRONT END #########################
# Deployment of the container pocfrontend, listening on port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
        - name: pocfrontend
          image: pocfrontend:2.0
          ports:
            - name: http
              containerPort: 3500
---
# Service exposing the frontend on node port 30000
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
    tier: frontend
  ports:
    - protocol: TCP
      port: 3500
      targetPort: http
      nodePort: 30000
---
##############################################################
############ ISTIO PROXY FOR API-STATEFUL SERVICE ############
##############################################################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-stateful-proxy
spec:
  hosts:
    - api-stateful
  http:
    - route:
        - destination:
            host: api-stateless
As mentioned in the comments, this can be fixed with a DestinationRule carrying a sticky-session configuration.
An example of that can be found in the Istio documentation:
LoadBalancerSettings
Load balancing policies to apply for a specific destination. See Envoy’s load balancing documentation for more details.
For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
The following example sets up sticky sessions for the same ratings service, using a consistent-hash load balancer with the user cookie as the hash key.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s
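For the setup in the question, the same consistent-hash policy can be pointed at the api-stateful Service instead of ratings. This is only an untested sketch adapted from the documentation example above; the cookie name my-session-cookie and the one-hour TTL are assumptions, not values taken from the original post.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-sticky
spec:
  host: api-stateful
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: my-session-cookie   # assumed cookie name; Envoy creates the cookie if the client has none
          ttl: 3600s                # assumed TTL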
I am attaching an image of my application flow. Here, the Gateway and the other services are built with NestJS. The request for any API comes in through the Gateway.
The Gateway pod and the API pods communicate using the TCP protocol.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML for both the Gateway and the Pods.
Please let me know what mistake I am making in the YAML files.
**APPLICATION DIAGRAM**
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
        - name: gateway-container
          image: nest-api-gateway:v8
          ports:
            - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
    - name: gateway-svc-container
      protocol: TCP
      port: 80
      targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
        - name: pod1-container
          image: nest-api-pod1:v2
          ports:
            - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
    - name: pod1-svc-container
      protocol: TCP
      port: 80
      targetPort: 4000
The gateway should be able to access the services by their DNS names, for example pod1-srv.roushan.svc.cluster.local (i.e. <service>.<namespace>.svc.cluster.local). If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than svc.cluster.local.
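If in doubt, a throwaway Pod can confirm that the Service name actually resolves inside the cluster. This is only a sketch along the lines of the Kubernetes DNS-debugging docs; the Pod name dns-debug is arbitrary, and the dnsutils image is the one those docs use.
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug                  # arbitrary name for a one-off debug Pod
  namespace: roushan
spec:
  restartPolicy: Never
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      # Resolve the Service's fully-qualified name once, then exit;
      # inspect the result with: kubectl -n roushan logs dns-debug
      command: ["nslookup", "pod1-srv.roushan.svc.cluster.local"]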
YAML Points
Ideally, you should keep different selectors across the deployments.
You are using the same selectors for both deployments, the Gateway deployment and the application deployment.
A Service forwards traffic to Pods based on selectors and labels, so this might redirect a request meant for one service (e.g. the gateway) to the Pods of the other (e.g. POD-1); see the sketch after this section.
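A minimal sketch of what distinct labels could look like for the gateway (the label value roushan-gateway is made up for illustration; an analogous label such as roushan-pod1 would go on the pod1 Deployment and its Service):
# Gateway Deployment with its own label...
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-gateway          # assumed label, distinct from the API Pods' label
  template:
    metadata:
      labels:
        app: roushan-gateway
    spec:
      containers:
        - name: gateway-container
          image: nest-api-gateway:v8
          ports:
            - containerPort: 1000
---
# ...and a Service that selects only that label, so it can never pick up pod1's Pods.
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  type: LoadBalancer
  selector:
    app: roushan-gateway
  ports:
    - name: gateway-svc-container
      protocol: TCP
      port: 80
      targetPort: 1000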
Networking
Your gateway Pods can reach an internal service by just its service name, like pod1-srv, if they are in the same namespace.
If the gateway and the application are in different namespaces, you have to address each other as http://<servicename>.<namespace>.svc.cluster.local, as illustrated below.
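As an illustration only (the ConfigMap name and keys are made up, not part of the original manifests), the gateway could take the backend URL from configuration using either form:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: roushan
  name: gateway-endpoints            # assumed name
data:
  # Same namespace: the short service name is enough.
  POD1_API_URL: "http://pod1-srv"
  # Cross-namespace (or to be explicit): fully-qualified service name.
  POD1_API_URL_FQDN: "http://pod1-srv.roushan.svc.cluster.local"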
Is it possible to add IAP on top of the internal load balancer?
In my setup I have:
Published and configured the OAuth consent screen as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#enabling_iap
Certificates for my internal domain
A Kubernetes Service set up with the BackendConfig (as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#add-iap-to-backendconfig)
The application running in Kubernetes
A Kubernetes Ingress that creates the internal load balancer with the proper SSL certificates
On my local machine, I have an /etc/hosts entry pointing to the load balancer.
I am able to authenticate when accessing the app, but then I get the "You don't have access" page instead of my application.
In my GCP console I can't see the internal load balancer listed on the IAP page; I can only see external load balancers.
As per this documentation https://cloud.google.com/iap/docs/concepts-overview#your_responsibilities it looks like IAP is supported with an HTTPS ILB.
Is an ILB with IAP supported on GCP?
My k8s config
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: config-default
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-secret
---
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/backend-config: '{"default": "config-default"}'
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: host1
    - port: 443
      targetPort: 8080
      protocol: TCP
      name: host2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: "titan-testing-ilb"
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-address"
spec:
  defaultBackend:
    service:
      name: ilb-service
      port:
        number: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"
          ports:
            - containerPort: 8080
              protocol: TCP
User Type:
I experienced the same issue at some point, and it turned out that I had forgotten to set the OAuth consent screen's user type to "external".
This applies if you have managed to get past the OAuth login screen.
Here is my problem: I have three services defined in a Kubernetes YAML file:
one front-end (website)
one back-end: stateful, for user sessions
one back-end: stateless
I need session affinity on the stateful service, but not on the stateless or front-end services. The session affinity needs to be cookie-based, not ClientIP-based.
mydomain/stateful ===> Front-End Service (3 pods) ===> Stateful Service (3 pods, need session affinity)
mydomain/stateless ===> Front-End Service (3 pods) ===> Stateless Service (3 pods, do not need session affinity)
I tried to use an Ingress, but I fail to see how I can use it as a proxy in between two services inside the Kubernetes cluster. All the examples I see show how to use Ingress as a router for requests coming from outside the cluster.
Here is my poc.yaml so far:
####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateful-backend
        tier: backend
    spec:
      containers:
        - name: pocbackend
          image: pocbackend:2.0
          ports:
            - name: http
              containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateful
spec:
  selector:
    app: stateful-backend
    tier: backend
  ports:
    - protocol: TCP
      port: 3002
      targetPort: http
  #sessionAffinity: ClientIP
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateless-backend
        tier: backend
    spec:
      containers:
        - name: pocbackend
          image: pocbackend:2.0
          ports:
            - name: http
              containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateless
spec:
  selector:
    app: stateless-backend
    tier: backend
  ports:
    - protocol: TCP
      port: 3001
      targetPort: http
---
#############################################################
######################### FRONT END #########################
# Deployment of the container pocfrontend, listening on port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
        - name: pocfrontend
          image: pocfrontend:2.0
          ports:
            - name: http
              containerPort: 3500
---
# Service exposing the frontend on port 85
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: LoadBalancer
  selector:
    app: frontend
    tier: frontend
  ports:
    - protocol: TCP
      port: 85
      targetPort: http
Do you know how to solve my problem?
Thanks!
Natively, Kubernetes only provides ClientIP-based session affinity at the Service level; there is no cookie-based affinity there.
The only way that comes to my mind is to use Istio and its DestinationRules. Taken from the Istio manual:
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
This document shows how to configure sticky sessions with Istio.
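As a concrete, untested sketch for the Services above (the cookie name poc-session and the TTL are assumptions, not from the question): one DestinationRule gives the stateful Service cookie-based stickiness via a consistent-hash load balancer, while the stateless Service keeps, or explicitly declares, plain round robin.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-affinity
spec:
  host: api-stateful
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: poc-session          # assumed cookie name
          ttl: 3600s                 # assumed TTL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateless-roundrobin
spec:
  host: api-stateless
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN            # no stickiness for the stateless backend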
For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
        - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
          name: myapp-api
          ports:
            - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
        - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
          name: myappfront-deployment
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
    - port: 80
      targetPort: 3000
The front service is basically a Node.js app that only does a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call fails and it is unable to find the requested endpoint. Is there any additional config I am currently missing for the API service to be discovered?
P.S. Both services are running, and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster. Something to watch out for with frontends is where the HTTP call takes place. It could be on the server with Node, but given you're seeing this issue I'd suggest that it's in the browser. (If you can see the call in the browser console, then it is.) If the frontend's call is being made in the browser, then it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve. To handle this you could make the backend Service a LoadBalancer and pass the external name into the frontend.
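A minimal sketch of that suggestion, assuming the call really is made from the browser: expose the API with its own externally reachable Service and have the frontend use that address instead of the in-cluster name. The Service name myapp-api-public is an assumption, not part of the original setup.
apiVersion: v1
kind: Service
metadata:
  name: myapp-api-public             # assumed name; the original myapp-api Service can stay as-is
spec:
  type: LoadBalancer                 # GKE provisions an external IP for this Service
  selector:
    app: myapp-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
The frontend would then call http://<EXTERNAL-IP> (from kubectl get svc myapp-api-public) instead of http://myapp-api, for example by injecting the address into the build or serving it as runtime configuration.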
Given the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30001
      name: server
  selector:
    app: nginx
How would one configure the Service and Deployment here (or, if needed, an Ingress object) so that when a Pod takes more than n seconds to return an HTTP response, the Service will retry the request on another nginx-deployment Pod?
Kubernetes Services are based on simple iptables rules.
Traffic is only NAT'ed to a destination Pod. There is no layer there where you can adjust timeouts or set a quality of service based on them.
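A per-request timeout with a retry against another Pod needs an L7 proxy in front of the Pods rather than the Service itself. As a hedged sketch only, assuming a service mesh such as Istio is installed and sidecars are injected (which goes beyond the plain Service/Deployment in the question), a VirtualService can express this behaviour; the numbers are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-timeouts
spec:
  hosts:
    - nginx-service
  http:
    - route:
        - destination:
            host: nginx-service
      timeout: 10s                   # overall budget for the request
      retries:
        attempts: 2                  # try up to two more Pods
        perTryTimeout: 3s            # the "n seconds" per attempt
        retryOn: "5xx,reset,connect-failure"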