Ingress affinity session max age - kubernetes

I am using ingress session affinity in order to keep communication between a client and a pod sticky. Because sticky sessions can overload a pod (the clients keep hitting the same pod), I'm looking for best practices for the parameter nginx.ingress.kubernetes.io/session-cookie-max-age.
The example value is 172800 seconds, which means 48 hours.
Why? That's a huge duration; is it possible to set it to 30 minutes?
By the way, what happens when the application session has expired? Does the ingress rebalance the client or keep the same pod?

That is just an example from the documentation; you don't need to use the exact values provided in it.
You can set it to any value you want; however, if you set max-age and expires to too short a period, the backend will be rebalanced too often. This also answers your other question - yes, once the session cookie expires, the ingress will rebalance the client.
There are two optional attributes you can use related to its age:
Expires=<date>
Indicates the maximum lifetime of the cookie as an HTTP-date timestamp. In the case of the ingress annotation, it is specified as a number of seconds.
Max-Age=<number>
Indicates the number of seconds until the cookie expires. A zero or negative number will expire the cookie immediately.
Important! If both Expires and Max-Age are set, Max-Age has precedence.
Below is a working example with cookie max-age and expires set to 30 minutes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cookie-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "test-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "1800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "1800"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-name
            port:
              number: 80
And checking that it works by performing a curl request (unnecessary details removed):
$ curl -I example.com
HTTP/1.1 200 OK
Date: Mon, 14 Mar 2022 13:14:42 GMT
Set-Cookie: test-cookie=1647263683.046.104.525797|ad50b946deebe30052b8573dcb9a2339; Expires=Mon, 14-Mar-22 13:44:42 GMT; Max-Age=1800; Path=/; HttpOnly
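As a quick sanity check (reusing the cookie value returned above), you can replay the cookie on subsequent requests; the ingress controller's access log shows the upstream address, so you can confirm the same pod keeps being hit. The namespace and label selector below are assumptions for a typical ingress-nginx install:
$ curl -s -o /dev/null -w "%{http_code}\n" --cookie "test-cookie=1647263683.046.104.525797|ad50b946deebe30052b8573dcb9a2339" http://example.com/
# then check which upstream handled the requests (adjust namespace/labels to your install)
$ kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=5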

Related

How to pass extra http headers to Okteto pod?

I've deployed the Duende IdentityServer to Okteto Cloud: https://id6-jeff-tian.cloud.okteto.net/.
Although the endpoint is https from the outside, the pods inside still think they are behind plain HTTP. You can check the discovery endpoint to see this: https://id6-jeff-tian.cloud.okteto.net/.well-known/openid-configuration
That causes issues during some redirects. So how can I let the inner pods know that they are hosted behind the https scheme?
Can we pass some headers to the IdP to tell it the original https scheme?
These headers should be forwarded to the inner pods:
X-Forwarded-For: Holds information about the client that initiated the request and subsequent proxies in a chain of proxies. This parameter may contain IP addresses and, optionally, port numbers.
X-Forwarded-Proto: The value of the original scheme, should be https in this case.
X-Forwarded-Host: The original value of the Host header field.
I searched through some ASP.NET documentation and found this: https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?source=recommendations&view=aspnetcore-6.0, however, I don't know how to configure the headers in Okteto, or in any k8s cluster.
Is there anyone who can shed some light here?
My ingress configuration is as follows (https://github.com/Jeff-Tian/IdentityServer/blob/main/k8s/app/ingress.yaml):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: id6
  annotations:
    dev.okteto.com/generate-host: id6
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: id6
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
The headers that you mention are being added to the request when it’s forwarded to your pods.
Could you dump the headers on the receiving end?
Not familiar with Duende, but does it have a setting to specify the “public URL”? That’s typically what I’ve done in the past for similar setups.
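If you want a simple way to dump the headers on the receiving end, one option (not part of the original answer; the image and resource names below are only an example) is to temporarily point the Ingress at a small echo backend that returns every request header in the response body:
# Throwaway echo backend for inspecting the headers the ingress forwards.
# mendhak/http-https-echo is one commonly used public echo image; any equivalent works.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: header-echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: header-echo
  template:
    metadata:
      labels:
        app: header-echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo:latest
        ports:
        - containerPort: 8080   # the image serves plain HTTP on 8080 by default
---
apiVersion: v1
kind: Service
metadata:
  name: header-echo
spec:
  selector:
    app: header-echo
  ports:
  - port: 80
    targetPort: 8080
Point the Ingress backend at header-echo instead of id6 for a moment, curl the public URL, and the echoed output will show whether X-Forwarded-Proto actually arrives as https.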

Can't access grafana through kong ingress controller for kubernetes

I'm trying to expose the grafana application to the internet by following the steps below:
Applying the helm chart as referenced here: https://github.com/Kong/kubernetes-ingress-controller
helm install kong/kong --generate-name --set ingressController.installCRDs=false
Then applying the ingress rule as below:
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo
annotations:
kubernetes.io/ingress.class: kong
spec:
rules:
- http:
paths:
- path: /grafana
backend:
serviceName: prom-stack-grafana
servicePort: 3000
" | kubectl apply -f -
The services are up and running correctly and pointing to the right endpoints:
NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
kong-1602803623-kong-proxy   LoadBalancer   10.0.X.Y     W.X.Y.Z       80:32218/TCP,443:30596/TCP
prom-stack-grafana           ClusterIP      10.0.X.Y     <none>        3000/TCP

NAME                         ENDPOINTS
kong-1602803623-kong-proxy   10.244.X.Y:8443,10.244.X.Y:8000
prom-stack-grafana           10.244.X.Y:3000
Kong controller and ingress are running in the same namespace
Now the issue is that when I try to access grafana through
curl -i $PROXY_IP/grafana (and the same from the browser: an empty page after the redirect to /login),
I get redirected to /login with no output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 29 100 29 0 0 82 0 --:--:-- --:--:-- --:--:-- 82HTTP/1.1 302 Found
Content-Type: text/html; charset=utf-8
Content-Length: 29
Connection: keep-alive
Cache-Control: no-cache
Expires: -1
Location: /login
Pragma: no-cache
Set-Cookie: redirect_to=%2Fgra; Path=/; HttpOnly; SameSite=Lax
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-Xss-Protection: 1; mode=block
Date: Thu, 15 Oct 2020 23:31:30 GMT
X-Kong-Upstream-Latency: 2
X-Kong-Proxy-Latency: 0
Via: kong/2.1.4
Found
I need to know what is missing here to get redirected to the home page of grafana.
Have you configured the root url in Grafana?
You have configured the Ingress to serve Grafana at /grafana, so you need to let Grafana know it needs to add this as a prefix to its paths.
The configuration changes slightly based on the version of Grafana you are using, but you will need to set root_url. If you are using a newer version of Grafana, you may also need to set serve_from_sub_path and domain.
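As a rough sketch (not from the original answer): if Grafana came from the kube-prometheus-stack chart, which the prom-stack-grafana service name suggests, the settings can be passed through the chart values; the domain below is an assumption you would replace with your own external hostname:
# values.yaml fragment -- sets Grafana's root_url and sub-path serving for the /grafana prefix
grafana:
  grafana.ini:
    server:
      domain: example.com            # assumption: your external hostname
      root_url: "%(protocol)s://%(domain)s/grafana/"
      serve_from_sub_path: true
The same values can also be set directly in grafana.ini or via the GF_SERVER_ROOT_URL and GF_SERVER_SERVE_FROM_SUB_PATH environment variables.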

Control headers and routing deprecated in istio 1.6

I need to route my traffic based on headers using istio, but this option is deprecated in istio 1.6. Why are control headers and routing deprecated in istio?
As mentioned in the istio documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Consider using Envoy ext_authz filter, lua filter, or write a filter using the Envoy-wasm sandbox.
Control headers and routing themselves are not deprecated; it's just mixer, which was used to do that. There are different ways to do it now, as mentioned above.
I'm not sure what exactly you want to do, but take a look at envoy filter and virtual service.
Envoy filter
Here is an envoy filter which adds a custom header to all outbound responses:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: lua-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:logInfo(" ========= XXXXX ========== ")
              response_handle:headers():add("X-User-Header", "worked")
            end
And a test with curl:
$ curl -s -I -X HEAD x.x.x.x/
HTTP/1.1 200 OK
server: istio-envoy
date: Mon, 06 Jul 2020 08:35:37 GMT
content-type: text/html
content-length: 13
last-modified: Thu, 02 Jul 2020 12:11:16 GMT
etag: "5efdcee4-d"
accept-ranges: bytes
x-envoy-upstream-service-time: 2
x-user-header: worked
A few links worth checking about that:
https://blog.opstree.com/2020/05/27/ip-whitelisting-using-istio-policy-on-kubernetes-microservices/
https://github.com/istio/istio/wiki/EnvoyFilter-Samples
https://istio.io/latest/docs/reference/config/networking/envoy-filter/
Virtual Service
Another thing worth checking here would be a virtual service; you can do header-based routing with matches there.
Take a look at this example from the istio documentation:
HttpMatchRequest specifies a set of criterion to be met in order for the rule to be applied to the HTTP request. For example, the following restricts the rule to match only requests where the URL path starts with /ratings/v2/ and the request contains a custom end-user header with value jason.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-route
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - match:
    - headers:
        end-user:
          exact: jason
      uri:
        prefix: "/ratings/v2/"
      ignoreUriCase: true
    route:
    - destination:
        host: ratings.prod.svc.cluster.local
Additionally, there is my older example with header-based routing in a virtual service.
Let me know if you have any more questions.

Exposing virtual service with istio and mTLS globally enabled

I have this configuration on my service mesh:
mTLS globally enabled and meshpolicy default
simple-web deployment exposed as clusterip on port 8080
http gateway for port 80 and virtualservice routing on my service
Here are the gw and vs yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # Specify the ingressgateway created for us
  servers:
  - port:
      number: 80 # Service port to watch
      name: http-gateway
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-web
spec:
  gateways:
  - http-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /simple-web
    rewrite:
      uri: /
    route:
    - destination:
        host: simple-web
        port:
          number: 8080
Both vs and gw are in the same namespace.
The deployment was created and exposed with these commands:
k create deployment --image=yeasy/simple-web:latest simple-web
k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web
and with k get pods I receive this:
pod/simple-web-9ffc59b4b-n9f85 2/2 Running
What happens is that from outside, pointing to the ingress-gateway load balancer, I receive an HTTP 503 error.
If I curl from the ingressgateway pod, I can reach the simple-web service.
Why can't I reach the website with mTLS enabled? What's the correct configuration?
As @suren mentioned in his answer, this issue is not present in istio version 1.3.2, so one of the solutions is to use a newer version.
If you choose to upgrade istio to a newer version, please review the documentation (1.3 Upgrade Notice and Upgrade Steps), as Istio is still in development and changes drastically with each version.
Also, as mentioned in the comments by @Manuel Castro, this is most likely the issue addressed in Avoid 503 errors while reconfiguring service routes, and newer versions simply handle it better.
Creating both the VirtualServices and DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml) is not sufficient because the resources propagate (from the configuration server, i.e., Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
It should be possible to avoid this issue by temporarily disabling mTLS or by using permissive mode during the deployment.
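For reference, a minimal sketch of mesh-wide permissive mode using the legacy authentication API that was current around Istio 1.3 (newer releases use a PeerAuthentication resource instead), so sidecars accept both plaintext and mTLS traffic while the routing objects propagate:
# Mesh-wide permissive mode (legacy authentication API, Istio ~1.3-era sketch)
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE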
I just installed istio-1.3.2 and k8s 1.15.1 to reproduce your issue, and it worked without any modifications. This is what I did:
0.- Create a namespace called istio and enable automatic sidecar injection.
1.- $ kubectl run nginx --image nginx -n istio
2.- $ kubectl expose deploy nginx --port 8080 --target-port 80 --name simple-web -n istio
3.- $ kubectl create -f gw.yaml -f vs.yaml
Note: these are your files.
The test:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:04:26 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 24 Sep 2019 14:49:10 GMT
etag: "5d8a2ce6-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
[2019-10-11T10:04:26.101Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 6 4 "10.132.0.36" "curl/7.52.1" "4bbc2609-a928-9f79-9ae8-d6a3e32217d7" "a.b.c.d:31380" "192.168.171.73:80" outbound|8080||simple-web.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:37078 - -
And to be sure mTLS was enabled, this is from ingress-gateway describe command:
--controlPlaneAuthPolicy
MUTUAL_TLS
So, I don't know what is wrong, but you might want to go through these steps and discard things.
Note: the reason I am attacking istio gateway on port 31380 is because my k8s is on VMs right now, and I didn't want to spin up a GKE cluster for a test.
EDIT
Just deployed another deployment with your image, exposed it as simple-web-2, and it worked again. Maybe I'm lucky with istio:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:28:45 GMT
content-type: text/html
content-length: 354
last-modified: Fri, 11 Oct 2019 10:28:46 GMT
x-envoy-upstream-service-time: 4
[2019-10-11T10:28:46.400Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 5 4 "10.132.0.36" "curl/7.52.1" "df0dd00a-875a-9ae6-bd48-acd8be1cc784" "a.b.c.d:31380" "192.168.171.65:80" outbound|8080||simple-web-2.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:42980 - -
What's your k8s environment?
EDIT2
# istioctl authn tls-check curler-6885d9fd97-vzszs simple-web.istio.svc.cluster.local -n istio
HOST:PORT                                 STATUS   SERVER   CLIENT   AUTHN POLICY   DESTINATION RULE
simple-web.istio.svc.cluster.local:8080   OK       mTLS     mTLS     default/       default/istio-system

NGINX ingress controller timing out request after 60s

When a request takes over 60s to respond, it seems that the ingress controller bounces it.
From what I can see, our NGINX ingress controller returns a 504 to the client when a request takes more than 60s to process. I can see this in the NGINX logs:
2019/01/25 09:54:15 [error] 2878#2878: *4031130 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.244.0.1, server: myapplication.com, request: "POST /api/text HTTP/1.1", upstream: "http://10.244.0.39:45606/api/text", host: "myapplication.com"
10.244.0.1 - [10.244.0.1] - - [25/Jan/2019:09:54:15 +0000] "POST /api/text HTTP/1.1" 504 167 "-" "PostmanRuntime/7.1.6" 2940 60.002 [default-myapplication-service-80] 10.244.0.39:45606 0 60.000 504 bdc1e0571e34bf1223e6ed4f7c60e19d
The second log item shows 60 seconds for both upstream response time and request time (see NGINX log format here)
But I have specified all the timeout values to be 3 minutes in the ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aks-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/send_timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3m"
spec:
  tls:
  - hosts:
    - myapplication.com
    secretName: tls-secret
  rules:
  - host: myapplication.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapplication-service
          servicePort: 80
What am I missing?
I am using nginx-ingress-1.1.0 and k8s 1.9.11 on Azure (AKS).
The issue was fixed by providing integer values (in seconds) for these annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
It seems that this variant of the NGINX ingress controller requires plain integer values rather than duration strings like "3m".
Because you appear to be using the actual ingress controller from nginx.com, you need to use nginx.org/proxy-connect-timeout: "3m" style annotations, as one can see in their example.
I am still pretty sure that my debugging trick of kubectl cp-ing the nginx.conf off the controller Pod would have helped you debug this situation on your own, but reading the documentation for your ingress controller will certainly go a long way, too.
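For reference, that trick looks roughly like this (the namespace and pod name are placeholders; look yours up first):
# list the controller pods to find the exact name (namespace is an assumption)
$ kubectl -n ingress-nginx get pods
# copy the rendered config locally and check the effective timeout directives
$ kubectl cp ingress-nginx/nginx-ingress-controller-xxxxx:/etc/nginx/nginx.conf ./nginx.conf
$ grep -E 'proxy_(connect|read|send)_timeout' ./nginx.conf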
While this might not matter to you, their latest release is also 1.4.3, so I hope you are on an old version on purpose.