Can't access grafana through kong ingress controller for kubernetes

I'm trying to expose the Grafana application to the internet with the following steps:
Applying the Helm chart as referenced here: https://github.com/Kong/kubernetes-ingress-controller
helm install kong/kong --generate-name --set ingressController.installCRDs=false
Then applying the Ingress rule below:
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo
annotations:
kubernetes.io/ingress.class: kong
spec:
rules:
- http:
paths:
- path: /grafana
backend:
serviceName: prom-stack-grafana
servicePort: 3000
" | kubectl apply -f -
The services are up and running correctly and pointing to the right endpoints:
NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
kong-1602803623-kong-proxy   LoadBalancer   10.0.X.Y     W.X.Y.Z       80:32218/TCP,443:30596/TCP
prom-stack-grafana           ClusterIP      10.0.X.Y     <none>        3000/TCP

NAME                         ENDPOINTS
kong-1602803623-kong-proxy   10.244.X.Y:8443,10.244.X.Y:8000
prom-stack-grafana           10.244.X.Y:3000
The Kong controller and the Ingress are running in the same namespace.
Now the issue is that when I try to access Grafana through curl -i $PROXY_IP/grafana (and the same from the browser, which shows an empty page after redirecting to /login), I get redirected to /login with no output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0     82      0 --:--:-- --:--:-- --:--:--    82
HTTP/1.1 302 Found
Content-Type: text/html; charset=utf-8
Content-Length: 29
Connection: keep-alive
Cache-Control: no-cache
Expires: -1
Location: /login
Pragma: no-cache
Set-Cookie: redirect_to=%2Fgra; Path=/; HttpOnly; SameSite=Lax
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-Xss-Protection: 1; mode=block
Date: Thu, 15 Oct 2020 23:31:30 GMT
X-Kong-Upstream-Latency: 2
X-Kong-Proxy-Latency: 0
Via: kong/2.1.4
Found
I need to know what is missing here to get redirected to the Grafana home page.

Have you configured the root url in Grafana?
You have configured the Ingress to serve Grafana at /grafana, so you need to let Grafana know that it has to add this prefix to its paths.
The configuration changes slightly depending on the Grafana version you are using, but you will need to set root_url. On newer versions of Grafana you may also need to set serve_from_sub_path and domain (a sketch follows).
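For example, a minimal grafana.ini sketch, assuming Grafana 6.3+ and the /grafana prefix from your Ingress (example.com stands in for whatever hostname resolves to the Kong proxy IP):

[server]
domain = example.com                           ; placeholder hostname that points at the Kong proxy
root_url = %(protocol)s://%(domain)s/grafana/  ; tell Grafana it is served under /grafana
serve_from_sub_path = true                     ; needed on Grafana 6.3+ to actually serve on the sub path

If Grafana came from a Helm chart (the service name prom-stack-grafana suggests the Prometheus stack chart), the same keys can usually be passed through the chart's grafana.ini values rather than editing the file by hand.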

Related

How to access kubernetes dashboard with ingress and minikube

I'm trying to follow a tutorial on exposing the k8s dashboard using minikube and an Ingress. Basically, this is the Ingress blueprint I have:
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"labels":{"name":"dashboard-ingress"},"name":"dashboard-ingress","namespace":"kubernetes-dashboard"},"spec":{"rules":[{"host":"dashboard.com","http":{"paths":[{"backend":{"service":{"name":"kubernetes-dashboard","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}]}}
    creationTimestamp: "2023-01-11T12:41:25Z"
    generation: 1
    labels:
      name: dashboard-ingress
    name: dashboard-ingress
    namespace: kubernetes-dashboard
    resourceVersion: "213743"
    uid: ffe793ff-b985-4560-84d3-981007f7f309
  spec:
    ingressClassName: nginx
    rules:
    - host: dashboard.com
      http:
        paths:
        - backend:
            service:
              name: kubernetes-dashboard
              port:
                number: 80
          path: /
          pathType: Prefix
  status:
    loadBalancer:
      ingress:
      - ip: 192.168.49.2
kind: List
metadata:
In my hosts file I have added the following line:
192.168.49.2 dashboard.com
This is what my hosts file looks like:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 dashboard.com
# End of section
When I curl dashboard.com I get the following output:
* Trying 127.0.0.1:80...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Trying 2606:4700:3032::6815:11fe:80...
* Connected to dashboard.com (2606:4700:3032::6815:11fe) port 80 (#0)
> GET / HTTP/1.1
> Host: dashboard.com
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< Date: Wed, 11 Jan 2023 17:37:17 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Location: http://www.dashboard.com/
< CF-Cache-Status: DYNAMIC
< Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=Fk%2F5iFEfMkDw1N8ej6xCnOz%2FvdhxnAz2Dg0NS8MtwjhopPZnCvJdt%2Fb6GNLtpB%2BK2TAVf11%2BYjCn4GSVQCWWhJvGlB97DE%2Bltvfn4TOSdNl1pKx0ev8I%2F3ik9HqCdXktIaAmYVzNhPyBw0%2Ba"}],"group":"cf-nel","max_age":604800}
< NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< Server: cloudflare
< CF-RAY: 787f6b60ce8a01b6-GRU
< alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.22.0</center>
</body>
</html>
* Connection #0 to host dashboard.com left intact
I can normally access the dashboard by running minikube dashboard.
I'm new to k8s. As far as I can see, the IP address 192.168.49.2 is an external IP address. Adding it to my hosts file won't help, I guess, as it does not work anyway. (minikube tunnel has to be configured; I tried it several times, but it does not work.)
I'm using Mac M1.
My docker version is: 20.10.21
My minikube version is: 1.28.0
My kubectl version is: 1.25.2
What am I missing?
So, I got it working by closing the terminal running the tunnel and starting a new one. Basically, with minikube, whenever a change affects external IP addresses, a new tunnel has to be started: minikube tunnel (see the sketch below).
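A minimal sketch of that workflow, assuming the Ingress and hosts entry from the question (dashboard.com is the placeholder host; on macOS with the Docker driver the tunnel typically exposes the ingress on 127.0.0.1):

# stop the previous tunnel (Ctrl+C in its terminal), then start a fresh one
minikube tunnel

# in another terminal, verify the ingress now answers for the mapped host
curl -v http://dashboard.com/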

Ingress affinity session max age

I am using Ingress session affinity in order to keep communication between a client and a pod. Because sticky sessions could cause some overloading of a pod (the clients keep the same pod),
I'm looking for best practices regarding the parameter nginx.ingress.kubernetes.io/session-cookie-max-age.
The example value is 172800 (seconds), which means 48 hours.
Why? It's a huge duration; is it possible to set it to 30 minutes?
By the way, what happens when the application session has expired? Does the ingress rebalance the client or keep the same pod?
This is example documentation; you don't need to use the exact values provided in it.
You can set it to any value you want; however, if you set max-age and expires to too short a period of time, the backend will be rebalanced too often. This is also the answer to your other question: yes, the ingress will rebalance the client.
There are two optional attributes you can use related to its age:
Expires=<date>
Indicates the maximum lifetime of the cookie as an HTTP-date timestamp. In case of ingress, it's set up as a number.
Max-Age=<number>
Indicates the number of seconds until the cookie expires. A zero or negative number will expire the cookie immediately.
Important! If both Expires and Max-Age are set, Max-Age has precedence.
Below is a working example with cookie max-age and expires set to 30 minutes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cookie-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "test-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "1800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "1800"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-name
            port:
              number: 80
And checking that it works by performing a curl request (unnecessary details removed):
$ curl -I example.com
HTTP/1.1 200 OK
Date: Mon, 14 Mar 2022 13:14:42 GMT
Set-Cookie: test-cookie=1647263683.046.104.525797|ad50b946deebe30052b8573dcb9a2339; Expires=Mon, 14-Mar-22 13:44:42 GMT; Max-Age=1800; Path=/; HttpOnly

Exposing virtual service with istio and mTLS globally enabled

I have this configuration on my service mesh:
mTLS globally enabled and meshpolicy default
simple-web deployment exposed as clusterip on port 8080
http gateway for port 80 and virtualservice routing on my service
Here are the Gateway and VirtualService YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # Specify the ingressgateway created for us
  servers:
  - port:
      number: 80 # Service port to watch
      name: http-gateway
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-web
spec:
  gateways:
  - http-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /simple-web
    rewrite:
      uri: /
    route:
    - destination:
        host: simple-web
        port:
          number: 8080
Both vs and gw are in the same namespace.
The deployment was created and exposed with these commands:
k create deployment --image=yeasy/simple-web:latest simple-web
k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web
and with k get pods I receive this:
pod/simple-web-9ffc59b4b-n9f85 2/2 Running
What happens is that from outside, pointing to the ingress-gateway load balancer, I receive an HTTP 503 error.
If I curl from the ingressgateway pod, I can reach the simple-web service.
Why can't I reach the website with mTLS enabled? What's the correct configuration?
As @suren mentioned in his answer, this issue is not present in Istio version 1.3.2, so one solution is to use a newer version.
If you choose to upgrade Istio to a newer version, please review the 1.3 Upgrade Notice and Upgrade Steps documentation, as Istio is still in development and changes drastically with each version.
Also, as mentioned in the comments by @Manuel Castro, this is most likely the issue addressed in Avoid 503 errors while reconfiguring service routes, and newer versions simply handle it better.
Creating both the VirtualServices and DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml) is not sufficient because the resources propagate (from the configuration server, i.e., Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
It should be possible to avoid this issue by temporarily disabling mTLS or by using permissive mode during the deployment.
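For example, on the Istio 1.0–1.3 line the mesh-wide policy can be switched to permissive mode with something like the following (a sketch only; the MeshPolicy must be named default, and newer Istio releases replace this API with PeerAuthentication):

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default         # the mesh-wide policy must use this name
spec:
  peers:
  - mtls:
      mode: PERMISSIVE  # accept both plaintext and mTLS while routes settle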
I just installed istio-1.3.2 and k8s 1.15.1 to reproduce your issue, and it worked without any modifications. This is what I did:
0.- Create a namespace called istio and enable automatic sidecar injection (commands sketched after this list).
1.- $ kubectl run nginx --image nginx -n istio
2.- $ kubectl expose deploy nginx --port 8080 --target-port 80 --name simple-web -n istio
3.- $ kubectl create -f gw.yaml -f vs.yaml
Note: these are your files.
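For step 0, a sketch of the usual commands (assuming the standard istio-injection=enabled label for automatic sidecar injection):

kubectl create namespace istio
kubectl label namespace istio istio-injection=enabled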
The test:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:04:26 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 24 Sep 2019 14:49:10 GMT
etag: "5d8a2ce6-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
[2019-10-11T10:04:26.101Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 6 4 "10.132.0.36" "curl/7.52.1" "4bbc2609-a928-9f79-9ae8-d6a3e32217d7" "a.b.c.d:31380" "192.168.171.73:80" outbound|8080||simple-web.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:37078 - -
And to be sure mTLS was enabled, this is from ingress-gateway describe command:
--controlPlaneAuthPolicy
MUTUAL_TLS
So, I don't know what is wrong, but you might want to go through these steps and discard things.
Note: the reason I am attacking istio gateway on port 31380 is because my k8s is on VMs right now, and I didn't want to spin up a GKE cluster for a test.
EDIT
Just deployed another deployment with your image, exposed it as simple-web-2, and it worked again. Maybe I'm lucky with Istio:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:28:45 GMT
content-type: text/html
content-length: 354
last-modified: Fri, 11 Oct 2019 10:28:46 GMT
x-envoy-upstream-service-time: 4
[2019-10-11T10:28:46.400Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 5 4 "10.132.0.36" "curl/7.52.1" "df0dd00a-875a-9ae6-bd48-acd8be1cc784" "a.b.c.d:31380" "192.168.171.65:80" outbound|8080||simple-web-2.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:42980 - -
What's your k8s environment?
EDIT2
# istioctl authn tls-check curler-6885d9fd97-vzszs simple-web.istio.svc.cluster.local -n istio
HOST:PORT                                 STATUS   SERVER   CLIENT   AUTHN POLICY   DESTINATION RULE
simple-web.istio.svc.cluster.local:8080   OK       mTLS     mTLS     default/       default/istio-system

Is it true that Istio can't filter HTTPS requests based on the destination domains?

I've read this article about the TLS origination problem in Istio. Let me quote it here:
There is a caveat to this story. In HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted, so Istio cannot know the destination domain of the encrypted requests. Well, Istio could know the destination domain by the SNI (Server Name Indication) field. This feature, however, is not yet implemented in Istio. Therefore, currently Istio cannot perform filtering of HTTPS requests based on the destination domains.
I want to understand what the bold statement really means, because I've tried this:
Downloaded istio-1.0.0 here to get the sample YAML code.
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
And applied this ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - "*.cnn.com"
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: NONE
And executed this curl command inside the pod:
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep -- curl -s -o /dev/null -D - https://edition.cnn.com/politics
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
x-servedByHost: ::ffff:172.17.128.31
access-control-allow-origin: *
cache-control: max-age=60
content-security-policy: default-src 'self' blob: https://*.cnn.com:* http://*.cnn.com:* *.cnn.io:* *.cnn.net:* *.turner.com:* *.turner.io:* *.ugdturner.com:* courageousstudio.com *.vgtf.net:*; script-src 'unsafe-eval' 'unsafe-inline' 'self' *; style-src 'unsafe-inline' 'self' blob: *; child-src 'self' blob: *; frame-src 'self' *; object-src 'self' *; img-src 'self' data: blob: *; media-src 'self' data: blob: *; font-src 'self' data: *; connect-src 'self' *; frame-ancestors 'self' https://*.cnn.com:* http://*.cnn.com https://*.cnn.io:* http://*.cnn.io:* *.turner.com:* courageousstudio.com;
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
Via: 1.1 varnish
Content-Length: 1554561
Accept-Ranges: bytes
Date: Wed, 08 Aug 2018 04:59:07 GMT
Via: 1.1 varnish
Age: 105
Connection: keep-alive
Set-Cookie: countryCode=US; Domain=.cnn.com; Path=/
Set-Cookie: geoData=mountain view|CA|94043|US|NA; Domain=.cnn.com; Path=/
Set-Cookie: tryThing00=3860; Domain=.cnn.com; Path=/; Expires=Mon Jul 01 2019 00:00:00 GMT
Set-Cookie: tryThing01=4349; Domain=.cnn.com; Path=/; Expires=Fri Mar 01 2019 00:00:00 GMT
Set-Cookie: tryThing02=4896; Domain=.cnn.com; Path=/; Expires=Wed Jan 01 2020 00:00:00 GMT
X-Served-By: cache-iad2150-IAD, cache-sin18022-SIN
X-Cache: HIT, MISS
X-Cache-Hits: 1, 0
X-Timer: S1533704347.303019,VS0,VE299
Vary: Accept-Encoding
As you can see, I can access edition.cnn.com over HTTPS (SSL). Am I misunderstanding the meaning of the bold statement?
The cited blog post is from January 31, 2018, and the statement was correct then. Now (as of 1.0) Istio supports traffic routing by SNI; see https://istio.io/docs/tasks/traffic-management/egress/.
This reminds me to update that blog post, will do it by the end of this week. Sorry for the confusion, thank you for pointing to the issue.
What you're showing here is an HTTPS connection/request, and there is no reason for it not to work. Filtering in this case means taking a specific action (i.e. denying access) based on the destination host in HTTP terms (the header that makes hosting multiple sites on the same server IP possible), and that is what the statement refers to.
SNI is the way to identify which host you're connecting to before the TLS connection is established.
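As a rough illustration of what SNI-based control looks like on newer Istio versions: declaring the external host with protocol TLS lets the sidecar match outbound traffic by its SNI value, so only the listed hosts are allowed out (a sketch based on the Istio egress tasks; the exact protocol and resolution options vary by Istio version):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn-tls
spec:
  hosts:
  - edition.cnn.com   # matched against the SNI of outbound TLS connections
  ports:
  - number: 443
    name: tls
    protocol: TLS     # passthrough TLS, routed by SNI rather than by HTTP host
  resolution: NONE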

Kubernetes Ingress not adding the application URL for grafana dashboard

I have installed Grafana in my Kubernetes 1.9 cluster. When I access it with my Ingress URL (http://sample.com/grafana/) I get the first page. After that, the JavaScript and CSS downloads do not add /grafana to the URL.
Here is my ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress-v1
  namespace: monitoring
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - sample.com
    secretName: ngerss-tls
  rules:
  - host: sample.com
    http:
      paths:
      - path: /grafana/
        backend:
          serviceName: grafana-grafana
          servicePort: 80
Here is a discussion about the same topic, but it's not helping my issue:
https://github.com/kubernetes/contrib/issues/860
The images below show that the first request goes to /grafana/ but the second request doesn't get /grafana/ added to the URL.
Your ingress rule is correct, and nginx creates the correct virtual host to forward traffic to Grafana's service (only the relevant lines are shown):
server {
    server_name sample.com;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";

    location ~* ^/grafana/(?<baseuri>.*) {
        set $proxy_upstream_name "default-grafana-grafana-80";
        set $namespace "default";
        set $ingress_name "grafana-ingress-v1";
        rewrite /grafana/(.*) /$1 break;
        rewrite /grafana/ / break;
        proxy_pass http://default-grafana-grafana-80;
    }
And yes, when you go to sample.com/grafana/ you get a response from the Grafana pod, but it redirects to the sample.com/login page (you can see this in the screenshot you provided):
$ curl -v -L http://sample.com/grafana/
* Trying 192.168.99.100...
* Connected to sample.com (192.168.99.100) port 80 (#0)
> GET /grafana/ HTTP/1.1
> Host: sample.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 302 Found
< Server: nginx/1.13.5
< Date: Tue, 30 Jan 2018 21:55:21 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 29
< Connection: keep-alive
< Location: /login
< Set-Cookie: grafana_sess=c07ab2399d82fef4; Path=/; HttpOnly
< Set-Cookie: redirect_to=%252F; Path=/
<
* Ignoring the response-body
* Connection #0 to host sample.com left intact
* Issue another request to this URL: 'http://sample.com/login'
* Found bundle for host sample.com: 0x563ff9bf7f20 [can pipeline]
* Re-using existing connection! (#0) with host sample.com
* Connected to sample.com (192.168.99.100) port 80 (#0)
> GET /login HTTP/1.1
> Host: sample.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.13.5
< Date: Tue, 30 Jan 2018 21:55:21 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 21
< Connection: keep-alive
<
* Connection #0 to host sample.com left intact
default backend 404
because by default grafana's root_url is just /:
root_url = %(protocol)s://%(domain)s:%(http_port)s/
and when the request redirects to just sample.com, nginx forwards it to the default backend (404).
Solution:
You need to change root_url grafana's server setting to /grafana/:
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
You can do this by changing this setting in Grafana's ConfigMap object (see the sketch below).
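A minimal sketch of what that could look like in the ConfigMap that holds grafana.ini (the ConfigMap name and namespace are placeholders; check which ConfigMap your Grafana pod actually mounts):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-grafana   # placeholder; use the ConfigMap mounted by your Grafana pod
  namespace: monitoring   # placeholder namespace
data:
  grafana.ini: |
    [server]
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/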
In order to serve Grafana with the prefix /grafana (e.g., http://k8s.example.com/grafana), add the following to your Helm values.yaml:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
  path: /grafana/?(.*)
  hosts:
    - k8s.example.com

grafana.ini:
  server:
    root_url: http://localhost:3000/grafana # this host can be localhost
And upgrade the Grafana Helm release as follows:
helm -n namespace_name upgrade -f values.yaml release_name stable/grafana