I'm not able to find a way to update the base URL of my Keycloak Gatekeeper sidecar.
My configuration works well when the service is served at the base URL (e.g. https://monitoring.example.com/), but not with a custom base path (e.g. https://monitoring.example.com/prometheus).
My yaml config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: prometheus
    spec:
      containers:
      - name: prometheus
        image: quay.io/coreos/prometheus:latest
        args:
        - '--web.external-url=https://monitoring.example.com/prometheus'
        - '--web.route-prefix=/prometheus'
      - name: proxy
        image: keycloak/keycloak-gatekeeper:5.0.0
        imagePullPolicy: Always
        args:
        - --resource=uri=/*
        - --discovery-url=https://auth.example.com/auth/realms/MYREALM
        - --client-id=prometheus
        - --client-secret=XXXXXXXX
        - --listen=0.0.0.0:5555
        - --enable-logging=true
        - --enable-json-logging=true
        - --upstream-url=http://127.0.0.1:9090/prometheus
My problem is setting a different base URL path ("/prometheus") for the sidecar: when I open https://monitoring.example.com/prometheus, I receive a 307 redirect to https://monitoring.example.com/oauth/authorize?state=XXXXXXX
whereas it should be https://monitoring.example.com/prometheus/oauth/authorize?state=XXXXXXX.
I tried the parameter "--redirection-url=https://monitoring.example.com/prometheus",
but it still redirects me to the same URL.
EDIT:
My objective is to protect multiple Prometheus instances and restrict access to them. I'm also looking for a way to set permissions per realm or per client: some Keycloak users should be able, for example, to authenticate and see the content of /prometheus-dev but not /prometheus-prod.
EDIT2:
I missed the parameter "--base-uri". When I set it to "/prometheus" and try to connect to "https://monitoring.example.com/prometheus/", I receive the correct redirect "https://monitoring.example.com/prometheus/oauth/authorize?state=XXXXXXX", but it still doesn't work. The Gatekeeper log shows:
"msg: no session found in request, redirecting for authorization, error: authentication session not found"
In Gatekeeper version 7.0.0 you can use one of these options:
--oauth-uri
--base-uri
Currently, though, if you use --base-uri, a trailing / is added to the callback URL after the base URI (i.e. /baseUri//oauth/callback). For me it works fine with --oauth-uri=/baseUri/oauth.
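Applied to the question's setup, that workaround would look roughly like this (a sketch reusing the question's values; the 7.0.0 image tag and the /prometheus base path are assumptions to adapt):
      - name: proxy
        image: keycloak/keycloak-gatekeeper:7.0.0
        args:
        # base path + /oauth, sidestepping the --base-uri double-slash issue
        - --oauth-uri=/prometheus/oauth
        - --resource=uri=/*
        - --discovery-url=https://auth.example.com/auth/realms/MYREALM
        - --client-id=prometheus
        - --client-secret=XXXXXXXX
        - --listen=0.0.0.0:5555
        - --upstream-url=http://127.0.0.1:9090/prometheus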
It can be done if you rewrite the Location header on the 307 responses to the browser. If you are behind an nginx ingress, add these annotations:
nginx.ingress.kubernetes.io/proxy-redirect-from: /
nginx.ingress.kubernetes.io/proxy-redirect-to: /prometheus/
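For context, these annotations go in the Ingress metadata; a minimal sketch of the relevant fragment (the ingress name is hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  annotations:
    # rewrite Location headers on redirects from / to /prometheus/
    nginx.ingress.kubernetes.io/proxy-redirect-from: /
    nginx.ingress.kubernetes.io/proxy-redirect-to: /prometheus/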
Related
I have a requirement to integrate SSO for Argo Workflows, and for this we have made the necessary changes in quick-start-postgres.yaml.
Here is the yaml file we are using to start Argo locally:
https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
Below are the sections we are modifying to support the SSO integration.
Deployment section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
      - args:
        - server
        - --namespaced
        - --auth-mode=sso
workflow-controller-configmap section:
apiVersion: v1
data:
  sso: |
    # This is the root URL of the OIDC provider (required).
    issuer: http://localhost:8080/auth/realms/master
    # This is name of the secret and the key in it that contain OIDC client
    # ID issued to the application by the provider (required).
    clientId:
      name: dummyClient
      key: client-id
    # This is name of the secret and the key in it that contain OIDC client
    # secret issued to the application by the provider (required).
    clientSecret:
      name: jdgcFxs26SdxdpH9Z5L33QCFAmGYTzQB
      key: client-secret
    # This is the redirect URL supplied to the provider (required). It must
    # be in the form <argo-server-root-url>/oauth2/callback. It must be
    # browser-accessible.
    redirectUrl: http://localhost:2746/oauth2/callback
  artifactRepository: |
    s3:
      bucket: my-bucket
We start Argo by issuing the following two commands:
kubectl apply -n argo -f modified-file/quick-start-postgres.yaml
kubectl -n argo port-forward svc/argo-server 2746:2746
After executing the above commands and trying to log in with single sign-on, it does not redirect to the Keycloak login page. Instead it is redirected to https://localhost:2746/oauth2/redirect?redirect=https://localhost:2746/workflows which shows:
This page isn't working. localhost is currently unable to handle this request.
HTTP ERROR 501
What could be the issue here? Are we missing anything?
Are there any arguments we need to pass when starting Argo?
Can someone please suggest something on this?
Try adding --auth-mode=client to your argo-server container args
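With that change the container args would look something like this (a sketch; recent Argo Server versions accept --auth-mode more than once, but verify against your version):
      - args:
        - server
        - --namespaced
        - --auth-mode=sso
        # additionally allow client token auth, e.g. for the CLI
        - --auth-mode=client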
I have an API and a GUI application.
The API is a simple Python FastAPI app with one route. The GUI is a React app that is served via nginx.
So I have a working local cluster with those two apps. Each has a Service; the GUI additionally has an Ingress. Using k3d, I am able to get to the GUI app because I start the cluster with k3d cluster create -p "8081:80@loadbalancer", so I can go to localhost:8081 and see it. The Ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-gui-ingress
  annotations:
    external-dns/manage-entries: "true"
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-gui-service
            port:
              number: 80
The Service configurations are pretty much the same, just with different names:
apiVersion: v1
kind: Service
metadata:
  name: foo-gui-service
  labels:
    app.kubernetes.io/name: foo-gui
    app.kubernetes.io/component: server
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/part-of: foo-gui
spec:
  selector:
    app: foo-gui
    component: foo-gui
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The GUI makes calls to the API. The API Service is called foo-api-service, so I want to make calls to http://foo-api-service:80/<route>. It works from a temporary pod that I deploy for debugging network issues (I can curl the API). It even works when I attach to the GUI pod.
The problem is: it doesn't work from the GUI itself. I receive the following error in the browser dev console: POST http://foo-api-service/test net::ERR_NAME_NOT_RESOLVED. It looks like http://foo-api-service is being resolved on my network, not the cluster's.
My question is: is there a way to overcome that? To have http://foo-api-service resolve inside the local cluster? Or should I provide an Ingress for the API as well and use the domain name from the API's Ingress as the URL in the GUI app?
I'm messing around with Kubernetes and I've set up a cluster on my local PC using kind. I have also installed Traefik as an ingress controller, and I have already managed to access an API that I deployed in the cluster, as well as Grafana, through some ingresses (without doing port-forwards or anything like that). But with Mongo I can't. While the API and Grafana need an IngressRoute, Mongo needs an IngressRouteTCP.
The IngressRouteTCP that I have defined is this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
  - web
  routes:
  - match: HostSNI(`mongo.localhost`)
    services:
    - name: mongodb
      port: 27017
But I get this error:
I know I can use a port-forward, but I want to do it this way (with an ingress).
Thanks a lot.
You need to specify TLS parameters, like this:
tls:
  certResolver: "bar"
  domains:
  - main: "snitest.com"
    sans:
    - "*.snitest.com"
To avoid using TLS, you need to match all routes with HostSNI(`*`).
It is important to note that the Server Name Indication is an extension of the TLS protocol.
Hence, only TLS routers will be able to specify a domain name with that rule.
However, there is one special use case for HostSNI with non-TLS routers: when one wants a non-TLS router that matches all (non-TLS) requests, one should use the specific HostSNI(`*`) syntax.
DOCS
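Applied to the question's manifest, the non-TLS variant would look roughly like this (a sketch; the namespace is renamed here because underscores are not valid in Kubernetes resource names):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo-namespace
spec:
  entryPoints:
  - web
  routes:
  # HostSNI(`*`) matches all (non-TLS) TCP traffic on the entrypoint
  - match: HostSNI(`*`)
    services:
    - name: mongodb
      port: 27017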
I am having the following issue.
I am new to GCP/cloud. I have created a cluster in GKE, deployed our application there, and installed nginx as a pod in the cluster. Our company has an authorized SSL certificate, which I have uploaded under Certificates in GCP.
In the DNS service, I have created an A record that matches the IP of the Ingress.
When I call the URL in the browser, it still shows that the website is insecure, with the message "Kubernetes Ingress controller fake certificate".
I used the following guide: https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#console_1
However, I am not able to execute step 3, "Associate an SSL certificate with a target proxy", because it asks for "URL maps" and I am not able to find those in the GCP console.
Has anybody gone through the same issue, or could anybody help me out? It would be great.
Thanks and regards,
I was able to fix this problem by adding an extra argument to the ingress-nginx-controller deployment.
For context: my TLS secret was in the default namespace and was named letsencrypt-secret-prod, so I wanted to add it as the default SSL certificate for the nginx controller.
My first solution was to edit the deployment.yaml of the nginx controller and add the following line at the end of the containers[0].args list:
- '--default-ssl-certificate=default/letsencrypt-secret-prod'
Which made that section of the yaml look like this:
containers:
  - name: controller
    image: >-
      k8s.gcr.io/ingress-nginx/controller:v1.2.0-beta.0@sha256:92115f5062568ebbcd450cd2cf9bffdef8df9fc61e7d5868ba8a7c9d773e0961
    args:
      - /nginx-ingress-controller
      - '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--election-id=ingress-controller-leader'
      - '--controller-class=k8s.io/ingress-nginx'
      - '--ingress-class=nginx'
      - '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--validating-webhook=:8443'
      - '--validating-webhook-certificate=/usr/local/certificates/cert'
      - '--validating-webhook-key=/usr/local/certificates/key'
      - '--default-ssl-certificate=default/letsencrypt-secret-prod'
But I was using the Helm chart ingress-nginx/ingress-nginx, so I wanted this config to be in the chart's values.yaml file so that I could upgrade it later if necessary.
So, reading the values file, I replaced the controller.extraArgs attribute, which looked like this:
extraArgs: {}
with this:
extraArgs:
  default-ssl-certificate: default/letsencrypt-secret-prod
This restarted the deployment with the argument in the correct place.
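For reference, in that chart's values.yaml this block sits under the controller key, so the full fragment is roughly (a sketch, assuming the ingress-nginx/ingress-nginx chart layout):
controller:
  extraArgs:
    default-ssl-certificate: default/letsencrypt-secret-prod
A helm upgrade with the updated values file then applies it.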
Now I can use ingresses without specifying the tls.secretName for each of them, which is awesome.
Here's an example ingress that is working for me with HTTPS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress-name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - http:
      paths:
      - path: /some-prefix
        pathType: Prefix
        backend:
          service:
            name: some-service-name
            port:
              number: 80
You can save your SSL/TLS certificate in a K8s secret and attach it to the ingress.
You need to configure the TLS block in the ingress; don't forget to add the ingress.class details to the ingress as well.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls-secret
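For reference, the secret referenced by secretName is a standard Kubernetes TLS secret; a minimal sketch (the base64 payloads are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>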
You can read more at: https://medium.com/avmconsulting-blog/how-to-secure-applications-on-kubernetes-ssl-tls-certificates-8f7f5751d788
You might be seeing something like this in the browser:
That comes from the ingress controller: either the wrong certificate is attached to the ingress, or the ingress controller is falling back to its default fake certificate.
In my case, I had upgraded to the new TLS cert in the cattle-system namespace but not in my own namespace, so the ingress somehow responded with the K8s fake cert.
Solution: upgrade all old certs to the new cert (only the ingress user certs; WARNING, changing system certs may damage your system, which I cannot explain further).
Also, don't forget to check that the Subject Alternative Name actually contains the same value the CN contains. If it does not, the certificate is not valid, because the industry is moving away from CN. I just learned this now.
I have a few microservices running in a kubernetes cluster. Each service has its own APIs and corresponding swagger.json.
I'm considering deploying a swagger-ui pod inside kubernetes to show these swagger.json and execute the APIs.
Tried it and realized that:
swagger-ui can't find the services' swagger.json files. The swagger-ui pod can resolve the services' DNS names, but my browser can't; it seems the code runs in the browser instead of the pod.
For the same reason (the browser runs the code), swagger-ui can't be used to execute the APIs, since the services are not reachable from outside Kubernetes.
So my question is:
is there a way to let swagger-ui run code inside the pod, so that it can reach the services and execute their APIs?
is there ANY way to execute the Kubernetes services' APIs via a web UI, if we don't use swagger-ui?
Thanks a lot!
is there a way to let swagger-ui run code inside the pod? so that it can reach the services and execute their apis?
You can expose or share the JSON files with the swagger pod, and the swagger UI can serve those files; that's one possible way. However, it's not a good idea to set up ReadWriteMany (RWX) NFS inside K8s just to share the pods' file systems with each other.
is there ANY way to execute the kubernetes services' apis via webui, if we don't use swagger-ui?
You can also try ReDoc: https://github.com/Redocly/redoc
which is similar to swagger; in the GitHub repo below there are also examples available for both.
To run swagger inside Kubernetes, there are two ways: swagger can get the file either from the file system or from a URL.
For example, if you are looking to run a deployment of the swagger UI:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-ui
  labels:
    app: swagger-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swagger-ui
  template:
    metadata:
      labels:
        app: swagger-ui
    spec:
      containers:
      - name: swagger-ui
        image: swaggerapi/swagger-ui # build a new image for adding a local swagger file
        ports:
        - containerPort: 8080
        env:
        - name: BASE_URL
          value: /
        - name: API_URLS
          value: >-
            [
              {url:'https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml',name:'Pet Store Example'},
              {url:'https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/uspto.yaml',name:'USPTO'},
              {url:'/example-swagger.yaml',name:'Local Example'}
            ]
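As an alternative to building a new image for the local file, you could mount the spec from a ConfigMap; a sketch, assuming the nginx-based swaggerapi/swagger-ui image serves static files from /usr/share/nginx/html (verify for your image version):
apiVersion: v1
kind: ConfigMap
metadata:
  name: swagger-files
data:
  example-swagger.yaml: |
    openapi: 3.0.0
    info:
      title: Local Example
      version: "1.0"
    paths: {}
and in the deployment's pod spec:
      containers:
      - name: swagger-ui
        # ... image, ports and env as above ...
        volumeMounts:
        - name: swagger-files
          mountPath: /usr/share/nginx/html/example-swagger.yaml
          subPath: example-swagger.yaml
      volumes:
      - name: swagger-files
        configMap:
          name: swagger-files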
https://github.com/harsh4870/Central-API-Documentation-kubernetes
You can also read about Newman: https://azevedorafaela.com/2018/12/18/how-to-test-internal-microservices-in-a-kubernetes-cluster/
I solved it here: https://stackoverflow.com/a/68798178/12877180
Executing Kubernetes services by using their names from a web UI can be achieved with the nginx reverse proxy mechanism:
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
This mechanism forwards the API requests invoked from the browser to the internal cluster service name, via the nginx server running inside the cluster.
An example nginx configuration:
server {
    listen 80;

    location /api/ {
        proxy_pass http://<service-name>.<namespace>:<port>;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html;
        add_header 'Access-Control-Allow-Origin' '*';
        try_files $uri $uri/ /index.html =404;
    }
}
In the above example, all of your .../api/... calls are redirected to the http://<service-name>.<namespace>:<port>/api/... endpoint.
Unfortunately, I don't know how to achieve the same goal with Swagger-UI.
NOTE
If the proxy_pass URL ended with '/', the location path (e.g. '/api/') would not be appended to the upstream URL.
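To illustrate that note (the service name is hypothetical), compare these two variants:
# with a URI part (the trailing '/'), nginx replaces the matched
# '/api/' prefix, so a request to /api/test is proxied to /test:
location /api/ {
    proxy_pass http://foo-api-service.default:80/;
}
# without a URI part, the original path is preserved,
# so /api/test is proxied to /api/test:
location /api/ {
    proxy_pass http://foo-api-service.default:80;
}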