I have a requirement to integrate SSO for Argo Workflows, and for this we have made the necessary changes in quick-start-postgres.yaml.
Here is the yaml file we are using to start Argo locally:
https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
Below are the sections we are modifying to support the SSO integration.
Deployment section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
      - args:
        - server
        - --namespaced
        - --auth-mode=sso
workflow-controller-configmap section:
apiVersion: v1
data:
  sso: |
    # This is the root URL of the OIDC provider (required).
    issuer: http://localhost:8080/auth/realms/master
    # This is name of the secret and the key in it that contain OIDC client
    # ID issued to the application by the provider (required).
    clientId:
      name: dummyClient
      key: client-id
    # This is name of the secret and the key in it that contain OIDC client
    # secret issued to the application by the provider (required).
    clientSecret:
      name: jdgcFxs26SdxdpH9Z5L33QCFAmGYTzQB
      key: client-secret
    # This is the redirect URL supplied to the provider (required). It must
    # be in the form <argo-server-root-url>/oauth2/callback. It must be
    # browser-accessible.
    redirectUrl: http://localhost:2746/oauth2/callback
  artifactRepository: |
    s3:
      bucket: my-bucket
We are starting Argo by issuing the below two commands:
kubectl apply -n argo -f modified-file/quick-start-postgres.yaml
kubectl -n argo port-forward svc/argo-server 2746:2746
After executing the above commands and trying to log in with single sign-on, it does not redirect to the Keycloak user login page. Instead it is redirected to https://localhost:2746/oauth2/redirect?redirect=https://localhost:2746/workflows
This page isn’t working localhost is currently unable to handle this request.
HTTP ERROR 501
What could be the issue here? Are we missing anything?
Are there arguments we need to pass when starting Argo?
Can someone please suggest something?
Try adding --auth-mode=client to your argo-server container args
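For reference, a minimal sketch of what the argo-server container args could then look like (the server accepts repeated --auth-mode flags; everything else is taken from the question):

containers:
- name: argo-server
  args:
  - server
  - --namespaced
  - --auth-mode=sso
  - --auth-mode=client   # repeated flag enables a second accepted auth mode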
I have the following pod definition (notice the explicitly set service account and secret):
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
  labels:
    name: pod-service-account-example
spec:
  serviceAccountName: example-sa
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sleep", "10000000"]
    env:
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: example-secret
          key: secret-key-123
It runs successfully. However, if I use the same service account example-sa and try to retrieve the example-secret, it fails:
kubectl get secret example-secret
Error from server (Forbidden): secrets "example-secret" is forbidden: User "system:serviceaccount:default:example-sa" cannot get resource "secrets" in API group "" in the namespace "default"
Does RBAC not apply for pods? Why is the pod able to retrieve the secret if not?
RBAC applies to service accounts, groups, and users, not to pods. When you reference a secret in the env of a pod, the service account is not used to fetch the secret. The kubelet fetches the secret using its own Kubernetes client credentials. Since the kubelet uses its own credentials, it does not matter whether the service account has RBAC permission to get the secret, because the service account is not involved.
The service account is used when you invoke the Kubernetes API from a pod, using a standard Kubernetes client library or kubectl.
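As an illustration (a sketch using the pod and secret names from the question), you can see the difference by asking the API server directly:

# Ask RBAC whether the service account may read the secret: prints "no"
kubectl auth can-i get secret/example-secret \
  --as=system:serviceaccount:default:example-sa

# The env injection still works, because the kubelet (not the service
# account) fetched the secret when it started the pod.
kubectl exec pod-service-account-example -- printenv SECRET_KEY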
Code snippet of Kubelet for reference.
I have a Node app that loads its data based on the domain name. Domains are configured with a CNAME to app.service.com (which is the Node app).
The Node app sees the request's domain and sends a request to an API to get the app data.
For example: domain.com CNAME app.service.com
-> then the Node app asks the API for domain.com's data.
The problem is setting up HTTPS (with Let's Encrypt) for all the domains. I think cert-manager can help, but I have no idea how to automate this without manually changing a config file for each new domain.
Or is there a better way to achieve this in Kubernetes?
The standard method to support more than one domain name and/or subdomain is to use one SSL certificate and implement SAN (Subject Alternative Names). The extra domain names are stored in the SAN field. All SSL certificates support SAN, but not all certificate authorities will issue multi-domain certificates. Let's Encrypt does support SAN, so their certificates will meet your goal.
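As an aside, a multi-domain (SAN) certificate request with certbot on a plain host simply lists every name; a hedged sketch, assuming certbot with webroot validation and placeholder domains:

# One certificate covering several names via SAN (domains are placeholders)
certbot certonly --webroot -w /var/www/html \
  -d app.service.com \
  -d domain.com \
  -d www.domain.com

Inside the cluster, the same idea is driven by the Job described next.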
First, you have to create a Job in your cluster that uses an image to run a shell script. The script will spin up an HTTP service, create the certs, and save them into a predefined Secret. Your domain and email are environment variables, so be sure to fill those in:
apiVersion: batch/v1
kind: Job
metadata:
  name: letsencrypt-job
  labels:
    app: letsencrypt
spec:
  template:
    metadata:
      name: letsencrypt
      labels:
        app: letsencrypt
    spec:
      containers:
      # Bash script that starts an http server and launches certbot
      # Fork of github.com/sjenning/kube-nginx-letsencrypt
      - image: quay.io/hiphipjorge/kube-nginx-letsencrypt:latest
        name: letsencrypt
        imagePullPolicy: Always
        ports:
        - name: letsencrypt
          containerPort: 80
        env:
        - name: DOMAINS
          value: kubernetes-letsencrypt.jorge.fail # Domain you want to use. CHANGE ME!
        - name: EMAIL
          value: jorge@runnable.com # Your email. CHANGE ME!
        - name: SECRET
          value: letsencrypt-certs
      restartPolicy: Never
Now that you have a Job running, you can create a Service to direct traffic to it:
apiVersion: v1
kind: Service
metadata:
  name: letsencrypt
spec:
  selector:
    app: letsencrypt
  ports:
  - protocol: "TCP"
    port: 80
This Job will now be able to run, but there are still three things you need to do before the Job actually succeeds and you're able to access the service over HTTPS.
First, you need to create a Secret for the Job to update and store the certs in. Since you don't have any certs when you create the Secret, it will just start out empty.
apiVersion: v1
kind: Secret
metadata:
  name: letsencrypt-certs
type: Opaque
# Create an empty secret (with no data) in order for the update to work
Second, you'll have to add the Secret to the Ingress so that the certs can be served. Remember that it is the Ingress that knows about your host, which is why the certs need to be specified there. Adding the Secret to the Ingress will look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "kubernetes-demo-app-ingress-service"
spec:
  tls:
  - hosts:
    - kubernetes-letsencrypt.jorge.fail # Your host. CHANGE ME
    secretName: letsencrypt-certs # Name of the secret
  rules:
Finally, you have to route traffic from the host, down to the Job, through your Nginx deployment. To do that you'll add a new route and an upstream to your Nginx configuration. This could also be done through the Ingress, by adding a /.well-known/* path and routing it to the letsencrypt Service, but that is more complex because you would also have to add a health route to the Job; so instead you'll just route the traffic through the Nginx deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    ...
    # Add upstream for letsencrypt job
    upstream letsencrypt {
      server letsencrypt:80 max_fails=0 fail_timeout=1s;
    }

    server {
      listen 80;
      ...
      # Redirect all traffic in /.well-known/ to letsencrypt
      location ^~ /.well-known/acme-challenge/ {
        proxy_pass http://letsencrypt;
      }
    }
After you apply all these changes, destroy your Nginx Pod(s) in order to make sure that the ConfigMap gets updated correctly in the new Pods:
$ kubectl get pods | grep ngi | awk '{print $1}' | xargs kubectl delete pods
Make sure it works.
To verify that this works, check that the Job actually succeeded. You can do this by getting the Job through kubectl, or by checking the Kubernetes dashboard.
$ kubectl get job letsencrypt-job
NAME              DESIRED   SUCCESSFUL   AGE
letsencrypt-job   1         1            1d
You can also check the secret to make sure the certs have been properly populated. You can do this through kubectl or through the dashboard:
$ kubectl describe secret letsencrypt-certs
Name: letsencrypt-certs
Namespace: default
Labels: <none>
Annotations:
Type: Opaque
Data
====
tls.crt: 3493 bytes
tls.key: 1704 bytes
Now that you can see the certs have been successfully created, you can do the very last step in this whole process. For the Ingress to pick up the change in the Secret (from having no data to having the certs), you need to update it so it gets reloaded. To do that, just add a timestamp as a label to the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "kubernetes-demo-app-ingress-service"
  labels:
    # Timestamp used in order to force reload of the secret
    last_updated: "1494099933"
  ...
Please take a look at: kubernetes-letsencrypt.
I'm not able to find a way to update the base URL of my Keycloak Gatekeeper sidecar.
My configuration works well with services served at the base URL (e.g. https://monitoring.example.com/), but not with a custom base path (e.g. https://monitoring.example.com/prometheus).
My yaml config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: prometheus
    spec:
      containers:
      - name: prometheus
        image: quay.io/coreos/prometheus:latest
        args:
        - '--web.external-url=https://monitoring.example.com/prometheus'
        - '--web.route-prefix=/prometheus'
      - name: proxy
        image: keycloak/keycloak-gatekeeper:5.0.0
        imagePullPolicy: Always
        args:
        - --resource=uri=/*
        - --discovery-url=https://auth.example.com/auth/realms/MYREALM
        - --client-id=prometheus
        - --client-secret=XXXXXXXX
        - --listen=0.0.0.0:5555
        - --enable-logging=true
        - --enable-json-logging=true
        - --upstream-url=http://127.0.0.1:9090/prometheus
My problem is being able to set a different base URL path ("/prometheus") for the sidecar: when I open https://monitoring.example.com/prometheus, I receive a 307 redirect to https://monitoring.example.com/oauth/authorize?state=XXXXXXX
whereas it should be https://monitoring.example.com/prometheus/oauth/authorize?state=XXXXXXX.
I tried the parameter --redirection-url=https://monitoring.example.com/prometheus,
but this still redirects me to the same URL.
EDIT:
My objective is to protect multiple Prometheus instances and restrict access to them. I'm also looking for a way to set permissions based on the realm or the client. I mean, some of the Keycloak users should be able, for example, to authenticate and see the content of /prometheus-dev but not /prometheus-prod.
EDIT2:
I missed the parameter 'base_uri'. When I set it to "/prometheus" and try to connect to "https://monitoring.example.com/prometheus/", I receive the correct redirect "https://monitoring.example.com/prometheus/oauth/authorize?state=XXXXXXX", but it doesn't work. In the Gatekeeper log:
"msg: no session found in request, redirecting for authorization,error:authentication session not found"
In Gatekeeper version 7.0.0 you can use one of these options:
--oauth-uri
--base-uri
Currently, though, if you use --base-uri, a trailing / is added to the callback URL after the base URI (i.e. /baseUri//oauth/callback). For me it works fine with --oauth-uri=/baseUri/oauth.
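A sketch of how the sidecar args could look with that option, assuming Gatekeeper 7.0.0 and the /prometheus base path from the question (the image path/tag is an assumption, adjust to your registry):

- name: proxy
  image: quay.io/keycloak/keycloak-gatekeeper:7.0.0  # assumed image location
  args:
  - --listen=0.0.0.0:5555
  - --discovery-url=https://auth.example.com/auth/realms/MYREALM
  - --client-id=prometheus
  - --client-secret=XXXXXXXX
  - --resource=uri=/*
  - --oauth-uri=/prometheus/oauth
  - --upstream-url=http://127.0.0.1:9090/prometheus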
It can be done if you rewrite the Location header on the 307 responses to the browser. If you are behind an nginx ingress, add these annotations:
nginx.ingress.kubernetes.io/proxy-redirect-from: /
nginx.ingress.kubernetes.io/proxy-redirect-to: /prometheus/
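In context, these sit in the Ingress metadata, roughly like this (a sketch; the Ingress name is a placeholder):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress  # placeholder name
  annotations:
    # Rewrite the Location header of upstream redirects so the browser
    # stays under the /prometheus/ prefix
    nginx.ingress.kubernetes.io/proxy-redirect-from: /
    nginx.ingress.kubernetes.io/proxy-redirect-to: /prometheus/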
I am having an issue configuring GCR with ImagePullSecrets in my deployment.yaml file. It cannot download the container image due to a permission error:
Failed to pull image "us.gcr.io/optimal-jigsaw-185903/syncope-deb": rpc error: code = Unknown desc = Error response from daemon: denied: Permission denied for "latest" from request "/v2/optimal-jigsaw-185903/syncope-deb/manifests/latest".
I am sure I am doing something wrong, but I followed this tutorial (and others like it) with still no luck:
https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/
The pod logs are equally useless:
"syncope-deb" in pod "syncope-deployment-64479cdcf5-cng57" is waiting to start: trying and failing to pull image
My deployment looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: syncope-deployment
  namespace: default
spec:
  # 3 Pods should exist at all times.
  replicas: 1
  # Keep record of 2 revisions for rollback
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: syncope-deb
    spec:
      imagePullSecrets:
      - name: mykey
      containers:
      - name: syncope-deb
        # Run this image
        image: us.gcr.io/optimal-jigsaw-185903/syncope-deb
        ports:
        - containerPort: 9080
And I have a key in my default namespace called "mykey" that looks like this (secure data edited out):
{"https://gcr.io":{"username":"_json_key","password":"{\n \"type\": \"service_account\",\n \"project_id\": \"optimal-jigsaw-185903\",\n \"private_key_id\": \"EDITED_TO_PROTECT_THE_INNOCENT\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\EDITED_TO_PROTECT_THE_INNOCENT\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"bobs-service#optimal-jigsaw-185903.iam.gserviceaccount.com\",\n \"client_id\": \"109145305665697734423\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/bobs-service%40optimal-jigsaw-185903.iam.gserviceaccount.com\"\n}","email":"redfalconinc#gmail.com","auth":"EDITED_TO_PROTECT_THE_INNOCENT"}}
I even loaded that user up with the permissions of:
Editor
Cloud Container Builder
Cloud Container Builder Editor
Service Account Actor
Service Account Admin
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
Any help would be appreciated, as I am spending a lot of time on what seems to be a very simple problem.
The issue is most likely that you are using a secret of type dockerconfigjson while it actually contains dockercfg data. The kubectl command changed at some point in a way that causes this.
Check whether the secret is marked as dockercfg or dockerconfigjson, and then check whether its contents are actually valid dockerconfigjson.
The JSON you have provided is dockercfg (not the new format).
See https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532 for info about the formats
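One way to get a secret in the newer dockerconfigjson format is to let kubectl build it for you; a sketch, assuming the service-account key file is saved as key.json (the file name and email are placeholders):

# Creates a kubernetes.io/dockerconfigjson secret for GCR from a service-account key
kubectl create secret docker-registry mykey \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=any@valid.email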
I am trying to create a service account with a known, fixed token used by Jenkins to deploy into Kubernetes. I manage to create the token all right with the following yaml:
apiVersion: v1
kind: Secret
metadata:
  name: integration-secret
  annotations:
    kubernetes.io/service-account.name: integration
type: kubernetes.io/service-account-token
data:
  token: YXNkCg== # yes, this is base64
Then I've attached the secret to the 'integration' service account, and it's visible:
-> kubectl describe sa integration
Name:                integration
Namespace:           default
Labels:              <none>
Annotations:         <none>
Mountable secrets:   integration-secret
                     integration-token-283k9
Tokens:              integration-secret
                     integration-token-283k9
Image pull secrets:  <none>
But the login fails. If I remove data and data.token, the token gets auto-created and login works. Is there something I'm missing? My goal is to have a fixed token for CI so that I won't have to update it everywhere when creating a project (don't worry, these are just dev environments). Is it possible, for example, to define a username/password for service accounts for API access?
Is it possible for example to define username/password for service accounts for API access?
No, the tokens must be valid JWTs, signed by the service account token signing key.
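If the goal is just a stable token for CI, one option is to read back the token that the controller generates; a sketch using the secret name from the question:

# With data/token omitted, the token controller fills in a valid signed token;
# read it back for use in Jenkins:
kubectl get secret integration-secret \
  -o jsonpath='{.data.token}' | base64 --decode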