Want to integrate Argo Server with Keycloak single sign-on

I tried in incognito as well, but the same issue exists.
Currently I have added the following in server-deployment.yaml:
args:
  - server
  - --auth-mode
  - sso
And in values.yaml:
sso:
  # SSO configuration when SSO is specified as a server auth mode.
  # All the values are required. SSO is activated by adding --auth-mode=sso
  # to the server command line.
  #
  # The root URL of the OIDC identity provider.
  issuer: http://<keycloak_ip>/auth/realms/demo
  # Name of a secret and a key in it to retrieve the app OIDC client ID from.
  clientId:
    name: argo
    key: client-id
  # Name of a secret and a key in it to retrieve the app OIDC client secret from.
  clientSecret:
    name: "argo-server-sso"
    key: client-secret
  # The OIDC redirect URL. Should be in the form <argo-root-url>/oauth2/callback.
  redirectUrl: http:///argo/oauth2/callback
And in the Keycloak UI, I have created the client and the client credentials.
kubectl create secret generic "argo-server-sso" --from-literal=client-secret=9a9c60ba-647d-480c-b6fa-82c19caad26a
kubectl create secret generic "argo" --from-literal=client-id=argo
After hitting the Argo Server URL, I manually click the login option; the Keycloak page appears, but after logging in a popup comes up saying "Failed to login: Unauthorized".
Server logs:
kubectl logs argo-server-5c7f8c5cbb-9fcqk
time="2021-01-20T12:06:26.876Z" level=info authModes="[sso]" baseHRef=/ managedNamespace= namespace=default secure=false
time="2021-01-20T12:06:26.877Z" level=warning msg="You are running in insecure mode. Learn how to enable transport layer security: https://argoproj.github.io/argo/tls/"
time="2021-01-20T12:06:26.877Z" level=info msg="config map" name=argo-workflow-controller-configmap
time="2021-01-20T12:06:28.318Z" level=info msg="SSO configuration" clientId="{{argo} client-id }" issuer="http://10.xx.xx.xx:xxxx/auth/realms/demo" redirectUrl="http://xx/argo/oauth2/callback"
time="2021-01-20T12:06:28.318Z" level=info msg="SSO enabled"
time="2021-01-20T12:06:28.322Z" level=info msg="Starting Argo Server" instanceID= version=v2.12.2
time="2021-01-20T12:06:28.322Z" level=info msg="Creating event controller" operationQueueSize=16 workerCount=4
time="2021-01-20T12:06:28.323Z" level=info msg="Argo Server started successfully on http://localhost:2746"
time="2021-01-20T12:07:21.990Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=GetVersion grpc.service=info.InfoService grpc.start_time="2021-01-20T12:07:21Z" grpc.time_ms=0.379 span.kind=server system=grpc
time="2021-01-20T12:07:22.009Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2021-01-20T12:07:22Z" grpc.time_ms=0.075 span.kind=server system=grpc

I integrated ArgoCD with Keycloak successfully.
You have one clear, visible issue: the YAML indentation is wrong.
Make sure you keep the right indentation as per the default values in the Helm chart:
https://github.com/argoproj/argo-helm/blob/1aea2c41798972ff0077108f926bb9095f3f9deb/charts/argo/values.yaml#L255-L283
Accordingly, your values should be (assuming your Argo is served at the hostname workflows.company.com):
server:
  extraArgs:
    - --auth-mode=sso
  sso:
    issuer: http://<keycloak_ip>/auth/realms/demo
    clientId:
      name: argo
      key: client-id
    clientSecret:
      name: "argo-server-sso"
      key: client-secret
    redirectUrl: https://workflows.company.com/argo/oauth2/callback
Now, from the Keycloak side, under your client, make sure you fill in the Valid Redirect URIs field as per your ingress hostname:
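For illustration, the relevant client settings in the Keycloak admin console (the legacy console, matching the /auth/realms/demo issuer above) would look roughly like this, assuming the workflows.company.com hostname:
Client ID:            argo
Access Type:          confidential
Root URL:             https://workflows.company.com
Valid Redirect URIs:  https://workflows.company.com/argo/oauth2/callback
Web Origins:          https://workflows.company.com
The confidential access type is what gives the client the secret that ends up in the argo-server-sso Kubernetes secret.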

Related

GitHub OAuth integration for Grafana

I'm trying to have GitHub OAuth integration for Grafana and I am following this document:
https://grafana.com/docs/grafana/latest/auth/github/
Accordingly, I've configured my values.yaml file with a similar configuration, as shown below:
## grafana Authentication can be enabled with the following values on grafana.ini
server:
  # The full public facing url you use in browser, used for redirects and emails
  root_url: https://grafana.example.space
# https://grafana.com/docs/grafana/latest/auth/github/#enable-github-in-grafana
auth.github:
  enabled: true
  allow_sign_up: true
  scopes: user:email,read:org
  auth_url: https://github.com/login/oauth/authorize
  token_url: https://github.com/login/oauth/access_token
  api_url: https://api.github.com/user
  team_ids: 259420
  allowed_organizations: Orgs
  client_id: 123456
  client_secret: 123456
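For what it's worth, in the official Grafana Helm chart these settings normally sit under the top-level grafana.ini key in values.yaml; a minimal sketch of that nesting, reusing the values above:
grafana.ini:
  server:
    root_url: https://grafana.example.space
  auth.github:
    enabled: true
    client_id: 123456
    client_secret: 123456
    # ...remaining auth.github keys as above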
My Github OAuth app URLs are configured as shown below:
Homepage URL:
https://grafana.example.space
Authorization callback URL:
https://grafana.example.space/login/github
But when I try to log in using GitHub authentication, I see the error below:
t=2022-01-02T12:33:03+0000 lvl=eror msg="Failed to look up user based on cookie" logger=context error="user token not found"
t=2022-01-02T12:33:03+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/api/live/ws status=401 remote_addr=49.206.14.15 time_ms=0 size=32 referer=
Can someone please help me figure out where I am going wrong here?

Spring Config Server with Vault token doesn't respect the ACL defined in Vault

I have a Spring Config Server with Vault as the backend. I created a token in Vault with an ACL policy. When I use the token in spring.cloud.config.token, it doesn't respect the ACL.
My Spring Config client has these bootstrap properties:
spring:
  application:
    name: app1
  cloud:
    config:
      uri: https://config-server-ur:port
      token: token-associated-to-acl-policy
I created an ACL policy named "app1" which allows only "app1" to be read by the token in Vault:
path "secret/app1" {
capabilities = ["read", "list"]
}
./vault token create -display-name="app1" -policy="app1"
I used the generated token in my client and it doesn't work.
When I changed the ACL policy to the one below, it works:
path "secret/*" {
capabilities = ["read", "list"]
}
However, when I access Vault directly with X-Vault-Token, it works perfectly as expected.
I found the solution: set spring.cloud.config.server.vault.defaultKey to empty, like this in the config server's bootstrap.yml:
spring.profiles.active=git, vault
spring.cloud.config.server.git.uri=properties-git-repo-url
spring.cloud.config.server.git.username=user
spring.cloud.config.server.git.password=password
spring.cloud.config.server.git.searchPaths=/{application}/xyz
spring.cloud.config.server.git.force-pull=true
spring.cloud.config.server.git.timeout=10
spring.cloud.config.server.git.order=2
spring.cloud.config.server.vault.host=vault-hostname
spring.cloud.config.server.vault.port=8200
spring.cloud.config.server.vault.scheme=https
spring.cloud.config.server.vault.backend=secret
spring.cloud.config.server.vault.defaultKey=
spring.cloud.config.server.vault.profileSeparator=/
spring.cloud.config.server.vault.skipSslValidation=true
spring.cloud.config.server.vault.order=1
spring.cloud.config.server.vault.kvVersion=1
By default, spring.cloud.config.server.vault.defaultKey is set to "application".
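Alternatively, if you prefer to keep defaultKey at its default of "application", the Vault policy can simply be widened to cover both paths the config server reads; a sketch along the lines of the policies above:
path "secret/app1" {
  capabilities = ["read", "list"]
}

# the default key ("application") is also read unless defaultKey is set to empty
path "secret/application" {
  capabilities = ["read", "list"]
}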

Spring Cloud Vault with kv v2 - How to Avoid 403 at Startup?

Problem
Does anyone know how to configure bootstrap.yml to tell Spring Cloud Vault to go to the correct path for kv v2 and not try other paths first?
Details
I can successfully connect to my Vault, running kv v2, but Spring Cloud Vault will always try to connect to paths in the vault that don't exist, throwing a 403 on startup.
Status 403 Forbidden [secret/application]: permission denied; nested exception is org.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden
The above path, secret/application, doesn't exist because kv v2 puts "data" in the path, for example secret/data/application.
This isn't a show-stopper because Spring Cloud Vault does check other paths, including the correct one that has "data" in the path, but the fact that a meaningless 403 is thrown during startup is like a splinter in my mind.
Ultimately, it does try the correct kv v2 path:
2019-03-18 12:22:46.611 INFO 77685 --- [ restartedMain] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='vault', propertySources=[LeaseAwareVaultPropertySource {name='secret/data/my-app'}
My configuration
spring.cloud.vault:
  kv:
    enabled: true
    backend: secret
    profile-separator: '/'
    default-context: my-app
    application-name: my-app
  host: localhost
  port: 8200
  scheme: http
  authentication: TOKEN
  token: my-crazy-long-token-string
Thanks for your help!
Add the following lines to your bootstrap.yml; this disables the generic backend:
spring.cloud.vault:
  generic:
    enabled: false
For more information: https://cloud.spring.io/spring-cloud-vault/reference/html/#vault.config.backends.generic
In addition to the accepted answer, it's important to turn off (or just remove) the fail-fast option:
spring.cloud.vault:
  fail-fast: false
spring.cloud.vault.generic.enabled is deprecated in Spring Cloud 3.0.0, but the 403 error is still there. To disable the warning (by telling Spring to use the exact context), this is what I used:
spring:
  config:
    import: vault://
  application:
    name: my-application
  cloud:
    vault:
      host: localhost
      scheme: http
      authentication: TOKEN
      token: my-crazy-long-token-string
      kv:
        default-context: my-application
Other configs were set to default (such as port = 8200, backend = secret, etc.)

Not able to unseal data from Vault

I am trying to experiment with HashiCorp Vault.
The version I am using is Vault v0.11.0.
The startup log is below:
Api Address: https://ldndsr000004893:8200
Cgo: disabled
Cluster Address: https://ldndsr000004893:8201
Listener 1: tcp (addr: "ldndsr000004893:8200", cluster address: "10.75.40.30:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v0.11.0
Version Sha: 87492f9258e0227f3717e3883c6a8be5716bf56
The server configuration is below:
listener "tcp" {
address = "ldndsr000004893:8200"
scheme = "http"
tls_disable = 1
}
#storage "inmem" {
#}
#storage "zookeeper" {
# address = "localhost:2182"
# path = "vault/"
#}
storage "file" {
path = "/app/iag/phoenix/vault/data"
}
# Advertise the non-loopback interface
api_addr = "https://ldndsr000004893:8200"
disable_mlock = true
ui=true
I have input a number of key-value pairs into Vault and was able to retrieve data normally using the Vault command line. But then it stopped working, and I was not able to unseal data from either the UI or the command line.
UI error:
Any advice on this issue? I am going to use Vault for storing all credential information.
Turns out it was a problem with the Vault UI running in the Chrome browser.
I had to open a new incognito window; it then showed the sign-in window, and after I keyed in the token, Vault got unsealed.
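As a side note, the seal state can also be checked and cleared from the command line, independent of the UI; a rough sketch for Vault 0.11 (vault operator unseal must be repeated with different unseal keys until the threshold is met):
# check whether the server is currently sealed
vault status

# enter unseal keys one at a time until "Sealed" reports false
vault operator unseal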

Error creating TLS config after updating Traefik to v1.3.6

I'm attempting to update from Traefik v1.2.3 to v1.3.6 on Kubernetes. I have my TLS certificates mounted inside of the pods from secrets. Under v1.2.3, everything works as expected. When I try to apply my v1.3.6 deployment (only change being the new docker image), the pods fail to start with the following message:
time="2017-08-22T20:27:44Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
time="2017-08-22T20:27:44Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"
Below is my traefik.toml file:
defaultEntryPoints = ["http","https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.foo.mydomain.com.crt"
      KeyFile = "/ssl/wildcard.foo.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.mydomain.com.crt"
      KeyFile = "/ssl/wildcard.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.local.crt"
      KeyFile = "/ssl/wildcard.local.key"

[kubernetes]
labelselector = "expose=internal"
My initial impression of the errors produced by the pods is that the keys in the secret are not valid. However, I am able to base64-decode the contents of the secret and see that the values match those of the certificate files I have stored locally. Additionally, I would expect to see this error on any version of Traefik if these were, in fact, invalid. In reviewing the changelog for Traefik, I see that the SSL library was updated, but the related PR indicates that this only added ciphers and did not remove any previously supported ones.
Edit with additional info:
Running with --logLevel=DEBUG provides this additional information (provided in full below in case it's helpful):
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=debug msg="Global configuration loaded {"GraceTimeOut":10000000000,"Debug":false,"CheckNewVersion":true,"AccessLogsFile":"","TraefikLogsFile":"","LogLevel":"DEBUG","EntryPoints":{"http":{"Network":"","Address":":80","TLS":null,"Redirect":{"EntryPoint":"https","Regex":"","Replacement":""},"Auth":null,"Compress":false},"https":{"Network":"","Address":":443","TLS":{"MinVersion":"","CipherSuites":null,"Certificates":[{"CertFile":"/ssl/wildcard.foo.mydomain.com.crt","KeyFile":"/ssl/wildcard.foo.mydomain.com.key"},{"CertFile":"/ssl/wildcard.mydomain.com.crt","KeyFile":"/ssl/wildcard.mydomain.com.key"},{"CertFile":"/ssl/wildcard.local.crt","KeyFile":"/ssl/wildcard.local.key"}],"ClientCAFiles":null},"Redirect":null,"Auth":null,"Compress":false}},"Cluster":null,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http","https"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":180000000000,"InsecureSkipVerify":false,"Retry":null,"HealthCheck":{"Interval":30000000000},"Docker":null,"File":null,"Web":{"Address":":8080","CertFile":"","KeyFile":"","ReadOnly":false,"Statistics":null,"Metrics":{"Prometheus":{"Buckets":[0.1,0.3,1.2,5]}},"Path":"","Auth":null},"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":{"Watch":true,"Filename":"","Constraints":[],"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"Namespaces":null,"LabelSelector":"expose=internal"},"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc42060d800 Redirect:<nil> Auth:<nil> Compress:false}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"
This issue turned out to be new validation logic in the crypto/tls library in Go 1.8. It now validates that the certificate blocks end in -----, whereas before it did not. The private key for one of my certificate files ended in ---- (missing a hyphen). Adding the missing character fixed the issue.
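For anyone hitting the same error, a quick way to spot this kind of damage is to check that every PEM block keeps its five-hyphen delimiters and that OpenSSL can still parse the key; a rough sketch using the file names from the config above:
# PEM delimiters must have exactly five hyphens on each side
tail -n 1 /ssl/wildcard.local.key
# expected something like: -----END RSA PRIVATE KEY-----

# OpenSSL refuses to parse a key whose PEM armor is damaged
openssl rsa -in /ssl/wildcard.local.key -check -noout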