Why can't I log in to Grafana with Keycloak integration?

I'm facing an issue with the Keycloak integration in Grafana.
With this grafana.ini:
instance_name = grafana
[log]
level = error
[server]
; domain = host.docker.internal
root_url = http://localhost:13000
enforce_domain = false
enable_gzip = true
[security]
admin_user = admin
admin_password = admin
[auth.generic_oauth]
name = OAuth
enabled = true
client_id = grafana
; client_secret = CLIENT_SECRET_FROM_KEYCLOAK
client_secret = <my client secret>
scopes = openid profile roles
; email_attribute_name = email:primary
auth_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/auth
token_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/token
api_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/userinfo
allow_sign_up = false
disable_login_form = true
oauth_auto_login = true
tls_skip_verify_insecure = true
; Roles from Client roles in Keycloak
role_attribute_path = contains(resource_access.grafana.roles[*], 'Admin') && 'Admin' || contains(resource_access.grafana.roles[*], 'Editor') && 'Editor' || 'Viewer'
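For reference, role_attribute_path is a JMESPath expression that Grafana evaluates against the JSON returned by api_url. A minimal sketch of the claim shape it expects, assuming Keycloak's usual client-role mapping for a client named grafana (this is an illustration, not the actual token):

{
  "resource_access": {
    "grafana": {
      "roles": ["Admin"]
    }
  }
}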
I get redirected to the Keycloak login page, but after logging in, Grafana reports this error:
t=2021-10-15T11:48:58+0000 lvl=eror msg=login.OAuthLogin(NewTransportWithCode) logger=context userId=0 orgId=0 uname= error="oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"Code not valid\"}"
t=2021-10-15T11:48:58+0000 lvl=eror msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/login/generic_oauth status=500 remote_addr=172.18.0.1 time_ms=647 size=733 referer=
Keycloak configuration for grafana client:
What is happening? What am I missing in the configuration?
EDIT:
Grafana URL: http://localhost:13000
Keycloak logs:
16:38:09,650 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-4) Uncaught server error: java.lang.RuntimeException: cannot map type for token claim
...
...
16:38:09,942 WARN [org.keycloak.protocol.oidc.utils.OAuth2CodeParser] (default task-4) Code 'f72beb89-f814-4993-aa8f-e8debfea41ae' already used for userSession '6de1f56b-9c61-42ae-86bd-66d0ac7ad751' and client '36930d87-854f-414a-8177-c8237edf805c'.
16:38:09,944 WARN [org.keycloak.events] (default task-4) type=CODE_TO_TOKEN_ERROR, realmId=mcs, clientId=grafana, userId=null, ipAddress=172.16.1.1, error=invalid_code, grant_type=authorization_code, code_id=6de1f56b-9c61-42ae-86bd-66d0ac7ad751, client_auth_method=client-secret
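The failing step can be replayed by hand to see Keycloak's raw response. A hedged sketch of the code-to-token exchange Grafana performs (realm, client, and secret taken from the grafana.ini above; AUTH_CODE stands for the one-time code=... value Keycloak appends when redirecting back to Grafana):

# Authorization codes are single-use, so posting the same code twice
# reproduces the invalid_grant "Code not valid" from the logs above.
curl -X POST "http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/token" \
  -d grant_type=authorization_code \
  -d client_id=grafana \
  -d "client_secret=<my client secret>" \
  -d code=AUTH_CODE \
  -d "redirect_uri=http://localhost:13000/login/generic_oauth"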

Related

GitHub Enterprise authentication with Grafana not working

I'm trying to set up GitHub authentication with Grafana, but I always receive this error:
{
  "message": "API rate limit exceeded for 10.135.245.121. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
  "documentation_url": "https://docs.github.com/enterprise/3.5/rest/overview/resources-in-the-rest-api#rate-limiting"
}
These are the logs after trying to sign in:
[ssm-user@ip-100-73-25-174 bin]$ sudo tail /var/log/grafana/grafana.log
logger=token t=2022-07-28T09:21:31.380404222Z level=debug msg=FeatureEnabled feature=accesscontrol.enforcement enabled=false licenseStatus=NotFound hasLicense=false hasValidLicense=false products="unsupported value type"
logger=token t=2022-07-28T09:21:31.380432476Z level=debug msg=FeatureEnabled feature=whitelabeling enabled=false licenseStatus=NotFound hasLicense=false hasValidLicense=false products="unsupported value type"
logger=live t=2022-07-28T09:21:31.429797544Z level=debug msg="Client disconnected" user=0 client=2cf47de6-e7b1-4d64-bf1d-0baab6ec9e97 reason=normal elapsed=2.707161792s
logger=token t=2022-07-28T09:21:31.679855249Z level=debug msg=FeatureEnabled feature=accesscontrol.enforcement enabled=false licenseStatus=NotFound hasLicense=false hasValidLicense=false products="unsupported value type"
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-07-28T09:21:31.680567052Z level=info msg="Request Completed" method=GET path=/api/live/ws status=0 remote_addr=10.170.171.10 time_ms=0 duration=968.313µs size=0 referer= traceID=00000000000000000000000000000000
logger=live t=2022-07-28T09:21:31.714286816Z level=debug msg="Client connected" user=0 client=53a59f12-9c47-4c40-a57d-58a684058759
logger=token t=2022-07-28T09:21:33.705344746Z level=debug msg=FeatureEnabled feature=accesscontrol.enforcement enabled=false licenseStatus=NotFound hasLicense=false hasValidLicense=false products="unsupported value type"
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-07-28T09:21:33.705978495Z level=info msg="Request Completed" method=GET path=/login/github status=302 remote_addr=10.170.171.10 time_ms=0 duration=899.188µs size=349 referer=https://users.tfe-nonprod.aws-cloud.axa-de.intraxa/grafana/login traceID=00000000000000000000000000000000
logger=live t=2022-07-28T09:21:34.018133518Z level=debug msg="Client disconnected" user=0 client=53a59f12-9c47-4c40-a57d-58a684058759 reason=normal elapsed=2.30377808s
logger=ngalert t=2022-07-28T09:21:37.530175528Z level=debug msg="alert rules fetched" count=0 disabled_orgs="unsupported value type"
My config:
# grafana.ini
[auth.github]
enabled = true
allow_sign_up = false
client_id = fea52015e8d3d4543276
client_secret = 3b53b35a0ea769e2e68e5769b6a4d142a40d023a
scopes = user:email,read:org
auth_url = https://github.axa.com/api/v3/login/oauth/authorize
token_url = https://github.axa.com/api/v3/login/oauth/access_token
api_url = https://github.axa.com/api/v3
team_ids =
allowed_organizations =
My OAuth app config:
What I can also see is that the client secret is never used.
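One way to see whose rate limit is being exhausted is to query the rate-limit endpoint directly, with and without credentials. A hedged sketch (hostname taken from the config above; TOKEN is any personal access token, used only for comparison):

# Anonymous call: shows the low unauthenticated quota
curl -i https://github.axa.com/api/v3/rate_limit
# Authenticated call: should report a much higher quota
curl -i -H "Authorization: token TOKEN" https://github.axa.com/api/v3/rate_limit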

Hashicorp Vault won't let me delete a Policy even using the root token

I am trying to delete a policy.
After logging in with the root token, I do the following:
$ vault policy delete testttt
Error deleting testttt: Error making API request.
URL: DELETE https://vault.local:8200/v1/sys/policies/acl/testttt
Code: 400. Errors:
* failed to delete policy: AccessDenied: Access Denied
status code: 403, request id: VB6YWECETDJ5KB7Q, host id:
S0FJvs41pSbzTmP1lDr/aVSOPjeRVz4Vk/ofkFHu8jvNjfzk6ARnY33qzP/usqmpVDExwLlsF44=
My config file looks like this:
storage "s3" {
access_key = "XXXX"
secret_key = "XXXX"
bucket = "XXXX-vault"
region = "eu-central-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/etc/vault.d/fullchain.pem"
tls_key_file = "/etc/vault.d/privkey.pem"
}
api_addr = "http://0.0.0.0:8200"
cluster_addr = "https://0.0.0.0:8201"
ui = true
Something seems totally off: after logging in with the root token in the UI, all I see is this:
null is not an object (evaluating 'l.userRootNamespace')
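The error body (AccessDenied plus a request id and host id) is S3's error format, which suggests the denial comes from the storage backend's credentials rather than from Vault's own ACLs. A hedged sketch for testing whether those credentials can delete objects at all (bucket and region from the config above; the key is a throwaway, not a live Vault path):

export AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=XXXX
# Write a scratch object, then delete it; a failure on the delete points
# at a missing s3:DeleteObject permission in the IAM policy for the bucket.
aws s3api put-object --bucket XXXX-vault --key perm-test --region eu-central-1
aws s3api delete-object --bucket XXXX-vault --key perm-test --region eu-central-1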

Vault server token login doesn't work as per lease time

We are using HashiCorp Vault with Consul and the filesystem as storage. We're facing an issue on login: my token duration is infinity, yet Vault still needs to be unsealed every few hours, and I have to log in with the token again. Why?
config.hcl:
ui = true
storage "consul" {
  address = "127.0.0.1:8500"
  path = "vault"
}
backend "file" {
  path = "/mnt/vault/data"
}
listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}
telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
Token lookup:
Key                 Value
---                 -----
token               **********
token_accessor      ***********
token_duration      ∞
token_renewable     false
token_policies      ["root"]
identity_policies   []
policies            ["root"]

Failed to start Kafka after enabling Kerberos "SASL authentication failed"

Kafka version: kafka_2.1.1 (binary)
To enable Kerberos, I followed the official documentation (https://kafka.apache.org/documentation/#security_sasl_kerberos) closely.
When I start Kafka, I get the following errors:
[2019-02-23 08:55:44,622] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
    at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
    at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
    at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
[2019-02-23 08:55:44,625] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-02-23 08:55:44,746] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I use an almost-default krb5.conf:
includedir /etc/krb5.conf.d/

[logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log

[libdefaults]
  dns_lookup_realm = false
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true
  rdns = false
  pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
  default_realm = EXAMPLE.COM
  default_ccache_name = KEYRING:persistent:%{uid}

[realms]
  EXAMPLE.COM = {
    # kdc = kerberos.example.com
    # admin_server = kerberos.example.com
    kdc = localhost
    admin_server = localhost
  }

[domain_realm]
  # .example.com = EXAMPLE.COM
  # example.com = EXAMPLE.COM
The JAAS file I pass to Kafka is below:
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/localhost.keytab"
  principal="kafka/localhost@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/localhost.keytab"
  principal="kafka/localhost@EXAMPLE.COM";
};
I also set the ENV as below:
"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dzookeeper.sasl.client.username=kafka"
I have googled a lot of posts but made no progress. I guess the problem may be the "localhost" I used when creating the entries in Kerberos, but I'm not quite sure how to work around it. My goal is to set up a local Kafka + Kerberos testing environment.
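Before reworking the principals, it may help to confirm that the keytab and the KDC work at all outside of Kafka. A hedged sketch using the keytab path and principal from the JAAS file above:

# A successful kinit here means Kerberos itself is fine and the problem is
# in the Kafka/ZooKeeper SASL wiring; a failure points at the principal,
# keytab, or KDC configuration.
kinit -kt /etc/security/keytabs/localhost.keytab kafka/localhost@EXAMPLE.COM
klist   # should show a krbtgt/EXAMPLE.COM@EXAMPLE.COM ticket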
In our case, the krb5 Kerberos config file wasn't being read properly. If you're passing the keytab through YAML, it needs to be removed first. This was with the IBM JDK, though, and we had to set the JAAS config location explicitly via System.setProperty("java.security.auth.login.config", JaasConfigFileLocation);
KafkaClient {
  com.ibm.security.auth.module.Krb5LoginModule required
  useDefaultKeytab=false
  credsType=both
  principal="xkafka@xxx.NET"
  useKeytab="/opt/apps/xxxr/my.keytab";
};

VAULT_CLIENT_TOKEN keeps expiring every 24h

Environment:
Vault + Consul, all latest. Integrating Concourse (3.14.0) with Vault. All tokens and keys are throw-away. This is just a test cluster.
Problem:
No matter what I do, I get 768h as the token_duration value, and overnight my AppRole token keeps expiring. I have to regenerate the token, pass it to Concourse, and restart the service. I want this token not to expire.
[root@k1 etc]# vault write auth/approle/login role_id="34b73748-7e77-f6ec-c5fd-90c24a5a98f3" secret_id="80cc55f1-bb8b-e96c-78b0-fe61b243832d" duration=0
Key                     Value
---                     -----
token                   9a6900b7-062d-753f-131c-a2ac7eb040f1
token_accessor          171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
token_duration          768h
token_renewable         true
token_policies          ["concourse" "default"]
identity_policies       []
policies                ["concourse" "default"]
token_meta_role_name    concourse
[root@k1 etc]#
So I use the token 9a6900b7-062d-753f-131c-a2ac7eb040f1 for Concourse to access secrets, and all is good until 24 hours later, when it expires. I set the duration to 0, but it didn't help:
$ vault write auth/approle/role/concourse secret_id_ttl=0 period=0 policies=concourse secret_id_num_uses=0 token_num_uses=0
My modified vaultconfig.hcl looks like this:
storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
token = "95FBC040-C484-4D16-B489-AA732DB6ADF1"
#token = "0b4bc7c7-7eb0-4060-4811-5f9a7185aa6f"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_min_version = "tls10"
tls_disable = 1
}
cluster_addr = "http://192.168.163.132:8201"
api_addr = "http://192.168.163.132:8200"
disable_mlock = true
disable_cache = true
ui = true
default_lease_ttl = 0
cluster_name = "testcluster"
raw_storage_endpoint = true
My Concourse policy is vanilla:
[root@k1 etc]# vault policy read concourse
path "concourse/*" {
  policy = "read"
  capabilities = ["read", "list"]
}
[root@k1 etc]#
Looking up the token 9a6900b7-062d-753f-131c-a2ac7eb040f1:
[root@k1 etc]# vault token lookup 9a6900b7-062d-753f-131c-a2ac7eb040f1
Key                  Value
---                  -----
accessor             171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
creation_time        1532521379
creation_ttl         2764800
display_name         approle
entity_id            11a0d4ac-10aa-0d62-2385-9e8071fc4185
expire_time          2018-08-26T07:22:59.764692652-05:00
explicit_max_ttl     0
id                   9a6900b7-062d-753f-131c-a2ac7eb040f1
issue_time           2018-07-25T07:22:59.238050234-05:00
last_renewal         2018-07-25T07:24:44.764692842-05:00
last_renewal_time    1532521484
meta                 map[role_name:concourse]
num_uses             0
orphan               true
path                 auth/approle/login
policies             [concourse default]
renewable            true
ttl                  2763645
[root@k1 etc]#
Any pointers or feedback would be very appreciated.
Try setting the token_ttl and token_max_ttl parameters instead of secret_id_ttl when creating the AppRole.
You should also check your Vault default_lease_ttl and max_lease_ttl; they might be set to 24h.
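Concretely, that might look like the following (a hedged sketch reusing the role name from the question; token_ttl and token_max_ttl of 0 defer to the mount and system limits, which is why the tune settings matter too):

# Recreate the AppRole with explicit token TTL parameters
vault write auth/approle/role/concourse \
    token_ttl=0 token_max_ttl=0 \
    policies=concourse secret_id_num_uses=0 token_num_uses=0
# Inspect the auth mount's effective default and max lease TTLs
vault read sys/auth/approle/tune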