Keycloak Scope Parameter Support: Cannot send param in scope

Problem: Read query parameters in a custom token mapper to insert attribute-based access controls into claims.
Solution: Pass a variable in the scope parameter, read it inside the custom token mapper, and use it to fetch additional claims.
Question 1: I am not able to pass a variable in the scope parameter. Without a variable defined, the Postman flow works fine, but that is of no use to me.
The newly added scope named resourceId is shown in the list of supported scopes.
A Postman request without a parameter value works fine.
A Postman request with a parameterized scope FAILS.
Please advise.
I was referring to these posts from the keycloak community:
https://lists.jboss.org/pipermail/keycloak-user/2019-May/018393.html
https://issues.redhat.com/browse/KEYCLOAK-349
Question 2: What happens for recurring request calls once authentication is done? This is explained in detail here: https://lists.jboss.org/pipermail/keycloak-dev/2019-January/011545.html
Update 1:
Even with the curl command I am facing the same issue. If I pass, say:
curl -d 'client_id=my-cim-client' -d 'username=admin' -d 'password=admin' -d 'grant_type=client_credentials' -d 'client_secret=XXXXX' -d 'scope=openid resourceId:123' 'http://localhost:8180/realms/oauth-my-realm/protocol/openid-connect/token'
I get the following output:
{"error":"invalid_scope","error_description":"Invalid scopes: openid resourceId:123"}

Related

Keycloak: cannot get token from a custom SPI

I have to create two REST services via Keycloak.
The first one sends a verification code to a phone number. The second one grants a token to a user if the verification code is correct for the given phone number.
I have created a module with a custom SPI, following the guide in https://github.com/FX-HAO/keycloak-phone-authenticator. The provider can be found. I have also created the Direct grant flow copy and made it the default direct grant flow for the realm.
I can send the verification code with a request to http://{host}/auth/realms/{my_realm}/{my_provider}/send_sms
However, I cannot get the token using the following request:
curl -X POST http://{host}/auth/realms/{my_realm}/protocol/openid-connect/token \
  -H 'authorization: Basic {my keycloak admin username and password}' \
  -H 'content-type: application/x-www-form-urlencoded' \
  -d 'grant_type=password&phone_number={phone number}&code={code}'
I keep getting the invalid_client_credentials error and it seems that my provider is not called because there is nothing in its logs.
What am I doing wrong?
As @sventorben said, the problem was that the wrong credentials were specified for the client.
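For reference, a hedged sketch of the direct grant call authenticating the client with client_id/client_secret instead of the admin's Basic header (client name and secret are placeholders):
curl -X POST "http://{host}/auth/realms/{my_realm}/protocol/openid-connect/token" \
  -H 'content-type: application/x-www-form-urlencoded' \
  -d 'client_id=my-client' \
  -d 'client_secret=XXXXX' \
  -d 'grant_type=password' \
  -d 'phone_number={phone number}' \
  -d 'code={code}'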

Accessing Concourse REST API from resource

I am trying to write a custom Concourse resource (in Python) that accesses the Concourse instance's REST API for information. I'm stuck at obtaining the bearer token at login. The issue is that when I follow the gist of this shell script
#!/bin/bash
## Variables required; update these to take inputs for getting a token per team and target.
CONCOURSE_URL="http://localhost:8080"
CONCOURSE_USER="test"
CONCOURSE_PASSWORD="test"
CONCOURSE_TEAM="test"
CONCOURSE_TARGET="my-concourse"
function get_token() {
  ## Create a file named token that will be used to read and write cookies/tokens
  touch token
  ## Extract the local authentication URL and write cookies to the token file
  LOCAL_AUTH_URL=$CONCOURSE_URL$(curl -b token -c token -L "$CONCOURSE_URL/sky/login" -s | grep "/sky/issuer/auth/local" | awk -F'"' '{print $4}')
  echo "url is $LOCAL_AUTH_URL"
  ## Log in with username and password while writing cookies to the token file
  curl -s -o /dev/null -b token -c token -L --data-urlencode "login=$CONCOURSE_USER" --data-urlencode "password=$CONCOURSE_PASSWORD" "$LOCAL_AUTH_URL"
  ## The bearer token lands in the cookie file; split on the space inside the
  ## quoted "Bearer <token>" value and strip the trailing quote
  ATC_BEARER_TOKEN=$(grep 'Bearer' token | cut -d' ' -f2 | sed 's/"$//')
  echo "$ATC_BEARER_TOKEN"
}
there are many redirects involved, and at least some of them refer to the concourse instance as being at http://localhost:8080, which does not work from inside the docker container of the resource.
So I wanted to parametrize the external base url and explicitly give it in resource config. Manually handling the redirects and rewriting the local IP into the URL fails at the last "approval" step with a code 400, probably because it looks like some kind of a cross-domain attack.
The environment variable ATC_EXTERNAL_URL is always localhost:8080, and I suspect that this is also used when forming the redirect URLs. Can this be set somewhere?
I'm bad at golang, but it seems to me that https://github.com/concourse/concourse-pipeline-resource calls the fly binary to achieve some kind of login from inside a resource. I can't say I understand what it does and how.
All help appreciated...
The env var $ATC_EXTERNAL_URL most likely corresponds to the external URL specified when you start Concourse, so yes, it can (and, if you're using external auth like GitHub or OAuth, must) be changed. You're correct in assuming that it's used to construct callback URLs.
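For example, a minimal sketch of setting it when starting the web node (the hostname is a placeholder):
# Make redirects and callback URLs point at the externally reachable address:
concourse web --external-url "http://concourse.example.com:8080"
# or, in a docker-compose deployment, via the environment:
# CONCOURSE_EXTERNAL_URL=http://concourse.example.com:8080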
Also, I don't want to be That Guy(TM), but the Concourse REST API is not public and is subject to change at any time. What are you trying to do that you can't get from the fly CLI? Your resource could call the ATC_EXTERNAL_URL to fetch the fly CLI when it's needed, then execute commands that way.
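A hedged sketch of that approach, assuming the resource container has curl and network access to the web node (target name and credentials are placeholders):
# Download the fly binary that the ATC itself serves, then log in with it:
curl -sfL -o fly "$ATC_EXTERNAL_URL/api/v1/cli?arch=amd64&platform=linux"
chmod +x fly
./fly -t ci login -c "$ATC_EXTERNAL_URL" -u "$CONCOURSE_USER" -p "$CONCOURSE_PASSWORD"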

"Access Denied. Provided scope(s) are not authorized" error when trying to make objects public using the REST API

I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:
Access denied. Provided scope(s) are not authorized.
Other requests work fine. When I try to do the same thing but by providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).
Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.
Neither using the XML API nor the JSON version makes a difference. That the request sometimes works with my personal credentials suggests the request is not malformed, but there must be something else I've so far overlooked. Any ideas?
Check the scopes of the service account, in case you are using the default Compute Engine service account. By default its scopes are restricted, and for GCS it is read-only. If stale credentials are interfering, clear gsutil's cached state with rm -r ~/.gsutil.
When trying to access GCS from a GCE instance and getting this error message, keep in mind that the default scope is devstorage.read_only, which prevents all write operations. I'm not sure the scope https://www.googleapis.com/auth/cloud-platform is required when https://www.googleapis.com/auth/devstorage.read_only is already granted by default (e.g. to read startup scripts); the scope should rather be https://www.googleapis.com/auth/devstorage.read_write.
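You can check which scopes the instance actually has from inside it via the metadata server:
curl -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes'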
And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:
gcloud beta compute instances set-scopes $INSTANCE_NAME \
--project=$PROJECT_ID \
--zone=$COMPUTE_ZONE \
--scopes=https://www.googleapis.com/auth/devstorage.read_write \
--service-account=$SERVICE_ACCOUNT
One can also pass the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run from outside the instance (because of permissions), and the instance must be shut down in order to change its service account or scopes.
Follow the documentation you provided, taking into account these points:
The access control system for the bucket has to be fine-grained (not uniform).
In order to make objects publicly available, make sure the bucket does not have public access prevention enabled. Check this link for further information.
Grant the service account the appropriate permissions on the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit object ACLs, as indicated here. This role can be granted for individual buckets, not for projects (see the one-liner after the API call below).
Create the JSON file as indicated in the documentation.
Use gcloud auth application-default print-access-token to get an authorization access token and use it in the API call. The API call should look like:
curl -X POST --data-binary @JSON_FILE_NAME.json \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
"https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"
You need to add the OAuth scope cloud-platform when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes
Either select "Allow full access to all Cloud APIs" or take the fine-grained approach.
So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the set of permission scopes available to the service account, which is what I (and most of the other people who answered the question) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by various Google Cloud client classes to authenticate, also has permission scopes used for OAuth. You see where this is going - the client I was using was being created with a default OAuth scope of 'https://www.googleapis.com/auth/devstorage.read_write', but making something public requires the scope 'https://www.googleapis.com/auth/devstorage.full_control'. Adding this scope to the OAuth credential request means that setting public permissions on objects works.
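A hedged shell-level equivalent, if you authenticate through application-default credentials rather than constructing the Python client yourself (the --scopes flag is an assumption to check against your gcloud version):
# Request ADC with the full-control storage scope so ACL edits are permitted:
gcloud auth application-default login \
  --scopes=https://www.googleapis.com/auth/devstorage.full_control,https://www.googleapis.com/auth/cloud-platform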

Instance environment variables

I have several Google Compute Engine instances and have set instance metadata on each, under the assumption that these would be available on the instance itself as environment variables, but they don't show up. I then read here that I need to query the metadata server for this data, but that just returns a 403 unauthorized when run from the instance itself. Is there a way to access metadata as environment variables?
It may be worth studying the metadata-querying documentation a bit more, but my guess is that you are attempting to get custom metadata, which results in it not being found. Make sure you are using the attributes directory to access any custom metadata; also note that the metadata server rejects requests that lack the Metadata-Flavor: Google header, which would explain the 403.
For example, this will get the built-in tags metadata:
curl "http://metadata.google.internal/computeMetadata/v1/instance/tags" \
-H "Metadata-Flavor: Google"
while this will get your custom metadata for attribute foo:
curl "http://metadata.google.internal/computeMetadata/v1/<instance|project>/attributes/foo" \
-H "Metadata-Flavor: Google"

Github v3 API - create a REPO

I’m trying to use the Github v3 API - I already implemented the required OAuth flow and it works well.
Now I’m trying some of the Repos API endpoints (http://developer.github.com/v3/repos/).
So far, I’m able to get a List of my repos using: GET /user/repos
However, when I try to create a repo using POST /user/repos, I get a 404.
Any thoughts what I might be doing wrong?
Joubert
Can you please tell us how exactly you did the HTTP request? The 404 sounds like you were probably using a wrong path. But to give a reliable answer instead of a wild guess, we need to see your request, including how you are sending your token - just mask it with 'xxx' or something.
I'll show you in the meantime an example request, that is working:
curl -XPOST -H 'Authorization: token S3CR3T' https://api.github.com/user/repos -d '{"name":"my-new-repo","description":"my new repo description"}'
You would of course need to replace the OAuth token (S3CR3T).
I had the same issue. The reason you are getting a 404 with your OAuth access token is that when you authorize against GitHub, you also need to pass the scopes you want. For example, in the response headers you should see "X-OAuth-Scopes: repo, user", which means the user has read/write access to their profile and repositories. Once you have set the correct scopes, you should be able to make POST/PUT requests just fine.
To see whether or not you have the correct permissions, inspect the response headers. Substitute XXXXXXXX with your access token:
curl -I -H "Authorization: token XXXXXXXX" https://api.github.com/user
For creating repositories as a user you can use a personal access token and basic auth, which can be much simpler when you are fluffing around on the command line and have 2FA enabled.
curl -d '{"name":"test"}' -u githubuser:personalaccesstoken https://api.github.com/user/repos
Create a personal access token here https://github.com/settings/tokens and make sure it has the 'repo' scope.
This script reads the token and the project name into variables so you can use it in automation:
#!/usr/bin/env bash
set -u
# Read the token from a file and create the repo named in $PROJECT.
TOKEN=$(cat token_file)
PROJECT=myproject
curl -X POST -H "Authorization: token $TOKEN" -H 'Content-Type: application/json' -d '{"name": "'"$PROJECT"'"}' https://api.github.com/user/repos