How to implement cross-account RBAC using Cognito user groups and API Gateway?

I have two AWS accounts. The front end, along with Cognito, is hosted in Account 1, and the backend with the API Gateway is hosted in Account 2. I want to set up RBAC so that users in a Cognito group are prevented from calling the API's DELETE method. I have created the permission policy below, attached it to a role, and attached the role to the Cognito group. I have then created an authorizer for the API Gateway in Account 2 using the Cognito user pool available in Account 1 and attached the authorizer to the DELETE method request of the API.
Deny Policy, where I have replaced the resource parameters with my account/API details:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path"
      ]
    }
  ]
}
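For example, after substituting my details, the Resource entry looks something like the following (the region, IDs, and path shown here are placeholders rather than my real values):
arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/DELETE/items/*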
But when I call the DELETE method as a user in that group, the call still succeeds, whereas I expect an unauthorized response given this setup. I am able to see the Cognito user group details when I decode the token response, so my guess is that the call between API Gateway and Cognito is happening properly, but the role/deny policy attached to the group is not being enforced. Can someone please help me understand what I am doing wrong? Since this is cross-account, do I have to do something else with the IAM role I have attached to the Cognito group, or is there an issue with the policy I am using?
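For context, the decoded token does carry the group and role claims, along these lines (the values here are illustrative placeholders):
{
  "cognito:groups": ["restricted-users"],
  "cognito:roles": ["arn:aws:iam::111122223333:role/deny-delete-role"]
}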

Related

Problem regarding google cloud bucket access permission

I am working on a Colab project with a Google Cloud bucket. At first I used my own Gmail account A, but I noticed that I need a Google service account for some operations, so I activated a service account B and successfully logged in with it.
But there is still a permission error:
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
  "error": {
    "code": 403,
    "message": "gmailaccountA@gmail.com does not have storage.objects.list access to the Google Cloud Storage bucket.",
    "errors": [
      {
        "message": "gmailaccountA@gmail.com does not have storage.objects.list access to the Google Cloud Storage bucket.",
        "domain": "global",
        "reason": "forbidden"
      }
    ]
  }
}'
When I double-check by running gcloud auth list, I get two accounts listed: my Gmail account A and my service account B. How can I make sure I am using the service account?
To set the account you want to use, you can first list the available ones:
gcloud auth list
and then set the chosen one:
gcloud config set account ACCOUNT
You can read more about the gcloud config set command and its properties in the gcloud documentation.
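For example, assuming service account B was activated from a downloaded key file (the file name and account address below are placeholders):
# Log in as the service account; the key file path is hypothetical
gcloud auth activate-service-account --key-file=key.json
# Make it the active account for subsequent gcloud commands
gcloud config set account service-account-B@your-project.iam.gserviceaccount.com
# Verify: the active account is marked with '*'
gcloud auth list
Note that tools which use Application Default Credentials (such as TensorFlow's GCS reader) may also need the GOOGLE_APPLICATION_CREDENTIALS environment variable pointed at the key file.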

Add User as owner of Azure AD Group through REST API

Is it possible to add an owner to an Azure AD group through any REST API?
I think I need a service principal and have to generate an access token to do that.
I generated an access token and used the query below to add an owner via Postman:
https://graph.microsoft.com/v1.0/groups/groupid/owners/$ref
But I am getting a 403 Forbidden error like the one below:
{
  "error": {
    "code": "Authorization_RequestDenied",
    "message": "Insufficient privileges to complete the operation.",
    "innerError": {
      "date": "2022-06-29T05:42:38",
      "request-id": "ebd01257-b890-4b3d-8c22-a1b34738e5a6",
      "client-request-id": "ebd01257-b890-4b3d-8c22-a1b34738e5a6"
    }
  }
}
I have already granted some Microsoft Graph API permissions. What other permissions are needed? Is there any other way instead of Postman?
You can make use of Microsoft Graph Explorer instead of Postman; it doesn't require you to generate an access token separately.
You can run the same query after granting the required permissions for your account type. I tried to reproduce this in my environment: after running the query, the owner was added to the Azure AD group successfully, which you can confirm by checking the group's owners in the portal.
Reference:
Add owners - Microsoft Graph v1.0 | Microsoft Docs
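For reference, a minimal sketch of the documented call (the group ID and user ID below are placeholders):
POST https://graph.microsoft.com/v1.0/groups/{group-id}/owners/$ref
Content-Type: application/json

{
  "@odata.id": "https://graph.microsoft.com/v1.0/users/{user-id}"
}
This call requires the Group.ReadWrite.All permission; read-only permissions such as Group.Read.All are not enough, which is what produces the Authorization_RequestDenied error.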

Azure media service v3 - Create job with sas url is failing due to Access issue

I'm trying to create an asset from code, but I'm getting the error below:
{
  "error": {
    "code": "Conflict",
    "message": "The server received a 403 Forbidden error when accessing Azure Storage. Please check your permissions to the storage accounts linked to the media account.",
    "details": [
      {
        "code": "AuthorizationFailure",
        "message": "The server received a 403 Forbidden error when accessing Azure Storage. Please check your permissions to the storage accounts linked to the media account."
      }
    ]
  }
}
Also, I tried directly in the portal with a generated SAS URL and still face an access issue. I can confirm the AAD service principal has been assigned the "Contributor" role, but I still get this error:
The client 'xx' with object id 'xx' does not have authorization to perform action 'Microsoft.Media/mediaservices/assets/write' over scope '/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Media/mediaservices/itskssearchmediadev/assets/ignite-mp4-20220207-192422' or the scope is invalid. If access was recently granted, please refresh your credentials.
What other permissions do I need to provide?
Note: I also tried with my personal account, which has full access, and it works there.
The Storage Account Contributor role permits management of storage accounts (e.g., creating and deleting storage accounts), but it does not permit access to data in the storage account.
To allow Media Services to write to the storage account, the Managed Identity must be granted a role that has access to the storage account data, for example, Storage Blob Data Contributor.
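As a sketch, granting that role with the Azure CLI could look like the following (the assignee and scope are placeholders for your identity and storage account):
az role assignment create \
  --assignee <principal-or-object-id> \
  --role "Storage Blob Data Contributor" \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>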

Authentication with Openshift API running on IBM cloud failed with 401 unauthorized error

I'm trying to build an application to manage my OpenShift cluster on IBM Cloud, and the first step is to authenticate against both IBM Cloud and the OpenShift cluster.
https://cloud.ibm.com/docs/openshift?topic=openshift-cs_api_install#kube_api
I followed the steps described in the link above and successfully obtained all the tokens, including 'access_token', 'id_token' and 'refresh_token'. Among them, the 'id_token' is supposed to be used to authenticate against the OpenShift API.
With the access_token I can call the IBM Cloud API successfully, for example to obtain account and cluster information.
However, when I use the id_token to call the OpenShift API, it fails with the following error. This happens even for the '/version' API, which can normally be accessed without providing a bearer token.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
I can verify that my account has the correct service roles assigned as described in the docs, and I can see the corresponding roles with the 'ibm' prefix in the OpenShift web portal as well.
Can anyone please verify that the instructions in the first link above are still valid, or does anyone have a clue about what might have gone wrong?
[Update]
To help with troubleshooting, I am pasting a sample of the tokens here. This is what I get for step 3 in the 'Working with your cluster by using the Kubernetes API' section of the link; it is a bit lengthy:
{
"access_token": "eyJraWQiOiIyMDIxMDIxOTE4MzUiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC0yNzAwMDU1WERHIiwiaWQiOiJJQk1pZC0yNzAwMDU1WERHIiwicmVhbG1pZCI6IklCTWlkIiwianRpIjoiMDY1OWI5MjktMDE1Zi00MDg0LTgwZWMtYmFhZjBhYTBkNDQ4IiwiaWRlbnRpZmllciI6IjI3MDAwNTVYREciLCJnaXZlbl9uYW1lIjoi6Iic5a2QIiwiZmFtaWx5X25hbWUiOiLpmYgiLCJuYW1lIjoi6Iic5a2QIOmZiCIsImVtYWlsIjoicmFmb3VsQDE2My5jb20iLCJzdWIiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiYXV0aG4iOnsic3ViIjoiY2hzemNoZW5AY24uaWJtLmNvbSIsImlhbV9pZCI6IklCTWlkLTI3MDAwNTVYREciLCJuYW1lIjoi6Iic5a2QIOmZiCIsImdpdmVuX25hbWUiOiLoiJzlrZAiLCJmYW1pbHlfbmFtZSI6IumZiCIsImVtYWlsIjoicmFmb3VsQDE2My5jb20ifSwiYWNjb3VudCI6eyJ2YWxpZCI6dHJ1ZSwiYnNzIjoiOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiLCJpbXNfdXNlcl9pZCI6Ijg4NzM1NzYiLCJmcm96ZW4iOnRydWUsImltcyI6IjM0NjU1MiJ9LCJpYXQiOjE2MTQyNTU5ODYsImV4cCI6MTYxNDI1OTU4NiwiaXNzIjoiaHR0cHM6Ly9pYW0uY2xvdWQuaWJtLmNvbS9pZGVudGl0eSIsImdyYW50X3R5cGUiOiJ1cm46aWJtOnBhcmFtczpvYXV0aDpncmFudC10eXBlOmFwaWtleSIsInNjb3BlIjoiaWJtIG9wZW5pZCBjb250YWluZXJzLWt1YmVybmV0ZXMiLCJjbGllbnRfaWQiOiJrdWJlIiwiYWNyIjoxLCJhbXIiOlsicHdkIl0sInN1Yl85Yzk3MjViZDEzZWE0NTY3ODhhYzBlZTc4ZDg2NDZlMSI6ImNoc3pjaGVuQGNuLmlibS5jb20iLCJpYW1faWRfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJJQk1pZC0yNzAwMDU1WERHIiwicmVhbG1lZF9zdWJfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJJQk1pZC1jaHN6Y2hlbkBjbi5pYm0uY29tIn0.Rm3F0UKz9Aq3-1xXMmkFi0UkENIvQUkRo6qhtWaG3LKBH5HHsZbAQeJUhKqXYbI643nj2ssDP2U50BVv-6zbpfmyVncP5Z5Dmi620mi2QesduRQaH1XlC-l7KuF3uT0hJ_9FSD-0Wqi5ph0pkKxHJ-BmLkHC-4F0NByiUtwIpwyTpthuzwC251XZsQ9Ya8gzCxHB9DFb3tzOF3cupVVZmc2mMJbv4JuTSnP00H5rOT4yIzeI0Lqm6LhDpMRJ4P8glmIxmU6fag42P94pFNf3jEzIZGl49NINiWXlKbAleij3vSouobtYvrBmxWQF4KpuwKPEI-bMf1zpsHPYBHWidg",
"id_token": "eyJraWQiOiIyMDIxMDIxOTE4MzUiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC0yNzAwMDU1WERHIiwiaXNzIjoiaHR0cHM6Ly9pYW0uY2xvdWQuaWJtLmNvbS9pZGVudGl0eSIsInN1YiI6ImNoc3pjaGVuQGNuLmlibS5jb20iLCJhdWQiOiJrdWJlIiwiZ2l2ZW5fbmFtZSI6IuiInOWtkCIsImZhbWlseV9uYW1lIjoi6ZmIIiwibmFtZSI6IuiInOWtkCDpmYgiLCJlbWFpbCI6InJhZm91bEAxNjMuY29tIiwiZXhwIjoxNjE0MjU5NTg2LCJzY29wZSI6ImlibSBvcGVuaWQgY29udGFpbmVycy1rdWJlcm5ldGVzIiwiaWF0IjoxNjE0MjU1OTg2LCJhdXRobiI6eyJzdWIiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiaWFtX2lkIjoiSUJNaWQtMjcwMDA1NVhERyIsIm5hbWUiOiLoiJzlrZAg6ZmIIiwiZ2l2ZW5fbmFtZSI6IuiInOWtkCIsImZhbWlseV9uYW1lIjoi6ZmIIiwiZW1haWwiOiJyYWZvdWxAMTYzLmNvbSJ9LCJzdWJfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiaWFtX2lkXzljOTcyNWJkMTNlYTQ1Njc4OGFjMGVlNzhkODY0NmUxIjoiSUJNaWQtMjcwMDA1NVhERyIsInJlYWxtZWRfc3ViXzljOTcyNWJkMTNlYTQ1Njc4OGFjMGVlNzhkODY0NmUxIjoiSUJNaWQtY2hzemNoZW5AY24uaWJtLmNvbSIsImdyb3Vwc185Yzk3MjViZDEzZWE0NTY3ODhhYzBlZTc4ZDg2NDZlMSI6WyJkZXZvcHMtYWRtaW4tdnBuLW9ubHkiLCJkZXZvcHMtZGVmYXVsdC12cG4tb25seSIsIklQV0MgQWRtaW4iXX0.Y42KUJRGgZA9OV164GAKSF0W5rRNGf3x32YXrAo5UvKhpOK0k4r_hwZU5BZhI2y3t-UqM7lNOIxexpft2Zmc9ApQ6BlVN-iN1jcfBzxmrUPMObpc1-vDrAc9Sq84J8nYzy1Rk32ydFHeb3V2iDhJn14_NOnXwhuz9EFkSg0uUZHugTAPx5A-VcdrehceX0yOqAOfX5EzTtmHoI8-JQbfNt8pyBSJs8Eoag7_mtfNgx13bP_-M8W7tltCSHhPEO46gUurPFkvasHggConPQ_oBw3ANAvY8tDfivrGmdiR2Q-uc4SnFAjOgC77YskDLskBcOeehhBvxwDkyufztzqM6w",
"refresh_token": "OKDsw87zCujUXCmb4LZ3-DFQN7lUa0ejdqau_fL3Voms7M7DaKYgO07gZW29VQbcwdGc3z8jrQjjf_4gOutKyRCZ6LyEiSEKTZQ6Kovwqji02Puxu3fzIFB9f8-a1hMlkTtP4u32_FTCmOZA6ARvzxEyRX36CtQEzSVz-zVMsvPxdgyztUEWPTtvbr7aPn4eq209OzTGzTyPCBFR-N0gVp2tKLbIrGmyi_vgC-6xLRvR2nWGJsUwaaBjXwvICeCBY3qRJ90VyP1krBSHa72f1XJWpvLnBWHN8qo1dfPknHvknlEZ3kMUA87KZkynkgiVifhRq90oNAKYHhKJ4XRs2tyz05zW5a8qEhgoIVsslUzDLLNU1btRF_3g587dKckPzEav3BgQlCik4im8gIC74HFGZOz4P7z9QKLJHQY7ElDillH8pLRjW8Dx0yZvn8Yo5rSqJSj0zUmJxNZMUNEpF_DTQhHCePNOWu1_1q4o5cIb_Mv-mGMMVwrVUsJYUyaeV9O5cWl58eWlHQxS3SbuAjsBrzfSdcrIyFe5aQViyL_sL1-o54xFrMJPC3prPD25TS4vUOwAy7tc9r1AGZG00YUGaxPwzKcOWBI4DqksIiEKPOtcm3k0y24TuwRPa0AK-9jfYAzkx3rciBYGKbq1WOFjX-p6LH67ayxVUJcQcjSMe-35LZnsHQtc0VOxNHjJKdJiHsKOYEDY1Nz0k4zGZr1EZ6j7w4tLpBXP9ThC8hReiihWDmld9lzFdLwKZPF7jl4u03a2WQZ6j-wMHvLtOBcLDiKwEaeWaGp8v_YS3j4iGqkcAytf7z_-toD1O3ZHtIUlbe6H64IAVPKadN1Y1SD49Ouk1fk8xDFr7HQ4RuDTLfZnLGzC4vvzysCmJEX837Wjf2f9WdirEaKxoSlDDJKilt--20Ota-5CTimD8u0SttC6CD1Glj8bbAS8ddCAfVirDJty7FW3eyALvAHifKqzRa1kBDPHb305q91oSWYdzBKIlTinN9BAXDc3ZccVkWM6Y3VgUzh2iQwM0lKadts7OMwqhLDk7rukAXHRUpKxy-85rUf-a0oz41s69PXdQteoh559vEb0uyrq0kOnI1RnuJ7MaEGDC25Kfezumo0snwYRmQhXMPMeKkxBKxs9ZydKxxcp1qtLwFyHA6MhZuXRpZM9Qse9mqovNdHHOhAQIZu3J7HJusuVdg3SJhZkTH__gXpCc2hBeOpR0rPc6qZm7z2nU5pJQ2XgzH2TUm6psA",
"ims_user_id": 8873576,
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1614259586,
"refresh_token_expiration": 1616847976,
"scope": "ibm openid containers-kubernetes"
}
In addition, the following approach works, but the token is obtained through the OpenShift web console and thus cannot be obtained programmatically (at least I don't see how):
"Authorization: Bearer sha256~6V_OvZ5OoV8vnHF33Es5qsloAY-iXkLQ8dfl_Nsyn94"
Thanks!
You cannot and should not send the ID token to get access to APIs; it is only meant to be used by the client that did the initial authentication. It also typically has a very short lifetime (like 5 minutes in some implementations).
The only purpose of the ID token is basically to create the local user session.
On the page you refer to, it says at the end:
ID token: Every IAM ID token that is issued via the CLI expires after one hour. When the ID token expires, the refresh token is sent to the token provider to refresh the ID token. Your authentication is refreshed, and you can continue to run commands against your cluster.
It sounds like they mean the access token; in OpenID Connect you don't renew your ID token (as far as I am aware).
I have been busy in the past few days; I will share how I solved this problem here. In fact it didn't address the original issue, but it is another way to achieve the goal.
It turned out that there was another doc describing how the access token can be obtained (yes, as mentioned by @Tore Nestenius, it should be an access token instead of an ID token). The token described there is actually the same as what one would get through the OpenShift web console, and it basically has nothing to do with the link I shared earlier in the question.
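With a valid access token obtained that way, a plain HTTPS call against the cluster's API server should work, for example (the master URL is a placeholder for your cluster's service endpoint):
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://<cluster-master-url>/version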

Restrict gcloud service account to specific bucket

I have two buckets, prod and staging, and I have a service account. I want to restrict this account to only have access to the staging bucket. I saw on https://cloud.google.com/iam/docs/conditions-overview that this should be possible, so I created a policy.json like this:
{
  "bindings": [
    {
      "role": "roles/storage.objectCreator",
      "members": [
        "serviceAccount:staging-service-account@lalala-co.iam.gserviceaccount.com"
      ],
      "condition": {
        "title": "staging bucket only",
        "expression": "resource.name.startsWith(\"projects/_/buckets/uploads-staging\")"
      }
    }
  ]
}
But when I run gcloud projects set-iam-policy lalala policy.json I get:
The specified policy does not contain an "etag" field identifying a specific version to replace. Changing a policy without an "etag" can overwrite concurrent policy changes.
Replace existing policy (Y/n)?
ERROR: (gcloud.projects.set-iam-policy) INVALID_ARGUMENT: Can't set conditional policy on policy type: resourcemanager_projects and id: /lalala
I feel like I have misunderstood how roles, policies, and service accounts are related. But in any case: is it possible to restrict a service account in that way?
Following the comments, I was able to solve my problem. Apparently bucket permissions are somewhat special, but I was able to set a policy on the bucket itself that allows access for my service account, using gsutil:
gsutil iam ch serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com:objectCreator gs://lalala-uploads-staging
After running this, access works as expected. I found it a little bit confusing that this is not reflected in the service account's policy:
% gcloud iam service-accounts get-iam-policy staging-service-account@lalala.iam.gserviceaccount.com
etag: ACAB
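To see where the binding actually lives, inspect the bucket's own IAM policy instead (same bucket name as above):
gsutil iam get gs://lalala-uploads-staging
The policy returned by gcloud iam service-accounts get-iam-policy only controls who may use or manage the service account itself, not what the account can access.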
Thanks everyone