IBM Cloud: How to find details on a deleted, but not yet reclaimed, resource located in a deleted resource group?

I have deleted a service instance in my IBM Cloud account. I can see that it is listed for scheduled reclamation:
$ ibmcloud resource reclamations
List all resource reclamations under account someID as Henrik...
OK
ID Resource Instance ID Entity CRN State Target Time
eb51adc4-xxxx-xxxx-xxxx-b83ec1fb2f8f b4c05160-yyyy-yyyy-yyyy-fa42e60d7778 crn:v1:bluemix:public:cloudcerts:us-south:a/someID:b4c05160-yyyy-yyyy-yyyy-fa42e60d7778:: SCHEDULED 2021-09-28T06:31:01Z
When I try to look up details on the resource instance using the given ID, it cannot find it. Why? How can I see details?
$ ibmcloud resource service-instance b4c05160-yyyy-yyyy-yyyy-fa42e60d7778
Retrieving service instance b4c05160-yyyy-yyyy-yyyy-fa42e60d7778 in resource group default under account Henrik's Account as Henrik...
FAILED
Service instance b4c05160-yyyy-yyyy-yyyy-fa42e60d7778 was not found

If the resource group is still available, passing it to "ibmcloud resource service-instance" via the -g flag should help find the resource and fetch its details.
If the resource group has been deleted, then the Resource Controller API needs to be used. Use something like the following, with TOKEN being an IAM access token (including the "Bearer" prefix):
curl -s -X GET https://resource-controller.cloud.ibm.com/v2/resource_instances/b4c05160-yyyy-yyyy-yyyy-fa42e60d7778 -H "Authorization: $TOKEN"
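One way to obtain such a token is the IBM Cloud CLI itself. A minimal sketch, assuming the usual "IAM token:  Bearer eyJ..." output of ibmcloud iam oauth-tokens (field positions may vary between CLI versions):

# Grab the current IAM access token including the "Bearer" prefix
# (assumes output of the form: IAM token:  Bearer eyJ...).
TOKEN=$(ibmcloud iam oauth-tokens | awk '/IAM token/ {print $3 " " $4}')

# Ask the Resource Controller directly for the scheduled-for-reclamation instance.
curl -s -X GET \
  "https://resource-controller.cloud.ibm.com/v2/resource_instances/b4c05160-yyyy-yyyy-yyyy-fa42e60d7778" \
  -H "Authorization: $TOKEN"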

Related

Azure Release Pipeline does not have enough permissions to deploy Bicep/ARM template

When I try to deploy my Bicep template through a DevOps release pipeline I get the following error:
Deployment failed with multiple errors: 'Authorization failed for
template resource '1525ed81-ad25-486e-99a3-124abd455499' of type
'Microsoft.Authorization/roleDefinitions'. The client
'378da07a-d663-4d11-93d0-9c383eadcf45' with object id
'378da07a-d663-4d11-93d0-9c383eadcf45' does not have permission to
perform action 'Microsoft.Authorization/roleDefinitions/write' at
scope
'/subscriptions/8449f684-37c6-482b-8b1a-576b999c77ef/resourceGroups/rgabpddt/providers/Microsoft.Authorization/roleDefinitions/1525ed81-ad25-486e-99a3-124abd455499'.:Authorization
failed for template resource '31c1daec-7d4a-4255-8528-169fc45fc14d' of
type 'Microsoft.Authorization/roleAssignments'.
I understand through this post that I have to grant "something" the Owner or User Access Administrator role.
But I don't understand what user has the ObjectId 378da07a-d663-4d11-93d0-9c383eadcf45.
I tried to look it up with the following az CLI command, but it says that it cannot find a resource with that Id:
az ad user show --id 378da07a-d663-4d11-93d0-9c383eadcf45
The response it returns:
Resource '378da07a-d663-4d11-93d0-9c383eadcf45' does not exist or one of its queried reference-property objects are not present.
I'm a bit clueless here. What exactly do I have to grant permission to?
When you use a service connection in a DevOps pipeline, for example an Azure Resource Manager service connection, it creates a service principal (app registration) in the Azure portal under Active Directory. You can find the service principal by clicking the link on the service connection.
When you deploy with a service connection, make sure you have given this service principal the correct permissions on the target resource, such as the mentioned Microsoft.Authorization/roleDefinitions/write. The suggestion is to give it the Contributor role on the resource; otherwise the pipeline log will report this error.
When you add the role, you will find the object ID; it is different from the service principal's application ID or object ID in Azure AD.
Note that this is an Azure role, not an Azure AD role. You can find the difference in the docs.
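Since the client ID in the error belongs to a service principal rather than a user, az ad user show finds nothing. A hedged sketch for identifying the principal and then granting it a role (the GUIDs and scope are taken from the error message above; the role name is illustrative, not prescriptive):

# Resolve the service principal behind the object ID; "az ad sp show"
# handles service principals, which "az ad user show" cannot.
az ad sp show --id 378da07a-d663-4d11-93d0-9c383eadcf45

# Grant an Azure (RBAC) role to that principal at the resource group scope.
az role assignment create \
  --assignee 378da07a-d663-4d11-93d0-9c383eadcf45 \
  --role Contributor \
  --scope /subscriptions/8449f684-37c6-482b-8b1a-576b999c77ef/resourceGroups/rgabpddt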

Accessing argo workflow archive via http leads to permission denied error

I'm trying to access the Argo workflow archive via the REST API. The documentation states that I need to create a role and a token, so that's what I did. A role with minimal permissions can be created like so:
kubectl create role jenkins --verb=list,update --resource=workflows.argoproj.io
And in fact this works, I can now access the argo server with a command like curl http://localhost:2746/api/v1/workflows/argo -H "Authorization: $ARGO_TOKEN".
However it seems that more permissions are needed to access endpoints such as /api/v1/archived-workflows, because all I get there is this:
{
"code": 7,
"message": "permission denied"
}
Presumably I need to specify other verbs and/or resources in the kubectl create role command, but I don't know which ones, and I can't find the relevant documentation. Any hints?
It looks like the role/serviceaccount/rolebinding created according to the docs only grants permission to list Workflows in the argo namespace (whether archived or not).
The namespace can be specified for the Archive like so:
curl http://localhost:2746/api/v1/archived-workflows?listOptions.fieldSelector=metadata.namespace=argo -H "Authorization: $ARGO_TOKEN"
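For completeness, a sketch of the full setup implied by the docs, with names matching the question (kubectl create token assumes Kubernetes 1.24 or later):

# Role with the minimal verbs from the question, scoped to the argo namespace.
kubectl -n argo create role jenkins --verb=list,update --resource=workflows.argoproj.io

# Service account plus binding so a token can carry those permissions.
kubectl -n argo create serviceaccount jenkins
kubectl -n argo create rolebinding jenkins --role=jenkins --serviceaccount=argo:jenkins

# Mint a token and query the archive, filtered to the namespace the role covers.
ARGO_TOKEN="Bearer $(kubectl -n argo create token jenkins)"
curl "http://localhost:2746/api/v1/archived-workflows?listOptions.fieldSelector=metadata.namespace=argo" \
  -H "Authorization: $ARGO_TOKEN"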

No resource group found in IBMCloud

I see that I do not have the Default resource group associated with my IBM Cloud account. Because of this I can't add any resources to my account.
When I run command for Viewing resources in a resource group, this is what I see:
PS C:\Users\SURANJANNANDI> ibmcloud resource service-instances -g Default
Retrieving instances with type service_instance in resource group Default in all locations under account Suranjan Nandi's Account as surnandi@in.ibm.com...
FAILED No resource group found
Did anybody have similar issues? Please advise how to fix this.
Try the command: ibmcloud resource groups
Or in the IBM Cloud console, https://cloud.ibm.com/, check out Manage > Account at the top. Click Resource groups on the left and see the list of available resource groups.
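If ibmcloud resource groups comes back empty, a group can be created and targeted from the CLI, provided the account plan allows it. A minimal sketch (the name Default is just the conventional one; Lite accounts may not permit creating groups):

# Create a resource group and target it for subsequent commands.
ibmcloud resource group-create Default
ibmcloud target -g Default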
I had this issue. The only way I could get Watson Studio working was by signing up with a new account in the Data Pak portal. There you may also manage your account (which didn't work for me, but may work for you).

"Access Denied. Provided scope(s) are not authorized" error when trying to make objects public using the REST API

I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:
Access denied. Provided scope(s) are not authorized.
Other requests work fine. When I try to do the same thing but by providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).
Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.
Neither the XML API nor the JSON version makes a difference. That the request sometimes works with my personal credentials indicates to me that the request is not malformed, but there must be something else I've thus far overlooked. Any ideas?
Check the scope of the service account in case you are using the default Compute Engine service account. By default the scope is restricted, and for GCS it is read-only. Use rm -r ~/.gsutil to clear the gsutil cache if needed.
When trying to access GCS from a GCE instance and getting this error message ...
the default scope is devstorage.read_only, which prevents all write operations.
Not sure if the scope https://www.googleapis.com/auth/cloud-platform is required, when the scope https://www.googleapis.com/auth/devstorage.read_only is given by default (e.g. to read startup scripts). The scope should rather be https://www.googleapis.com/auth/devstorage.read_write.
And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:
gcloud beta compute instances set-scopes $INSTANCE_NAME \
--project=$PROJECT_ID \
--zone=$COMPUTE_ZONE \
--scopes=https://www.googleapis.com/auth/devstorage.read_write \
--service-account=$SERVICE_ACCOUNT
One can also pass the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run outside of the instance because of permissions, and the instance must be shut down in order to change the service account.
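To verify which scopes an instance actually has before and after the change, the metadata server can be queried from inside the instance:

# List the OAuth scopes of the instance's default service account
# (must be run on the GCE instance itself).
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"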
Follow the documentation you provided, taking into account these points:
Access Control system for the bucket has to be Fine-grained (not uniform).
In order to make objects publicly available, make sure the bucket does not have the public access prevention enabled. Check this link for further information.
Grant the service account with the appropriate permissions in the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit objects ACLs as indicated here. This role can be granted for individual buckets, not for projects.
Create the json file as indicated in the documentation.
Use gcloud auth application-default print-access-token to get authorization access token and use it in the API call. The API call should look like:
curl -X POST --data-binary @JSON_FILE_NAME.json \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
"https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"
You need to add the OAuth scope cloud-platform when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes
Either select "Allow full access to all Cloud APIs" or use the fine-grained approach.
So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the permission scopes available to the service account, which is what I (and most of the other people who answered the question) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by various Google Cloud client classes to authenticate, also has permission scopes used for OAuth. You see where this is going: the client I was using was being created with a default OAuth scope of 'https://www.googleapis.com/auth/devstorage.read_write', but making something public requires the scope 'https://www.googleapis.com/auth/devstorage.full_control'. Adding this scope to the OAuth credential request means that setting public permissions on objects works.
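A quick way to verify which OAuth scopes a token actually carries is Google's tokeninfo endpoint; a small diagnostic sketch:

# Inspect the scopes attached to the current application-default token;
# devstorage.full_control should appear in the "scope" field for ACL edits to work.
curl -s "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=$(gcloud auth application-default print-access-token)"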

Adding roles to service accounts on Google Cloud Platform using REST API

I want to create a service account on GCP using a python script calling the REST API and then give it specific roles - ideally some of these, such as roles/logging.logWriter.
First I make a request to create the account which works fine and I can see the account in Console/IAM.
Second I want to give it the role, and this seems like the right method. However, it is not accepting roles/logging.logWriter, saying HttpError 400, "Role roles/logging.logWriter is not supported for this resource."
Conversely, if I set the desired policy in the console and then try the getIamPolicy method (using the gcloud tool), all I get back is the response etag: ACAB, with no mention of the actual role I set. Hence I think these roles refer to different things.
Any idea how to go about scripting a role/scope for a service account using the API?
You can grant permissions to a GCP service account in a GCP project without having to rewrite the entire project policy!
Use the gcloud projects add-iam-policy-binding ... command for that (docs).
For example, given the environment variables GCP_PROJECT_ID and GCP_SVC_ACC the following command grants all privileges in the container.admin role to the chosen service account:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SVC_ACC} \
--role=roles/container.admin
To review what you've done:
$ gcloud projects get-iam-policy $GCP_PROJECT_ID \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:${GCP_SVC_ACC}"
Output:
ROLE
roles/container.admin
(or more roles, if those were granted before)
Notes:
The environment variable GCP_SVC_ACC is expected to contain the email notation for the service account.
Kudos to this answer for the nicely formatted readout.
You appear to be trying to set a role on the service account (as a resource); that is for controlling who can use the service account.
If you want to give the service account (as an identity) a particular role on the project and its resources, see this method: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy
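As a sketch of what that looks like over REST, the usual flow is read-modify-write on the project policy (endpoints from the Resource Manager v1 API; PROJECT_ID and the service account email are placeholders):

# Fetch the current project policy.
TOKEN=$(gcloud auth print-access-token)
curl -s -X POST -H "Authorization: Bearer $TOKEN" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID:getIamPolicy" > policy.json

# Edit policy.json to append a binding such as:
#   {"role": "roles/logging.logWriter", "members": ["serviceAccount:SA_EMAIL"]}
# then write the whole policy back (note the {"policy": ...} wrapper).
curl -s -X POST -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"policy\": $(cat policy.json)}" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID:setIamPolicy"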