Google Cloud Storage permission denied - google-cloud-storage

I set up a Cloud Run service that uses a bucket on Cloud Storage. Locally I run it in a Docker container; the credentials are passed using a JSON key file created and downloaded from IAM & Admin, and it works. When deployed, writing to the bucket throws an error:
{
500 unable to sign bytes: googleapi: Error 403: Permission 'iam.serviceAccounts.signBlob' denied on resource (or it may not exist).
Details:
[{
"#type": "type.googleapis.com/google.rpc.ErrorInfo",
"domain": "iam.googleapis.com",
"metadata": {
"permission": "iam.serviceAccounts.signBlob"
},
"reason": "IAM_PERMISSION_DENIED"
}]
[]
}
Any idea?

I had to add the Service Account Token Creator role to the service account. Adding the role alone did not work at first, because a new version of the service also needs to be deployed, so:
Add the Service Account Token Creator role
Deploy a new version of the service
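For context, this signBlob requirement typically shows up when generating signed URLs: on Cloud Run there is no private key on disk, so the client library signs via the IAM Credentials API, which is exactly what the Service Account Token Creator role authorizes. A minimal Python sketch of that pattern (the bucket and object names are illustrative, and the original service may use a different client library, but the role fix is the same):

import datetime
import google.auth
from google.auth.transport import requests as gauth_requests
from google.cloud import storage

# On Cloud Run there is no private key on disk, so signing goes through
# the IAM Credentials signBlob API instead.
credentials, _ = google.auth.default()
credentials.refresh(gauth_requests.Request())  # obtain an access token

client = storage.Client(credentials=credentials)
blob = client.bucket("my-bucket").blob("uploads/report.csv")  # illustrative names

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="PUT",
    # Passing these two arguments makes the client call
    # iam.serviceAccounts.signBlob, hence the Token Creator requirement.
    service_account_email=credentials.service_account_email,
    access_token=credentials.token,
)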

Related

403: Permission denied while hitting the Google Play Developer Reporting API

I am trying to hit https://playdeveloperreporting.googleapis.com/v1alpha1/apps/com.example/anrRateMetricSet.
It returns 403: permission denied.
I am trying to capture the health metrics for my Android app from the Google Play Console.
Reference: https://developers.google.com/play/developer/reporting/reference/rest/v1alpha1/vitals.anrrate/get?hl=en_GB
I have a service account created in the Google Cloud console under "IAM & Admin".
I am hitting "https://playdeveloperreporting.googleapis.com/v1alpha1/apps/com.example/anrRateMetricSet"
using an access token created from the key.json downloaded for that service account, which is linked to the project.
But I am getting:
{
"error": {
"code": 403,
"message": "The caller does not have permission",
"status": "PERMISSION_DENIED"
}
}
I couldn't understand what the issue might be, since the service account is linked to the project.
Your help is highly appreciated. Thank you.
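A minimal sketch of this kind of request with the google-auth Python library, in case it helps reproduce the 403 (the scope and key file name are assumptions):

from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

# Scope for the Play Developer Reporting API (assumed here).
SCOPES = ["https://www.googleapis.com/auth/playdeveloperreporting"]

# key.json is the service account key downloaded from IAM & Admin.
credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES)

session = AuthorizedSession(credentials)
response = session.get(
    "https://playdeveloperreporting.googleapis.com/v1alpha1/"
    "apps/com.example/anrRateMetricSet")
print(response.status_code, response.json())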

Problem regarding Google Cloud bucket access permission

I am working on a Colab project with a Google Cloud Storage bucket. At first I used my own Gmail account A, but I noticed that I need a Google service account for some operations. So I activated a service account B and successfully logged in with this service account.
But there is still a permission error:
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
  "error": {
    "code": 403,
    "message": "gmailaccountA@gmail.com does not have storage.objects.list access to the Google Cloud Storage bucket.",
    "errors": [
      {
        "message": "gmailaccountA@gmail.com does not have storage.objects.list access to the Google Cloud Storage bucket.",
        "domain": "global",
        "reason": "forbidden"
      }
    ]
  }
}
When I double-check and run "gcloud auth list", I get two accounts listed: my Gmail account A and my service account B. How can I make sure I am using the service account?
To set the account you want to use, first list the available accounts with:
gcloud auth list
and then set the chosen one with:
gcloud config set account ACCOUNT
You can read more about the gcloud config set command and its properties in the gcloud documentation.
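Note that TensorFlow's GCS client does not read the account selected with gcloud config set; it generally relies on application default credentials. If the error persists after switching accounts, pointing GOOGLE_APPLICATION_CREDENTIALS at the key file for service account B is a common workaround (a minimal sketch; the key path and bucket name are illustrative):

import os

# Make client libraries (including TensorFlow's GCS filesystem) authenticate
# as service account B rather than the interactive Gmail account.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/content/service-account-b.json"

import tensorflow as tf

# Listing the bucket should now be authorized as the service account.
print(tf.io.gfile.listdir("gs://your-bucket-name/"))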

Azure Media Services v3 - Create job with SAS URL is failing due to access issue

I'm trying to create an asset from code, but I'm getting the error below:
{
  "error": {
    "code": "Conflict",
    "message": "The server received a 403 Forbidden error when accessing Azure Storage. Please check your permissions to the storage accounts linked to the media account.",
    "details": [
      {
        "code": "AuthorizationFailure",
        "message": "The server received a 403 Forbidden error when accessing Azure Storage. Please check your permissions to the storage accounts linked to the media account."
      }
    ]
  }
}
I also tried directly in the portal with the generated SAS URL, and I face the same access issue. I can confirm the AAD service principal has been assigned the "Contributor" role, but I still get an error.
Error:
The client 'xx' with object id 'xx' does not have authorization to perform action 'Microsoft.Media/mediaservices/assets/write' over scope '/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Media/mediaservices/itskssearchmediadev/assets/ignite-mp4-20220207-192422' or the scope is invalid. If access was recently granted, please refresh your credentials.
What other permission do I need to provide?
Note: I also tried with my personal account, which has full access; it works there.
The Storage Account Contributor role permits management of storage accounts (e.g., creating and deleting storage accounts), but it does not permit access to data in the storage account.
To allow Media Services to write to the storage account, the Managed Identity must be granted a role that has access to the storage account data, for example, Storage Blob Data Contributor.

How to change credentials for a given service in Cloud Foundry

I would like to know how (and if it's possible) to change the URI credentials of a service in Cloud Foundry. More specifically, the mLab service (free plan) from Pivotal Cloud Foundry.
Background
I created and pushed a Node.js app to Pivotal Cloud Foundry.
This app is bound to an mLab service using the free plan.
When the mLab service was created using the Pivotal website, it created a database with a user and password automatically.
Opening the app settings on the Pivotal website, I can see the following environment variables. Please notice the Mongo URI inside credentials and the name inside the mlab entry.
{
  "staging_env_json": {},
  "running_env_json": {},
  "system_env_json": {
    "VCAP_SERVICES": {
      "mlab": [
        {
          "label": "mlab",
          "provider": null,
          "plan": "sandbox",
          "name": "users",
          "instance_name": "users",
          "binding_name": null,
          "credentials": {
            "uri": "mongodb://CloudFoundry_someusergenerated:apasswordgeneratedautomatically@somehost.mlab.com:someport/CloudFoundry_database_name"
          }
        }
      ]
    }
  },
  "application_env_json": {
    "VCAP_APPLICATION": {
      "cf_api": "https://donotuseapi.run.pivotal.io",
      "application_name": "website",
      "application_uris": [
        "xxx.cfapps.io"
      ],
      "name": "website",
      "space_name": "space1",
      "uris": [
        "xxx.cfapps.io"
      ]
    }
  }
}
The connection to the database works fine with this default user and password. To get the MongoDB URI from the environment variables, I am using the npm package cfenv:
const appEnv = require('cfenv').getAppEnv();
const env = process.env;

const keys = {
  mongodb: {
    dbURI: appEnv.getServiceURL(env.MONGO_SERVICE_NAME)
  }
};
In my manifest.yaml file, MONGO_SERVICE_NAME is set accordingly, matching the service name from the environment variables:
---
applications:
- name: website
  memory: 128M
  disk_quota: 256M
  random-route: true
  buildpack: nodejs_buildpack
  health-check-type: port
  env:
    MONGO_SERVICE_NAME: 'users'
Again, the db connection works fine.
===
Then I opened the mLab website for this particular database and created a new database user.
Now I want to update the credentials.uri from VCAP_SERVICES (environment variables) for this particular service to use the new user and password.
As far as I could see, the cf update-service CLI command is not meant for that, so I am wondering whether this is a limitation of Cloud Foundry, Pivotal, or mLab. I would bet this limitation is due to the fact that I am using a Pivotal trial account and the mLab free plan; however, my question stays the same even if I upgrade plans.
Thanks,
As a user, it's not possible for you to change VCAP_SERVICES entries that are generated by a service broker. These are fixed and cannot be changed.
If you cf create-service'd a service, then it was created by a service broker and you get exactly what the service provider gives you. As was mentioned in the comments, depending on the service broker you may be able to pass arguments to the broker with cf create-service -c. Check the documentation of your service provider to see if there is an option to influence the credentials generated in the way that you would like.
If your service provider does not provide options to do what you want, you can create a service key instead of binding your service to an app. This will give you a set of credentials to your service that will last for the duration of the service key. The service key credentials are not automatically passed into an app, but you can feed them into your app in a variety of ways.
You can pass them in through environment variables.
You can create a user-provided service (cf cups).
You can pass them through in application config or through a config server.
With all of these options, you are passing the credentials through to the app so you could in theory adjust or alter them before they get to your app.
Hope that helps!

Cannot connect to secured Azure Service Fabric Cluster via PowerShell or Visual Studio

I've created a Service Fabric Application currently consisting of two Reliable Services and a Reliable Actor. For development, I created an SQL Server and database in Azure, and hardcoded the connection string into my application, which I was running on my local SF cluster. This worked fine, and I was able to run my application locally whilst manipulating the database in the cloud.
I now want to publish my service to the cloud and run it all remotely (so that I can set up and test the Web API it exposes), and this is where the problems start.
Following Azure docs:
Create a Service Fabric cluster in Azure using Azure Resource Manager
Connect to a secure cluster
Configure secure connections to a Service Fabric cluster from Visual Studio
Service Fabric cluster security scenarios
Publish an application to a remote cluster by using Visual Studio
Add or remove certificates for a Service Fabric cluster in Azure
I have taken the following steps:
Used Powershell (with ServiceFabricRPHelpers cmdlets) to create a KeyVault resource group, and within that a KeyVault.
Used New-SelfSignedCertificate with -DnsName set to api.mydomain.co.uk, which I have already purchased and created a CNAME record for api leading to mycluster.northeurope.cloudapp.azure.com:19000 (though of course it doesn't exist at this stage of the process), followed by Export-PfxCertificate to create the .pfx file. The .pfx was then imported to cert:\CurrentUser\TrustedPeople and cert:\CurrentUser\My.
Called Invoke-AddCertToKeyVault to add the newly generated certificate to my KeyVault.
Used the SetupApplications.ps1 script to configure AAD.
Placed all resulting strings etc. into azuredeploy.json and azuredeploy.parameters.json, resolved errors (some of which seemed to contradict the documentation..), and successfully deployed the cluster. It is now visible on my Azure Portal.
Assigned User Roles (admin to myself) from the classic portal.
Used Invoke-AddCertToKeyVault to (this time create and) add a second, "admin client" certificate to the cluster (as opposed to the first which was a cluster certificate).
So, with all of that done, I believe I should have done everything I need to in order to be able to connect to the cluster to publish via VS2015, and access the management interface from api.mydomain.co.uk:19080. Alas, that doesn't happen...
Connecting to the database within my cluster's resource group still works from VS via the SQL Server Explorer using SQL authentication; however, any attempt to communicate with the cluster itself using AAD or X509-based authentication results in a wait while it tries to connect, and then failure. A few examples:
Trying to connect to the management console says it's blocked, which implies to me it is there, but all the documentation ends before telling me how to access it.
Attempting to connect using Connect-ServiceFabricCluster also fails, and searching the error messages hasn't given me any indication of what to do.
After spending two days absorbing all of this and trying to get it working, I'm all out of ideas on what to try and change. Can anyone find a problem in what I have done, or suggest anything I could try?
If you need more details from me then please just ask!
I too had a nightmare attempting to deploy a secure cluster, using much of the same documentation you have tried to consume. After spending days getting my hands dirty, I finally managed to get it working.
Here is my own helper and template: SecureCluster
The key things to watch are:
Make sure your client and cluster certificates are both in your key vault and referenced within your ARM template under the OSProfile of the VM scale set (I noticed in your example that you were adding the client admin certificate after modifying the ARM template):
"osProfile": {
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"computernamePrefix": "[parameters('vmNodeType0Name')]",
"secrets": [
{
"sourceVault": {
"id": "[parameters('sourceVault')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "[parameters('clusterCertificateUrl')]"
},
{
"certificateStore": "My",
"certificateUrl": "[parameters('adminCertificateUrl')]"
}
]
}
]
},
This will make sure all your certificates are installed onto each node within the cluster.
Next is to make sure that the Service Fabric extension within the scale set also has your certificate:
"extensions": [
{
"name": "[concat(parameters('vmNodeType0Name'),'_ServiceFabricNode')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": false,
"protectedSettings": {
"StorageAccountKey1":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[parameters('vmNodeType0Name')]",
"dataPath": "D:\\\\SvcFab",
"durabilityLevel": "Bronze",
"certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
}
},
"typeHandlerVersion": "1.0"
}
},
Finally, under the Service Fabric resource section within the ARM template, make sure you specify which certificate to use for node-to-node security and which for client-to-node security.
certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
},
"clientCertificateCommonNames": [],
"clientCertificateThumbprints": [{
"CertificateThumbprint": "[parameters('adminCertificateThumbPrint')]",
"IsAdmin": true
}],
You should then be able to securely connect to the cluster in the way you are attempting to. One thing I have found is that the URL shouldn't be prefixed with "http" within the publish profile, and when trying to browse to the Explorer you will need the URL to be https://[n]:19080/Explorer/index.html
Hopefully you will find this of some help.