Creating a bucket using Google Cloud Platform Deployment Manager Template - google-cloud-storage

I'm trying to create a bucket using GCP Deployment Manager. I already went through the QuickStart guide and was able to create a compute.v1.instance. Now I'm trying to create a bucket in Google Cloud Storage, but I can't get anything other than a 403 Forbidden.
This is what my template file looks like.
resources:
- type: storage.v1.bucket
  name: test-bucket
  properties:
    project: my-project
    name: test-bucket-name
This is what I'm calling
gcloud deployment-manager deployments create deploy-test --config deploy.yml
And this is what I'm receiving back
Waiting for create operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd: <ErrorValue
errors: [<ErrorsValueListEntry
code: u'RESOURCE_ERROR'
location: u'deploy-test/test-bucket'
message: u'Unexpected response from resource of type storage.v1.bucket: 403 {"code":403,"errors":[{"domain":"global","message":"Forbidden","reason":"forbidden"}],"message":"Forbidden","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/test-bucket"}'>]>
I have credentials set up, and I even created an account owner set of credentials (which can access everything), and I'm still getting this response.
Any ideas or good places to look? Is it my config or do I need to pass additional credentials in my request?
I'm coming from an AWS background, still finding my way around GCP.
Thanks

Bucket names on Google Cloud Platform need to be globally unique, across all projects and all users.
If you try to create a bucket with a name that is already used by somebody else (even in another project), you will receive an error such as the 403 Forbidden above. I would test by creating a new bucket with a different, more distinctive name.
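For example, a sketch of the same template with a more distinctive bucket name; the suffix here is hypothetical, and any globally unique string works:
resources:
- type: storage.v1.bucket
  name: test-bucket
  properties:
    project: my-project
    name: my-project-test-bucket-7f3a  # hypothetical unique name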

Related

ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not parse service account URL

I want to use a custom Service Account to build my Docker container with Cloud Build. Using gcloud iam service-accounts list and gcloud config list, I have confirmed that the service account is in the project environment, and I used gcloud services list --enabled to check that cloudbuild.googleapis.com is enabled. I still get the error: ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not parse service account URL. I tried all of the available service accounts, and I tried with and without the prefix path. What is the correct URL, or what configuration steps are needed to get the service account working?
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project-id/my-app']
images: ['gcr.io/my-project-id/my-app']
serviceAccount: 'projects/my-project-id/serviceAccount/my-sa@my-project-id.iam.gserviceaccount.com'
options:
  logging: CLOUD_LOGGING_ONLY
The build config for serviceAccount references this page and there's an example that shows the structure:
projects/{project-id}/serviceAccounts/{service-account-email}
So, it follows Google's API convention of a plural noun (i.e. serviceAccounts) followed by the unique identifier.
Another way to confirm this is via APIs Explorer for Cloud Build.
The service's Build resource defines serviceAccount too.
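Applied to the config in the question, the corrected line would look like this (the project and service account names are the asker's placeholders, so treat them as hypothetical):
serviceAccount: 'projects/my-project-id/serviceAccounts/my-sa@my-project-id.iam.gserviceaccount.com'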

Google Cloud Composer Environment Setup Error: Connect to Google Cloud Storage

I am trying to create an environment in Google Cloud Composer.
When creating the environment from scratch and selecting all the default fields, the following error appears:
CREATE operation on this environment failed 22 hours ago with the following error message:
CREATE operation failed. Composer Agent failed with: Cloud Storage Assertions Failed: Unable to write to GCS bucket.
GCS bucket write check failed.
I then created a Google Cloud Storage bucket within the same project to see if that would help, and the same error still appears.
Has anyone been able to successfully create a Google Cloud Composer environment? If so, please provide guidance on why this error message continues to appear.
Update: It seems I need to update permissions to allow access. My permissions page, however, is not editable.
It seems like you haven't given the required IAM policies to the service account. I would advise you to read more about IAM policies on Google Cloud.
When it comes to permissions on the bucket, roles like Storage Object Admin might fit your needs; a sketch of granting it follows.
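A minimal sketch of granting that role at the project level; the project name and the environment's service account email here are hypothetical stand-ins:
# Grant the Composer environment's service account write access to GCS
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-composer-env@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"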

Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for creating a service connection from Azure DevOps to GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
...but it is using GCP Container Registry.
I do not imagine it should be much different, but I keep getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this.
When I created the service connection, I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the Artifact Registry Writer role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format https://REGION-docker.pkg.dev/PROJECT-ID (where REGION is something like us-west2).
The repository parameter of the Docker task (Docker@2) needs to be in the form PROJECT-ID/REPO/IMAGE; see the sketch after this list.
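A sketch of a Docker@2 task using those formats; the service connection name gcp-artifact-registry and all project/repo/image names are hypothetical:
# Azure Pipelines task; 'gcp-artifact-registry' is a hypothetical service
# connection pointing at https://us-west2-docker.pkg.dev/my-project-id
- task: Docker@2
  inputs:
    containerRegistry: 'gcp-artifact-registry'
    repository: 'my-project-id/my-repo/my-app'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: 'latest'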
I was able to get it working with the documentation for Container Registry; my issue was with the repository name.
Also, the main difference when using Artifact Registry is the IAM role you need to give the service account: use Artifact Registry Writer. Storage Admin will be useless.
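For completeness, a sketch of granting that role with gcloud; the project and service account names are hypothetical:
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:azure-pipelines-publisher@my-project-id.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"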

API: sqs:CreateQueue always ACCESS DENIED

I'm trying to create an SQS queue with CloudFormation, but I keep getting this error in the console.
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
Obviously I'm missing some sort of permission. This guide didn't really specify how I could resolve this.
Here's the code I made:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-test
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn:
          Fn::GetAtt:
            - "MyDLQ"
            - "Arn"
        maxReceiveCount: 4
      Tags:
        - Key: "ProjectName"
          Value: "project-x"
  MyDLQ:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-dlq-test
I'm trying to understand this doc, but I'm not sure how I could attach a policy to allow creation of queues. Could someone please give me a full example?
tyron's comment on your question is spot on. Check the permissions of the user executing the CloudFormation template. If you're running commands directly, this is usually pretty easy to check. In some cases, you may be working with a more complicated environment with automation.
I find the best way to troubleshoot permissions in an automated world is via CloudTrail. After any API call has failed, whether from the CLI, CloudFormation, or another source, you can look up the call in CloudTrail.
In this case, searching for "Event Name" = "CreateQueue" in the time range of the failure will turn up a result with details like the following:
Source IP Address; this field may say something like cloudformation.amazonaws.com, or the IP of your machine/office. Helpful when you need to filter events based on the source.
User name; In my case, this was the EC2 instance ID of the agent running the CFN template.
Access Key ID; For EC2 instances, this is likely a set of temporary access credentials, but for a real user, it will show you what key was used.
Actual event data; Especially helpful for non-permissions errors, the actual event may show you errors in the request itself.
In my case, the specific EC2 instance that ran automation was out of date and needed to be updated to use the correct IAM Role/Instance Profile. CloudTrail helped me track that down.
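For reference, the same search can be run from the CLI; a sketch, assuming the AWS CLI is configured with credentials that can read CloudTrail:
# Look up recent CreateQueue calls, including failed ones
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateQueue \
  --max-results 5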
If you are using AWS CodePipeline (where you may be using AWS CodeBuild to run & deploy your CloudFormation stack), remember your CodeBuild role (created under IAM Roles) must have the correct permissions.
You can identify which role is being used and attach the required policies:
Open CodeBuild Project
Go to Build Details > Environment > Service Role
Open Service Role (hyperlinked)
Add SQS permissions to the role's policies; a sketch of such a policy statement follows.
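As a starting point for the full example requested above, a minimal sketch of an inline policy in the same CloudFormation YAML style; the role name, account ID, and region are hypothetical, and you may need more actions depending on what the stack does:
SqsDeployPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: allow-sqs-deploy
    Roles:
      - my-codebuild-service-role  # hypothetical role name
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - sqs:CreateQueue
            - sqs:TagQueue         # needed because the template tags MyQueue
            - sqs:GetQueueAttributes
            - sqs:SetQueueAttributes
            - sqs:DeleteQueue      # for stack updates and rollbacks
          Resource: "arn:aws:sqs:us-east-1:123456789012:sqs-*"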

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself.
So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
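A minimal sketch of the environment-variable route; the key path is hypothetical:
# Point Application Default Credentials at a downloaded service account key
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json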
I was able to figure this out. I had been following Rail's guide on Active Storage with Google Storage Cloud, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating a private key for my service account and adding it to my Rails app.
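A sketch of generating such a key; the service account email is hypothetical, and the resulting keyfile.json is what the storage.yml credentials line points to:
# Creates a new key for the given service account and writes it to keyfile.json
gcloud iam service-accounts keys create keyfile.json \
  --iam-account=my-app@my-project.iam.gserviceaccount.com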
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Google::Auth.get_application_default, as used in the documentation, returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash as the project segment in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid), and that the "issuer" string is correct: the full email address of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission verify the roles your Service Account have:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I think these are project-global rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin only for a certain Storage bucket).