403 forbidden when trying to create a bucket using Deployment Manager - google-cloud-storage

I am trying to create a GCS bucket with Deployment Manager using the following resource config:
resources:
- type: storage.v1.bucket
  name: upload-bucket
  properties:
    project: <project-id>
    name: <unique-bucket-name>
However, I get the following error:
- code: RESOURCE_ERROR
location: /deployments/the-bucket/resources/upload-bucket
message: '{"ResourceType":"storage.v1.bucket","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"205531008256#cloudservices.gserviceaccount.com
does not have storage.buckets.get access to upload-bucket.","reason":"forbidden"}],"message":"205531008256#cloudservices.gserviceaccount.com
does not have storage.buckets.get access to upload-bucket.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/upload-bucket","httpMethod":"GET","suggestion":"Consider
granting permissions to 205531008256#cloudservices.gserviceaccount.com"}}'
The role of 205531008256#cloudservices.gserviceaccount.com is Project Editor by default (which surely has enough permissions?); I've also tried adding Storage Admin and Project Owner, but neither seems to help.
My 2 questions are:
Why is it trying to use this service account?
How can I get Deployment Manager to create a bucket?
Thanks

I ran into the exact same problem. Allow me to restate Andres S's answer more clearly.
When you wrote
resources:
- type: storage.v1.bucket
  name: upload-bucket
  properties:
    project: <project-id>
    name: <unique-bucket-name>
you probably intended to create a bucket called <unique-bucket-name> and figured that upload-bucket would just be a name used to refer to this bucket within Deployment Manager. What GCP actually did was attempt to use upload-bucket as the actual bucket name; as far as I can tell, <unique-bucket-name> is never used. This caused the problem, since someone else already owns the bucket upload-bucket.

Try this code; I think you are specifying the name twice.
resources:
- type: storage.v1.bucket
  name: <unique-bucket-name>
  properties:
    project: <project-id>

I recently ran into a similar issue, where Deployment Manager failed to create the bucket.
I verified that:
the permissions were not an issue, as the same deployment contained another bucket that was created successfully;
the bucket name was not an issue, as I was able to create the bucket manually.
After some googling I found there is another way to create the bucket: instead of type: storage.v1.bucket you can also use type: gcp-types/storage-v1:buckets.
So my final solution was to create the bucket like this:
- name: images-bucket
  type: gcp-types/storage-v1:buckets
  properties:
    name: images-my-project-name
    location: "eu"

Related

CloudFormation submitted information does not contain changes when updating task definition image version

If my CloudFormation script is like this:
myServiceName:
  Type: "AWS::ECS::Service"
  Properties:
    ServiceName: "myServiceName"
    TaskDefinition: !Ref myTaskName
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.1"
And I update the task definition to 1.1.2
Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"
Then trying to run a CloudFormation update command gives me this error:
Submitted information does not contain changes.
Is it just not possible to update the task definition to point to a new image in ECR without changing the service?
All the documentation I've read says that this error comes up when you don't change any properties of your resource, so CloudFormation doesn't see any resource as changed and therefore won't redeploy.
But you are changing a property, and yet it's still happening, which is weird. I haven't been able to find any record of such behavior.
Debugging suggestion: try adding an arbitrary new property to your resource, e.g. a tag field. If the stack then updates successfully, it means the changed Image alone isn't triggering an update for some reason, and the workaround would be to always change something else too. If it still doesn't update, then I suspect something is going wrong elsewhere in your process and you're not actually uploading your changed template at all.
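A minimal sketch of that debugging step, assuming the service from the question (the tag key and value below are arbitrary placeholders you would bump on each attempt):
myServiceName:
  Type: "AWS::ECS::Service"
  Properties:
    ServiceName: "myServiceName"
    TaskDefinition: !Ref myTaskName
    Tags:
      # arbitrary new property: changing this value gives CloudFormation a difference to detect
      - Key: deploy-nonce
        Value: "attempt-2"
If the update goes through with the tag change but not without it, you at least know the template upload itself is working.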
I found the following in the CloudFormation User Guide that may help.
Troubleshooting CloudFormation - No updates to perform
I encountered an issue adding a DeletionPolicy attribute (which is not a property). According to the documentation, adding/changing metadata will cause CloudFormation to accept certain changes.
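Following that documentation note, a hedged sketch of forcing an update via the Metadata attribute on the task definition from the question (the metadata key and value are arbitrary placeholders, and whether this is sufficient depends on the behavior described in the linked guide):
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Metadata:
    # arbitrary entry: bump this value so the submitted template differs from the deployed one
    LastDeployedAt: "2021-01-01"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"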

vault-secrets-provider alias not recognized with docker-kaniko

I'm having some issues when trying to use the Hashicorp Vault template (Kubernetes with Google Kubernetes Engine) with to.be.continuous.
When I use it with the Google Docker Kaniko layer I get an error message: ... wget: bad address 'vault-secrets-provider'.
It seems that Kaniko doesn't recognize the vault-secrets-provider layer. Would you please help me with this? Or perhaps point me to where I can ask for help?
This is a summary of my .gitlab-ci.yml:
# Kubernetes template
- project: 'to-be-continuous/kubernetes'
  ref: '2.0.4'
  file: '/templates/gitlab-ci-k8s.yml'
- project: "to-be-continuous/kubernetes"
  ref: "2.0.4"
  file: "templates/gitlab-ci-k8s-vault.yml"
...
K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
VAULT_BASE_URL: "http://myvault.myserver.com/v1"
Error Message:
[ERROR] Failed getting secret K8S_DEFAULT_KUBE_CONFIG:
... wget: bad address 'vault-secrets-provider'
I tried many times directly without the Vault layer and Kaniko works OK, I mean without Vault secrets.
How can I accomplish this? I tried modifying the Kaniko template, but without success.
I would appreciate any help with this.
To fix your issue, first upgrade the docker template to its latest version (2.3.0 at the time this response was written).
Then, depending on your case, you have 2 options:
Docker needs to handle some of your secrets managed by Vault: then you shall also activate the Vault variant for Docker.
Docker doesn't need to handle any secret managed by Vault: don't use the Vault variant for Docker; you'll get a warning message from Docker not being able to decode the secret (basically the same as the one you had, but not failing the build).
Then simply use it in your .gitlab-ci.yml file:
include:
  # Docker template
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker.yml'
  # Vault variant for Docker (depending on your above case)
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker-vault.yml'
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "/templates/gitlab-ci-k8s-vault.yml"

variables:
  K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"

Set up an AWS API Gateway with Serverless

I built out my dev environment manually; I wanted to focus on logic and skip the learning curve on Serverless, but before deploying to prod I want to standardize and parameterize my stack.
Setting up my DynamoDB tables has been straightforward, but I'm running into snags deploying a new API Gateway.
I've been using AWS CodeBuild to package layers for Lambda functions and an S3 bucket to store my Lambda code.
Let's take my dev-rest-auth API (custom authentication) as an example.
I have several resources for login/out, passwords and registration; most are protected by a custom authorizer (login/logout aren't) and all have CORS policies. I'm using a custom domain, account-api.dev.example.com. I use several DynamoDB tables for housing user data (let's avoid security discussions please; I'm not storing raw passwords and am encrypting using leading industry standards) and temporary codes (password reset & account verification).
To test a Serverless implementation I'd like to build a YAML file that recreates my existing infrastructure, so the first question is: is that possible? Can I parameterize the deployment of an API Gateway with a custom authorizer, a custom domain, and several Lambdas?
The next question is how?
Organizationally I'm breaking out my yml files by resource:
I have several dynamodb yml files that look like this:
Resources:
  UserTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: ${self:custom.resource-prefix}-UserTable-${self:custom.stage}
      AttributeDefinitions:
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: email
          KeyType: HASH
      # Set the capacity to auto-scale
      BillingMode: PAY_PER_REQUEST
This was a much earlier attempt (several months ago, from googling, but I don't remember where I found it or what it does) at standing up an API gateway:
Resources:
  SharedGW:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: SharedGW
Outputs:
  apiGatewayRestApiId:
    Value:
      Ref: SharedGW
    Export:
      Name: SharedGW-restApiId
  apiGatewayRestApiRootResourceId:
    Value:
      Fn::GetAtt:
        - SharedGW
        - RootResourceId
    Export:
      Name: SharedGW-rootResourceId
I pull everything together in a serverless.yml file that references the resource files like this:
...
resources:
  # S3 Bucket
  - ${file(resources/s3/s3-static-host.yml)}
  - ${file(resources/s3/s3-CodeBuildResults.yml)}
  # DynamoDB
  - ${file(resources/dynamodb/dynamodb-mealtable.yml)}
  - ${file(resources/dynamodb/dynamodb-ziptable.yml)}
  - ${file(resources/dynamodb/dynamodb-usertable.yml)}
  - ${file(resources/dynamodb/dynamodb-passwordresettable.yml)}
  - ${file(resources/dynamodb/dynamodb-accountregistrationtable.yml)}
  - ${file(resources/dynamodb/dynamodb-restaurant_table.yml)}
  # DNS Records (Route 53)
  # TODO: Determine why DNS hangs
  # - ${file(resources/route_53/dev_dns.yml)}
  # Gateways
  - ${file(resources/api_gateway/local_rest_auth.yml)}
  # - ${file(resources/api_gateway/rest_auth.yml)}
...
I've seen several examples of connecting a Lambda to a gateway, but it's not clear where the gateway is being created; it's also not clear how the Lambda is being created, or whether I'd be able to reference layers/function code in S3.
I've seen some tutorials for doing this with AWS Amplify via the CLI, but my dream state would be that I could effectively create a new AWS account, deploy this Serverless stack, and have my site up and running automatically, with just a little Route 53 work to point to a new domain.
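For what it's worth, here is a hedged sketch, not a working answer, of how a Serverless Framework functions block can attach Lambdas to the shared gateway exported above, with a custom authorizer and CORS; the handler paths, runtime, and authorizer ARN are placeholders I made up:
provider:
  name: aws
  runtime: nodejs18.x
  apiGateway:
    # reuse the gateway exported by the SharedGW stack instead of creating a new one
    restApiId:
      Fn::ImportValue: SharedGW-restApiId
    restApiRootResourceId:
      Fn::ImportValue: SharedGW-rootResourceId

functions:
  login:
    handler: src/auth/login.handler        # placeholder handler path
    events:
      - http:
          path: auth/login
          method: post
          cors: true                       # public endpoint, no authorizer
  changePassword:
    handler: src/auth/password.handler     # placeholder handler path
    events:
      - http:
          path: auth/password
          method: post
          cors: true
          authorizer:
            # placeholder ARN of the custom authorizer Lambda
            arn: arn:aws:lambda:us-east-1:123456789012:function:customAuthorizer
The custom domain is usually wired up separately (for example with a domain-manager style plugin), so treat this purely as an illustration of the gateway/authorizer/Lambda wiring.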

Create CloudFormation resource multiple times

I've just moved to CloudFormation and am starting by creating ECR repositories for Docker.
I need all repositories to have the same properties except the repository name.
Since this is microservices, I will need at least 40 repos, so I want to create a stack that will create the repos for me in a loop and just change the name.
I started looking at nested stacks and this is what I've got so far:
ecr-root.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: ECR docker repository
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Type: AWS::ECR::Repository::RepositoryName
Resources:
  ECRStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      TimeoutInMinutes: '20'
      Parameters:
        ECRRepositoryName: !GetAtt 'ECRStack.Outputs.ECRRepositoryName'
And ecr-stack.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Default: panpwr-mysql-base
    Type: String
Resources:
  MyRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName:
        Ref: ECRRepositoryName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - "arn:aws:iam::123456789012:user/Bob"
                - "arn:aws:iam::123456789012:user/Alice"
            Action:
              - "ecr:GetDownloadUrlForLayer"
              - "ecr:BatchGetImage"
              - "ecr:BatchCheckLayerAvailability"
              - "ecr:PutImage"
              - "ecr:InitiateLayerUpload"
              - "ecr:UploadLayerPart"
              - "ecr:CompleteLayerUpload"
Outputs:
  RepositoryNameExport:
    Description: RepositoryName for export
    Value:
      Ref: ECRRepositoryName
    Export:
      Name:
        Fn::Sub: "ECRRepositoryName"
Everything is working fine, but when I run the stack it asks me for the repository name I want to give it, and it creates one repository.
I can then create as many stacks as I want, each with a different name, but that is not my purpose.
How do I get it all in one stack that creates as many repositories as I want?
Sounds like you want to loop through a given list of parameters. Looping is not possible in a CloudFormation template. A few things you can try:
You could programmatically generate a template. The troposphere Python library provides a nice abstraction to generate templates.
Write a custom resource backed by AWS Lambda. You can handle your custom logic in the Lambda function.
The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. Use the AWS CDK to write a custom script for your use case.
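For the template-generator options (troposphere or the CDK), the synthesized template ends up being plain repetition of the same resource with a different name; a minimal sketch of that generated output, with placeholder repository names, looks like:
Resources:
  ServiceARepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: service-a   # placeholder name
  ServiceBRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: service-b   # placeholder name
  # ...one such block per microservice, emitted by the generator
This is exactly the repetition you were hoping to avoid writing by hand, which is why pushing it into a generator is the usual answer.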

Concourse: What is the difference between "Resource Types" and "Resource"?

When developing a pipeline I can't understand the difference between "Resource Types" and "Resources".
According to the documentation, a resource type is there only to provide the type of the resource and check for the tags, like in the example below:
---
resource_types:
- name: rss
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest

resources:
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php

jobs:
- name: announce
  plan:
  - get: booklit-releases
    trigger: true
Why do we need both of them? Isn't it enough to just use resources?
I'm also new to this project. Please correct me if I am wrong.
I think of it in terms of containers:
A resource type is an image, and we need to configure the repository and tag in its source so that Concourse can locate/download it.
A resource is a container which is an instance of that image and can be used in jobs when the pipeline is running. The source we configure holds the common parameters which will be passed on stdin to the check, in and out scripts when the resource is used in a get or put step.
I think it's a little similar to the comparison between a class and an object.
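To make the class/object analogy concrete, here is a hedged sketch where one resource_type backs two resource "instances", each with its own source configuration; the second feed is a made-up placeholder:
resource_types:
- name: rss
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest

resources:
# two "instances" of the same rss type, differing only in their source config
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php
- name: another-feed              # hypothetical second feed reusing the same type
  type: rss
  source:
    url: http://example.com/feed.xml
Each resource gets its own check container built from the same rss image, much as two objects share one class definition.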