Is there a way to prevent a CloudFormation update from deleting or recreating a Cognito user pool resource? I'd like to remove the possibility of this ever happening.
I found the answer: use DeletionPolicy. It works for any CloudFormation resource:
MyPool:
  Type: AWS::Cognito::UserPool
  DeletionPolicy: Retain
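DeletionPolicy covers the deletion case. For the "recreating" part of the question, CloudFormation also has a separate UpdateReplacePolicy attribute, which keeps the old resource if an update ever forces a replacement. A minimal sketch combining the two (same logical ID as above):
MyPool:
  Type: AWS::Cognito::UserPool
  DeletionPolicy: Retain        # keep the pool on stack deletion or removal from the template
  UpdateReplacePolicy: Retain   # keep the old pool if an update would replace it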
I'm putting together a role/policy for running CloudFormation/SAM, to limit access as much as I can. Is there a general set of policy actions that should be used to run create-stack?
This is for a CodeBuild project which I'm using to create infrastructure from a CloudFormation template at runtime of my application.
At the moment I've got a policy which allows full access, because it needs to create the infrastructure within the stack.
But there is only a subset of actions which CloudFormation can actually perform, so it doesn't need full access. For example, CloudFormation can't put items into a DynamoDB table.
So this led me to think that maybe there's a basic role/policy that is limited to only the actions which CloudFormation is able to perform.
If you're assigning a role to a service (such as CodePipeline or CodeBuild) to deploy a stack, you need to grant not only the necessary CloudFormation permissions (such as cloudformation:CreateStack or cloudformation:ExecuteChangeSet) but also the permissions needed to create the resources in the CloudFormation stack itself.
When you deploy a stack manually, CloudFormation uses your user's permissions to verify access to the services you are deploying/updating. When you initiate the action from another AWS service, the same thing happens, but with the permissions of the service role (unless you specifically assign a role to the CloudFormation stack itself; see the documentation).
Keep in mind, if you're constructing such a role, that CloudFormation might need more permissions than you think, such as extra read permissions, permissions to add tags to resources, and permissions to delete and/or update those resources when you delete or update the stack.
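As a rough starting point, here is a minimal sketch of such a role defined in CloudFormation itself. The action lists are assumptions for illustration (a CodeBuild project deploying a stack that only manages a DynamoDB table); adjust the second statement and the Resource ARNs to whatever your template actually creates.
DeployRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: codebuild.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: cfn-deploy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            # Permissions for driving CloudFormation itself
            - Effect: Allow
              Action:
                - cloudformation:CreateStack
                - cloudformation:UpdateStack
                - cloudformation:DeleteStack
                - cloudformation:DescribeStacks
                - cloudformation:CreateChangeSet
                - cloudformation:ExecuteChangeSet
                - cloudformation:DescribeChangeSet
              Resource: "*"
            # Permissions for the resources the template creates/updates/deletes
            - Effect: Allow
              Action:
                - dynamodb:CreateTable
                - dynamodb:UpdateTable
                - dynamodb:DeleteTable
                - dynamodb:DescribeTable
                - dynamodb:TagResource
              Resource: "*"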
When I try to delete the Cloud Object Storage service in my Cloud Pak for Data as a Service account, I get this message:
An unexpected error occurred while attempting to delete service instance: Unexpected response code: 500 (Instance is in pending_reclamation state. Use API to reclaim the instance.)
What does this error mean? What should I do to really delete this service?
According to https://cloud.ibm.com/docs/hyper-protect-dbaas-for-mongodb?topic=hyper-protect-dbaas-for-mongodb-what-new#dec-2020:
When you delete a service instance, it's disabled (pending reclamation) rather than deleted completely. You can restore a deleted service instance with no data loss within the retention period of seven days. You can also choose to permanently delete it. See Deleting service instances.
This seems to be true for all IBM Cloud services, not just MongoDB. Using the command line interface, I was able to actually delete the Cloud Object Storage service I had.
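For reference, this is roughly how the CLI route looks with the IBM Cloud CLI; treat the exact subcommands as a sketch, since they may differ between CLI versions:
# List pending reclamations to find the ID of the "deleted" (pending_reclamation) instance
ibmcloud resource reclamations

# Permanently delete (reclaim) the instance, using the reclamation ID from the list above
ibmcloud resource reclamation-delete RECLAMATION_ID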
I work in a company that's using Serverless to build cloud-native applications and services. Today we use DynamoDB and SQL Databases with AWS Aurora.
We want to go with DocumentDB for our next application, but we could not find anything about Serverless and AWS DocumentDB. Does Serverless support AWS DocumentDB? If not, are there any plans to support it in the future?
Serverless supports any AWS resources that you can define using CloudFormation. As per the Serverless docs here:
Define your AWS resources in a property titled resources. What goes in
this property is raw CloudFormation template syntax, in YAML...
The YAML for creating a DocumentDB cluster is going to look something like:
resources:
  Resources:
    DBCluster:
      Type: "AWS::DocDB::DBCluster"
      DeletionPolicy: Delete
      Properties:
        DBClusterIdentifier: "MyCluster"
        MasterUsername: "MasterUser"
        MasterUserPassword: "Password1234!"
    DBInstance:
      Type: "AWS::DocDB::DBInstance"
      Properties:
        DBClusterIdentifier: "MyCluster"
        DBInstanceIdentifier: "MyInstance"
        DBInstanceClass: "db.r4.large"
      DependsOn: DBCluster
You can find the other CloudFormation resources that you can define in the resources parameter of your Serverless.yaml here.
DocumentDB is not a serverless service. You need to provision and manage the backend instances to use it.
Please refer to this blog: https://blogs.itemis.com/en/serverless-services-on-aws; you can see it is not in the list of "SERVERLESS SERVICES ON AWS".
No, DocumentDB won't work for serverless. If you really want a serverless database, you can go with DynamoDB. The differences are summarized below, with a minimal DynamoDB example after the comparison.
DocumentDB
MongoDB is supported, which makes it easy to learn and to migrate existing MongoDB workloads.
Stored procedures are needed; data retrieval and aggregation are done with their help.
Document size is limited to 16 MB, and storage scales up to 64 TB of data.
Daily backups are managed by the service itself and can be restored whenever required.
It is comparatively costly: you pay around $200/month even if you run only a few database instances or use them for only a few hours.
AWS is not involved in storing user credentials; they are stored directly in the database.
Available only in specific regions.
Can be easily migrated out of AWS into any MongoDB deployment.
In case of primary node failure, the service promotes a read replica to primary. Multi-AZ has to be configured by the user. Backups can be copied across regions.
DynamoDB
MongoDB is not directly supported, and it is not easy to migrate from MongoDB to DynamoDB.
Stored procedures are not needed, which makes things simpler for users.
There is no fixed limit on storage; it scales up to whatever the user requires.
Automatic daily backups are not available; backups have to be triggered explicitly by the user, and can be restored whenever needed.
There is an initial cost associated with it, but the overall cost is lower. On-demand pricing is available, where light usage can come to as little as about $1/month, and 25 GB of storage is provided in the free tier.
AWS controls access to the database through Identity and Access Management (IAM), where authentication and authorization are enforced at a fine-grained level.
Available in all regions.
Cannot be easily migrated out of AWS into MongoDB; you need to write code to transform the data.
Supports global tables, which protect users against regional failure. Data is automatically replicated across multiple AZs within a single region.
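For completeness, here is the minimal DynamoDB sketch mentioned above, declared in the same serverless.yml resources block as the earlier DocumentDB example. The table name and key schema are assumptions:
resources:
  Resources:
    MyTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: "MyTable"             # assumed name
        BillingMode: PAY_PER_REQUEST     # the on-demand pricing mentioned above
        AttributeDefinitions:
          - AttributeName: "id"
            AttributeType: "S"
        KeySchema:
          - AttributeName: "id"
            KeyType: "HASH"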
We are using Aurora RDS that's provisioned and configured via CloudFormation. This is the relevant snippet from the template:
Type: AWS::RDS::DBCluster
Properties:
  DeletionProtection: true
  Engine: aurora
  EngineVersion: 5.6.1
Now we want to upgrade the engine version to, say, 5.6.2. The docs say that an update to EngineVersion requires replacement, which means wiping out all data. Is there a way to safely update the version?
Use the aws rds modify-db-cluster API in this case instead of depending on CloudFormation's stack modification. Resource modifications that are marked as "requires replacement" depend on the service to handle the replacement. It might be worth trying that on a test cluster, just to see how Aurora's CloudFormation support handles it. But if this is a one-time fix-up, then I highly recommend against it. Just use the CLI, fix your cluster, and then fix your CFN templates. Hope this helps!
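A rough sketch of that CLI route (the cluster identifier and target version are placeholders; check which upgrade targets are valid for your cluster first):
# Upgrade the engine version directly on the cluster, outside of CloudFormation
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine-version TARGET_VERSION \
    --apply-immediately
Afterwards, change EngineVersion in the template to match, so the template reflects the real state of the cluster.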
When updating a CloudFormation EC2 Container Service (ECS) stack with a new container image, is there any way to control the timeout so that, if the service does not stabilize, it rolls back automatically?
The UpdatePolicy attribute which is part of the Auto Scaling group does not help, since no instances are being created.
I also tried a WaitCondition but have not been able to get that to work.
The stack essentially just stays in the UPDATE_IN_PROGRESS state until it hits the default timeout (~3 hours), or you cancel the update.
Ideally we would be able to have the stack timeout after a short period of time.
This is what my Cloudformation template looks like:
https://s3.amazonaws.com/aws-rga-cw-public/ops/cfn/ecs-cluster-asg-elb-cfn.yaml
Thanks.
I've created a workaround for this problem until AWS creates an ECS UpdatePolicy and CreationPolicy that allows for resource signaling:
Use AWS::CloudFormation::WaitCondition with a Macro that will create new WaitCondition resources when the service is expected to update. Signal the wait condition with a non-essential container attached to the task.
Example: https://github.com/deuscapturus/cloudformation-macro-WaitConditionUpdate/blob/master/example-ecs-service.yaml
The Macro for the above example can be found here: https://github.com/deuscapturus/cloudformation-macro-WaitConditionUpdate
My workaround for this problem is to run a script in the background before triggering the stack update:
./deployment-breaker.sh &
And for the script
#!/bin/bash
sleep 600
deploymentStatus=$(aws cloudformation describe-stacks --stack-name STACK_NAME | jq XXX)
if [[ "$deploymentStatus" == YOUR_TERMINATE_CONDITION ]]; then
    aws cloudformation cancel-update-stack --stack-name STACK_NAME
fi
If your WaitCondition is in the original create, you need to rename it (and the Handle). Once a WaitCondition has been signaled as complete, it will always be complete. If you rename it and do an update, the original WaitCondition and Handle will be dropped and the new ones created and signaled.
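A minimal sketch of what the renamed pair might look like after such an update (the V2 suffix, the timeout, and the count are just assumptions):
WaitHandleV2:
  Type: AWS::CloudFormation::WaitConditionHandle
WaitConditionV2:
  Type: AWS::CloudFormation::WaitCondition
  Properties:
    Handle: !Ref WaitHandleV2
    Timeout: "600"   # seconds to wait for the signal
    Count: 1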
If you don't want to have to modify your template, you might be able to use Lambda and custom resources to create a unique WaitCondition via the AWS CLI for each update.
It's not possible at the moment with the provided CloudFormation types. I have the same problem, and I might create a custom CloudFormation resource (using AWS Lambda) to replace my AWS::ECS::Service.
The other alternative is to use nested stacks to wrap the AWS::ECS::Service resources. It won't solve the problem, but it at least isolates the individual service, so the rest of the stack stays in a good state. My stacks have multiple services and this would help, but the custom resource is the best option so far (I know other people that did the same thing).