I'd like to create a CloudFormation stack with resources in multiple regions. Is this possible? - aws-cloudformation

Is it possible to create a single Amazon CloudFormation stack template that instantiates an AWS::EC2::Instance in ap-southeast-1 and another AWS::EC2::Instance in us-west-2 for example?
I suspect not, but I've not yet found a definitive yes/no saying that stacks can't have resources spanning multiple regions.

The accepted answer is out of date. It is now possible to create stacks across accounts and regions using CloudFormation StackSets.
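For illustration, here is a minimal sketch of driving StackSets from code with the AWS SDK for JavaScript v3 (the stack-set name and account ID are placeholders; `templateJson` is assumed to hold your template as a string):

```ts
import {
  CloudFormationClient,
  CreateStackSetCommand,
  CreateStackInstancesCommand,
} from "@aws-sdk/client-cloudformation";

async function deployMultiRegion(templateJson: string) {
  // StackSets are administered from one region but deploy anywhere.
  const cfn = new CloudFormationClient({ region: "us-east-1" });

  // 1. Register the (region-agnostic) template as a stack set.
  await cfn.send(new CreateStackSetCommand({
    StackSetName: "my-multi-region-stack", // placeholder name
    TemplateBody: templateJson,
  }));

  // 2. Fan it out to concrete account/region pairs.
  await cfn.send(new CreateStackInstancesCommand({
    StackSetName: "my-multi-region-stack",
    Accounts: ["123456789012"],            // placeholder account ID
    Regions: ["ap-southeast-1", "us-west-2"],
  }));
}
```

Each stack instance is still a separate per-region stack under the hood, but they are created, updated, and deleted together.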

A very good question, but I don't think you can create resources spread across multiple regions.
The endpoint URL for CloudFormation is region-based, and AFAIK there is nowhere you can specify region-specific information for a different region.
As of today you can compose a CloudFormation template in such a way that it is region-independent, by leveraging the Mappings section together with the AWS::Region pseudo parameter; but making a single template span multiple regions simultaneously isn't possible, though it may be supported down the line.
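As a sketch of that region-independent pattern (the AMI IDs are placeholders), a template can look up a per-region value with Fn::FindInMap keyed on the AWS::Region pseudo parameter:

```json
"Mappings": {
  "RegionAmiMap": {
    "ap-southeast-1": { "Ami": "ami-11111111" },
    "us-west-2":      { "Ami": "ami-22222222" }
  }
},
"Resources": {
  "MyInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": { "Fn::FindInMap": ["RegionAmiMap", { "Ref": "AWS::Region" }, "Ami"] },
      "InstanceType": "t3.micro"
    }
  }
}
```

The same template then works in any mapped region, but each deployment still creates resources only in the region where the stack itself runs.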

Your best bet right now would be to use a CloudFormation Custom Resource that invokes a Lambda function to create the resources that live in other regions. When you run the CFN template, it invokes the Lambda function, where you write code (Python, Node.js, or Java) that leverages the AWS SDKs to create the resources you need. CFN Custom Resources allow you to pass parameters to the function and get "outputs" back from it, so from a CFN perspective you can treat it just like any other resource.
Here's a walkthrough example from the AWS docs: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html
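A minimal sketch of such a handler in Node.js/TypeScript with the AWS SDK v3, assuming the target region and AMI arrive via the custom resource's Properties (hypothetical property names); a real handler also needs Update/Delete logic and proper error reporting:

```ts
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";
import * as https from "https";

export const handler = async (event: any) => {
  let status = "SUCCESS";
  let physicalId = event.PhysicalResourceId ?? "none";
  try {
    if (event.RequestType === "Create") {
      // The target region comes from the resource's Properties,
      // so the Lambda can act outside its own region.
      const ec2 = new EC2Client({ region: event.ResourceProperties.TargetRegion });
      const res = await ec2.send(new RunInstancesCommand({
        ImageId: event.ResourceProperties.ImageId, // placeholder property
        InstanceType: "t3.micro",
        MinCount: 1,
        MaxCount: 1,
      }));
      physicalId = res.Instances![0].InstanceId!;
    }
    // Update/Delete handling omitted in this sketch.
  } catch (err) {
    status = "FAILED";
  }
  // Signal CloudFormation via the pre-signed ResponseURL.
  const body = JSON.stringify({
    Status: status,
    PhysicalResourceId: physicalId,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  });
  await new Promise((resolve, reject) => {
    const req = https.request(event.ResponseURL, {
      method: "PUT",
      headers: { "Content-Length": Buffer.byteLength(body) },
    }, resolve);
    req.on("error", reject);
    req.write(body);
    req.end();
  });
};
```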

You can have a Lambda function create a resource in another region, and even have it launch another stack in that other region.
To make your life easier, you can use the cli2cloudformation Lambda (https://github.com/lucioveloso/cli2cloudformation).
Using it, you can execute CLI commands inside your Lambda, and that way you specify the --region in the command.
It's also interesting because you can set a separate command for when your stack is created, updated, and deleted, as in the snippet below.
"myCustomResource": {
"Type": "Custom::LocationConstraint",
"Properties": {
"ServiceToken": "arn:aws:lambda:eu-west-1:432811670411:function:cli2cfn_proxy2",
"CliCommandCreate": "s3api get-bucket-location --bucket my-test-bucket --region eu-west-1",
"CliCommandUpdate": "",
"CliCommandDelete": ""
}
},

Related

Dynamically generate CloudFormation resources using CDK

I'm trying to dynamically generate SNS subscriptions in CDK based on what we have in mappings. What's the best way to do this here? I have a mapping that essentially maps the SNS topic ARNs my queue wants to subscribe to in each region/stage. The mapping looks something like this:
"Mappings":
"SomeArnMap":
"eu-west-1":
"beta":
- "arn:aws:sns:us-west-2:0123456789:topic1"
"gamma":
- "arn:aws:sns:us-west-2:0123456789:topic2"
- "arn:aws:sns:us-west-2:0123456789:topic3"
How do I write CDK code that creates a subscription for each element in the list here? I can't get a regular loop to work because we don't know the size of the list until deployment. After cdk synth, it just gives me tokens like #{Token[TOKEN.264]} for my topic ARN.
Is it even doable in CDK/CloudFormation? Thanks.
Since tokens aren't resolved during the runtime of aws-cdk code, you usually have to use CFN intrinsic functions, which declare some operation on the token in your template. These are accessible via the Fn class in @aws-cdk/core. However, CFN doesn't have intrinsics for looping over values, only for selecting values from a list/map.
If your CDK app has these mappings in its output template and you just want to extract a value for reference when building another construct, Fn.findInMap should, I believe, do that:
const importedTopic = sns.Topic.fromTopicArn(this, "ImportedTopicId",
  Fn.findInMap("SomeArnMap", "eu-west-1", "beta"));
importedTopic.addSubscription(new subscriptions.SqsSubscription(someQueue));
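That said, if the list of ARNs can be known when you synthesize (rather than only at deploy time), a common workaround is to keep the data in plain TypeScript instead of a CFN Mappings section, so a regular loop works. A minimal sketch, with placeholder ARNs copied from the question and a hypothetical stage parameter:

```ts
import * as cdk from "@aws-cdk/core";
import * as sns from "@aws-cdk/aws-sns";
import * as sqs from "@aws-cdk/aws-sqs";
import * as subscriptions from "@aws-cdk/aws-sns-subscriptions";

// Plain data, fully resolvable at synth time.
const topicArnsByStage: Record<string, string[]> = {
  beta: ["arn:aws:sns:us-west-2:0123456789:topic1"],
  gamma: [
    "arn:aws:sns:us-west-2:0123456789:topic2",
    "arn:aws:sns:us-west-2:0123456789:topic3",
  ],
};

class SubscriptionsStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, stage: string) {
    super(scope, id);
    const queue = new sqs.Queue(this, "MyQueue");
    // A regular loop works here because nothing is a deploy-time token.
    topicArnsByStage[stage].forEach((arn, i) => {
      const topic = sns.Topic.fromTopicArn(this, `ImportedTopic${i}`, arn);
      topic.addSubscription(new subscriptions.SqsSubscription(queue));
    });
  }
}
```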

block a rundeck node from arbitrary cloud and non-cloud resource discovery?

Is there a way to block arbitrary nodes from being reported/discovered in Rundeck? With all the sources feeding in (GCP plugin, resources.xml, etc.), I often find that a job targeting "all" nodes goes red because an individual instance isn't yet configured, turning the whole job's status red.
It would be great if there were an easy way, from the GUI or CLI, to block all resources for a given node.
You can use custom node-filter rules based on node status via the health-check status (you can also filter by name, tags, IP address, regex, etc.). Take a look at this (the "Saving filters" section has a good example).
Put .hostnamepattern. in the job's exclude filter and hit Save.
Simplify, simplify, simplify.

Automatically rotate AppSync API key

CloudFormation provides an AWS::AppSync::ApiKey resource type for creating an AppSync API key in a CloudFormation stack. The API key will expire. Is there a simple way to define a rotation schedule within CloudFormation? I don't see anything, but it seems like such an obvious use case that I'm not sure what good the AWS::AppSync::ApiKey resource type is without it.
Currently I have a lambda that runs on a schedule to generate a new key and store it in SecretsManager. This works, but it's an extra step, and I have to run the lambda manually the first time. I am open to alternatives.
You don't want to create an AWS::AppSync::ApiKey. Instead, make an AWS::SecretsManager::Secret and an AWS::SecretsManager::RotationSchedule. The RotationSchedule lets you use a Lambda to automatically rotate the API key and store it in the Secret.
Ultimately, the AWS::AppSync::ApiKey is of little practical use for you because you will need to deal with the expiration.
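A minimal template sketch of that wiring, assuming a hypothetical RotationFunction Lambda (not shown) that calls the AppSync CreateApiKey/UpdateApiKey APIs and writes the result to the secret:

```json
"ApiKeySecret": {
  "Type": "AWS::SecretsManager::Secret",
  "Properties": { "Name": "appsync/api-key" }
},
"ApiKeyRotationSchedule": {
  "Type": "AWS::SecretsManager::RotationSchedule",
  "Properties": {
    "SecretId": { "Ref": "ApiKeySecret" },
    "RotationLambdaARN": { "Fn::GetAtt": ["RotationFunction", "Arn"] },
    "RotationRules": { "AutomaticallyAfterDays": 30 }
  }
}
```

You also need an AWS::Lambda::Permission allowing secretsmanager.amazonaws.com to invoke the function. Secrets Manager should then run the rotation Lambda when the schedule is first configured and again on the given cadence, which removes the manual first run.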

Is it possible to copyObject from one Cloud Object Storage instance to another when the buckets are in different regions?

I would like to use the Node SDK to implement a backup and restore mechanism between two instances of Cloud Object Storage. I have added a service ID to the instances and granted it permissions to access the buckets in the instance I want to write to. The buckets are in different regions. I have tried a variety of endpoints, both legacy and non-legacy, private and public, but I usually get Access Denied.
Is what I am trying to do possible with the SDK? If so, can someone point me in the right direction?
var config = {
    "apiKeyId": "xxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxx",
    "endpoint": "s3.eu-gb.objectstorage.softlayer.net",
    "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxx:xxxxxxxxxxx::",
    "iam_apikey_name": "auto-generated-apikey-xxxxxxxxxxxxxxxxxxxxxx",
    "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
    "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/0xxxxxxxxxxxxxxxxxxxx::serviceid:ServiceIdxxxxxxxxxxxxxxxxxxxxxx",
    "serviceInstanceId": "crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxx::",
    "ibmAuthEndpoint": "iam.cloud.ibm.com/oidc/token"
};
This should work as long as you properly grant the requesting user access to read the source of the put-copy, and as long as you are not using Key Protect based keys.
So the breakdown here is a bit confusing due to some unintuitive terminology.
A service instance is a collection of buckets. The primary reason for having multiple instances of COS is to have more granularity in your billing, as you'll get a separate line item for each instance. The term is a bit misleading, however, because COS is a true multi-tenant system - you aren't actually provisioning an instance of COS, you're provisioning a sort of sub-account within the existing system.
A bucket is used to segment your data into different storage locations or storage classes. Other behavior, like CORS, archiving, or retention, acts at the bucket level as well. You don't want to segment something that you expect to scale (like customer data) across separate buckets, as there's a limit of ~1k buckets in an instance. IBM Cloud IAM treats buckets as 'resources' that are subject to IAM policies.
Instead, data that doesn't need to be segregated by location or class, and that you expect to be subject to the same CORS, lifecycle, retention, or IAM policies, can be separated by prefix. This means a bunch of similar objects share a path, like foo/bar and foo/bas, which share the prefix foo/. This helps with listing and organization but doesn't provide granular access control or any other sort of policy-esque functionality.
Now, to your question, the answer is both yes and no. If the buckets are in the same instance then no problem. Bucket names are unique, so as long as there isn't any secondary managed encryption (eg Key Protect) there's no problem copying across buckets, even if they span regions. Keep in mind, however, that large objects will take time to copy, and COS's strong consistency might lead to situations where the operation may not return a response until it's completed. Copying across instances is not currently supported.
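For the supported same-instance case, a minimal sketch with the ibm-cos-sdk Node package (the bucket and key names are placeholders; config is the credentials object from the question, with the endpoint pointing at the destination bucket's region):

```ts
const COS = require("ibm-cos-sdk");

const cos = new COS.S3(config); // the config object defined above

// Server-side copy: the source is addressed as "bucket/key".
// Bucket names are unique across COS, so this works even when the
// buckets are in different regions of the same instance, provided
// no Key Protect encryption is involved.
cos.copyObject({
  Bucket: "my-destination-bucket",        // placeholder
  CopySource: "my-source-bucket/foo/bar", // placeholder
  Key: "foo/bar",
}).promise()
  .then(() => console.log("copied"))
  .catch((err) => console.error("copy failed", err));
```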

Create a custom KMS key in a CloudFormation template for a different region

Is there any way to generate a custom KMS key via a CloudFormation template in a region other than the one the template is run in?
Thanks, A
Short answer:
No, not directly.
Long answer:
It can actually be done in one of two ways. First, using StackSets, you can create a single template that will be deployed in selected accounts (just one in this case) and regions.
The second way to achieve your goal is to use a Custom Resource to create your KMS keys in other regions. This custom resource will invoke a Lambda function to handle the lifecycle of your KMS keys. Within this Lambda you will have to call the appropriate APIs to create/update/delete the KMS keys in the desired region.
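Inside such a Lambda, the key point is that the KMS client, unlike the CloudFormation stack itself, can be pinned to any region. A sketch with the AWS SDK v3 (the target region and deletion window are placeholders; the CloudFormation response plumbing is the same as in the custom-resource example earlier):

```ts
import { KMSClient, CreateKeyCommand, ScheduleKeyDeletionCommand } from "@aws-sdk/client-kms";

// The region would come from the custom resource's Properties,
// not from the region the stack runs in.
const kms = new KMSClient({ region: "us-west-2" }); // placeholder target region

export async function createKey(): Promise<string> {
  const created = await kms.send(new CreateKeyCommand({
    Description: "Key managed by a cross-region custom resource",
  }));
  return created.KeyMetadata!.KeyId!;
}

export async function deleteKey(keyId: string): Promise<void> {
  // KMS keys can't be removed immediately, only scheduled for deletion.
  await kms.send(new ScheduleKeyDeletionCommand({
    KeyId: keyId,
    PendingWindowInDays: 7, // minimum allowed window
  }));
}
```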