I'm wondering whether it makes more sense to put all IAM roles and policies into one template in a nested stack to keep them more maintainable, or whether good practice says that's detrimental for some reason and the policies belong in the specific templates where the resources are created. Which way is better? For the sake of good order I would go with the ONE template, since the idea seems reasonable. I would be grateful for shared experiences in this matter.
Thanks, A
We recently templatized our whole AWS infrastructure using CloudFormation, and I would keep the IAM roles and policies closer to the application stack rather than in one template. Let me explain my reasons.
We have a separate AWS account for every TeamEnv.
What is a TeamEnv?
If we have 3 teams e.g. A, B, and C.
And 3 environments e.g. Dev, Staging, and Prod.
Then we have 9 TeamEnvs: A-Dev, A-Staging, A-Prod, and so on for every other team. So, in total, we have 9 AWS accounts. This is done to establish accountability as well as transparency of resources.
And, here's how we did it. We divided the stacks into these categories:
Common AWS Cloudformation Stacks
TeamEnv Specific AWS Cloudformation Stacks
Common AWS Cloudformation Stacks:
These are the stacks that would be common for all teams and their environments:
IAM Sub User Account Stack - This stack creates an IAM Sub Account with Administrator access rights.
Generic VPC Stack - This stack creates the VPC and its components as per the company's standards.
VPC Peering Stack - This stack is to peer VPCs.
VPC Peering Role Stack - This stack creates a VPC role required to peer.
Team Specific Stacks:
ELB Stack - It is dependent on Generic VPC stack and imports the exported values from it, like VPCId.
Service Specific Stack - It is dependent on the Generic VPC stack and the ELB stack and imports various exported values. We have one stack for every microservice, and it consists of everything needed to take the service to a ready state: S3 buckets, SQS queues, the InstanceRole, etc.
That's where we manage the IAM roles and policies; it's easier to manage and audit.
However, in hindsight, I would have kept a separate stack for the IAM policies that are commonly used and referenced by multiple roles, to avoid duplicated inline policies.
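As a sketch of what keeping IAM next to the service looks like in practice (all resource, policy, and bucket names here are hypothetical), a per-service template can declare the role and its inline policy alongside the resources it grants access to:

```yaml
# Hypothetical excerpt from a per-service stack: the instance role and its
# inline policy live next to the resource they grant access to.
Resources:
  ServiceBucket:
    Type: AWS::S3::Bucket
  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: ec2.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: service-bucket-access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: [ "s3:GetObject", "s3:PutObject" ]
                Resource: !Sub "${ServiceBucket.Arn}/*"
```

This way, deleting the service stack also deletes its role and policy, so nothing dangles.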
Reading from the documentation, the suggestion for passing values between CDK Stacks within an app is to simply pass the value.
If the two stacks are in the same AWS CDK app, just pass a reference between the two stacks. For example, save a reference to the resource's construct as an attribute of the defining stack (this.stack.uploadBucket = myBucket), then pass that attribute to the constructor of the stack that needs the resource.
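A minimal sketch of that pattern, assuming CDK v2 (aws-cdk-lib) and hypothetical stack and construct names:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

class ProducerStack extends cdk.Stack {
  // Expose the bucket as an attribute so other stacks can reference it.
  public readonly uploadBucket: s3.Bucket;
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    this.uploadBucket = new s3.Bucket(this, 'UploadBucket');
  }
}

class ConsumerStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, bucket: s3.IBucket) {
    super(scope, id);
    // Referencing bucket.bucketName here makes CDK synthesize an
    // Export in ProducerStack and an Fn::ImportValue here.
    new cdk.CfnOutput(this, 'SeenBucket', { value: bucket.bucketName });
  }
}

const app = new cdk.App();
const producer = new ProducerStack(app, 'Producer');
new ConsumerStack(app, 'Consumer', producer.uploadBucket);
```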
But it seems this only works if the CDK stacks are within one account.
Upon checking the generated templates, CDK generates a stack Output and an Input and uses those to pass the values. And stack outputs and inputs do not work beyond the account they are created in.
What's the recommended way to pass values from stacks deployed in different accounts?
I don't think you can treat this as a single CDK application. A single application is intended to be deployed into a single account. What you are trying to do is use this application construct to deploy two different stacks into two different environments and share data between them. However, you are bound to the same restrictions that CloudFormation itself has when it comes to sharing data from services that have been deployed in a stack. So you'll have to work around this issue.
So I don't think there is any recommended way of doing this, but maybe you can create some cross-account roles that allow writing/reading from the SSM parameter store and combine this with custom resource lambdas to read/write the data from/in the SSM store of the other account. Given this, it might be easier to just write some CICD tooling that does this without needing any AWS services and which just passes on the value from the output of one stack to the input of the other stack during deployment.
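The CI-tooling approach can be sketched as follows, assuming two named AWS CLI profiles and hypothetical stack, output, and parameter names (this requires credentials for both accounts, so it is shown as a deployment fragment, not something runnable as-is):

```shell
# Read an output from the producer stack in account A...
BUCKET_NAME=$(aws cloudformation describe-stacks \
  --profile account-a --stack-name producer-stack \
  --query "Stacks[0].Outputs[?OutputKey=='UploadBucketName'].OutputValue" \
  --output text)

# ...and feed it as a parameter to the consumer stack in account B.
aws cloudformation deploy \
  --profile account-b --stack-name consumer-stack \
  --template-file consumer.yaml \
  --parameter-overrides UploadBucketName="$BUCKET_NAME"
```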
Using nested stacks is a best practice in AWS CloudFormation, and indeed they solve many problems (code reuse, stack limits, etc).
It's also generally a good idea to do any sort of update with the minimal access necessary for that update (using the RoleARN of the UpdateStack command). I can't seem to find any documentation on exactly what IAM access is necessary to update a stack that has nested stacks.
As described here, a stack update will always get the template for the nested stack again.
In addition to any rights necessary for the resources that are to be changed, s3:GetObject (or s3:GetObjectVersion if a versioned URL is used) is necessary for the S3 location where the nested stack's template is hosted.
In addition (and I'm not sure why), iam:GetRole is necessary for the role to inspect itself (so the Resource should be the ARN of the role itself).
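Put together, a minimal policy for such an update role might look like the following sketch (the bucket name, account ID, and role name are placeholders, and you still need permissions for whatever resources the update actually changes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-templates-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:GetRole"],
      "Resource": "arn:aws:iam::123456789012:role/cfn-update-role"
    }
  ]
}
```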
I am currently deciding between using Terraform and CloudFormation.
There is a question I haven't seen answered yet (or maybe I just haven't found it).
In Terraform, you give a precise name to everything, and a destroy will delete the targets with those names.
But what about CFN? If we already have an architecture in place and I want to add/delete an instance using CFN, how will this work? How will it know which one to target?
I hope this question makes sense! I've already used terraform, but never before CloudFormation.
CloudFormation uses two mechanisms to identify its resources. The stack keeps a list of the resources it created, referencing each by its actual physical ID rather than a pretty name, and CFN also tags the resources (those that support tags) with the stack ID.
CFN cannot be used to delete resources that belong to a different stack; only the stack that created them can manage them. Terraform, by contrast, allows you to import resources created by anything else into a new stack, where they will then be managed.
I used CFN for a year before converting to Terraform (also for a year now) and I'll never go back to CFN. Terraform offers many advantages over CFN that make CFN really hard to use now: features such as plan before apply, reusable modules, resource imports, granular output (CFN is mostly a black box), and generally faster AWS feature support (APIs are usually released on launch day, and Terraform support follows soon after, usually faster than CFN but not always).
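As an illustration of the import workflow mentioned above (the resource address and instance ID here are placeholders, and the resource must already be declared in your configuration):

```shell
# Bring an EC2 instance that was created outside Terraform under management.
terraform import aws_instance.web i-0123456789abcdef0

# Preview what Terraform would change before applying anything.
terraform plan
```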
I want to give a service account read-only access to every bucket in my project. What is the best practice for doing this?
The answers here suggest one of:
creating a custom IAM policy
assigning the Legacy Bucket Viewer role on each bucket
using ACLs to allow bucket.get access
None of these seem ideal to me because:
Giving read-only access seems too common a need to require a custom policy
Putting "Legacy" in the name makes it seem like this permission will be retired relatively soon and any new buckets will require modification
Google recommends IAM over ACL and any new buckets will require modification
Is there some way to avoid the bucket.get requirement and still access objects in the bucket? Or is there another method for providing access that I don't know about?
The closest pre-built role is Object Viewer. This allows listing and reading objects. It doesn't include storage.buckets.get permission, but this is not commonly needed - messing with bucket metadata is really an administrative function. It also doesn't include storage.buckets.list which is a bit more commonly needed but is still not part of normal usage patterns for GCS - generally when designing an app you have a fixed number of buckets for specific purposes, so listing is not useful.
If you really do want to give a service account bucket list and get permission, you will have to create a custom role on the project. This is pretty easy, you can do it with:
gcloud iam roles create StorageViewerLister --project=$YOUR_PROJECT --permissions=storage.objects.get,storage.objects.list,storage.buckets.get,storage.buckets.list
gcloud projects add-iam-policy-binding $YOUR_PROJECT --member=serviceAccount:$YOUR_SERVICE_ACCOUNT --role=projects/$YOUR_PROJECT/roles/StorageViewerLister
I am working to learn AWS Cloud Formation. The AWS Getting Started doc provides examples, but I would like to know whether a stack on AWS can be maintained by more than 1 template. At my company we have existing stacks. Can I use a template to create additional resources in an existing stack?
Most of the time: 1 template == 1 stack.
If you're using nested stacks, then you can have 1 template for multiple stacks.
But having multiple templates describing the resources in one stack isn't possible (nor logical, if you ask me). If you want to create additional resources in an existing stack, just modify the template and use the "update stack" functionality. (If you can't find the template that was used to create the stack, you can fetch it from the console by selecting the stack, or via the GetTemplate API.)
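For example, with the AWS CLI (the stack name is a placeholder, and this sketch assumes the stack was created from a JSON template; YAML templates come back as a plain string):

```shell
# Fetch the template that created the existing stack.
aws cloudformation get-template --stack-name my-stack \
  --query TemplateBody --output json > template.json

# ...add the new resources to template.json, then update the stack in place:
aws cloudformation update-stack --stack-name my-stack \
  --template-body file://template.json
```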