I want to convert existing Terraform templates (HCL) to AWS CloudFormation templates (JSON/YAML).
I basically want to find security issues with these templates through cfn-nag.
An approach I have already tried was converting the HCL to JSON and then passing the result to cfn-nag, but that failed since the two formats have different structures.
Can anyone please provide any suggestions here?
A rather convoluted way of achieving this is to use Terraform to stand up actual AWS environments, and then to use AWS's CloudFormer to extract CloudFormation templates (JSON or YAML) from what Terraform has built, at which point you can run cfn-nag.
CloudFormer has some limitations, in that not all AWS resources are currently supported (RDS Security Groups, for example), but it will get you all the basic AWS resources.
Don't forget to remove all the environments, including CloudFormer's, to minimise the cost.
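The end-to-end flow would look roughly like this (a sketch only: CloudFormer runs as a web-console stack, so the extraction step is manual, and the template path below is a placeholder):

# 1. Stand up the real environment from your HCL
terraform apply

# 2. Extract a CloudFormation template via the CloudFormer console,
#    save it locally, then scan it with cfn-nag
cfn_nag_scan --input-path ./extracted-template.json

# 3. Tear everything down again to avoid ongoing cost
terraform destroy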
You want to use static code analysis to find security issues in your Terraform setup.
Converting Terraform to CloudFormation and then running cfn-nag is one way. However, there are now tools that operate directly on the Terraform setup.
I would recommend taking a look at terrascan. It is built on terraform_validate.
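The terrascan CLI has changed across versions, so treat this as a sketch; with recent versions a scan of a Terraform directory (placeholder name below) looks roughly like:

# Scan a directory of Terraform code for security issues
terrascan scan -i terraform -d ./my-terraform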
https://github.com/bridgecrewio/checkov/ runs security scans on both Terraform and CloudFormation templates.
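A minimal invocation, assuming checkov is installed via pip (the directory and file names are placeholders):

pip install checkov

# Scan a directory of Terraform code
checkov -d ./my-terraform

# Or scan a single CloudFormation template
checkov -f ./template.yaml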
Related
I'm working with a customer that wants all of the IaC in CloudFormation, and I'm not good at CloudFormation. I wonder whether it is common practice to create the IaC using the AWS CDK, generate the CloudFormation equivalent, and use that?
Is this something that is commonly done?
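For context, the "generate the CloudFormation equivalent" step is what cdk synth does; a minimal sketch (the stack and output file names are placeholders):

# Emit the CloudFormation template generated from a CDK stack
npx cdk synth MyStack > my-stack.template.yaml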
Recently I looked at the AWS Proton service and tried a hands-on exercise, but unfortunately I was not able to get it working.
What I am not able to understand is what advantage Proton gives me, because I can already build the end-to-end pipeline using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation.
It would be great if someone could jot down the use cases where Proton is a better fit than the components I listed above.
From what I understand, AWS Proton is similar to AWS Service Catalog in that it allows administrators to prepare CloudFormation (CFN) templates which developers/users can provision when they need them. The difference is that AWS Service Catalog is geared towards general users, e.g. those who just want to start an instance pre-configured by administrators, or provision entire infrastructures from a set of approved architectures (e.g. instance + RDS + Lambda functions). In contrast, AWS Proton is geared towards developers, so that they can provision for themselves the entire architectures they need for development, such as CI/CD pipelines.
In both cases, CFN is the primary way in which these architectures are defined and provisioned. You can think of AWS Service Catalog and AWS Proton as high-level services, and CFN as the low-level service used as a building block for the other two.
because I can already build the end-to-end pipeline using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation
Yes, in both cases (AWS Service Catalog and AWS Proton) you can do all of that. But not everyone wants to. Many AWS users and developers do not have the time and/or interest to define all the solutions they need in CFN; it is time-consuming and requires experience. Also, it's not a good security practice to allow everyone in your account to provision anything they need without any constraints.
AWS Service Catalog and AWS Proton solve these issues: you can pre-define a set of CFN templates and allow your users and developers to provision them easily. They also provide clear role separation in your account, so you have administrators who manage the infrastructure on one side and users/developers on the other. This way both groups concentrate on what they know best - infrastructure as code and software development.
I am trying to set up a module to deploy resources in the cloud (it could be any cloud provider). I don't see the advantages of using templates (i.e. a deployment manager) over direct API calls:
Creating a VM using a template:
# deployment.yaml
resources:
- type: compute.v1.instance
  name: quickstart-deployment-vm
  properties:
    zone: us-central1-f
    machineType: f1-micro
    ...
# bash command to deploy yaml file
gcloud deployment-manager deployments create vm-deploy --config deployment.yaml
Creating a VM using an API call:
import json

def addInstance(http, listOfHeaders):
    url = "https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances"
    body = {
        "name": "quickstart-deployment-vm",
        "zone": "us-central1-f",
        "machineType": "f1-micro",
        # ...
    }
    # The Compute API expects a JSON request body, not URL-encoded form data
    http.request(uri=url, method="POST", body=json.dumps(body), headers=listOfHeaders)
Can someone explain to me what benefits I get using templates?
Readability / ease of use / authentication handled for you / no need to be a coder, etc. There can be many advantages; it really depends on how you look at it, and on your background and the tools you use.
For you specifically, it might be more beneficial to use Python all the way.
It's easier to use templates, and you get a lot of built-in functionality, such as validating your template to scan for possible security vulnerabilities and the like. You can also easily delete your infra using the same template you created it with, as shown below. FWIW, I've gone all the way with templates and do as much as I can with them, in smaller units. That makes it easy to move part of the infra out or duplicate it to another project, using a pipeline in GitLab to deploy it, for example.
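For example, with Deployment Manager the deployment created from the template in the question can be torn down by name:

# Delete everything the template created, in dependency order
gcloud deployment-manager deployments delete vm-deploy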
The reason to use templates over API calls is that templates are declarative: they can be used where a deterministic outcome is required, because the platform converges the deployment on the described state rather than depending on the order and success of individual calls.
Both templates and API calls have their own benefits; there is always a trade-off between the two options. If you want more flexibility in the deployment, then the API call suits you better. On the other hand, if security and a complete revision history are your priority, then templates should be your choice. Details can be found in this online documentation.
When using a template, orchestration of the deployment is handled by the platform. When using API calls (or other imperative approaches) you need to handle orchestration yourself: ordering, dependencies, retries, and diffing against what already exists.
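For example, re-applying the template from the question lets the platform work out what changed and in which order to apply it; with raw API calls you would have to diff and sequence everything yourself:

# The platform computes the diff and orders the operations;
# if nothing changed, this is effectively a no-op
gcloud deployment-manager deployments update vm-deploy --config deployment.yaml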
When we were building our AWS account, we did not think about using CloudFormation or Terraform. Now we have our environment all set up, but we don't want to tear everything down and rebuild it with CloudFormation or Terraform. So is there a way to get our infrastructure imported into and managed through one of them?
Thanks,
Terraform supports import, but that only brings the resource's current state into the state file; you still need to write the corresponding code yourself. CloudFormation does not support import.
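A sketch of the import flow (the resource address and instance ID are placeholders; a matching resource block must already exist in your .tf files):

# Bring an existing EC2 instance under Terraform's management;
# this only populates the state file, it does not generate code
terraform import aws_instance.example i-0123456789abcdef0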
Something like https://github.com/dtan4/terraforming can be of help but YMMV.
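For example, terraforming can emit HCL for resources that already exist in your account (the output is a starting point, not guaranteed to be complete):

# Generate Terraform code for existing EC2 instances
terraforming ec2 > ec2.tf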
A pretty complete answer can be found at "AWS Export configuration as cloudformation template", which also covers Terraform for this purpose.
TL;DR
AWS Import/Export configuration as code (CloudFormation | Terraform).
Based on our Infrastructure as Code (IaC) experience, we found several ways to translate existing manually deployed (from the Web Console UI) AWS infra to CloudFormation (CF) and/or Terraform (TF) code. Possible solutions are listed below:
AWS Cloudformation Templates
CF-1 | AWS CloudFormation native import feature
CF-2 | aws cli & manually translate to CF
CF-3 | Former2
CF-4 | AWS CloudFormer
Terraform Code / Modules
TF-1 | Terraforming
TF-2 | CloudCraft + Modules.tf
Related Article: https://medium.com/@exequiel.barrirero/aws-export-configuration-as-code-cloudformation-terraform-b1bca8949bca
As of October 2019, AWS supports importing existing resources into CloudFormation. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html for examples.
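The import is driven through a change set of type IMPORT; a sketch with placeholder stack, change-set, template, and resource-mapping names:

# Import existing resources into a CloudFormation stack
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name my-import \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template.yaml

# Review the change set, then apply it
aws cloudformation execute-change-set --change-set-name my-import --stack-name my-stack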
I'm new to Terraform -- I've looked in the documentation here: https://www.terraform.io/docs/providers/aws/r/redshift_cluster.html
...but I don't see an option to enable cross-region snapshots for Redshift clusters using a Terraform template. Seems like a simple option to implement, and a critical feature for us.
Currently not possible. Here's an open issue asking for this feature.
If you absolutely need to do this from Terraform, you could use a null_resource with a local-exec provisioner and run a local script that calls enable-snapshot-copy.
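The underlying call such a script would make is the enable-snapshot-copy API; a sketch with placeholder cluster name, destination region, and retention period:

# Enable cross-region snapshot copy for an existing Redshift cluster
aws redshift enable-snapshot-copy \
  --cluster-identifier my-redshift-cluster \
  --destination-region us-west-2 \
  --retention-period 7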