Is there a way to validate CloudFormation templates before running them?

I would like to validate my CloudFormation templates before running them. I know about the aws cloudformation validate-template ... CLI command, but that ignores incorrect property names. I don't see the point of that CLI command if it won't catch these kinds of mistakes.
I want something that will catch those kinds of mistakes before running the templates. An IDE or external service that does this would be fine.

We had a quite similar issue with erroneous CloudFormation templates and created (I'm a co-author) a command-line tool that validates them. Besides the standard AWS validation, it also has many custom checks that were essential for us:
https://github.com/Appliscale/perun
I believe it doesn't support property-name validation yet, but any feature requests (or, even better, pull requests) are welcome. We will do our best to address them as soon as we can.
After installing Perun, to validate the template you can use the command validate:
~ $ perun validate <PATH TO THE TEMPLATE>
Moreover, it also allows managing CloudFormation stacks (creation, updates, etc.) and monitoring their status updates.

The cfn-lint tool was built for this exact purpose. It is actively maintained by the AWS team, it validates resource properties against the published CloudFormation resource specification (so typoed property names are flagged), and it has a couple of IDE integrations.
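As a minimal sketch of how to use it (the template file name is an example):
~ $ pip install cfn-lint
~ $ cfn-lint my-template.yaml
A non-zero exit code signals findings, which makes it easy to gate a CI job on the result.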

I had the same issue; there is no built-in way to validate property names. But you can reduce the mistakes by using the Atom IDE with the cloudformation plugin: it scaffolds resource properties for me, so I make fewer typos.

Related

gcloud components list not up-to-date?

Context
I rely on a component shipped with the gcloud SDK CLI.
For migration purposes and other reasons, I want to know which version of the cloud-sdk started shipping this component, so that I can avoid falling back on "install the latest version".
My issue
In the official changelog: https://cloud.google.com/sdk/docs/release-notes, there are no references to the component I am looking for.
Attempts
I naively tried running this command:
for gcloud_version in 390.0.0-alpine 391.0.0-alpine 392.0.0-alpine 393.0.0-alpine; do
  echo "---> ${gcloud_version}"
  docker run --rm -ti google/cloud-sdk:${gcloud_version} gcloud components list
done
Unfortunately, none of the lists show the component I am waiting for (not even the latest version of gcloud, 393 at the time of writing).
Discovery
However, when I run gcloud components install MY_UNLISTED_COMPONENT, it works ...
Not a very reliable way to find out which version ships the component I want.
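One brute-force workaround (only a sketch; MY_UNLISTED_COMPONENT stands in for the real component id) is to probe each image with the install command itself, since that is the operation that demonstrably works:
for gcloud_version in 390.0.0-alpine 391.0.0-alpine 392.0.0-alpine 393.0.0-alpine; do
  # --quiet makes the install non-interactive; the exit status reveals availability
  if docker run --rm google/cloud-sdk:${gcloud_version} \
      gcloud components install MY_UNLISTED_COMPONENT --quiet >/dev/null 2>&1; then
    echo "${gcloud_version}: component installable"
  else
    echo "${gcloud_version}: component unavailable"
  fi
done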
Do you know:
whether this is an issue?
where I can report it?
whether it is even worth reporting?
Thanks for your help!
From @DazWilkin:
It would be helpful if you included the name of the public, albeit unlisted, component in your question. The Release Notes include a "Send Feedback" option, and you may want to provide this feedback there.
It's reasonable to expect the component to be documented. I encourage you to send feedback via the release page and to consider filing an issue on Google's public Issue Tracker.

Tagging AMI on AWS Regions

I am trying to tag an AWS AMI that was given to me by another team. The AMI shows up under "Private Images". I can't seem to tag it with Terraform, even though the whole environment is built on Terraform. Have you encountered an issue like this? Any tool would help; I was also looking into Packer, however Packer does not seem to tag images that it did not create.
I tried a Python script and a bash script, but they become difficult to manage when you have six tags.
For example, in Python I have to spell out every pair:
Key = "environment"
Value = "dev"
So this becomes tedious. Any suggestion would be appreciated.
You can only tag with Terraform during resource creation or modification. For an AMI that already exists, you can write Python code to do this.
I can help you if needed.
Please share the requirements in detail, with a screenshot.
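For what it's worth, an existing AMI can also be tagged in one AWS CLI call, which keeps all six tags in a single place (the AMI id and tag values below are made-up examples):
# tag a pre-existing image; repeat Key=...,Value=... once per tag
aws ec2 create-tags \
  --resources ami-0123456789abcdef0 \
  --tags Key=environment,Value=dev \
         Key=owner,Value=platform-team \
         Key=costcenter,Value=1234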

How should I trigger ARM deployments

We are trying to establish a continuous-deployment environment and are conflicted about how to do ARM deployments. Deploying all the resources as a group is much better than handling them individually.
ARM has a nice declarative syntax: we state what we intend to create without having to write the sequence of programming commands to create it. Which is great, but how should we run the templates?
Two options come to my mind:
I. I could download the templates and use PowerShell.
II. Trigger them using Azure Automation.
What is the best practice?
Reference
Octopus integration from source code
If you're doing this as part of your CI/CD chain, you probably want to check in the templates and deployment scripts with your source code. That way, the definition of the infrastructure is kept with the code that's intended to run on it.
If this is part of some other workflow, it really depends on the workflow :)
I would suggest using PowerShell/CLI and just invoking the template from its URI; that is the easiest way of doing it (instead of downloading the template). This can be run with anything that is capable of running a custom script task, or with CI/CD systems that have dedicated steps to deploy an ARM template (VSTS, Octopus, probably others).
I would advise against Azure Automation for this purpose.
Also, I do suggest keeping application code separate from ARM templates.
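As a sketch of the invoke-from-URI approach with the Azure CLI (the resource group, template URI, and parameter file are placeholders):
# deploy straight from a reachable template URI into an existing resource group
az group deployment create \
  --resource-group my-rg \
  --template-uri https://raw.githubusercontent.com/example/templates/master/azuredeploy.json \
  --parameters @azuredeploy.parameters.json
The equivalent PowerShell cmdlet accepts a -TemplateUri parameter, so the same pattern works from either toolchain.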

One click deployment using scripts

I want to deploy a web solution to a local server as a one-click deployment, using PowerShell or any other scripts.
Can anyone share any ideas?
PowerShell can be used in conjunction with psake, a DSL that allows you to script up deployments (or basically anything, really) with a dependency chain. It also abstracts MSDeploy to some extent, making it easier to roll out installs to IIS. Note that MSDeploy can also be used completely independently for relatively simple deployments (such as web sites without any reliance on message queues, databases, supporting services, etc.).
Other automated approaches include the likes of Octopus Deploy which works by having a central management node push out installations to 'agents' installed on target machines.
Both approaches require you to write your app in a reasonably deployable manner (e.g. having suitably transformable configurations files)
Does that help? There are a number of other options out there but these should help to point you in the right direction.
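For the standalone-MSDeploy route, a push to a remote IIS box can be as small as this sketch (the package path, server URL, site name, and credentials are all placeholders):
rem sync a packaged site onto IIS via the Web Management Service endpoint
msdeploy -verb:sync ^
  -source:package="C:\build\MySite.zip" ^
  -dest:auto,computerName="https://webserver:8172/msdeploy.axd?site=MySite",userName="deploy",password="secret",authType="Basic" ^
  -allowUntrusted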
Also check http://psappdeploytoolkit.codeplex.com/ (seems to be what you want)
and maybe https://github.com/mislav/git-deploy or https://github.com/p-blomberg/Web-app-deploy-script
Try the approach described here; it is not complete, but it will help you move in the one-click direction:
http://ravisoftltd.wordpress.com/2014/04/08/one-click-deployment-with-sharepoint/
If you are trying to deploy from MSBuild files (so something like ASP.NET or MVC), I would like to point you to Package-Web. It still has some minor flaws (which can be worked around pretty easily), but it works pretty well.
The only downside that I know of: you have to prepare your project by installing a NuGet package (or get those files into your build process some other way).
You can do it using a PowerShell script with something like:
# name of the SharePoint solution package to deploy
[string] $package = "solution.wsp"
# add the solution to the farm's solution store
stsadm -o addsolution -filename $package
# deploy it immediately, allowing assemblies into the GAC
stsadm -o deploysolution -name $package -immediate -allowGacDeployment

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that needs carrying out when setting up a new service is implementing monitoring for it.
This involves adding the service to one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate all of this? It seems that the Nagios configuration is laid out so that the files are split per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring while I'm deploying the application to the environment (i.e. the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could say that you really only need to set up the monitor once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file if you want to, or split it up into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a directory.
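For example (the directory path here is just an illustration), one line in nagios.cfg lets deployments drop per-application files into a directory instead of editing per-host files:
# in nagios.cfg: load every .cfg file found below this directory
cfg_dir=/usr/local/nagios/etc/services.d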
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation mode, so you can verify that your new configuration is OK before loading it into the live environment.
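Putting the last two points together, a deployment step might look like this sketch (paths are examples and vary by installation):
# verify the freshly written configuration before it goes live
nagios -v /usr/local/nagios/etc/nagios.cfg
# then ask the running daemon to restart/reload via the external command file
echo "[$(date +%s)] RESTART_PROGRAM" > /usr/local/nagios/var/rw/nagios.cmd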