What's the difference between update stack and change set in AWS CloudFormation

We can update the stack either by clicking on 'Update' or by selecting 'Create change set for current stack'. I was curious to know the difference between these two options.

Short answer:
Update stack - deploys your changes immediately.
Create change set for current stack - prepares the changes so you can review them before deploying.
Detailed answer:
From the AWS documentation:
AWS CloudFormation provides two methods for updating stacks: direct update or creating and executing change sets.
When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them. Use direct updates when you want to quickly deploy your updates.
With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply those changes.
Change sets are JSON-formatted documents that summarize the changes AWS CloudFormation will make to a stack.
Use change sets when you want to ensure that AWS CloudFormation doesn't make unintentional changes or when you want to consider several options.
For example, you can use a change set to verify that AWS CloudFormation won't replace your stack's database instances during an update.
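For reference, here is roughly what the change-set flow looks like with the AWS CLI (the stack, change set, and template names are placeholders):

# Prepare the changes without deploying them
aws cloudformation create-change-set --stack-name my-stack --change-set-name my-changes --template-body file://template.yaml
# Review what would change
aws cloudformation describe-change-set --stack-name my-stack --change-set-name my-changes
# Apply the changes once you're satisfied
aws cloudformation execute-change-set --stack-name my-stack --change-set-name my-changes

A direct update skips the first two steps and goes straight through aws cloudformation update-stack.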


Prevent users from creating new work items in Azure DevOps

I've been looking at organisation and project settings but I can't see a setting that would prevent users from creating work items in an Azure DevOps project.
I have a number of users who refuse to follow the guidelines we set out for our projects, so I'd like to inconvenience them and the wider project team so that they find it better to follow the guidelines than not - at the moment we've got one-word user stories and/or tasks with estimates of 60-70 hours, which isn't reflective of the way we should be planning.
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
The Azure DevOps Aggregator project allows you to write simple scripts that get triggered when a work item is created or updated. It uses a service hook to trigger when such an event occurs and abstracts most of the API specific stuff away, providing you with an instance of the work item to directly interact with.
You can't block the creation or update with such a policy, because Azure DevOps informs the aggregator too late in the creation process to do so, but you can revert changes, close the work item, etc. There are also a few utility functions to send email.
You need to install the aggregator somewhere; it can be hosted in Azure Functions, and we provide a Docker container you can spin up anywhere you want. Then link it to Azure DevOps using a PAT token with sufficient permissions and write your first policy.
A few sample rules can be found in the aggregator docs.
store.DeleteWorkItem(self);
should put the work item in the Recycle Bin in Azure DevOps. You can create a code snippet around it that checks the creator of the work item (self.CreatedBy.Id) against a list of known bad identities.
Be mindful that when Azure DevOps creates a new work item, the Created and Updated events may fire in rapid succession (this is caused by the mechanism that sets the backlog order on work items), so you may need to find a way to detect which metadata tells you a work item should be deleted. I generally check for a low Revision number (say, < 5) and that the last few revisions didn't change any field other than Backlog Priority.
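Putting those pieces together, a rule might look roughly like this (the identity list and revision threshold are assumptions to adapt, not part of the aggregator API, and Contains assumes System.Linq is available as in the sample rules):

// Hypothetical list of users whose new work items should be rejected
var blockedCreators = new[] { "11111111-2222-3333-4444-555555555555" };
// Only act on freshly created items (low revision count, see the caveat above)
if (self.Revision < 5 && blockedCreators.Contains(self.CreatedBy.Id.ToString()))
{
    // Sends the work item to the Recycle Bin
    store.DeleteWorkItem(self);
}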
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
I'm afraid there is no such out-of-the-box setting to do this.
That's because the current permission settings for work items are not yet granular enough to cover this scenario.
The closest setting is:
Project Settings -> Team configuration -> Area -> Security:
Set the relevant permission to Deny; this will prevent users from creating new work items, but it will also prevent them from modifying existing ones.
For this request, you could add a feature suggestion on the Developer Community site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21 ), which is the main forum for product suggestions.

How can I look up an existing Internet Gateway in CDK?

I'm using the FromLookup() method on the Vpc construct to get a reference to the default VPC in an account like this:
Vpc.FromLookup(this, "Default VPC", new VpcLookupOptions {IsDefault = true}); (C#)
Is there a way to do something similar for the Internet Gateway (IGW) that's created by default in that VPC? Alternatively, can I list the IGWs for an existing VPC? I need to get a reference to that IGW in order to add routes to it.
I came across this GitHub issue which shows a workaround using a Cfn escape hatch to get a reference to the existing IGW using its ID, but the need to manually look up and provide the ID breaks the automation we're trying to achieve. We need to spin up copies of these stacks in dozens of isolated accounts and having a manual lookup step is a deal breaker.
Also, the PR that addresses that issue only allows getting references for IGWs in new VPCs created as part of the stack, not existing ones.
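For reference, the escape-hatch workaround described above looks roughly like this in C# (the route table and IGW IDs are placeholders for the manually supplied values that make the approach unattractive):

using Amazon.CDK.AWS.EC2;

// L1 escape hatch: the IGW ID has to be provided by hand, since there is no lookup for it
new CfnRoute(this, "DefaultRoute", new CfnRouteProps
{
    RouteTableId = "rtb-0123456789abcdef0",   // placeholder route table ID
    DestinationCidrBlock = "0.0.0.0/0",
    GatewayId = "igw-0123456789abcdef0"       // the manually looked-up IGW ID
});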

Extending Azure multistage yaml pipelines logs

I'm trying to log the completion of each stage of multi-stage YAML pipelines with some custom details.
How can I add custom details to https://dev.azure.com//_settings/audit logs?
Is there a way to persist this information in SQL DB or any other persistent storage option?
How can I subscribe to these log events?
How can I add custom details to https://dev.azure.com//_settings/audit logs?
I'm afraid this is not available for you to achieve.
That's because the format of the details is defined and fixed by the backend. When the corresponding action occurs, an event method is called alongside the action class to generate and record the log entry on the audit page. All of this happens on the backend, and this capability has not been exposed to users so far.
That said, this is a good idea that could be considered, because customized details would make the logs more readable for a company. You can raise your idea on the Developer Community site, then vote and comment on it. The Product Group reviews these suggestions regularly and considers them for the development roadmap depending on their priority (votes).
How can I subscribe to these log events?
One important thing you should know: the audit log is only kept for 90 days, after which it is cleared, including from the backend database. In a nutshell, if you want audit logs older than 90 days, there is no way to restore them.
So I suggest configuring a scheduled pipeline with a PowerShell task.
In this PowerShell task, call the audit log API to get the entries, then store them in any file type you want, e.g. .csv, .json, etc.
For the schedule, you can use any period you like, as long as it is less than 90 days, so that you don't lose any audit events.
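A rough sketch of such a task (the organization name, PAT variable, endpoint, and response field are assumptions; verify them against the current audit log REST reference):

# Placeholder organization and a PAT with permission to read audit logs
$org = "your-organization"
$pat = $env:AUDIT_PAT
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
# Query the audit log endpoint
$url = "https://auditservice.dev.azure.com/$org/_apis/audit/auditlog?api-version=7.1-preview.1"
$log = Invoke-RestMethod -Uri $url -Headers $headers
# Persist the entries as a date-stamped JSON file
$log.decoratedAuditLogEntries | ConvertTo-Json -Depth 10 | Out-File ("audit{0:yyyyMMdd}.json" -f (Get-Date))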
Is there a way to persist this information in SQL DB or any other persistent storage option?
If you can use a different database, I'd suggest considering a document storage solution such as CouchDB, DynamoDB, or MongoDB.
Depending on what you actually use, you can run the corresponding import commands with a command line task on a self-hosted agent.
For example, I used MongoDB and ran the command below to store the JSON file that the API call above generated:
C:\>mongodb\bin\mongoimport --jsonArray -d mer -c docs --file audit20191231.json

Google Cloud SQL Database Delete Protection

I would like the ability to protect against the deletion of a cloud SQL instance. This seems like a good step to take to avoid actions from an angry employee or a regretful click.
Google added a deletion protection flag for Cloud SQL in August 2022.
https://cloud.google.com/sql/docs/mysql/deletion-protection
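For example, on an existing instance the flag can be toggled with gcloud (the instance name is a placeholder):

# Turn deletion protection on
gcloud sql instances patch my-instance --deletion-protection
# Turn it off again when the instance really should be deletable
gcloud sql instances patch my-instance --no-deletion-protection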
I couldn't find anything that literally protects the instance against deletion, but you could use the predefined roles in your project to protect your instances from, as you said, angry employees.
For example:
Keep the Owner role to yourself (assuming you are, indeed, the owner of this project).
Depending on what the employees need, you can probably assign them the cloudsql.editor role or similar. If that grants too much, you can create your own custom roles to narrow down the permissions.
As for a regretful click, there is not much you can do. You could regularly create an export and save it in one of your buckets, in case you need to recreate your instance after a 'regretful' click.
Well, Terraform certainly seems to have added some kind of deletion protection for the GCP SQL instance. When I try to run "terraform destroy", I get this error:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
Perhaps this functionality was added after the OP reported the issue, which is quite possible given how old this thread is.
There is a related issue that talks about this.
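For reference, this is the Terraform argument in question; a minimal sketch (the instance name and settings are placeholders):

resource "google_sql_database_instance" "example" {
  name             = "example-instance"
  database_version = "POSTGRES_14"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  # Defaults to true; must be set to false (and applied) before
  # `terraform destroy` can delete the instance
  deletion_protection = true
}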

Why should I store kubernetes deployment configuration into source control if kubernetes already keeps track of it?

One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason is that this is supposed to speed-up rollbacks if necessary.
My question is: why do we need to store this configuration if it is already stored by Kubernetes, and there are ways to easily go back to a previous version of the configuration using, for example, kubectl? An example is a command like:
kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking this is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments. I was wondering whether we should keep a history of the Kubernetes configuration and the contents of ConfigMaps in a database for possible rollbacks, or just rely on Kubernetes to retrieve the current configuration and roll back to previous versions.
To your point, you can use Kubernetes as your store of configuration; it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits (see the sketch after this list):
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desire.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
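Concretely, "configuration as code" just means keeping manifests like the following in the repository and applying them from your pipeline; this one matches the nginx-deployment example above (the replica count and image tag are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

Applying it is then a single, repeatable step, e.g. kubectl apply -f deployment.yaml.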
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration is never truly in sync with reality anyway: just because you told Kubernetes you want 3 replicas running doesn't mean there are 3, or that one isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
The rollback command, then, is just a very fast shortcut to "please do this right now!". It's for when your configuration intent was wrong and you don't have time to fix it. As soon as you roll back, you should chase your configuration and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.
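That shortcut, for the example deployment above, is:

# Back to the previous revision
kubectl rollout undo deployment/nginx-deployment
# Or to a specific revision from `kubectl rollout history`
kubectl rollout undo deployment/nginx-deployment --to-revision=2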
A Kubernetes cluster doesn't store your configuration, it runs it, just as your server runs your application code.