I am trying to write a blue/green CloudFormation template (CFT) that tears down and rebuilds the EC2 instances and ELB, and updates the Route 53 alias record with the updated DNS name of that ELB.
If the alias record doesn't exist, I'm able to create the alias record set correctly after the EC2 instances are created and the ELB attaches them. But if the record set already exists with the old ELB DNS name, the CFT fails with "Alias RecordSet exists". Naturally, I am looking to UPDATE this record with the updated ELB DNS name when running the full CFT. Any suggestions?
"HostRecord" : {
"Type" : "AWS::Route53::RecordSet",
"Properties" : {
"HostedZoneName" : "REDACTED",
"Comment" : "Updates the ELB DNS name into Route 54 recordset.",
"Name" : "REDACTED",
"Type" : "A",
"AliasTarget" : {
"DNSName" : { "Fn::GetAtt" : [ "ESClusterELB" , "DNSName" ] },
"HostedZoneId" : { "Fn::GetAtt" : [ "ESClusterELB" , "CanonicalHostedZoneNameID" ] }
}
Managing a single resource (such as a RecordSet) from 2 different CloudFormation stacks is not supported.
I have a few recommendations for your use case:
I recommend you manage the record independently of the templates you're using for blue/green. Once green is created/updated and you want your record to resolve to the green ELB, you can just update the stack that governs the RecordSet, setting it to the appropriate alias.
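A minimal sketch of such a standalone record stack, assuming the active ELB's DNS name and hosted zone ID are passed in as parameters (all names here are illustrative placeholders):

{
  "Parameters" : {
    "ActiveElbDnsName" : { "Type" : "String" },
    "ActiveElbHostedZoneId" : { "Type" : "String" }
  },
  "Resources" : {
    "HostRecord" : {
      "Type" : "AWS::Route53::RecordSet",
      "Properties" : {
        "HostedZoneName" : "example.com.",
        "Name" : "app.example.com.",
        "Type" : "A",
        "AliasTarget" : {
          "DNSName" : { "Ref" : "ActiveElbDnsName" },
          "HostedZoneId" : { "Ref" : "ActiveElbHostedZoneId" }
        }
      }
    }
  }
}

Flipping traffic from blue to green is then just a stack update on this one stack, passing the green ELB's DNSName and CanonicalHostedZoneNameID as the new parameter values.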
Building on the first suggestion, you could automate this using the SNS notification CloudFormation triggers when a stack is created/updated. Using that in conjunction with a Lambda function, you could dynamically update the stack that controls the RecordSet.
You could create a custom resource that solely serves the purpose of updating the record set to the wanted alias.
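A sketch of what that could look like; the Lambda behind ServiceToken (the ARN is a placeholder) would call Route 53's ChangeResourceRecordSets with an UPSERT action, which creates or updates the record in one step:

"RecordSetUpsert" : {
  "Type" : "Custom::RecordSetUpsert",
  "Properties" : {
    "ServiceToken" : "arn:aws:lambda:us-east-1:111111111111:function:route53-upsert",
    "HostedZoneName" : "example.com.",
    "RecordName" : "app.example.com.",
    "AliasDNSName" : { "Fn::GetAtt" : [ "ESClusterELB" , "DNSName" ] },
    "AliasHostedZoneId" : { "Fn::GetAtt" : [ "ESClusterELB" , "CanonicalHostedZoneNameID" ] }
  }
}

Everything except ServiceToken is passed through to the Lambda as arbitrary properties, so the function is free to interpret them however you like.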
Your problem seems to revolve around creating two CloudFormation stacks with conflicting resources which cannot coexist. One way to approach this is to always create the alias records in such a way that they can coexist.
An approach that should allow that is weighted routing. Set the weight of the record set in both stack instances to 1, and set the record set ID ("SetIdentifier") to "blue" or "green" respectively.
Now you should be able to deploy both CFN stacks side by side without conflict. If the blue stack instance is active and the green one is not, all DNS responses will return the blue alias. When you then activate green, it will create a record set alongside the blue one and should start to take about half of the traffic. Now if you deactivate the blue stack, green will take over all traffic.
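In CloudFormation terms, each stack's record would look roughly like this (green shown; the blue stack would use "SetIdentifier" : "blue"; the zone and record names are illustrative):

"HostRecord" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "HostedZoneName" : "example.com.",
    "Name" : "app.example.com.",
    "Type" : "A",
    "SetIdentifier" : "green",
    "Weight" : 1,
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "ESClusterELB" , "DNSName" ] },
      "HostedZoneId" : { "Fn::GetAtt" : [ "ESClusterELB" , "CanonicalHostedZoneNameID" ] }
    }
  }
}

Because the two records share a name and type but have different SetIdentifier values, Route 53 treats them as one weighted set rather than as conflicting records.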
This does mean you need to disable blue to test green in complete isolation, which is perhaps a little inconvenient and may slow down rollback. You could have a two-phase deployment, where you keep the weights as stack parameters; then once green is deployed with weight=1, redeploy blue with weight=0 to take it out of DNS without tearing it down. If green is bad, you can deactivate it, and blue with weight zero should take over.
Weighted routing is only one routing-type option; you could also look at multivalue answers, failover, or even geolocation.
Just to get creative: you could also set a parameter that drives a condition in your CF template, executing different portions of your Route 53 configuration. For example, a condition whose parameter takes values like CREATE, DELETE, IGNORE. Something like that.
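A rough sketch of that idea, with a hypothetical RecordAction parameter gating the Route 53 resource (flipping it away from CREATE on a stack update effectively deletes the record):

{
  "Parameters" : {
    "RecordAction" : {
      "Type" : "String",
      "AllowedValues" : [ "CREATE", "IGNORE" ],
      "Default" : "CREATE"
    }
  },
  "Conditions" : {
    "CreateRecord" : { "Fn::Equals" : [ { "Ref" : "RecordAction" }, "CREATE" ] }
  },
  "Resources" : {
    "HostRecord" : {
      "Type" : "AWS::Route53::RecordSet",
      "Condition" : "CreateRecord",
      "Properties" : {
        "HostedZoneName" : "example.com.",
        "Name" : "app.example.com.",
        "Type" : "A",
        "AliasTarget" : {
          "DNSName" : { "Fn::GetAtt" : [ "ESClusterELB" , "DNSName" ] },
          "HostedZoneId" : { "Fn::GetAtt" : [ "ESClusterELB" , "CanonicalHostedZoneNameID" ] }
        }
      }
    }
  }
}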
Is there any possibility, from the database backend, of forcing a user to read only from SECONDARY members?
I would like to restrict some users so they cannot impact performance on PRIMARY replica set members in my on-premises deployment (not Atlas).
The issue is easy to solve if the customer agrees to add this to the URI:
readPreference=secondary
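i.e. a full connection string along these lines (hosts and database are placeholders):

mongodb://host1:27017,host2:27017,host3:27017/reporting?replicaSet=rs0&readPreference=secondary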
But I am checking if there is an option to force this from the database side, without asking the customer...
The only option I have found is to restrict by server IP address:
use admin
db.createUser(
  {
    user: "dbuser",
    pwd: "password",
    roles: [ { role: "readWrite", db: "reporting" } ],
    // the user can only authenticate from clientSource, and only against
    // the members listed in serverAddress
    authenticationRestrictions: [ {
      clientSource: ["192.0.2.0"],
      serverAddress: ["198.51.100.1","192.51.100.2"]
    } ]
  }
)
There are currently no supported ways to enforce this from within MongoDB itself, apart from the authenticationRestrictions configuration for defining users, which is noted in the question itself.
Regarding the comments: the ANALYTICS tag in Atlas is an (automatic) replica set tag. Replica set tags themselves can be used in on-premises deployments. But tags are used in conjunction with read preference, which is set by the client application (at least in the connection string). So that approach really doesn't provide any enforcement beyond read preference alone for the purposes of this question. Additional information can be found in the documentation on replica set tags and read preference.
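For example, even with tags defined on the members, the client still has to opt in via the connection string, something like this (hosts and the tag key/value are illustrative):

mongodb://host1:27017,host2:27017,host3:27017/reporting?replicaSet=rs0&readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS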
In an 'unsupported'/hacky fashion, you could create the user(s) directly and only on the SECONDARY members that you want the client to read from. This would be accomplished by taking the member out of the replica set, starting it up as a standalone, creating the user, and then joining it back to the replica set. While it would probably work, there are a number of implications that don't make this a particularly good approach. For example, elections (for high availability purposes) would change the PRIMARY (therefore where the client can read from) among other things.
Other approaches would involve redirecting/restricting traffic at the network layer. Again, not a great approach.
I have a standalone instance with OpenSearch for testing purposes. I want to keep it light and clean, so I'm using ISM to delete indices older than x days.
What I noticed is that by default OpenSearch generates a management index (".opensearch-ism-config") with replica "1".
Since I'm using a standalone instance (it is just for testing; I'm not worried about redundancy, HA, or anything like that) and want to keep my cluster in green status, I have decided that I want those indices to have replica "0".
In order to achieve that, I have created a template in which I set replica "0" for these indices:
{
  "order" : 100,
  "version" : 1,
  "index_patterns" : [".opensearch-ism-*"],
  "settings" : {
    "index" : {
      "number_of_shards" : "1",
      "number_of_replicas" : 0
    }
  }
}
After a PUT of this template, I start using ISM, so that the ISM management index is created after the template is on the OpenSearch node.
What I observe is that all ISM management indices are still generated with replica "1", therefore ignoring the template.
I can set replica to "0" by updating the index settings after creation, but this is not ideal, as ISM indices rotate and new ones are generated from time to time.
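For reference, the per-index workaround I run today looks like this (the index name is whatever ISM created in your cluster):

PUT .opensearch-ism-config/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}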
Is there any way to have ISM indices apply replica "0" automatically?
I need to implement a "winning-configuration-property" algorithm in my application.
For example:
for property: dashboard_material
I would create a file dashboard_material.yml (I am planning to represent each property as a file; this is slightly negotiable) with the following values, in a format which I believe presents a consolidated view of the variants of the property and is more suitable for impact analysis when someone changes values for a particular scenario:
car_type=luxury&car_subtype=luxury_sedan&special_features=none : leather
car_type=luxury&car_subtype=luxury_sedan&special_features=limited_edition : premium_leather
car_type=economy : pseudo_leather
default : pseudo_leather
I need the closest match. A luxury car can be a sedan or a compact.
I am assuming these are "decorators" of a car in object-oriented terms, but I am not finding any useful implementation of the above problem among sample decorator patterns.
For example an API:
GET /configurations/dashboard_material should return the following values based on input parameters:
car_type=economy : pseudo_leather
car_type=luxury & car_subtype=luxury_sedan : leather
car_type=luxury : pseudo_leather (from default. Error or null if value does not exist)
This looks very similar to the "specifications" or "queryDSL" problem with GET APIs, in terms of slicing and dicing based on criteria; but basically I am looking for a run-time determination of a config value within a single microservice (which is a Spring config client; I use git2consul to push values from Git into the Consul KV store, and the Spring config client is tied to the Consul KV. I am open to any equivalent or better alternatives).
I would ideally like to do all the processing as post-processing after the configuration is read (from either a Spring config server or the Consul KV), so that no real processing happens after a query is received. The same post-processing will also have to happen after every Spring config client "refresh" interval, based on configuration property updates.
I have previously seen such an implementation (as an ops engineer) with Netflix Archaius, but am not finding any suitable text on the Archaius pages either.
My trivial/naive solution would be to create multiple maps/trees to store the data and then consolidate them into a single map based on the API request, effectively overriding some of the values from the lower-priority maps.
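As an illustration of that naive approach (the layout is my own), the consolidated map for dashboard_material would be keyed by criteria, ordered least to most specific, and a request would be answered by the most specific key whose criteria are all satisfied by the request parameters:

{
  "dashboard_material" : {
    "default" : "pseudo_leather",
    "car_type=economy" : "pseudo_leather",
    "car_type=luxury&car_subtype=luxury_sedan&special_features=none" : "leather",
    "car_type=luxury&car_subtype=luxury_sedan&special_features=limited_edition" : "premium_leather"
  }
}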
I am looking for any open-source implementations or references, to avoid having to create new code.
Is there a way to block arbitrary nodes from being reported/discovered/showing red status in Rundeck? With all the sources feeding in (GCP plugin, resources.xml, etc.), I have often found that a job which applies to "all" nodes goes red because an individual instance isn't yet configured, giving a red status to the whole job.
It would be great if there were an easy way to block a given node for all resources, from both the GUI and the CLI.
You can use custom node-filter rules based on node status using the health check status (you can also filter by name, tags, IP address, regex, etc.). Take a look at this (the "Saving filters" section has a good example).
Put .*hostnamepattern.* in the exclude filter in the job and hit Save.
Simplify-simplify-simplify.
Is it possible to create a single Amazon CloudFormation stack template that instantiates an AWS::EC2::Instance in ap-southeast-1 and another AWS::EC2::Instance in us-west-2 for example?
I suspect not, but I've not yet found a definitive yes/no saying that stacks can't have resources spanning multiple regions.
The accepted answer is out of date. It is now possible to create stacks across accounts and regions using CloudFormation StackSets.
A very good question, but I don't think you would be able to create resources spread across multiple regions.
The endpoint URL for CloudFormation is region-based, and AFAIK there isn't a place where you can specify region-specific (different-region) information.
As of today you can compose the CloudFormation template in such a way as to make it region-independent, by leveraging the Mappings section and the AWS::Region pseudo parameter; but making the template span multiple regions simultaneously isn't possible, though it can be expected down the line.
Your best bet right now would be to use a Cloudformation Custom Resource that invokes a Lambda function in order to create the resources that are in other regions. When you run the CFN template it would invoke the Lambda function where you'd create code (Python, Node.js or Java) that leverages the AWS SDKs to create the resources you need. CFN Custom Resources allow you to pass parameters to the function and get "outputs" back from them so from a CFN perspective you can treat it just like any other resource.
Here's a walkthrough example from the AWS docs: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html
You can create a Lambda function that creates a resource in another region, and even have your Lambda function invoke another stack in the other region.
To make your life easier in this case, you can use the Lambda cli2cloudformation (https://github.com/lucioveloso/cli2cloudformation).
Using it, you can execute CLI commands inside your Lambda, and in this way you specify the --region in the command.
It's also interesting because you can set a command for when your stack is created, updated, and deleted.
"myCustomResource": {
"Type": "Custom::LocationConstraint",
"Properties": {
"ServiceToken": "arn:aws:lambda:eu-west-1:432811670411:function:cli2cfn_proxy2",
"CliCommandCreate": "s3api get-bucket-location --bucket my-test-bucket --region eu-west-1",
"CliCommandUpdate": "",
"CliCommandDelete": ""
}
},