I'm moving DNS control to another AWS account. What to do with the SOA record? - amazon-route53

When I move DNS control for domains I buy from namecheap.com to AWS, I simply copy the NS records provided when I create the hosted zone in Route 53 and replace the namecheap.com NS records with them.
I tried the same thing when moving DNS control from one AWS account to another, but it is not working. A dig query shows the new NS records in the target hosted zone, but ping fails, indicating that the DNS server for the specified host cannot be found. (There is definitely an A record for the host.)
Following the AWS documentation, I replaced the NS records in the old zone with the NS records that Route 53 automatically creates in the new zone. Those instructions also say: "Do not create additional ... start of authority (SOA) records in the Amazon Route 53 hosted zone, and do not delete the existing ... SOA records." They don't say what to set the SOA record to, but if I understand SOA records correctly, I believe I should update the SOA record in the original zone to point to the first NS in the new zone. That way the SOA records in each zone would also be identical ... yes?
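For context, an SOA record in zone-file form looks like this (hypothetical zone name; the name-server and mailbox values follow Route 53's usual defaults, so treat them as illustrative):

```
; zone        TTL  class type  primary-NS (MNAME)      admin-mailbox (RNAME)          serial refresh retry expire  minimum
example.com.  900  IN    SOA   ns-2048.awsdns-64.net.  awsdns-hostmaster.amazon.com.  1      7200    900   1209600 86400
```

The first field after SOA (the MNAME) names the primary name server, which is the part that would differ between the two hosted zones.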
When I update namecheap.com to transfer DNS control to AWS, I simply select "Custom DNS" and plug in the AWS NS records. I think namecheap.com internally updates its copy of the SOA record automatically, because I don't need to do anything with the SOA record and it just works.

Related

How to migrate a clustered RDS (Postgres) to a different account?

I am migrating from AccountA (source) to AccountB (target), same region.
I ran templates, so AccountB already has an RDS cluster, but with no data. The DB instance ID is exactly the same as in the source account.
**My goal is to have the same endpoint as before, since we're retiring AccountA completely and I don't want to change code for an updated endpoint.**
I took a snapshot of the cluster (writer instance), then copied the snapshot with a KMS key and shared it with AccountB. All good. Then, from AccountB (target), I copied the snapshot and attempted to restore it. I thought I could restore directly into the existing RDS cluster, but I see that's not doable, since a restore always creates a new cluster.
So I renamed the existing empty RDS cluster to something else to free up the DB instance ID, then renamed the temp cluster to the original name. It worked, but this doesn't seem efficient.
What is the best practice for this kind of RDS data migration?
Clustered RDS (writer + reader), cross-account.
I didn't create a snapshot of the reader. Will it be synced from the writer automatically once I restore?
Your experience is correct -- RDS Snapshots are restored as a new RDS instance (rather than loading data into an existing instance).
By "endpoint", if you are referring to the DNS Name used to connect to the database, then the new database will not have the same endpoint. If you want to preserve an endpoint, then you can create a DNS Name with a CNAME record that resolves to the DNS Name of the database. That way, you can change the CNAME record to point to a different endpoint without needing to change the code. (However, you probably haven't done this, so you'll need to change the code to point to the new DNS Name anyway, so it's the same amount of work.)
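A stable alias of that kind is just a CNAME in a hosted zone you control, pointing at whatever the current cluster endpoint is. A minimal Route 53 change batch might look like this (the record name and cluster endpoint are hypothetical):

```
{
  "Comment": "Stable alias for the RDS cluster endpoint",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "db.example.com",
      "Type": "CNAME",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com" }]
    }
  }]
}
```

Applied with `aws route53 change-resource-record-sets --hosted-zone-id <id> --change-batch file://change.json`, a future migration then only needs the CNAME re-pointed, not a code change.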
You are correct that you do not need to snapshot/copy the Readers -- since the snapshot only contains the data for the main database, you simply 'create' the Readers on the new cluster after the restore.

Upgrade domain selection for service fabric

If I have, say, a few upgrade domains in Service Fabric, how does Service Fabric select upgrade domains while performing upgrades?
From https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade#rolling-upgrades-overview
Update domains do not receive updates in a particular order.
However, using the Start-ServiceFabricClusterUpgrade cmdlet, you can specify -SortOrder, which
Defines the order in which an upgrade proceeds through the cluster.
Note that Default
Indicates that the default sort order (as specified in cluster manifest) will be used.
In my experience (mostly on-prem standalone clusters), for configuration updates I'm 99% sure it does them in sequential order: UD0, UD1, etc.

Airflow too many DNS lookups for database

We have Apache Airflow deployed on a K8s cluster in AWS. Airflow runs in containers, but the EC2 instances themselves are reserved instances.
We are experiencing an issue where Airflow makes many DNS queries related to its DB. At rest (i.e. no DAGs running) it's about 10 per second; when running several DAGs it can go up to 50 per second. This results in Route 53 blocking us, since we hit the packet limit for DNS queries (1024 packets per second).
Our DB is a Postgres RDS, and when we switched it to MySQL the issue remained.
The way we understand it, each DNS query starts at the K8s coredns service, which tries several permutations of the FQDN and forwards the requests to Route 53 if it can't resolve them on its own.
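Those permutations come from the resolver's `ndots` option, which in a stock pod is 5, so short names fan out across the whole search list before being tried as-is. A toy sketch of that expansion (the search domains and DB alias below are assumptions, not our actual values):

```python
def expand_queries(name, search_domains, ndots=5):
    """Return the FQDNs a stub resolver would try, in order.

    Simplified sketch of the glibc/musl 'ndots' logic: a relative name
    with fewer than `ndots` dots is tried against every search domain
    before being tried as-is.
    """
    if name.endswith("."):            # already absolute: no expansion at all
        return [name]
    absolute = name + "."
    searched = [f"{name}.{d}." for d in search_domains]
    if name.count(".") >= ndots:      # "enough" dots: try the name itself first
        return [absolute] + searched
    return searched + [absolute]      # otherwise every search domain comes first

# Typical search path inside a Kubernetes pod (namespace "airflow" is an assumption)
search = [
    "airflow.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-east-1.compute.internal",
]

# A short internal DB alias is expanded through every search domain first,
# so one connection attempt can fan out into five DNS lookups.
queries = expand_queries("mydb.example.internal", search)
```

This is why a hostname with few dots multiplies the query load: four of the five lookups are guaranteed NXDOMAINs that still count against the Route 53 packet limit.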
Any ideas, thoughts, or hints to explain Airflow's behavior, or to reduce the number of queries, are most welcome.
Best,
After some digging we found we had several issues happening at the same time.
The first was that Airflow's scheduler was running about twice per second. Each run issued DB queries, which resulted in several DNS queries. Changing that schedule alleviated some of the issue.
Another issue we had is described here. It turns out coredns is configured to try several permutations of the given domain if it has fewer than a configured number of dots in the FQDN. There are two suggested fixes in that article; we applied both and the number of DNS queries dropped.
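One of those fixes amounts to lowering `ndots` for the pod's resolver. In Kubernetes that can be done per pod via `dnsConfig`; a sketch of the relevant spec fragment (values are illustrative, not the article's exact numbers):

```
# Fragment of a pod/deployment spec: try the name as-is unless it has
# fewer than 2 dots, instead of the default ndots:5 search expansion.
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"
```

The other common mitigation is appending a trailing dot to the DB hostname in the connection string, which makes the name absolute and skips the search list entirely.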
We have been having this issue too.
It wasn't the easiest to find, as we had one box with lots of apps on it making thousands of DNS queries requesting resolution of our SQL server name.
I really wonder why Airflow doesn't just use the DNS cache like every other application.

Spring Cloud Netflix: Will Eureka client prefer to choose the remote service in same zone?

The document says:
Eureka clients try to talk to the Eureka Server in the same zone. If there are problems talking with the server, or if the server does not exist in the same zone, the clients fail over to the servers in the other zones.
So I know clients will query servers in the same zone first. But my question is: will clients also prefer to choose a remote service in the same zone? Different zones could be mapped to different server rooms, so an RPC across to another zone may add network latency.
Same zone first; the load balancing is done using Ribbon.
http://cloud.spring.io/spring-cloud-static/spring-cloud.html#_using_ribbon_with_eureka
By default it will be used to locate a server in the same zone as the
client because the default is a ZonePreferenceServerListFilter.
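For that filter to work, each instance has to advertise its zone in its Eureka metadata. A minimal client-side sketch (zone and host names are assumptions for illustration):

```
# application.yml of a service instance running in zone1
eureka:
  instance:
    metadata-map:
      zone: zone1
  client:
    prefer-same-zone-eureka: true
    region: region1
    availability-zones:
      region1: zone1,zone2
    service-url:
      zone1: http://eureka-zone1.example.com:8761/eureka/
      zone2: http://eureka-zone2.example.com:8761/eureka/
```

With the `zone` metadata set on both callers and callees, Ribbon's ZonePreferenceServerListFilter keeps RPCs inside the local zone and only spills over when no local instances are up.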

How to setup a MongoDB replica set in EC2 US-WEST with only two availability zones

We are setting up a MongoDB replica set on Amazon EC2 in the us-west-1 region.
This region only has two availability zones, though. My understanding is that MongoDB must have a majority to work correctly. If we create two servers in zone us-west-1b and one server in us-west-1c, this will not provide high availability if the whole of us-west-1b goes down, right? How is this possible? What is the recommended configuration?
Having faced a similar challenge, we looked at a number of possible solutions:
Put an Arbiter in another region:
Secure the connection either by using a point-to-point VPN between the regions and routing the traffic across this connection,
or
Give each server an Elastic IP and DNS name, and use some combination of AWS security groups, iptables, and SSL to ensure connections are secure.
AWS actually has a whitepaper on this (not sure how old it is, though): http://media.amazonwebservices.com/AWS_NoSQL_MongoDB.pdf
Alternatively, you could allow the application to fall back to a read-only state until your servers come back online (not the nicest of options, though).
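The arbiter option above ends up as an ordinary replica-set configuration; a sketch from the mongo shell, with hypothetical hostnames (the arbiter holds no data, it only votes):

```
// Two data-bearing members in us-west-1, plus a vote-only arbiter elsewhere
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-1b.example.com:27017" },                        // us-west-1b
    { _id: 1, host: "mongo-1c.example.com:27017" },                        // us-west-1c
    { _id: 2, host: "arbiter-usw2.example.com:27017", arbiterOnly: true }  // other region
  ]
});
```

With the arbiter reachable, either data-bearing zone can fail and the survivor still wins a 2-of-3 majority election, so the set keeps a writable primary.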
Hope this helps