AWS landing zone home region and resource restrictions - aws-control-tower

My current understanding is that if I were to set up a Multi-Account Landing Zone (MALZ) in one region, say Ireland, I would still be able to have accounts that contain resources in other regions (US, Frankfurt, et al.), assuming the guardrails allow it.
Is my understanding correct? I am a bit confused when I read this:
Single AWS Region. AMS multi-account landing zone is restricted to a single AWS Region. To span multiple AWS Regions, use multiple multi-account landing zones.
https://docs.aws.amazon.com/managedservices/latest/userguide/single-or-multi-malz.html

AWS Managed Services is a bit of a white-glove service, so I'm not familiar with how standardised their offering and guardrails are. There are a few different parts that come into play:
1. regions that host your landing zone's shared infrastructure, e.g. the logging account, Control Tower, AWS SSO, etc.
2. regions that host shared infrastructure that you deploy into every account managed under the landing zone, e.g. a default VPC (peered to a TGW)
3. regions that are allowed to be used in managed accounts, e.g. because an SCP on the OU forbids everything else
From my understanding it seems that one AMS multi-account landing zone always operates in a single region for all three of those.
That may be a fine restriction for starting out, but my experience with large landing zones (> 500 accounts) is that you keep 1. and 2. locked to a single region, but restrict 3. only for governance/compliance reasons (e.g. EU only). That gives teams the freedom to leverage AWS regions in the way that makes the most sense for their applications, like Lambda@Edge functions, regional S3 buckets, etc.
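For illustration, here is a minimal sketch of what such a region restriction can look like as an SCP attached with boto3, assuming an EU-only policy; the OU id, policy name, and the allowlisted global services are placeholders to adapt:

```python
# Minimal sketch: deny all actions outside the allowed regions via an SCP.
# The OU id, policy name, and exempted global services are illustrative.
import json
import boto3

ALLOWED_REGIONS = ["eu-west-1", "eu-central-1"]  # EU-only, per the example above

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideAllowedRegions",
        "Effect": "Deny",
        "NotAction": [
            # Global services that must stay reachable; trim to your needs.
            "iam:*", "organizations:*", "route53:*", "support:*",
        ],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}},
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Restrict workloads to EU regions",
    Name="eu-only-regions",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-aaaa-11111111",  # hypothetical OU id
)
```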
Of course, applications that do need on-premise connectivity have a strong gravity toward the region hosting the transit gateway. Depending on what your on-prem looks like, larger orgs can later add multiple landing zones or, preferably, use a modular landing zone approach with "TGW peerings as a service".


Multi Region Postgres Latency Issue Azure

The architecture we are currently using is as below:
Private web app services hosted in the US region and the India region.
Both apps are behind their respective Application Gateways, and these gateways sit behind Front Door, which lets us serve each request from the nearest Application Gateway. But both apps use the same Postgres, which is in the US region.
Now our issue is that when we hit the API from the US the response time is under 2 s, whereas when we hit the API from the India region it takes 70 s.
How can we reduce the latency?
Actually, the problem is that the APIs do write operations, due to which we cannot use a read replica.
There are a few things you can do:
1- Add a cache layer in both regions, and rather than querying the DB directly, check whether the data is available in the cache first; if it's not, get it from the DB and add it to the cache layer (see the sketch below).
2- Add a read-only secondary database in the India region.
PS: You may have stale data with both approaches, so you should sync properly according to your requirements.
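For option 1, a minimal cache-aside sketch in Python, assuming a Redis instance in each region and the shared US Postgres; the hostnames, credentials, and the movies table are all illustrative:

```python
# Cache-aside sketch: serve reads from a regional Redis, falling back to the
# cross-region Postgres only on a miss. Hostnames/credentials are placeholders.
import json
import psycopg2
import redis

cache = redis.Redis(host="redis.in-region.example.com", port=6379)
db = psycopg2.connect(host="postgres-us.example.com", dbname="app",
                      user="app", password="app-password")

CACHE_TTL_SECONDS = 60  # tolerate up to 60 s of staleness, per the PS above

def get_movie(movie_id: int) -> dict:
    key = f"movie:{movie_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no cross-region round trip
    with db.cursor() as cur:       # cache miss: one trip to the US database
        cur.execute("SELECT id, title, summary FROM movies WHERE id = %s",
                    (movie_id,))
        row = cur.fetchone()
    movie = {"id": row[0], "title": row[1], "summary": row[2]}
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(movie))
    return movie
```

Writes still go to the US primary; the TTL bounds how stale the reads served from India can get.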

How do I find out which files were downloaded outside my continent (and by whom)?

I have been monitoring Cloud Storage billing daily and saw two unexpected, large spikes in "Download Worldwide Destinations (excluding Asia & Australia)" this month. The cost for this SKU is typically around US$2-4 daily; however, these two daily spikes were $89 and $15.
I enabled GCS bucket logging soon after the $89 spike, hoping to deduce the cause the next time it happened, but when the $15 spike happened yesterday, I was unable to pinpoint which service or downloaded files caused it.
There is a Log field named Location, but it appears to be linked to the region where a bucket is located, not the location of the downloader (that would contribute to the "Worldwide Destinations" egress).
As far as I know, my services are all in the southamerica-east1 region, but it's possible that there is either a legacy service or a misconfigured one that has been responsible for these spikes.
The bucket that did show up outside my region is in the U.S., but I concluded that it is not responsible for the spikes because the files there are under 30 kB and have only been downloaded 8 times according to the logs.
Is there any way to filter the logs so that they tell me as much information as possible to help me track down what is driving the "Download Worldwide Destinations" cost? Specifically:
- which files were downloaded
- if it was one of my Google Cloud services, which one it was
Enable usage logs and export the log data to a new bucket.
Google Cloud Usage logs & storage logs
The logs will contain the client IP address; you will need to use a geolocation service to map IP addresses to a city/country.
Note: Cloud Audit Logs do not track access to public objects (see Google Cloud Audit Logs restrictions).
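As a sketch of how that can look with the google-cloud-storage Python client: the bucket names are placeholders, and the c_ip / cs_object / sc_bytes fields come from the documented usage-log CSV format:

```python
# Sketch: enable usage logs for the busy bucket, then total egress bytes per
# (client IP, object) from the delivered CSVs. Bucket names are illustrative.
# Note: the log bucket must grant object-create permission to
# cloud-storage-analytics@google.com for log delivery to work.
import csv
from collections import Counter
from google.cloud import storage

client = storage.Client()

# One-time setup: deliver usage logs into a dedicated log bucket.
target = client.bucket("my-app-bucket")
target.enable_logging("my-usage-logs-bucket", object_prefix="access-log")
target.patch()

# Later: aggregate downloaded bytes by client IP and object name.
bytes_out = Counter()
for blob in client.list_blobs("my-usage-logs-bucket", prefix="access-log_usage"):
    rows = csv.DictReader(blob.download_as_text().splitlines())
    for row in rows:
        if row.get("cs_method") == "GET":
            bytes_out[(row["c_ip"], row["cs_object"])] += int(row["sc_bytes"] or 0)

for (ip, obj), total in bytes_out.most_common(20):
    print(ip, obj, total)  # feed the top IPs to a geolocation service
```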

Migrating a domain I bought from dreamhost to Amazon

I'm in the use case where I had nothing on this domain and nothing was started on either side; I just bought the domain on the wrong service.
I imagine it's possible to transfer ownership to AWS, so that I can manage the DNS from there rather than from Dreamhost.
I probably could have purchased the domain from Route 53 in the first place, but this is now done, and I don't want to wait for the year under Dreamhost to run out before I start using it. Nor do I want to use Dreamhost to manage this URL, since Dreamhost charges quite a lot more.
I've found the Amazon guide that covers my exact situation, but as per usual with these guides, they're afraid of providing a concrete example and retreat into abstractions with reused terminology for different meanings, resulting in an unusable jumble of uncertainties: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-inactive.html
So I've gotten to:
Step 3: Create records (inactive domains)
I've just manually edited the values that Route 53 created by default when I created that hosted zone to the ones I found in the Dreamhost DNS configuration, but I doubt that's what I have to do to transfer the domain, especially since the step after that basically says to change it back to what it was.
So what exactly am I supposed to do in order to transfer the domain to Amazon (Route 53)?
Domain registration and DNS resolution are related, but separate, concerns. It seems you've decided that you want Route 53 to serve your DNS entries. Given that, you have two choices.
Choice 1: Keep domain registered with dreamhost
If you do this, you need to instruct Dreamhost to look up DNS entries for your domain at Route 53. This can be accomplished by setting the NS servers at Dreamhost to point to Route 53; there are detailed instructions for this at AWS here. What you did in your Step 3 is backwards. Step 3 is just saying that if you want HOST.yourdomain.com, you add an entry 'HOST' into the hosted zone. You should not change the NS or SOA entries on the Route 53 hosted zone away from their original settings. You can simply delete the zone and start over.
Background: Dreamhost will populate the NS entries by default, and those are what get queried to resolve HOST.yourdomain.com. However, if you don't tell Dreamhost that it should refer requests to Route 53, it has no way of knowing that. You need to point the NS (nameserver) entries at Route 53's servers. That way, a user trying to resolve HOST.yourdomain.com will be directed to Route 53, and when it asks Route 53 for the IP, all will be well, provided you have set up your hosted zone to resolve that entry. This is what you will do in Step 4 of the AWS documentation.
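For reference, a small boto3 sketch (the hosted zone id is a placeholder) that prints the nameservers Route 53 assigned to your zone; these are the values to enter at Dreamhost:

```python
# Sketch: read the NS names Route 53 assigned to the hosted zone; enter these
# exactly as Dreamhost's nameservers. The zone id is a placeholder.
import boto3

route53 = boto3.client("route53")
zone = route53.get_hosted_zone(Id="Z1EXAMPLEZONEID")  # hypothetical zone id
for ns in zone["DelegationSet"]["NameServers"]:
    print(ns)  # e.g. something like ns-123.awsdns-45.com
```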
Choice 2: Transfer your domain registration to route53
This is a little more up-front work, but may be easier in the long run. You are permitted to transfer the domain to another registrar; you'll have to follow instructions on both the giving side (Dreamhost) and the gaining side (Route 53).
NOTE: ICANN does enforce a 60-day lock on moves. If you just registered your domain, you will need to wait 60 days before the transfer process can begin. Also, do not worry about 'double paying' for the year: you are required to purchase at least one more year of registration, but it will be appended to your expiration date (it won't start over). Once you move to Route 53, especially if you are already using Route 53 for the hosted zone, you will have one less place to pay and administer.
Additional NOTE: Because of the 60-day lock, if it has been less than 60 days since you registered the domain, choice #1 is the only option during that period if you want to serve DNS records from Route 53.
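If you want to check up front whether the 60-day lock (or anything else) currently blocks a transfer, a quick boto3 sketch; the domain name is a placeholder:

```python
# Sketch: ask Route 53 whether the domain can be transferred in right now.
# The Route 53 Domains API lives only in us-east-1; the domain is a placeholder.
import boto3

domains = boto3.client("route53domains", region_name="us-east-1")
resp = domains.check_domain_transferability(DomainName="yourdomain.com")
print(resp["Transferability"]["Transferable"])  # TRANSFERABLE / UNTRANSFERABLE
```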

Multiple pods and nodes management in Kubernetes

I've been digging through the Kubernetes documentation to try to figure out the recommended approach for this case.
I have a private movie API with the following microservices (pods):
- summary
- reviews
- popularity
Also I have accounts that can access these services.
How do I restrict access to services per account, e.g. account A can access all the services but account B can only access summary?
Account A could be doing 100x more requests than account B. Is it possible to scale services for specific accounts?
Should I set up the accounts as nodes?
I feel like I'm missing something basic here.
Any thoughts or animated gifs are very welcome.
It sounds like this level of control should be implemented at the application level.
Access to particular parts of your application, in this case the services, should probably be controlled via user permissions. A similar line of thought applies to scaling out the services: allow everything to scale, but rate-limit up front, e.g. account A can get 10 requests per second and account B can do 100x that (see the sketch below). Assigning accounts to nodes might also be possible, but should be avoided; you don't want to end up micromanaging the orchestration layer :)
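To make that concrete, a framework-free Python sketch of those two application-level controls, a per-account service allowlist plus a crude fixed-window rate limit; the account names and limits are made up:

```python
# Sketch: per-account service permissions and a fixed-window rate limit.
# Accounts, limits, and the bare handler are illustrative, not a framework.
import time
from collections import defaultdict

PERMISSIONS = {"account-a": {"summary", "reviews", "popularity"},
               "account-b": {"summary"}}
RATE_LIMITS = {"account-a": 1000, "account-b": 10}  # requests per second

window = defaultdict(int)  # (account, current second) -> request count
                           # note: old windows are never cleaned up in this sketch

def handle(account: str, service: str) -> str:
    if service not in PERMISSIONS.get(account, set()):
        return "403 Forbidden"
    bucket = (account, int(time.time()))
    window[bucket] += 1
    if window[bucket] > RATE_LIMITS.get(account, 0):
        return "429 Too Many Requests"
    return f"200 OK: routed to the {service} service"

print(handle("account-b", "reviews"))  # 403: B may only call summary
print(handle("account-b", "summary"))  # 200
```

In a real cluster you'd typically put this logic in an API gateway or ingress in front of the services rather than in each pod.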

Amazon S3 + CloudFront Queries

I am currently making a social-sharing-type app, and I have run into a problem.
First off, S3 in my experience is slow, so I need to sync the data to multiple servers around the world to make it faster for users.
So my question is: do I need to create a bucket for each country? Amazon has a list of their server locations. So for each user, do I calculate the nearest server and then upload there? How?
Next question: in my app, people can subscribe to others and check for their updates. Realistically, this would not create a speed difference: if someone in Singapore uploaded a piece of text and has a subscriber in the United States, it wouldn't be any quicker for that subscriber, because he has to download a piece of text stored all the way in Singapore.
All of this is making me confused! I personally find S3 very slow, which is why I am using CloudFront.
Any help? Am I misunderstanding the process? Thanks!
Buckets are not per country; they are per region (EU, US, Asia, etc.).
Secondly, you do not have to work out the closest URL to your S3 buckets yourself; that's what CloudFront is for. You just get a single URL for each bucket, and CloudFront will route the user's request to the closest edge location.
PS: Amazon replicates data uploaded to your bucket across all edge locations transparently.
Amazon in no way "automatically" replicates your content out to the edge locations. Instead, your content is copied to a single edge location if (and only if) the content is not there (it could be the first pull, or it could have expired) when a user tries to access it from that edge. It is a pull mechanism, not a push. See the "Download Distributions for HTTP Delivery" section of http://aws.amazon.com/cloudfront/
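You can watch this pull-through behaviour yourself; a small sketch with a hypothetical distribution domain and object key, where the first request typically misses and populates the edge and the second hits it:

```python
# Sketch: request the same object twice through CloudFront and inspect the
# X-Cache response header. The distribution domain and key are placeholders.
import requests

url = "https://d111111abcdef8.cloudfront.net/posts/hello.txt"  # hypothetical

first = requests.get(url)
print(first.headers.get("X-Cache"))   # typically "Miss from cloudfront"

second = requests.get(url)
print(second.headers.get("X-Cache"))  # typically "Hit from cloudfront"
```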