When creating a Service, under VPC subnets, the tooltip says "Choose the subnets in this VPC that the task scheduler should consider for placement. Only private subnets are supported at this time".
However, AWS documentation and diagrams show Fargate tasks in public subnets.
Can somebody please explain if I am misunderstanding these seemingly conflicting messages?
Related
I have a small Fargate cluster with a service running, and I found that if I disable the public IP, the container won't start because it has no route to pull its image.
The ELB for ECS Fargate is part of a subnet that has:
an internet gateway configured and attached
a route table allowing unrestricted outbound traffic
a security policy on the ECS service allowing unrestricted outbound traffic
DNS enabled
My understanding is that the internet gateway is a NAT and that the above should permit outbound internet access; however, I can't make it work. What else is missing?
Just like all other resources in your AWS VPC, if you don't attach a public IP address, then it either needs to be placed in a subnet with a route to a NAT Gateway to reach things outside the VPC, or it needs VPC endpoints to reach those resources.
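For illustration, here is a minimal boto3 sketch of both options. Every resource ID and the us-east-1 region are placeholders, and the exact set of endpoints you need depends on what the task talks to; for image pulls, the two ECR interface endpoints plus an S3 gateway endpoint are the usual minimum:

```python
import boto3

ec2 = boto3.client("ec2")

# Option 1: route the private subnet's outbound traffic through a NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # route table of the private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",   # NAT Gateway living in a public subnet
)

# Option 2: keep traffic inside the VPC with interface endpoints for ECR
# (plus a gateway endpoint for S3, which ECR uses for image layers).
for service in ("com.amazonaws.us-east-1.ecr.api",
                "com.amazonaws.us-east-1.ecr.dkr",
                "com.amazonaws.us-east-1.logs"):
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName=service,
        VpcEndpointType="Interface",
        SubnetIds=["subnet-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```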
I have set up an ELB for a persistent public & subnet IP. As far as I can tell my subnet has unrestricted outgoing internet access (internet gateway attached and a route opening up all outgoing traffic to 0.0.0.0/0). I'm unsure if the service setup will configure the EC2 to use this first and then attempt to set up the container. If not, then it probably doesn't apply.
ELB is for inbound traffic only, it does not provide any sort of outbound networking functionality for your EC2 or Fargate instance. The ELB is not in any way involved when ECS tries to pull a container image.
Having a volatile public IP address is a bit annoying, as my understanding is the security policy will apply to both the ELB/Elastic-provided IP and this one.
What "security policy" are you referring to? I'm not aware of security policies on AWS that are applied directly to IP addresses. Assuming you mean the Security Group when you say "security policy", your understanding is incorrect. Both the EC2 or Fargate instance and the ELB should have different security groups assigned to them. The ELB would have a security group allowing all inbound traffic, if you want it to be public on the Internet. The EC2 or Fargate instance should have a security group only allowing inbound traffic from the ELB (by specifying the ELB's security group ID in the inbound rule).
I want to point out that you say "EC2" in your question and never mention Fargate, yet you tagged your question with Fargate twice and didn't tag it with EC2. EC2 and Fargate are separate compute services on AWS; you would be using one or the other. It doesn't really matter for the issue you are encountering, but it helps to be clear in your questions.
I have an ECS Fargate cluster created inside a VPC.
If I want to access the above-mentioned AWS services from a Fargate task, what needs to be done?
I see the following options in the different documentation I have read:
Create a PrivateLink to each AWS service
Create a NAT gateway
I'm not sure which one is the correct and recommended option.
To be clear, an ECS cluster is an abstracted entity and does not dictate where you connect the workloads you are running within it. If we stick to the Fargate launch type, this means that tasks can be launched either in a private subnet or in a public subnet:
If you launch them in a public subnet (and you assign a public IP to the tasks), then these tasks can reach the public endpoints of the services you mentioned and nothing else is required from a network routing perspective (see the sketch after this list).
If you launch them in a private subnet you have two options that are those you called out in your question.
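As a concrete illustration of the first option, here is a minimal boto3 sketch of launching a Fargate task with a public IP; the cluster, task definition, subnet, and security group values are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Public-subnet case: the task gets a public IP and can reach public
# AWS endpoints directly. All names and IDs below are placeholders.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task:1",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # a public subnet
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",  # use DISABLED in a private subnet
        }
    },
)
```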
I don't think there is a golden rule for what's best. The decision is multi-dimensional (cost, ease of setup, features, observability and control, etc.). I'd argue the NAT GW route is easier to set up regardless of the number of services you need to add, but you may lose a bit of visibility, and all your traffic will leave the VPC (for some customers this is OK, for others it's not). PrivateLink will give you tighter control, but it may be more work to set up (especially if you need to reach many services).
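If you do go the NAT GW route, a minimal boto3 sketch of the setup, with placeholder IDs, might look like:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT Gateway in a *public* subnet.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",           # placeholder public subnet
    AllocationId=eip["AllocationId"],
)["NatGateway"]["NatGatewayId"]

# Wait until it is available, then point the private subnet's default
# route at it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw])
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw,
)
```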
Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
load balancers can't use the master IP as a backend in any way
routers can't have a next-hop IP from another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: Besides creating a standalone proxy or using a third-party router...
I have multiple GCP projects; the kube clusters are in a separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable by simply adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
I want my CI (which is in different project) to be able to access private kube master.
For this, each GKE private cluster has Master Authorized Networks, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If there are no unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNAT to "unify" the source of your requests so that it matches the authorized addresses.
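As a sketch, assuming your CI egresses from a single fixed IP, the google-cloud-container client can set this. The project, location, cluster name, and CIDR below are all placeholders, and note that this update replaces the entire authorized-networks list rather than appending to it:

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Placeholder project/location/cluster.
name = "projects/my-project/locations/us-central1/clusters/my-cluster"

# WARNING: this replaces the full list of authorized networks, so include
# every CIDR that should keep access, not just the new one.
update = container_v1.ClusterUpdate(
    desired_master_authorized_networks_config=container_v1.MasterAuthorizedNetworksConfig(
        enabled=True,
        cidr_blocks=[
            container_v1.MasterAuthorizedNetworksConfig.CidrBlock(
                display_name="ci-runner",
                cidr_block="203.0.113.10/32",  # your CI's fixed egress IP
            )
        ],
    )
)
client.update_cluster(name=name, update=update)
```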
Additionally, you can make a private cluster without a public address. This limits access to the master endpoint to the nodes allocated in the cluster's VPC. However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
it is not supported by Google; workarounds exist (but they are dirty):
https://issuetracker.google.com/issues/244483997 (custom routes export/import has no effect)
Google finally added custom routes export to the VPC peering with the master subnet, so the problem is now gone: you can access a private master from a different VPC or through VPN.
I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound traffic from my machine. Next, I created two RDS instances (both PostgreSQL), put them into that VPC, and linked them to the VPC security group. Weirdly, I can only connect to one of them; for the other one I get a generic timeout error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed in the same subnets, and I am trying to connect from my machine over the internet.
To fix your issue, please verify the following (a quick diagnostic sketch follows the list):
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS instance (the one that isn't working) is in a private subnet, you should consider using a bastion to access it, because by default a private subnet has no route to your internet gateway.
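Here is the diagnostic sketch mentioned above, using boto3; the DB identifier is a placeholder. A public subnet will show a 0.0.0.0/0 route whose GatewayId starts with igw-:

```python
import boto3

rds = boto3.client("rds")
ec2 = boto3.client("ec2")

# Placeholder identifier for the instance that times out.
db = rds.describe_db_instances(DBInstanceIdentifier="my-broken-db")["DBInstances"][0]

# Connecting from the internet also requires this flag to be True.
print("PubliclyAccessible:", db["PubliclyAccessible"])

for sn in db["DBSubnetGroup"]["Subnets"]:
    subnet_id = sn["SubnetIdentifier"]
    # Look up the route table explicitly associated with the subnet; if
    # none is associated, the subnet uses the VPC's main route table.
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    for table in tables:
        for route in table["Routes"]:
            print(subnet_id, route.get("DestinationCidrBlock"), route.get("GatewayId"))
```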
Still, you will find below a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible from the internet (a good practice is to deploy only managed instances there, such as load balancers).
Create 2 private subnets with a NAT gateway and the correct route configuration to it.
Create a bastion in your public subnets to be able to access your instances in the private ones.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start that deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
I have a CodeBuild project that I'm launching inside a VPC. When outside of the VPC, the project runs and logs into CloudWatch Logs. I need to move it inside the VPC so that it can access the database. When inside the VPC, the install stage fails and CodeBuild fails to write anything to CloudWatch Logs. The console page for the build says:
Error: The specified log stream does not exist.
I expected security groups to be the problem, but flow logs are on, and they aren't showing any blocked traffic for the CodeBuild ENI.
There is an internet gateway for the VPC, and the subnet has routes to the internet using the gateway.
The CodeBuild project is built by CloudFormation. Logs are written when the VpcConfig of the CodeBuild project is commented out, but not when it is included. I believe that demonstrates that IAM permissions are not the problem.
Any suggestions are appreciated.
The CodeBuild VPC documentation buries this tidbit at the end of its best practices:
When you set up your AWS CodeBuild projects to access your VPC, choose private subnets only.
By which they mean that CodeBuild will only work in a private subnet with a NAT. Moving my CodeBuild project from a public subnet into a private subnet fixed my error.
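For reference, the CloudFormation VpcConfig maps to the same fields in the API; a minimal boto3 sketch of the equivalent change, with placeholder IDs, would be:

```python
import boto3

codebuild = boto3.client("codebuild")

# Point the project at private subnets (all IDs are placeholders).
# The subnets' route tables should send 0.0.0.0/0 to a NAT Gateway,
# not directly to an internet gateway.
codebuild.update_project(
    name="my-project",
    vpcConfig={
        "vpcId": "vpc-0123456789abcdef0",
        "subnets": ["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```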