Target group unhealthy for NLB - aws-api-gateway

I am trying to connect a Network Load Balancer with API Gateway. This is the architecture.
What has been done so far:
Created a VPC with a private and a public subnet, and a security group with inbound rules for both HTTP and HTTPS.
Created a new VPC endpoint (AWS services) and selected execute-api from the services. Selected the VPC, subnet (public) and security group. A full access policy has been assigned.
Created a new target group (IP addresses), protocol HTTP and port 80, with the VPC created above. In the targets, added the IP from the subnet which was assigned to the VPC endpoint above (a lookup sketch follows below).
Created a Network Load Balancer (internet-facing, IPv4, with the VPC and public subnet created above). Added a listener on TCP 80 and assigned the target group created above.
Created an API Gateway of type REST API Private. Assigned the VPC endpoint ID which was created above. Attached the resource policy mentioned below to the REST API:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-west-2:*:*/*"
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-west-2:*:*/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "{vpce id}"
                }
            }
        }
    ]
}
After completing all the steps, the target group health status is unhealthy, and when I try to access the Network Load Balancer using the static IP assigned to it, it gives this error:
Failed to connect to **.***.***.*** port 80 after 21040 ms: Timed out
What am I missing here?
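For reference, here is a minimal boto3 sketch (hypothetical IDs and ARN) of the target-registration step described above, assuming the intent is to register the interface endpoint's ENI private IPs as the IP targets:

# Minimal boto3 sketch (hypothetical IDs): look up the interface endpoint's ENI
# private IPs and register them as IP targets in the target group.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
elbv2 = boto3.client("elbv2", region_name="us-west-2")

endpoint = ec2.describe_vpc_endpoints(
    VpcEndpointIds=["vpce-0123456789abcdef0"]        # hypothetical endpoint ID
)["VpcEndpoints"][0]

enis = ec2.describe_network_interfaces(
    NetworkInterfaceIds=endpoint["NetworkInterfaceIds"]
)["NetworkInterfaces"]

# Port is omitted, so each target uses the port configured on the target group.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:111111111111:"
                   "targetgroup/example/0123456789abcdef",   # hypothetical ARN
    Targets=[{"Id": eni["PrivateIpAddress"]} for eni in enis],
)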

Related

Can only a private subnet access services via VPC Endpoint?

Will only a private subnet be able to access the AWS VPC Endpoint?
I followed some tutorials across the web, where everybody was using a private subnet to establish a connection to other services via a VPC endpoint. Can't a public subnet make a private connection through a VPC endpoint?
Similarly, is it required that all the subnets on the client side (VPC endpoint) be private in order to establish a private link (VPC endpoint services)?
Access to your VPC endpoints is provided by adding a specific route to a route table.
For example, you have private and public subnets. They have different associated route tables, and the route table associated with the private subnet has a route for your VPC endpoint.
So you can add a route to your VPC endpoint to the route table associated with the public subnet (or you can use one route table for both the public and private subnets).
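A minimal boto3 sketch of that idea (hypothetical IDs), assuming a gateway endpoint, which is the endpoint type reached through route-table entries:

# Minimal boto3 sketch (hypothetical IDs): add the public subnet's route table
# to a gateway VPC endpoint so instances in that subnet can reach the service.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",      # hypothetical gateway endpoint, e.g. S3
    AddRouteTableIds=["rtb-0aaaabbbbccccdddd"],  # route table associated with the public subnet
)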

How to launch an ECS Fargate container without a public IP?

I have an ECS Fargate container app that serves API requests over the public internet.
My understanding is that this API service container can be deployed in the public subnet and configured with the ALB DNS and target group. As the target group forwards the traffic to the private IP of the ECS task, I guess we don't need a public IP to be enabled when launching the task. However, when I attempt this on ECS task launch, I get the error "Resourceinitializationerror: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.eu-west-2.amazonaws.com/": dial tcp 52.94.53.88:443: i/o timeout"
If this is not workable and we need to enable a public IP on the task launch, I'd prefer to restrict the public IP port access to the web service ALB only, for best security practice. Could someone suggest a workable approach for this use case, please? Thanks.
"I'd prefer to restrict the public IP port access only to web service ALB for best security practice."
Have you tried doing that? It should work fine. Since security groups are stateful, as long as the outbound rules are open, you should be able to lock down the inbound rules on the security group.
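For example, a minimal boto3 sketch (hypothetical IDs and port) of locking the task's security group down so that only the ALB's security group can reach the container port:

# Minimal boto3 sketch (hypothetical IDs): allow inbound traffic to the task's
# security group only from the ALB's security group, on the container port.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0task0123456789abc",             # security group of the Fargate task
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,                        # container port (assumption)
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-0alb0123456789abc"}],  # ALB security group
    }],
)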
If you want to remove the public IP completely, then you will need to either deploy the Fargate task to a private subnet with a route to a NAT gateway, or add VPC endpoints to your VPC for the AWS services that the task needs to access, like ECR.
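If you go the VPC-endpoint route instead, here is a sketch of the endpoints a Fargate task typically needs in order to pull from ECR (boto3, hypothetical IDs): the ecr.api, ecr.dkr, and logs interface endpoints, plus the S3 gateway endpoint used for image layers.

# Minimal boto3 sketch (hypothetical IDs): interface endpoints for ECR and
# CloudWatch Logs, plus the S3 gateway endpoint ECR uses for image layers.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

for service in ("ecr.api", "ecr.dkr", "logs"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.eu-west-2.{service}",
        SubnetIds=["subnet-0aaaabbbbccccdddd"],  # the task's private subnet
        SecurityGroupIds=["sg-0endpoints012345"],  # must allow 443 from the task
        PrivateDnsEnabled=True,
    )

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-2.s3",
    RouteTableIds=["rtb-0aaaabbbbccccdddd"],
)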

AWS API Gateway responds with 403 when first going through Alert Logic WAF

I've seen a lot of questions on this topic, but none had answers that worked for my particular situation.
Context
I have a domain name foo.bar.com mapped in Route 53 to an Application Load Balancer in a VPC
The ALB routes to the WAF in my Alert Logic instance, hosted in the same VPC
I have a "website" in Alert Logic that points to xyz.execute-api.us-east-1.amazonaws.com via HTTPS over port 443
I have an API defined in API Gateway with an Invoke URL the same as above xyz.execute-api.us-east-1.amazonaws.com
My API has a route /hello with an Integration that points to an internal Application Load Balancer in the same VPC and subnets as everything mentioned above
Problem
Doing a GET request to https://xyz.execute-api.us-east-1.amazonaws.com succeeds from Postman while connected to the VPN for the given VPC
Doing a GET request to foo.bar.com fails from Postman - whether or not connected to the VPN - with a status code of 403, a body of { "message": "Forbidden" }, and an x-amzn-ErrorType of ForbiddenException
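A small Python reproduction of the two requests (using the host names above), printing the status code and the x-amzn-ErrorType header for comparison:

# Reproduction sketch: compare the direct execute-api call with the call that
# goes through the custom domain / Alert Logic WAF path.
import requests

for url in ("https://xyz.execute-api.us-east-1.amazonaws.com",
            "https://foo.bar.com"):
    resp = requests.get(url, timeout=10)
    print(url, resp.status_code, resp.headers.get("x-amzn-ErrorType"), resp.text)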
QUESTION: What am I missing?

IAP connector not routing requests to on-prem: "No healthy upstream"

I'm trying to set up Identity-Aware Proxy for my backend services, parts of which reside in GCP and the rest on-prem, according to the instructions given in the following links:
Enabling IAP for on-premises apps and
Overview of IAP for on-premises apps
After following the guides, I ended up in a partial state where the service running on GCP, served at an HTTPS endpoint, is perfectly accessible via IAP. However, the app running on-prem is not reachable through the pods* and the external load balancer*.
Current Architecture followed:
Steps Followed
On GCP project
Created a VPC network in one region with one subnet (asia-southeast1 in my case)
Used the IAP connector https://github.com/GoogleCloudPlatform/iap-connector
Configured the mapping for 2 domains.
For app in GCP
source: gcp.domain.com
destination: app1.domain.com (serving at https endpoint)
For the app on-prem (another GCP project)
source: onprem.domain.com
destination: app2.domain.com (serving at https endpoint but not exposed to internet)
Configured a VPN tunnel between both projects so the networks get peered.
Enabled IAP for the load balancer which is created by the deployment.
Added the corresponding accounts to allow access to the services with the IAP web-user role.
On-prem
Created a VPC network in a region with one subnet (asia-southeast1)
Created a VM on the VPC in that region
Assigned that VM to an instance group
Created an internal HTTPS load balancer and chose the instance group as backend
Secured the load balancer HTTP with SSL
Set up a VPN tunnel to the first project
What I have tried:
Logged in to pods and pinged different pods. All pods were reachable.
Logged in to nodes and pinged the remote VM on ports 80 and 443; both are reachable.
Pinged the remote VM from inside the pods. Not reachable (see the reachability sketch below).
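For that last check, a small Python sketch (hypothetical IP) of what the TCP reachability test from inside a pod looks like:

# Minimal TCP reachability sketch (hypothetical IP): run from inside a pod to
# check whether the remote VM's ports are reachable over the VPN peering.
import socket

REMOTE_VM_IP = "10.20.0.5"   # hypothetical on-prem VM address

for port in (80, 443):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((REMOTE_VM_IP, port))
        print(f"port {port}: reachable")
    except OSError as exc:
        print(f"port {port}: not reachable ({exc})")
    finally:
        sock.close()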
Expected Behaviour:
The user requests the load balancer at app1.domain.com; IAP authenticates and authorizes the user with OAuth and grants access to the web app.
The user requests the load balancer at app2.domain.com; IAP authenticates and authorizes the user with OAuth and grants access to the web app running on-prem.
Actual Behaviour
A request to app1.domain.com prompts the OAuth screen; after authenticating, the website is returned to the user.
A request to app2.domain.com prompts the OAuth screen; after authenticating, the browser returns 503 - "No healthy upstream".
Note:
I am using a separate GCP project to simulate on-premise.
Both projects are peered via VPN tunnel.
Both peering projects have subnets in the same region.
I have used an internal HTTPS load balancer in my on-prem project to make my VM visible to my host project so that the external load balancer can route requests to the VM's HTTPS endpoint.
** I suspect that if the pods could reach the remote VM, the problem might well be resolved. It's just a wild guess.
Thank you so much, guys. I'm looking forward to your responses.

Scenarios for AzureKeyVault as a service tag in an inbound NSG rule

I am new to networking and have some questions regarding some of the service tags in Azure NSGs.
If you see below, Azure has multiple options for service tags while defining inbound NSG rules. But I fail to understand the scenarios for AzureKeyVault, Storage, Cosmos DB, etc. In which scenarios do these services initiate the request? Why do we need these service tags in an inbound NSG?
"But I fail to understand the scenarios for AzureKeyVault, Storage, Cosmos DB, etc. In which scenarios do these services initiate the request? Why do we need these service tags in an inbound NSG?"
Service tags are not as intuitive in inbound NSG rules as they are in outbound rules. For example, if you want to deny all outbound internet traffic and allow only traffic to specific Azure services such as AzureKeyVault or AzureCosmosDB, you can do so by using service tags as the destination in your NSG outbound rules.
Similarly, if you want to allow or deny traffic from an Azure service to a virtual network, IP address, or application security group, you can do so by using service tags as the source in your NSG inbound rules. For example, you can set the service tag AppService as the source and specific IP addresses (some specific VM IP addresses) as the destination; then you can restrict App Service to accessing only those resources in your VM, like an API or a database.
For more details, you can view the scenarios for securing your Azure service.
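As an illustration, a hedged sketch (assuming the azure-mgmt-network Python SDK and hypothetical resource names) of an inbound rule that uses a service tag as the source, matching the AppService example above:

# Minimal sketch (hypothetical resource names): inbound NSG rule allowing
# traffic whose source is the AppService service tag to a specific VM IP.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    name="AllowAppServiceInbound",
    priority=200,
    direction="Inbound",
    access="Allow",
    protocol="Tcp",
    source_address_prefix="AppService",       # service tag as the source
    source_port_range="*",
    destination_address_prefix="10.0.0.4",    # specific VM IP (hypothetical)
    destination_port_range="443",
)

client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", rule.name, rule
).result()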