How to connect API Gateway to a private/internal Elastic Beanstalk?

I am trying to connect API Gateway to an internal Elastic Beanstalk application (custom VPC, load balancer facing internal private subnets, instances in private subnets).
I managed to create the VPC and configure the Beanstalk app as internal (everything is green). From what I have read, you can connect API Gateway to a VPC using a VPC Link, and a VPC Link is tied to a Network Load Balancer. But that balancer only sees the Beanstalk EC2 instances, which is not OK.
It should target the Beanstalk load balancer instead, because Beanstalk has auto scaling (and can create or remove instances based on your configuration).
Is this possible, and if so, how?
Thank you,
*Calling the internal Beanstalk from a Lambda inside the VPC works, so one workaround is API Gateway -> Lambda -> internal Beanstalk.

It is actually possible, by registering the IP addresses of the Application Load Balancer (the ALB of the Beanstalk app) as targets in the NLB (Network Load Balancer) target group. The extra piece needed is a Lambda function that updates the NLB targets on some event (e.g. a scheduled CloudWatch rule), because the ALB's IPs can change: resolve the ALB's DNS name and register the new IPs (there are Python scripts for this on the internet, and it is also easy to do in Node.js); see the sketch below.
So in the end you can use a VPC Link. But I realise this is more of an exercise, and another approach would be better for this kind of application.
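A minimal sketch of such a sync Lambda in Python with boto3, assuming an IP-type target group on the NLB and using a hypothetical DNS name, ARN and port; trigger it from a scheduled CloudWatch Events/EventBridge rule:

```python
import socket

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical values: the ALB created by Beanstalk and the IP-type target group behind the NLB.
ALB_DNS_NAME = "internal-my-beanstalk-alb-123456.eu-west-1.elb.amazonaws.com"
NLB_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/alb-ips/abc123"
PORT = 80


def handler(event, context):
    """Resolve the ALB's current IPs and sync them into the NLB's target group."""
    current_ips = {info[4][0] for info in socket.getaddrinfo(ALB_DNS_NAME, PORT, proto=socket.IPPROTO_TCP)}

    health = elbv2.describe_target_health(TargetGroupArn=NLB_TARGET_GROUP_ARN)
    registered_ips = {t["Target"]["Id"] for t in health["TargetHealthDescriptions"]}

    to_add = current_ips - registered_ips
    to_remove = registered_ips - current_ips

    if to_add:
        elbv2.register_targets(
            TargetGroupArn=NLB_TARGET_GROUP_ARN,
            Targets=[{"Id": ip, "Port": PORT} for ip in to_add],
        )
    if to_remove:
        elbv2.deregister_targets(
            TargetGroupArn=NLB_TARGET_GROUP_ARN,
            Targets=[{"Id": ip, "Port": PORT} for ip in to_remove],
        )
```

The Lambda's role needs elasticloadbalancing:DescribeTargetHealth, RegisterTargets and DeregisterTargets permissions, and the NLB target group must use the ip target type since the registered targets are resolved addresses rather than instances.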

Related

Possible to get static IP address for the Container deployed to Cloud Run?

I would like to deploy a container image to Google Cloud Run (fully managed). I followed the instructions:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
I was wondering whether I can assign a static IP to the container or not. Please note that I am not using a VM instance. I am new to this service. I would really appreciate any help with this issue.
You can get a static IP for your Cloud Run service (not for individual containers, as many containers can be running the same app) by creating a "Cloud HTTP(S) Load Balancer" that serves on a static IP and putting your service behind it.
See the relevant section in the documentation on how to create a load balancer and add a "serverless network endpoint group" behind it that routes the traffic to Cloud Run.
There is also a step-by-step guide that sets up a load balancer with a static IP at https://cloud.google.com/run/docs/multiple-regions.
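As a rough, hedged sketch of just the serverless NEG step in Python using the google-cloud-compute client (the project, region and service names are placeholders, and the backend service, URL map, target proxy and forwarding rule with the reserved static IP still need to be created as in the linked docs):

```python
from google.cloud import compute_v1

# Placeholders: substitute your own project, region and Cloud Run service name.
PROJECT = "my-project"
REGION = "us-central1"

# A serverless network endpoint group that points at the Cloud Run service.
neg = compute_v1.NetworkEndpointGroup(
    name="cloud-run-neg",
    network_endpoint_type="SERVERLESS",
    cloud_run=compute_v1.NetworkEndpointGroupCloudRun(service="my-cloud-run-service"),
)

client = compute_v1.RegionNetworkEndpointGroupsClient()
operation = client.insert(
    project=PROJECT,
    region=REGION,
    network_endpoint_group_resource=neg,
)
operation.result()  # wait for the NEG creation to complete
```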
If you mean "how do I get static IPs for the outbound connections my Cloud Run app makes", that's a different question with a different answer (it will be possible soon).
Cloud Run is a fully managed, serverless containerised service, so you won't get access to an IP address. You do get a fixed URL for the service (the hash in the service URL is unique to the project-service combination).
This feature is now available for Google Cloud Run services:
https://cloud.google.com/run/docs/configuring/static-outbound-ip

How to expose AWS Fargate ECS containers to internet with Route53 DNS?

I have an ECS task running on Fargate and my service gets a public IP address automatically. How do I simply expose the Fargate task to the internet with Route53 DNS? I spent a whole day looking around the internet and couldn't find an example of this simplest possible scenario, where I just want to expose a port from the task to the internet and map a Route53 record to point to its IP address.
If I understood correctly from the minimal information I found, I would have to create a VPC endpoint, but I couldn't find information about how to route the traffic to a task/container.
I am also having this problem. I was able to route traffic to a single-container service in public subnets via an Application Load Balancer; however, now I cannot reproduce this. Did you try an ALB yet? With an ALB in place, the Route53 part is just an alias record pointing at it (see the sketch below).
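If the ALB approach works for you, the Route53 side can be a single alias A record pointing at the load balancer. A minimal boto3 sketch, assuming an existing hosted zone and using placeholder names and IDs:

```python
import boto3

route53 = boto3.client("route53")
elbv2 = boto3.client("elbv2")

# Look up the ALB's DNS name and canonical hosted zone (placeholder load balancer name).
lb = elbv2.describe_load_balancers(Names=["my-fargate-alb"])["LoadBalancers"][0]

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder: your Route53 hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",  # placeholder record name
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": lb["CanonicalHostedZoneId"],
                    "DNSName": lb["DNSName"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```

An alias record is preferable to an A record with the task's IP, because Fargate task IPs change whenever a task is replaced, while the ALB's DNS name stays stable.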

How to access the Kubernetes external IP as HTTPS

I've deployed a Django app on Azure Kubernetes Service using a LoadBalancer service. So far, by accessing the external IP of the load balancer, I'm able to access my application, but I need to expose the app for HTTPS requests.
I'm new to Kubernetes and unable to find any article that provides these steps, so please help me with the steps/actions I need to perform to make this work.
You need to expose your application using an Ingress. Here is the doc on how to do it in Azure Kubernetes Service. A sketch of creating such an Ingress follows.
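For illustration, a minimal sketch with the Kubernetes Python client, assuming an NGINX ingress controller is installed in the cluster, the Django app is exposed through a Service named django-service, and a TLS secret already exists (the host, service and secret names are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="django-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",  # assumes an NGINX ingress controller
        tls=[client.V1IngressTLS(hosts=["app.example.com"], secret_name="django-tls")],
        rules=[client.V1IngressRule(
            host="app.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="django-service",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

Once the ingress controller terminates HTTPS and forwards traffic, the Service in front of the Django pods can be switched from type LoadBalancer to ClusterIP.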

Cannot create API Management VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the dropdown box for Target NLB is empty. If I attempt to force the issue via the AWS CLI, an entry is created, but the status says the NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group which limits access to a narrow range of IPs. I didn't think this would matter since VPC links are supposed to connect over private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.
Indeed, as suggested in a related post, my problem stemmed from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to configure the VPC Link as described in the AWS documentation.
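For anyone creating the link from code instead of the console, a minimal boto3 sketch against a REST API (the NLB ARN below is a placeholder; HTTP APIs use the separate apigatewayv2 CreateVpcLink, which takes subnets and security groups instead of a load balancer ARN):

```python
import boto3

apigw = boto3.client("apigateway")

# The target must be a *Network* Load Balancer ARN; an ALB ARN is rejected as malformed.
response = apigw.create_vpc_link(
    name="my-vpc-link",
    description="Link to the internal NLB in front of the ECS service",
    targetArns=["arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123"],
)

# Provisioning takes a few minutes; poll until the status reaches AVAILABLE.
status = apigw.get_vpc_link(vpcLinkId=response["id"])["status"]
print(status)  # PENDING -> AVAILABLE (or FAILED)
```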

Whitelist traffic to mysql from a kubernetes service

I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container to the pod running your application, which passes traffic through to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands-on, this codelab will walk you through taking an app from local development to running on a Kubernetes cluster.
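A rough sketch of that sidecar pattern with the Kubernetes Python client; the proxy image tag, flags and the instance connection name are assumptions, so check the current Cloud SQL Auth Proxy docs for the exact invocation:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Assumed image/flags for the Cloud SQL Auth Proxy and a placeholder instance connection name.
proxy = client.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0",
    args=["--port=3306", "my-project:us-central1:my-instance"],
)

app = client.V1Container(
    name="app",
    image="gcr.io/my-project/my-app:latest",  # placeholder application image
    # The app talks to MySQL on localhost; the proxy forwards the connection to Cloud SQL.
    env=[client.V1EnvVar(name="DB_HOST", value="127.0.0.1"),
         client.V1EnvVar(name="DB_PORT", value="3306")],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-sql-proxy"),
    spec=client.V1PodSpec(containers=[app, proxy]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because the connection goes through the proxy and is authenticated with IAM, no IP whitelisting is needed at all with this approach.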
I am using Google Cloud Platform, so my solution was to add the external IP of the Google Compute Engine VM instance to the whitelist.