How to access the Redshift regional endpoint from within a restricted VPC? - amazon-redshift

I need to access the Redshift regional endpoint programmatically to restore a table using the boto3 APIs. My code runs in a Lambda function bound to a restricted subnet (no NAT).
For other AWS services used inside this Lambda, such as Glue and Athena, I use VPC endpoints, but there is no VPC endpoint for Redshift, so my boto3 call fails with "Connect timeout on endpoint URL: https://redshift.us-east-1.amazonaws.com/".
I believe the only way to make this work is to route traffic to a NAT instance/gateway. Is there any other solution?
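For context, the call in question is roughly the following minimal sketch (the cluster, snapshot, and table names are placeholders):

```python
import boto3

# Redshift control-plane client; this is the call that times out, because the
# regional endpoint (redshift.us-east-1.amazonaws.com) is only reachable over
# the Internet from this restricted subnet.
redshift = boto3.client("redshift", region_name="us-east-1")

response = redshift.restore_table_from_cluster_snapshot(
    ClusterIdentifier="my-cluster",            # placeholder
    SnapshotIdentifier="my-cluster-snapshot",  # placeholder
    SourceDatabaseName="mydb",
    SourceTableName="events",
    NewTableName="events_restored",
)
print(response["TableRestoreStatus"]["Status"])
```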

You are correct that there is no VPC Endpoint available for Amazon Redshift. Any API calls will need to be made via the Internet.
This could be accomplished by installing a NAT Gateway in a public subnet of your VPC.
An alternative approach would be to create an additional AWS Lambda function that is not associated with your VPC. This means that it will have access to the Internet (but not the VPC).
Your existing Lambda function could call the 'external' Lambda function, which would then call Amazon Redshift. However, this would require an API Gateway and a VPC Endpoint for API Gateway because there is no VPC Endpoint for Lambda.
See a similar discussion on Reddit: Access Lambda service from Lambda in a VPC : aws
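As a rough sketch of that pattern (the API ID, stage, and path are placeholders, not values from this setup), the in-VPC function would call a private API Gateway invoke URL that fronts the 'external' Lambda, resolving through the execute-api VPC endpoint:

```python
import json
import urllib.request

# Invoke URL of a *private* API Gateway API that fronts the 'external' Lambda.
# Reachable from the VPC via an execute-api VPC endpoint (placeholder values).
API_URL = "https://abc123def4.execute-api.us-east-1.amazonaws.com/prod/restore-table"

payload = json.dumps({"table": "events"}).encode("utf-8")
req = urllib.request.Request(
    API_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(resp.read().decode())
```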
You could also go via Amazon SNS and a VPC Endpoint for SNS, with SNS then triggering the Lambda function (but you would not receive a 'return signal' when it has completed).
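A minimal sketch of the SNS route, assuming a topic the 'external' Lambda is subscribed to (the topic ARN is a placeholder); the publish goes through the SNS VPC endpoint and is fire-and-forget:

```python
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Publish through the SNS VPC endpoint; the topic triggers the 'external'
# Lambda asynchronously, so no return value comes back to this function.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:restore-table-requests",  # placeholder
    Message=json.dumps({"table": "events"}),
)
```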

Related

Using HTTP API to call multiple services running on AWS ECS

My goal here is to deploy two Spring Boot services on AWS ECS Fargate in a private subnet and access them via AWS API Gateway. Basically, I want to use a single HTTP API and, based on the path, have it call the appropriate service. I am using VPC Links to reach the services running in the private subnet and Cloud Map for service discovery. First of all, is this assumption even correct, i.e. can we use a single HTTP API to call two different services based on a path?
Some notes on how I created the ECS services:
ECS Service A is deployed in a private subnet, with no public IP enabled and service discovery enabled. While enabling service discovery I chose the DNS record type SRV, giving a port number and a TTL of 60 seconds.
ECS Service B is deployed similarly.
Both ECS Service A and B have separate service discovery endpoints.
Now in API Gateway, the steps I followed were:
1. Created a new HTTP API using the defaults; this means the default stage and no routes or integrations configured yet.
2. Created a VPC Link for the HTTP API by giving it a name (service-a-vpclink) and assigning a VPC, subnet, and the appropriate security group (the one assigned to the ECS service for service A).
3. Created a route where the method is "ANY" and the path is "$default" and assigned an integration to it. With this I am able to reach all the endpoints of service A running in the private subnet. (So all good here, as this shows that I can reach a service running in a private subnet through API Gateway.)
4. The integration mentioned in point 3 is of type "Private Resource" with target service "Cloud Map", selecting the namespace and the appropriate service (serviceA) along with the VPC Link created in step 2.
But this is not what I want to do. I want something like the below:
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceA/any-serviceA-endpoints", where /serviceA is a path configured in API Gateway and any-serviceA-endpoints are the actual endpoints of the backend service, should route to service A's endpoints.
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceB/any-serviceB-endpoints", where /serviceB is a path configured in API Gateway and any-serviceB-endpoints are the actual endpoints of the backend service, should route to service B's endpoints.
Here I attach separate integrations to the path /serviceA and to the path /serviceB, but this does not work; instead the response is a 404 Not Found.
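For reference, the intended routing could be expressed roughly like this with boto3 (the integration IDs are placeholders, and the {proxy+} greedy path variable is an assumption about how sub-paths would be matched, not what I have configured):

```python
import boto3

apigw = boto3.client("apigatewayv2", region_name="eu-west-2")

API_ID = "uzhgtf6t8u"  # from the invoke URL above

# Route all sub-paths of /serviceA and /serviceB to their own integrations.
# Integration IDs are placeholders for the Cloud Map private integrations.
apigw.create_route(
    ApiId=API_ID,
    RouteKey="ANY /serviceA/{proxy+}",
    Target="integrations/aaaa111",
)
apigw.create_route(
    ApiId=API_ID,
    RouteKey="ANY /serviceB/{proxy+}",
    Target="integrations/bbbb222",
)
```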
What exactly am I not following?
Many thanks.
Screenshot of route

Google cloud SQL access from multiple VPC

I'm trying to create a Cloud SQL PostgreSQL instance on GCP and make it accessible from multiple VPC networks within one project.
We have VMs in 4 GCP regions. Each region has its own VPC network and all of them are peered. But when I create the SQL instance I can map its private IP to only one VPC; the others don't have access to it.
Are there any steps to follow that would allow access to one SQL instance from multiple VPCs?
When you configure a Cloud SQL instance to use private IP, you use private services access. Private services access is implemented as a VPC peering connection between your VPC network and the Google services VPC network where your Cloud SQL instance resides.
That said, currently your approach is not possible. VPC network peering has some restrictions, one of which is that only directly peered networks can communicate with each other- transitive peering is not supported.
As Cloud SQL resources are themselves accessed from ‘VPC A’ via a VPC network peering, other VPC networks attached to ‘VPC A’ via VPC network peering cannot access these Cloud SQL resources as this would run afoul of the aforementioned restriction.
On this note, there’s already a feature request for multiple VPC peerings with Cloud SQL VPC.
As a workaround, you could create a proxy VM instance using Cloud SQL proxy. See 1 and 2. For example, the proxy VM instance could be placed in the VPC to which your Cloud SQL instances are attached (VPC A, for example) and it would act as the Cloud SQL Proxy. VM instances in other VPCs connected to VPC A via VPC network peering could forward their SQL requests to the Cloud SQL Proxy VM instance in VPC A, which would then forward the requests to the SQL instance(s) and vice versa.
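As a rough sketch of that workaround (the proxy VM's private IP, credentials, and database name are placeholders, and it assumes the proxy on that VM listens on port 5432), a client in a peered VPC would connect to the proxy VM instead of the Cloud SQL private IP:

```python
import psycopg2

# The Cloud SQL Auth Proxy runs on a VM in VPC A (placeholder private IP);
# clients in peered VPCs connect to that VM rather than to the instance itself.
conn = psycopg2.connect(
    host="10.128.0.10",      # proxy VM private IP (placeholder)
    port=5432,
    dbname="appdb",
    user="appuser",
    password="app-password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
```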

Cannot create API Management VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the dropdown box for Target NLB is empty. If I attempt to force the issue via the AWS CLI, an entry is created; but the status says NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group that limits access to a narrow range of IPs. I didn't think this would matter, since VPC links are supposed to connect via private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.
Indeed, as suggested in a related post, my problem stems from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to configure the VPC Link as described in the AWS documentation.
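For reference, the same VPC Link can be created programmatically along these lines (the NLB ARN below is a placeholder; the call only accepts Network Load Balancer ARNs, which is why the ALB ARN was rejected as malformed):

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# REST API VPC Links require a Network Load Balancer ARN (placeholder below).
response = apigw.create_vpc_link(
    name="my-service-vpc-link",
    targetArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/my-nlb/0123456789abcdef"
    ],
)
print(response["id"], response["status"])
```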

Will PrivateLink allow firehose to access my private Redshift cluster?

I am trying to set up Firehose to send data from a Kinesis stream to a Redshift cluster. Firehose successfully inserts the data into my S3 bucket, but I am receiving the following error when Firehose attempts to execute the S3 → Redshift COPY command:
The connection to the specified Amazon Redshift cluster failed. Ensure that security settings allow Firehose connections, that the cluster or database specified in the Amazon Redshift destination configuration JDBC URL is correct, and that the cluster is available.
I have performed every setup step according to this except for one: I did not make my Redshift cluster publicly accessible. I am unable to do this because the cluster is in a private VPC that does not have an internet gateway attached.
After researching the issue, I found this article, which provides insight into how to set up AWS PrivateLink with Firehose. However, I have heard that some AWS services support PrivateLink and others do not. Would PrivateLink work for this case?
I am also concerned with how this would affect the security of my VPC. Could anyone provide insight into the possible risks of using PrivateLink?
I was able to solve this issue by adding an internet gateway route to the VPC's route tables:
Go to the Redshift cluster's VPC.
On the Routes tab (you should have 3 private route tables), choose Edit, then Add another route, and add the following route as necessary. Choose Save when you're done.
For IPv4 traffic, specify 0.0.0.0/0 in the Destination box, and select the internet gateway ID in the Target list.
If you add the internet gateway ID to all 3 private route tables, you might see failures in other applications that use the same routes/VPC. To avoid that, update only one route table with the internet gateway ID and leave the other two with the NAT gateway as the target for 0.0.0.0/0.
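The same route change can be made with boto3 (the route table and internet gateway IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Add a default route to the internet gateway on ONE of the private route
# tables, leaving the other two pointing at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0abc123def4567890",      # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",         # placeholder
)
```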

Connecting to cluster nodes through google cloud functions

So I've been looking into simplifying some of our project solutions, and by the look of it, Google Cloud Functions has the potential to simplify some of our current structure. The main thing I'm curious about is whether GCF is able to connect to internal nodes of a Kubernetes cluster hosted on Google Cloud.
I'm quite the rookie on this so any input is greatly appreciated.
Google Cloud has a beta (as of this writing) feature called Serverless VPC Access that allows you to connect your serverless features (Cloud Functions, App Engine Standard) to the VPC network where your GKE cluster is. This would allow you to access private IPs of your VPC network from Cloud Functions.
You can read the full setup instructions but the basic steps are:
Create a Serverless VPC Access Connector (under the "VPC Network -> Serverless VPC Access" menu in the console)
Grant the cloud function's service account any permissions it will need. Specifically, it will at least need "Project > Viewer" and "Compute Engine > Compute Network User".
Configure the function to use the connector. (In the console, this is done in the advanced settings' "VPC Connector" field.)
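Once the connector is attached, the function can reach private IPs in the VPC directly. Here is a minimal sketch, assuming an HTTP-triggered Python function and a placeholder internal endpoint (e.g. a GKE internal load balancer address):

```python
import urllib.request

def handler(request):
    # With the Serverless VPC Access connector configured, private RFC 1918
    # addresses in the VPC (such as a GKE internal service) become reachable.
    url = "http://10.0.0.25:8080/healthz"  # placeholder internal endpoint
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()
```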