What permissions does CodeBuild need to write CloudWatch Logs when inside a VPC?

I have a CodeBuild project that I'm launching inside a VPC. Outside the VPC, the project runs and writes its logs to CloudWatch Logs. I need to move it inside the VPC so that it can access the database. Inside the VPC, the install stage fails and CodeBuild writes nothing to CloudWatch Logs. The console page for the build says:
Error: The specified log stream does not exist.
I suspected security groups were the problem, but flow logs are enabled, and they aren't showing any blocked traffic for the CodeBuild ENI.
The VPC has an internet gateway, and the subnet has routes to the internet through it.
The CodeBuild project is created by CloudFormation. Logs are written when the project's VpcConfig is commented out, but not when it is included; I believe that demonstrates that IAM permissions are not the problem.
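For reference, this is roughly the shape of the settings involved (project name and IDs are placeholders):

```sh
# Placeholder project name and IDs.
# Inspect the VPC settings that CloudFormation actually applied:
aws codebuild batch-get-projects --names my-build-project \
  --query 'projects[].vpcConfig'

# The same three fields can be toggled directly, without redeploying the
# stack, to confirm that only the VPC settings change the logging behavior:
aws codebuild update-project --name my-build-project \
  --vpc-config vpcId=vpc-0abc1234,subnets=subnet-0abc1234,securityGroupIds=sg-0abc1234
```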
Any suggestions are appreciated.

The CodeBuild VPC documentation buries this tidbit at the end of its best practices:
When you set up your AWS CodeBuild projects to access your VPC, choose
private subnets only.
By which they mean: CodeBuild will only work in a private subnet that reaches the internet through a NAT gateway. CodeBuild does not assign public IPs to the ENIs it creates, so in a public subnet a build has no return path to AWS services such as CloudWatch Logs.
Moving my CodeBuild project from a public subnet to a private subnet fixed the error.
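You can confirm which kind of subnet you have by looking at its default route (subnet ID is a placeholder):

```sh
# Placeholder subnet ID. Look at the routes of the subnet's route table:
# a public subnet sends 0.0.0.0/0 to an igw-..., a private one to a nat-...
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0abc1234 \
  --query 'RouteTables[].Routes[].[DestinationCidrBlock,GatewayId,NatGatewayId]'
```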

Related

How to create an Azure DevOps Kubernetes service connection to access a private AKS cluster?

Creating a service connection to access a non-private AKS cluster is straightforward, but is it possible to create a service connection for a private AKS cluster from Azure DevOps?
You can create a new Kubernetes service connection using the KubeConfig option, then click the dropdown arrow on the save button and choose Save without verification.
Also see Deploying to Private AKS Cluster
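One way to get the kubeconfig to paste into that service connection, sketched with placeholder resource names:

```sh
# Placeholder resource group and cluster names.
# Fetch a kubeconfig for the private cluster; its contents can be pasted
# into the KubeConfig service connection and saved without verification.
az aks get-credentials --resource-group my-rg --name my-private-aks \
  --file ./private-aks.kubeconfig
cat ./private-aks.kubeconfig
```

Verification fails because the hosted Azure DevOps service cannot reach the private API endpoint, which is also why a self-hosted agent inside the cluster's network (as described below) is the more robust option.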
Please use the link below:
https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-azure-devops-to-deploy-an-application-on-aks-private/ba-p/2029630
I have implemented this solution in my organization. We had a private AKS cluster and were unable to create a service connection from Azure DevOps to Kubernetes, so we created a self-hosted Linux agent in the same subnet as the cluster and used that agent to run the build and release pipelines.

Cannot create API Gateway VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the dropdown box for the target NLB is empty. If I attempt to force the issue via the AWS CLI, an entry is created, but its status says the NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group which limits access to a narrow range of IPs. I didn't think this would matter, since VPC links are supposed to connect via private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.
Indeed, as suggested in a related post, my problem stems from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to configure the VPC Link as described in the AWS documentation.
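For reference, a sketch of the working flow via the CLI (names and the ARN are placeholders; create-vpc-link only accepts Network Load Balancer ARNs):

```sh
# Placeholder names; a REST API VPC link must target an internal
# *Network* Load Balancer, not an Application Load Balancer.
aws elbv2 create-load-balancer \
  --name my-nlb --type network --scheme internal \
  --subnets subnet-0abc1234 subnet-0def5678

# Pass the NLB's ARN (from the output above) to the VPC link:
aws apigateway create-vpc-link \
  --name my-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123
```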

How to route traffic to a GKE private master from another VPC network

Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
load balancers can't use the master IP as a backend in any way
routers can't have a next-hop IP from another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: Besides creating a standalone proxy or using a third-party router...
I have multiple GCP projects; the Kubernetes clusters are in a separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable simply by adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
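A sketch of that peering with placeholder project and network names (it must be created from both sides to become active):

```sh
# Placeholder project and network names; both directions are required.
gcloud compute networks peerings create ci-to-gke \
  --project ci-project --network ci-vpc \
  --peer-project gke-project --peer-network gke-vpc

gcloud compute networks peerings create gke-to-ci \
  --project gke-project --network gke-vpc \
  --peer-project ci-project --peer-network ci-vpc
```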
I want my CI (which is in a different project) to be able to access the private Kubernetes master.
For this, each GKE private cluster has Master Authorized Networks: IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified egress address, or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If these clients don't have unified addresses, then depending on your specific scenario you might need some sort of SNAT to "unify" the source of your requests so that they match the authorized addresses.
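A sketch of adding such an address, with a placeholder cluster name and CIDR:

```sh
# Placeholder cluster name and CI egress address. Note that this flag
# replaces the whole list, so include any previously authorized CIDRs.
gcloud container clusters update my-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32
```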
Additionally, you can make a private cluster without a public address. This limits access to the master endpoint to the nodes allocated in the cluster's VPC. However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
This is not supported by Google; workarounds exist, but they are dirty:
https://issuetracker.google.com/issues/244483997 (custom routes export/import has no effect)
Google finally added custom route export to the VPC peering with the master subnet, so the problem is now gone: you can access the private master from a different VPC or through a VPN.
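A sketch of enabling that export, with placeholder names (the peering Google creates for the private master has an auto-generated name):

```sh
# Find the peering Google created for the private master:
gcloud compute networks peerings list --network gke-vpc

# Placeholder peering name; export custom routes over it so the master's
# subnet is advertised to the peered VPC (or onward through a VPN):
gcloud compute networks peerings update gke-1a2b3c4d5e6f-peer \
  --network gke-vpc --export-custom-routes
```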

How to fix VPC security settings

I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound traffic from my machine. Next, I created two RDS instances (both PostgreSQL), put them into that VPC, and linked them to the VPC security group. Weirdly, I can only connect to one of them; for the other I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets, and I am trying to connect from my machine over the internet.
To fix your issue, please verify the following:
Both RDS instances have been deployed into the same subnets.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If the RDS instance that isn't working is in a private subnet, consider using a bastion to access it, because by default a private subnet has no route to your internet gateway.
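One quick way to compare the two instances, with placeholder identifiers:

```sh
# Placeholder DB identifiers; compare the two instances' network placement.
# To be reachable from the internet, an instance needs PubliclyAccessible=true
# *and* subnets whose route table points at the internet gateway.
for db in mydb-working mydb-timeout; do
  aws rds describe-db-instances --db-instance-identifier "$db" \
    --query 'DBInstances[].{Public:PubliclyAccessible,Subnets:DBSubnetGroup.Subnets[].SubnetIdentifier,SGs:VpcSecurityGroups[].VpcSecurityGroupId}'
done
```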
Still, below is a simple subnet design if you want to build something secure:
Create 2 public subnets for anything that must be directly accessible from the internet (a good practice is to deploy only managed resources there, such as load balancers).
Create 2 private subnets with a NAT gateway and the correct route configuration to it.
Create a bastion in your public subnets to be able to reach your instances in the private ones (see the tunnel sketch after this list).
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
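A minimal sketch of reaching a private-subnet RDS instance through such a bastion, with placeholder hosts:

```sh
# Placeholder endpoint and bastion address; forward local port 5432
# through the bastion to the RDS endpoint...
ssh -N -L 5432:mydb.abc123.eu-west-1.rds.amazonaws.com:5432 ec2-user@bastion.example.com

# ...then connect through the tunnel as if the database were local:
psql "host=localhost port=5432 user=postgres dbname=mydb"
```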
You will find an AWS Quick Start that deploys this whole network stack for you: VPC Architecture - AWS Quick Start.

Connecting to cluster nodes through Google Cloud Functions

So I've been looking into simplifying some of our project solutions, and by the look of it, Google Cloud Functions has the potential to simplify some of our current structure. The main thing I'm curious about is whether GCF is able to connect to internal nodes in a Kubernetes cluster hosted in Google Cloud.
I'm quite the rookie on this so any input is greatly appreciated.
Google Cloud has a beta (as of this writing) feature called Serverless VPC Access that allows you to connect your serverless services (Cloud Functions, App Engine standard environment) to the VPC network where your GKE cluster is. This lets you reach private IPs of your VPC network from Cloud Functions.
You can read the full setup instructions but the basic steps are:
Create a Serverless VPC Access Connector (under the "VPC Network -> Serverless VPC Access" menu in the console)
Grant the Cloud Function's service account the permissions it will need. Specifically, it will at least need "Project > Viewer" and "Compute Engine > Compute Network User".
Configure the function to use the connector. (In the console, this is done in the advanced settings' "VPC Connector" field.)
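A sketch of those steps from the CLI, with placeholder names and an unused /28 range:

```sh
# Placeholder names; the /28 range must be an unused CIDR block in the VPC.
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 --network my-vpc --range 10.8.0.0/28

# Placeholder function; route its traffic through the connector so it can
# reach private IPs (e.g. GKE cluster/node addresses) in the VPC:
gcloud functions deploy my-function \
  --runtime python311 --trigger-http \
  --region us-central1 --vpc-connector my-connector
```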