Google Cloud Build & GitHub Enterprise IP Allow List - github

How can I allow Google Cloud Build through GitHub Enterprise's IP allow list restriction?
I have already added all 80 IP ranges for the region my Cloud Build runs in, but that did not seem to work.
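For reference, a hedged sketch of how one might enumerate the published GCP IPv4 ranges for a region (assuming curl and jq are available; "us-central1" is a placeholder for your Cloud Build region):
curl -s https://www.gstatic.com/ipranges/cloud.json \
  | jq -r '.prefixes[] | select(.scope=="us-central1") | .ipv4Prefix // empty'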

Related

How to exclude dev and QA sites on Google GA4?

We would like to exclude our dev, localhost, and QA sites on Google GA4, which all use the same GA4 ID. With Universal Analytics I excluded traffic by hostname, but GA4 only seems to offer exclusion by IP address. I would like to exclude by hostname in GA4, since our development and QA sites are tested from many machines and we would otherwise have to list a large number of IPs; filtering by hostname would solve the issue.
I have tried creating rules and, based on those rules, defining internal traffic rules, but the exclusion still works only by IP address. We need to exclude sites by hostname, just like in Universal Analytics.

Add a range of IPs to the GitHub whitelist for an AWS resource (Amplify)

In order to integrate our GitHub repo with the many AWS resources we are building, we need to whitelist on GitHub the IP addresses those resources use. But for one resource, AWS Amplify, the IP address seems to keep changing dynamically. We have tried adding the AWS resource as a “GitHub App”, but this still does not work. How can we whitelist an AWS resource for a repo when its IP changes dynamically?

How to limit access in Cloud Foundry

I am new to Cloud Foundry.
Is there any way to allow only specific users to view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the “cf push” command.
2. After entering the “cf push” command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below.
https://{route-information}
This way, anyone can see the app from a browser, but I don't want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to only specific users?
(For example, is there anything in Cloud Foundry like the “private registry” in Kubernetes, i.e. something that is not open to the public?)
Since I am using Cloud Foundry on IBM Cloud, a solution using IBM Cloud would be preferable.
I've already granted a Cloud Foundry role to the other user.
Thank you.
The Cloud Foundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There are a number of things you can do to limit access, but they do require some work on your part.
Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local, which does not resolve anywhere. You could then add a record to /etc/hosts for this domain, or perhaps run DNS on your local network to resolve it, and direct traffic to Cloud Foundry.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It will stop most traffic from reaching your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like you just want to hide something while you work on it, or it can work well paired with the other options below.
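As a minimal sketch of this private-DNS option (assuming cf CLI v7 syntax; the org, app, hostname, and router IP are placeholders):
cf create-private-domain my-org my-cool-domain.local   # the domain need not resolve publicly
cf map-route node-hello-world-example my-cool-domain.local --hostname hello
# On each machine that should reach the app, point the hostname at the
# foundation's router IP (203.0.113.10 is a placeholder):
echo "203.0.113.10 hello.my-cool-domain.local" | sudo tee -a /etc/hosts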
You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user authentication (Basic auth is easy and is fine as long as you only allow HTTPS traffic to your app, which you should always do anyway).
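For instance, a hypothetical smoke test of Basic auth over HTTPS (the credentials file, user, and URL are placeholders; htpasswd ships with apache2-utils):
htpasswd -cB .htpasswd alice   # create the credentials file; prompts for a password
curl -i https://node-hello-world-example.example.com/            # expect 401 without credentials
curl -i -u alice https://node-hello-world-example.example.com/   # expect 200 once authenticated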
You can use OAuth2 to secure apps, too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there's plenty of material out there on how to do it; search for your application language/framework of choice.
You could set up an access gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible and would almost certainly apply access controls/restrictions (see #2; many of these proxies have access-control options that take only a few lines of config). Your actual applications would not be deployed publicly, though; when you deploy them, they would only be on the internal Cloud Foundry domain.
Cloud Foundry has internal domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container (C2C) network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible from the public Internet, except through your gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the Cloud Foundry documentation on the C2C network & internal routes, then the documentation for your proxy server of choice.
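To make the gateway option concrete, a hedged sketch (cf CLI v7 syntax; the app names, port, and Nginx fragment are illustrative assumptions, not a definitive setup):
# Give the backend an internal-only route instead of a public one
cf map-route my-backend apps.internal --hostname my-backend
# Allow container-to-container traffic from the gateway to the backend
cf add-network-policy my-gateway my-backend --protocol tcp --port 8080
# Fragment for the server{} block of the gateway's Nginx config:
cat >> nginx-server-block.conf <<'EOF'
location / {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://my-backend.apps.internal:8080;
}
EOF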

Cloud SQL access from an AI Platform job

Google has nice ways to connect to Cloud SQL from other Google services, but I cannot see how to connect from AI Platform jobs. As part of our training job, we need to update our Cloud SQL DB with metrics, but the only way I could get it to work was by whitelisting all IPs (don't want that!) in Cloud SQL and connecting via the public IP. I don't see an option to add the Cloud SQL Proxy to the trainer instance, and since the IP of the trainer instance is dynamic, we cannot reliably whitelist specific IP addresses. Any other ways to handle this?
It looks like AI Platform supports VPC peering, so you should be able to connect to Cloud SQL using private IP.
Since Cloud SQL also uses VPC peering, you'll likely need to do the following to get the resources to connect:
Create a VPC to share (or use the "default" VPC)
Follow the steps in the AI Platform documentation to set up VPC peering for AI Platform in your VPC.
Follow the steps in the Cloud SQL documentation to set up a private IP for your instance in your VPC.
Since the resources are technically in different networks, you may need to export custom routes (Step #2 in that documentation) to allow AI Platform access to your Cloud SQL instance.
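A hedged sketch of those steps with gcloud (the network, range, and instance names are placeholders, and the --network flag on Cloud SQL has historically required the beta track):
# Reserve an IP range for service producers on your VPC
gcloud compute addresses create sql-peering-range \
    --global --purpose=VPC_PEERING --prefix-length=16 --network=default
# Create the private connection to Google-managed services such as Cloud SQL
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=sql-peering-range --network=default
# Attach the Cloud SQL instance to the VPC so it gets a private IP
gcloud beta sql instances patch my-instance --network=default
# Export custom routes on the resulting peering so AI Platform can reach the instance
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=default --export-custom-routes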
As an alternative to using private IP, you could keep using public IP with an IP allowlist, coupled with authorizing with SSL/TLS certificates. This still isn't as secure as using the proxy or private IP (users are technically able to connect to your instance), but they'll be unable to interact with the database engine without the correct certificates.
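For example (instance and file names are placeholders), enforcing SSL and issuing a client certificate looks roughly like this:
gcloud sql instances patch my-instance --require-ssl
gcloud sql ssl client-certs create training-client client-key.pem --instance=my-instance
gcloud sql ssl client-certs describe training-client --instance=my-instance \
    --format="value(cert)" > client-cert.pem
gcloud sql instances describe my-instance \
    --format="value(serverCaCert.cert)" > server-ca.pem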
Can you publish a Pub/Sub message from within your training job and have it trigger a Cloud Function that connects to the database? AI Platform training seems to have IAM restrictions that I, too, am curious how to control.
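For reference, the publish side of that idea could be as small as this (the topic name and payload are hypothetical; a Cloud Function subscribed to the topic would perform the actual database write):
gcloud pubsub topics publish training-metrics \
    --message='{"job":"run-42","rmse":0.12}'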

Download from cloud storage bucket without internet

I have a requirement to download some files stored in a Google Cloud Storage bucket. The challenge is to download them without internet access. Is it possible to interact with a bucket without internet access? Any suggestions?
Thanks,
Prasanth
No, it wouldn't be possible. You need an internet connection to access resources hosted in the Cloud.
You would need to store the files locally, or on a physical storage device, in order to access them without a connection.
The only option that avoids the public "internet" is Dedicated Interconnect, where you essentially have a direct cable from your on-premises network to Google's network.
EDIT:
As I understand from the comment you edited, your actual goal is to connect to your GCS bucket from a private VM instance hosted on GCE.
For that you might want to use VPC Service Controls to define a security perimeter around your services and constrain data within a VPC. One of this product's advantages is that VPC Service Controls provides an additional layer of security by denying access from unauthorized networks, even if the data is exposed by misconfigured Cloud IAM policies.
See the GCP documentation on configuring VPC Service Controls.
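As a hedged sketch of the gcloud side (the perimeter name, project number, and policy ID are placeholders):
gcloud access-context-manager perimeters create gcs_perimeter \
    --title="GCS perimeter" \
    --resources=projects/123456789 \
    --restricted-services=storage.googleapis.com \
    --policy=POLICY_ID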