How to allow GKE internal Pod to communicate through VPN to an internal IP in another VPC? - kubernetes

I have a private GKE cluster (with a NAT) that I need to network with a legacy VPC in another GCP project.
I built a Classic VPN between Project B (new) and Project A (old): all VMs can talk to each other (nc -vz is my friend).
The GKE cluster inside Project B can reach all VMs in Project B on their internal IPs.
I need some pods in this GKE cluster to be able to reach private IPs across the VPN in Project A.
We tried this how-to, but it's still not working.
If you have an idea that works in my case I will buy you a beer ;) (location : Le Havre, Lille or Paris)
Infra scheme

There is an option in GCP called "Shared VPC" that, in summary, allows multiple projects within an organization to be interconnected using a common Virtual Private Cloud. As specified in GCP's documentation, an organization policy applies to all projects in the organization, so you only need to follow the steps in "Organization policies for Shared VPC" once to restrict lien removal. Then follow these steps to provision the Shared VPC:
- Go to the Shared VPC page in the Google Cloud Console.
- Log in as a Shared VPC Admin.
- Select the project you want to enable as a Shared VPC host project from the project picker.
- Click Set up Shared VPC.
- On the next page, click Save & continue under Enable host project.
- Under Select subnets, do one of the following:
  a) Click Share all subnets (project-level permissions) if you need to share all current and future subnets in the VPC networks of the host project with service projects and Service Project Admins specified in the next steps.
  b) Click Individual subnets (subnet-level permissions) if you need to selectively share subnets from the VPC networks of the host project with service projects and Service Project Admins. Then, select Subnets to share.
- Click Continue.
- On the next screen, in Project names, specify the service projects to attach to the host project. Note that attaching service projects does not define any Service Project Admins; that is done in the next step.
- In the Select users by role section, add Service Project Admins. These users will be granted the IAM role of compute.networkUser for the shared subnets. Only Service Project Admins can create resources in the subnets of the Shared VPC host project.
- Click Save.
For more detail, see GCP's official documentation: the Shared VPC overview, and the full setup process in Setting up Shared VPC.
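The console steps above can also be sketched with the gcloud CLI. The project IDs and the user email below are placeholders; this is a sketch of the project-level (share all subnets) variant, not a drop-in script:

```shell
# Enable the host project for Shared VPC (run as a Shared VPC Admin).
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
  --host-project HOST_PROJECT_ID

# Grant a Service Project Admin the networkUser role on the host project
# (this is the project-level, "share all subnets" option).
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member="user:service-admin@example.com" \
  --role="roles/compute.networkUser"
```

For subnet-level sharing, the role would instead be granted on individual subnets rather than on the whole host project.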

Related

Elastic cloud multiple users

We would like to move a bitnami vm to the Elastic cloud offering on Azure.
We've created an instance in a resource group, and as the owner I can log in and start managing this cluster.
Now I would like to invite other people to this cluster. We don't have the organization option in the Settings (Elastic docs: "Organizations are currently not supported for Azure Marketplace accounts.").
Therefore I added a user under "Stack Management > Security > Users", but that user can't log in, can't reset a password, ...
Am I wrong to add users in Stack Management? Can we manage this in Azure AD?
Thanks!

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

In doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to Private. This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key Protect instance policy from the CLI, I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused - I am running the CLI logged in as the tenant admin with an access policy of "All resources in account (including future IAM enabled services)".
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access is blocked.
There are multiple ways to work with such a policy in place. One is to deploy (a VPC with) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way, your resources are not accessible from the public internet, but only through private connectivity. You can continue to use the IBM Cloud CLI, but set it to use private endpoints.
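A minimal sketch of that setup, assuming you are on a host that already has private connectivity into IBM Cloud (via VPN or Direct Link); the resource group name is a placeholder, and you should verify the private endpoint hostname against the current IBM Cloud docs:

```shell
# Log in against the private API endpoint instead of the public one.
ibmcloud login -a https://private.cloud.ibm.com

# Target the resource group that contains the Key Protect instance
# (my_resource_group is a placeholder).
ibmcloud target -g my_resource_group

# With the CLI talking to private endpoints, the policies call from the
# question should no longer be rejected as unauthorized.
ibmcloud kp instance -i my_instance_id policies
```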

GCloud SDK - Shared VPC - Associate only a specific network when attaching a project using gcloud SDK

Is there a way to share a specific subnet of a Shared VPC to a project using the gcloud SDK?
I can use the command below to associate a project, but it shares all of the host project's subnets, and there doesn't appear to be a flag to specify a single subnet from the host project to share with the service project:
https://cloud.google.com/sdk/gcloud/reference/compute/shared-vpc/associated-projects/add
In this case you could achieve this as a Shared VPC Admin, who can define an IAM member from a service project as a Service Project Admin with access to only some of the subnets in the host project.
Hope this works for you.
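Granting subnet-level access can be sketched with gcloud by binding the compute.networkUser role on an individual subnet rather than on the host project. The subnet name, region, project ID, and user email below are placeholders:

```shell
# Grant a service-project member networkUser on ONE shared subnet only.
# The member can then create resources in this subnet, but not in the
# host project's other subnets.
gcloud compute networks subnets add-iam-policy-binding SHARED_SUBNET_NAME \
  --project HOST_PROJECT_ID \
  --region us-central1 \
  --member="user:service-admin@example.com" \
  --role="roles/compute.networkUser"
```

The associated-projects add command from the question is still needed once to attach the service project; the subnet-level IAM binding is what narrows which subnets that project's admins can actually use.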

Bluemix: Are devops services available on Bluemix local?

Does the Bluemix local provide devops services like Delivery Pipeline and Active Deploy?
Bluemix Local includes a private syndicated catalog that displays the local services that are available exclusively to you. It also includes additional services that are made available to you to use from Bluemix Public. The syndicated catalog provides the function to create hybrid applications that consist of public and private services.
Bluemix Local comes with all the included Bluemix runtimes and a set of services and components. Take a look at Table 1, "Local Services", in the Bluemix Local docs.
As you can see, the Auto-Scaling service, for example, is already included in the local environment. However, you have the option to decide which public services meet the requirements of your business, based on your data privacy and security criteria.

Google Storage access based on IP Address

Is there a way to grant access to a Google Cloud Storage bucket based on the IP address a request is coming from?
On Amazon S3, you can just set this in the access policy, like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed url.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage Objects.
Access Context Manager (ACM) defines rules to allow access (e.g. an IP address).
VPC Service Controls create an "island" around a project and ACM rules can be attached. These rules are "ingress" rules and not "egress" rules meaning "anyone at that IP can get into all resources in the project with the correct IAM permissions".
The ACM rule specifying an IP address will allow that IP address to access all Cloud Storage Objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There's currently a Feature request to restrict google cloud storage bucket by IP Address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket. Jterrace is right: there is no such solution for that today.
However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or IP address range access to the service (and the resources within it). The implication is that any new buckets created within the project will land inside the service perimeter and thus be regulated by the access levels applied to it, so you'll likely want this to be the sole bucket in the project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this:
Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing.
Use Access Context Manager to create access levels for the IP addresses you want to whitelist and the users/groups who will have access to the service.
Apply these access levels to the service perimeter created in the previous step (it can take up to 30 minutes for the change to take effect).
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.
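The steps above can be sketched with gcloud. All names, the policy ID, the project number, and the CIDR range are placeholders; an Access Context Manager access policy must already exist for the organization:

```shell
# level.yaml - basic access level allowing one CIDR range (hypothetical):
#   - ipSubnetworks:
#     - 203.0.113.0/24

# 1. Create an access level from that spec.
gcloud access-context-manager levels create corp_ip_level \
  --policy=POLICY_ID \
  --title="Corp IP allowlist" \
  --basic-level-spec=level.yaml

# 2. Create a service perimeter around Cloud Storage in the project
#    and attach the access level to it.
gcloud access-context-manager perimeters create storage_perimeter \
  --policy=POLICY_ID \
  --title="Storage perimeter" \
  --resources=projects/PROJECT_NUMBER \
  --restricted-services=storage.googleapis.com \
  --access-levels=corp_ip_level
```

As noted above, this gates the Cloud Storage service for the whole project, not a single bucket, and changes can take up to 30 minutes to propagate.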