How to access a Kubernetes service of one project from a pod of another project in GCP - kubernetes

I am facing a scenario where I have to access a Kubernetes service in GCP project X from a pod running in another GCP project Y.
I know we can access a service in one namespace from another namespace in the same cluster by using
servicename.namespacename.svc.cluster.local
How can I do something similar across different GCP projects?

I agree with @cperez08, but let me add my five cents.
I think you can try "Set up clusters with Shared VPC":
With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.
You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.
You can configure Shared VPC when you create a new cluster. Google Kubernetes Engine does not support converting existing clusters to the Shared VPC model.
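For reference, creating a cluster in a service project against a shared subnet looks roughly like the sketch below. The project IDs, network/subnet paths and secondary range names are placeholders you would replace with your own:

    # Run in the service project; the network and subnet live in the host project.
    # HOST_PROJECT_ID, SERVICE_PROJECT_ID and the range names are placeholders.
    gcloud container clusters create shared-vpc-cluster \
        --project SERVICE_PROJECT_ID \
        --zone us-central1-a \
        --enable-ip-alias \
        --network projects/HOST_PROJECT_ID/global/networks/shared-net \
        --subnetwork projects/HOST_PROJECT_ID/regions/us-central1/subnetworks/tier-1 \
        --cluster-secondary-range-name pods-range \
        --services-secondary-range-name services-range

Note that --enable-ip-alias is required, which matches the remark above about legacy networks not being supported.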

If I understood correctly, projects X and Y are completely different clusters, so I am not sure that is possible directly. Take a look at https://kubernetes.io/blog/2016/07/cross-cluster-services/ ; you may be able to re-architect your services by federating them if high availability is needed.
On the other hand, you can always access the resources through a public endpoint/domain if the networks are not connected in some way.
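If you do go the public route, the simplest sketch is a Service of type LoadBalancer in project X; GCP then assigns a public IP that the pod in project Y can call. All names below are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-exposed-service     # hypothetical name
    spec:
      type: LoadBalancer           # GCP provisions an external load balancer with a public IP
      selector:
        app: my-app                # must match the labels of the target pods
      ports:
        - port: 443
          targetPort: 8443

Remember that this endpoint is reachable from the internet, so it should be protected accordingly.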

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain your VPN connection within your pod; that is possible, but it looks like an anti-pattern.
I would recommend using Cloud VPN in a separate GCP project (due to cost separation and security) to establish the connection with a specific and limited VPC, and then route that traffic to the pod, which might be in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Redirect traffic between VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with a dedicated IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool (a sketch of these last two steps follows below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
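A rough sketch of those last two steps. All names are hypothetical, and the --pod-ipv4-range flag assumes a gcloud/GKE version that supports per-node-pool Pod ranges and that the secondary range already exists on the subnet:

    # Node pool whose pods get a small, dedicated secondary range
    gcloud container node-pools create vpn-pool \
        --cluster my-cluster \
        --zone us-central1-a \
        --num-nodes 1 \
        --pod-ipv4-range vpn-pods-range

    # Pin the single partner-facing deployment to that node pool via the built-in node label
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: partner-client               # hypothetical
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: partner-client
      template:
        metadata:
          labels:
            app: partner-client
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: vpn-pool
          containers:
            - name: client
              image: gcr.io/my-project/partner-client:latest   # placeholder image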

Multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps and their environments that are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) while keeping isolation.
Is that possible?
I only got a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an ingress controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
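As an illustration, once Traefik (or any ingress controller) is installed, routing a hostname to an in-cluster service is just an Ingress resource. The names and host below are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: website-ingress
    spec:
      ingressClassName: traefik          # assumes the controller registered this ingress class
      rules:
        - host: dev.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: website        # the Service fronting your web app
                    port:
                      number: 80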
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet with a headless Service for stateful workloads that need peer discovery. This is obviously a generalization and has many exceptions.
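For the stateless case, a minimal Deployment plus Service sketch (image and names are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: website
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: website
      template:
        metadata:
          labels:
            app: website
        spec:
          containers:
            - name: website
              image: nginx:1.25          # stand-in for your site's image
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: website
    spec:
      selector:
        app: website                     # routes traffic to the pods above
      ports:
        - port: 80
          targetPort: 80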
For managing different environments, you could split your resources into different namespaces.
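For example, assuming one namespace per environment:

    # Create one namespace per environment
    kubectl create namespace dev
    kubectl create namespace qa
    kubectl create namespace prod

    # Deploy the same manifests into a given environment
    kubectl apply -f website.yaml -n dev

You can also attach ResourceQuota and NetworkPolicy objects per namespace to strengthen the isolation.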
As for the single portal to manage it all, I don't think that anything as such exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API but it requires a good understanding of Kubernetes itself.

Kubernetes: call a service exposed with Ambassador in cluster-a from a different cluster cluster-b, same project but different VPCs

I have two Kubernetes clusters, cluster-a and cluster-b, in Google Cloud (GCP).
Can I call a service exposed with Ambassador in one cluster (cluster-a) from a different cluster (cluster-b) in the same GCP project but different VPCs?
Right now I can call the service by the Ambassador service name (when I do it from within the same cluster).
I have read about Internal TCP/UDP Load Balancing, but it only works when cluster-a and cluster-b are in the same VPC network, and my clusters are in different VPCs.
Is there a different approach to accomplish this?
VPCs on GCP aren't routed to each other by default, so your requests won't reach the remote CIDRs. For that, you want to use VPC Network Peering to make each VPC reachable from the other.
Note that firewall rules still apply for both VPCs, so you have to create them in order to establish full communication.
Finally, this will only allow network communication between your VPCs. If you rule this out as the issue and you're still experiencing a lack of connectivity, it might be related to your Ambassador configuration, in which case I'd recommend either posting information about that or creating another question for it specifically.
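A rough sketch of both steps with gcloud. Network names and CIDRs are placeholders, and the peering must be created from both sides:

    # Peer vpc-a <-> vpc-b (same project in your case, so --peer-project is your own project ID)
    gcloud compute networks peerings create a-to-b \
        --network vpc-a --peer-project my-project --peer-network vpc-b
    gcloud compute networks peerings create b-to-a \
        --network vpc-b --peer-project my-project --peer-network vpc-a

    # Allow cluster-b's node/pod ranges to reach the Ambassador service in vpc-a (adjust the CIDR)
    gcloud compute firewall-rules create allow-from-cluster-b \
        --network vpc-a --direction INGRESS \
        --source-ranges 10.8.0.0/14 \
        --allow tcp,udp,icmp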

How to route traffic to a GKE private master from another VPC network

Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
load balancers can't use the master IP as a backend in any way
routers can't have a next-hop IP from another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: Besides creating a standalone proxy or using a third-party router...
I have multiple GCP projects; the kube clusters are in a separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable by simply adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
I want my CI (which is in a different project) to be able to access the private kube master.
For this, each GKE private cluster has Master Authorized Networks, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If there are no unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNATing to "unify" the source of your requests to match the authorized addresses.
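As a sketch, adding a CI egress address to the authorized networks could look like this (cluster name, zone and CIDR are placeholders):

    gcloud container clusters update my-cluster \
        --zone us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.10/32   # the CI system's fixed egress IP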
Additionally, you can make a private cluster without a public address. This allows access to the master endpoint only from nodes allocated in the cluster VPC. However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
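Creating such a fully private cluster is roughly as follows (names and the master CIDR are placeholders):

    gcloud container clusters create private-cluster \
        --zone us-central1-a \
        --enable-ip-alias \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr 172.16.0.32/28   # dedicated /28 for the master's internal endpoint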
Not supported by Google; workarounds exist (but they are dirty):
https://issuetracker.google.com/issues/244483997 ; custom routes export/import has no effect
Google finally added custom routes export to the VPC peering with the master subnet. So the problem is now gone: you can access the private master from a different VPC or through a VPN.
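For completeness, a sketch of enabling that export on the peering GKE created for the master. Cluster, zone, network and peering names are placeholders, and this assumes the peering name is exposed under privateClusterConfig:

    # Find the peering GKE created towards the master's VPC
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format 'value(privateClusterConfig.peeringName)'

    # Export your custom routes (e.g. the route back to your VPN/other VPC) over that peering
    gcloud compute networks peerings update PEERING_NAME \
        --network my-vpc --export-custom-routes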

How to access services in a different Kubernetes cluster

For improved performance and availability we'd like to distribute certain services from our stack across different Kubernetes clusters in different parts of the world (GCP regions).
The majority of our stack will continue to run in one cluster / region but some user facing services will be deployed all over the world.
Some of these services need to access other services in our main cluster.
Q: How can we reliably access services in a different Kubernetes cluster?
Using internal load balancers seems to be out of the question as those are per region only.
We'd like to keep the communication between our services inside the private GCP network and avoid going over the public internet. So a public ingress also wouldn't work.
VPC networks are global resources, not restricted by regional boundaries, and so with the correct firewall rules set up, you should be able to access any internal resource from any other resource "right out of the box", assuming they are in the same VPC network and same project.
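If you expose the main cluster's services with internal load balancers, note that recent GKE versions support a global-access annotation, so clients in other regions of the same VPC can still reach them. A hedged sketch (service and app names are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: backend-internal
      annotations:
        networking.gke.io/load-balancer-type: "Internal"                       # internal (not public) load balancer
        networking.gke.io/internal-load-balancer-allow-global-access: "true"   # reachable from other regions in the same VPC
    spec:
      type: LoadBalancer
      selector:
        app: backend
      ports:
        - port: 80
          targetPort: 8080

Older GKE versions use the cloud.google.com/load-balancer-type annotation instead and may not support global access, in which case plain firewall rules within the VPC are the way to go.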
Take a look at VPC Peering: https://cloud.google.com/vpc/docs/vpc-peering
It allows you to connect two VPCs (in different regions) so that they can communicate privately.
You may have to recreate/reconfigure your Kubernetes clusters in order to support this VPC architecture.