Can an Azure virtual machine scale set have a hidden public IP? - kubernetes

I have an AKS cluster with 2 nodepools (one was added later). My problem is that only the first nodepool (the one created with the cluster) has a public IP which can be found in the Azure portal (as a public IP resource). Is it possible to find the IP of the second nodepool somewhere in the portal? I know what the IP is because I pinged one of my servers from a pod running on that nodepool, but I need the resource (or at least its ID). I also tried searching for it using Azure Resource Explorer but I couldn't find anything related to it. Is it hidden?
Sorry if the question seems dumb. I hope I was clear enough.

You are probably dealing with an ephemeral external IP: whenever a VM has no public IP attached, Azure assigns it an ephemeral one for outgoing connections. You can also read this article to get a better idea of how to control that: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections
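If you want to confirm what address the cluster actually uses for egress, something like this should show it. The resource group and cluster names below are placeholders, and the load balancer profile is only populated on clusters using the Standard load balancer SKU:

```
# Placeholder names; substitute your own resource group and cluster.
RG=myResourceGroup
CLUSTER=myAKSCluster

# Outbound IPs managed by the cluster's Standard load balancer, if any.
az aks show -g "$RG" -n "$CLUSTER" \
  --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs" -o json

# Public IP resources that exist in the node resource group.
NODE_RG=$(az aks show -g "$RG" -n "$CLUSTER" --query nodeResourceGroup -o tsv)
az network public-ip list -g "$NODE_RG" -o table
```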

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection within your pod; it is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which can live in a specific IP range as you mentioned.
Take a look at the docs on how to create the vpn:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Redirect traffic between VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the nodepool with an IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that nodepool (see the sketch after these links)
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
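A rough sketch of those last two steps, assuming a VPC-native cluster; the subnet, range, cluster, zone, node pool, and deployment names below are all made up:

```
# Add a small secondary range to the cluster's subnet (placeholder names).
gcloud compute networks subnets update my-subnet \
  --region europe-west1 \
  --add-secondary-ranges vpn-pods=10.100.0.0/24

# Create a node pool whose pods are allocated from that smaller range
# (requires a VPC-native cluster).
gcloud container node-pools create vpn-pool \
  --cluster my-cluster \
  --zone europe-west1-b \
  --num-nodes 1 \
  --pod-ipv4-range vpn-pods

# Pin the deployment that talks to the partner onto that pool.
kubectl patch deployment partner-client \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"vpn-pool"}}}}}'
```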

Calling AWS services [S3, DynamoDB, Kinesis] from an ECS-Fargate task which is created inside a VPC

I have an ECS-Fargate cluster created inside a VPC.
If I want to access the above-mentioned AWS services from a Fargate task, what needs to be done?
I see the following options in the documentation I have read:
Create a private link to each AWS service
Create a NAT gateway
I'm not sure which one is the correct and recommended option.
To be clear, an ECS cluster is an abstracted entity and does not dictate where you connect the workloads you are running within it. If we stick to the Fargate launch type, this means that tasks could be launched either on a private subnet or on a public subnet:
If you launch them in a public subnet (and you assign a public IP to the tasks) then these tasks can reach the public endpoints of the services you mentioned and nothing else (from a networking routing perspective) is required.
If you launch them in a private subnet you have two options that are those you called out in your question.
I don't think there is a golden rule for what's best. The decision is multi-dimensional (cost, ease of setup, features, observability and control, etc.). I'd argue the NAT gateway route is easier to set up regardless of the number of services you need to add, but you may lose a bit of visibility and all your traffic will go outside of the VPC (for some customers this is OK, for others it's not). Private Links will give you tighter control, but they may be more work to set up (especially if you need to reach many services).
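For reference, the PrivateLink route for these particular services looks roughly like this (all IDs below are placeholders): S3 and DynamoDB are served by gateway endpoints, while Kinesis needs an interface endpoint.

```
# Gateway endpoint for S3 (DynamoDB works the same way); placeholder IDs.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0

# Interface endpoint for Kinesis Data Streams in the task's subnet.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.kinesis-streams \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```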

Kubernetes cluster outgoing traffic IP

I have a Kubernetes cluster on Google Kubernetes Engine. I want to assign a static IP for all outgoing traffic of a cluster.
I already have reserved external IPs but I can't assign them to a cluster with the GCP console.
I found a solution to do it with the CLI:
Static outgoing IP in Kubernetes
but it targets the VM and I will need to set it each time I deploy. So it's not targeting the cluster.
Can anybody provide any pointers? Thanks.
GKE currently doesn't have an option to create the cluster with all your nodes using a reserved public IP; the advanced networking options in the console don't cover this.
You will have to use the gcloud commands you mentioned, which should be easy to put in a script.
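Scripted, that looks roughly like the following; the instance name, zone, and reserved address are placeholders:

```
# Check the current access-config name first (often "external-nat").
gcloud compute instances describe gke-my-node-1234 \
  --zone us-central1-a \
  --format "value(networkInterfaces[0].accessConfigs[0].name)"

# Swap the node's ephemeral external IP for a reserved one.
gcloud compute instances delete-access-config gke-my-node-1234 \
  --zone us-central1-a --access-config-name "external-nat"
gcloud compute instances add-access-config gke-my-node-1234 \
  --zone us-central1-a --access-config-name "external-nat" \
  --address 203.0.113.10
```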
Or you can also use the console UI by editing the instance(s) and changing the external address under 'Network interfaces'.
I agree with the previous answer that you can't do something like this directly on the cluster, but you can use another service to do what you are looking for: a NAT gateway that uses a fixed public IP.
For added resilience, you can even deploy the gateways in multiple zones for redundancy, and your cluster's outgoing traffic will always go through the gateways.
I won't explain how it works here, because Google already provides a tutorial for exactly what you want to do: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
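As an alternative to the VM-based gateway in that tutorial, GCP's managed Cloud NAT can also be pointed at a reserved address; a minimal sketch with made-up names (note the nodes will only egress through NAT when they don't have their own external IPs, e.g. in a private cluster):

```
# Reserve a static address and route the cluster's egress through Cloud NAT.
gcloud compute addresses create nat-egress-ip --region us-central1

gcloud compute routers create nat-router \
  --network default --region us-central1

gcloud compute routers nats create nat-config \
  --router nat-router --region us-central1 \
  --nat-external-ip-pool nat-egress-ip \
  --nat-all-subnet-ip-ranges
```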
Enjoy.

How to run a script when a node goes down on Kubernetes?

I have a question about keeping a Kubernetes cluster online as much as possible. Usually, the cluster would be behind some kind of cloud load balancer that does health checks and directs traffic to the available nodes.
Now, my hosting provider does not offer managed load balancers. Instead, they have a configurable so-called "failover" IP which can be detached and re-attached to another server by running a command on the command line. It's not a real failover IP in the traditional sense; it's more like a movable IP.
As a beginner to Kubernetes, I'm not sure how to go about this.
Basically, I'd need to run a script that checks if the cluster is still publicly reachable on the IP. When it goes down, one of the nodes should run the script to detach the failover IP and re-attach it to itself or one of the other nodes.
Extra complication: The moving of the failover IP takes around 40-60 seconds to take effect, so we should not run the script too often.
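Roughly, the script I have in mind would look like this; the providerctl command is just a placeholder for whatever our hosting company's actual CLI is:

```
#!/usr/bin/env bash
# Rough sketch of the failover watchdog; the IP and the "providerctl"
# command are placeholders.
FAILOVER_IP="198.51.100.10"
CHECK_URL="https://${FAILOVER_IP}/healthz"

while true; do
  if curl --insecure --silent --fail --max-time 5 "$CHECK_URL" > /dev/null; then
    sleep 10        # healthy, check again shortly
  else
    # Pull the failover IP over to this node, then wait out the
    # 40-60 second propagation delay before probing again.
    providerctl move-failover-ip "$FAILOVER_IP" --target "$(hostname)"
    sleep 120
  fi
done
```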
This also means that only 1 node is attached to the public IP and all traffic to the cluster will come in this way. There is no load balancer distributing traffic among the online nodes. Will Kubernetes send the request on its own to the other nodes internally? I imagine so?
The cluster consists of 3 identical servers with 1 master and 2 workers. I'd set up load balancing inside the cluster with an Ingress.
The goal is to keep websites running on k8s up as much as possible, while working with the limited options our hosting company offers. The hosting company only offers dedicated bare-metal machines and this movable IP. They don't have managed load balancers like AWS or DigitalOcean have, so I need to find a solution for that. This moveable IP looks like an answer, but if you know a better way, then sure.
All 3 machines have a public IP and a private IP.
There is 1 extra public IP that can be moved to 1 of the 3 nodes (using this I want to achieve failover, unless you know a better way?).
Personally, I don't think I need a multi-master cluster. As I understand it, the master can go down for short periods of time, and during those periods the cluster is more vulnerable, but this is okay as long as we fix the master in a timely manner. The only thing is that we need to move this IP over to an online node, and I'm not sure how to trigger this.
Thanks

Can nodes in a Kubernetes cluster have dynamic IPs? OCI Classic

I'm deploying K8s to Oracle Cloud Infrastructure, where I can make sure that the public, internet-facing IP stays static even when the instances are restarted, but for some reason the private IP of the instances always changes. Which brings me to the question: can Kubernetes work with nodes whose IPs change after restarts?
This could be quite a noob question but I did try to read up online and I couldn't find a conclusive answer.
Yes, Kubernetes can handle that case easily, and on OCI it works just fine. The individual worker nodes (via the kubelet on each host) call the master IP; we recommend fronting the masters with a load balancer so you get a static endpoint and can change, scale, and otherwise adjust your Kubernetes control plane nodes as you wish without disrupting the workers.
You can get a pretty slick setup currently with the Terraform tooling for Kubernetes that is published here:
https://github.com/oracle/terraform-kubernetes-installer
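If you build the cluster yourself with kubeadm rather than the installer, the same idea looks roughly like this; the DNS name is hypothetical and would point at your load balancer:

```
# Initialise the control plane behind a load-balanced endpoint so that
# workers never need to know an individual master's (changing) IP.
kubeadm init \
  --control-plane-endpoint "k8s-api.example.com:6443" \
  --upload-certs
```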