GCP: assign/remove ephemeral IP to an existing instance - gcloud

I have a few instances in GCP, and for administrative purposes I need to briefly connect over SSH and launch a few commands. These instances do not have an external IP in normal operation, but for these brief maintenance windows I would like to assign an ephemeral IP, do the maintenance, and then remove it.
One can do this easily in the web interface (edit the instance and change the NIC config to add an ephemeral NAT IP), but I would like to avoid that since I have several instances... Am I missing something in the gcloud documentation?

Found it after some (too long) time exploring the deeper parts of the gcloud documentation.
The section dedicated to assigning a static external IP address to an instance (yes, in the static part) says in a small note:
"If you intend to use an ephemeral external IP address, you can skip this step, and Compute Engine will randomly assign an ephemeral external IP address."
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#ipassign
So the key is to add an accessConfig to your instance, like:
gcloud compute instances add-access-config [INSTANCE_NAME] \
--access-config-name "[ACCESS_CONFIG_NAME]"
In the example there is an --address [IP_ADDRESS] option to assign a static external IP but, as the note says, it is optional. Frankly, not easy to find!
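To make the two variants concrete, here is a minimal sketch (the instance name, zone and access-config name are placeholders of my own, not from the documentation):
# ephemeral: omit --address and Compute Engine picks an external IP for you
gcloud compute instances add-access-config my-instance --zone=europe-west1-b --access-config-name="external-nat"
# static: pass a previously reserved address explicitly
gcloud compute instances add-access-config my-instance --zone=europe-west1-b --access-config-name="external-nat" --address=203.0.113.10
When you are done, gcloud compute instances delete-access-config removes the external IP again.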

With the Google Cloud SDK you could use a workflow like the following:
Set up some variables:
instance=instance-1
zone=asia-northeast2-a
Set an external ephemeral IPv4 address for the instance, issue the maintenance commands to it, and then unset its external ephemeral IPv4 address:
gcloud compute instances add-access-config $instance --zone=$zone
gcloud compute ssh $instance --zone=$zone --command="maintenance #..."
gcloud compute instances delete-access-config $instance --zone=$zone
The corresponding Cloud SDK documentation links are instances/describe, instances/add-access-config, ssh and instances/delete-access-config.
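Since the question mentions several instances, the same workflow can be wrapped in a loop (a sketch with hypothetical instance names; adjust the zone and the maintenance command to your case):
zone=asia-northeast2-a
for instance in instance-1 instance-2 instance-3; do
  gcloud compute instances add-access-config $instance --zone=$zone
  gcloud compute ssh $instance --zone=$zone --command="maintenance #..."
  gcloud compute instances delete-access-config $instance --zone=$zone
done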

Related

How to bind a static IP for a container group on ACI using Docker Compose

I have a Docker Compose file intended to run on Azure Container Instances.
It is composed of 3 containers (django, nginx, certbot). I would like to bind a static public IP to the container group. I can create it at runtime, but I would like to bind the public IP beforehand so I can set the DNS A record for the reverse proxy to negotiate its SSL certificate. Is there a way to do it from the Docker Compose script, or do I have to use another way like the YAML or Resource Manager?
Referencing Docker's documentation for their integration with Azure Container Instances, the scope of the features implemented in the integration seems to be quite limited, and networking-related configurations aren't covered. Among network-related features, it seems only port configuration and DNS labels are available, as described here: ACI integration Compose features | Docker Documentation
This means that any other features will need to be implemented using other methods such as YAML files, ARM or Bicep templates, etc. (methods that cover the whole ACI API).
It is worth noting that you can't specify a static public IP directly with the other methods either, but you can specify a DNS label, and the DNS name persists throughout the container group's lifecycle (so it is always reachable via the same name) even if the IP changes.
Having a truly static IP requires leveraging something like an Application Gateway, which can have a static public IP, in front of the ACI (which could then be exposed privately on a selected VNet).
Reference documentation for the above:
Static IP address for container group - Azure Container Instances | Microsoft Learn
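As an illustration of the DNS-label approach, the Azure CLI equivalent would be roughly the following sketch (resource group, container group name, image and label are placeholders of my own):
# create a container group reachable via a stable DNS name instead of a fixed IP
az container create \
  --resource-group my-rg \
  --name my-aci-group \
  --image nginx \
  --ports 80 443 \
  --dns-name-label my-unique-label \
  --location westeurope
# the group should then be reachable at my-unique-label.westeurope.azurecontainer.io even if the public IP changes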

How to assign a single static source IP address for all pods of a service or deployment in kubernetes?

Consider a microservice X which is containerized and deployed in a Kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it, as it maintains a whitelist of IP addresses which are authorized to access it. One way for X to contact PG is through a third-party proxy server like QuotaGuard, which will provide a static IP address to service X that can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in Kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?
There's no built-in mechanism in Kubernetes for this yet.
Other possible solutions:
If the nodes of the cluster are in a private network behind a NAT, then just add your network's default gateway to the PG's whitelist.
If the whitelist can accept a CIDR range rather than only single IPs (for example 86.34.0.0/24), then add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
A naive way would be to add every node's IP to the whitelist, but it doesn't scale beyond tiny clusters of just a few nodes.
If you have administrative access to your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with the PG's IP as a destination (see the iptables sketch after this list).
If you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the above. This introduces a single point of failure though. In order to make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running: see below). Instead of using their node's IP to communicate with the PG, the replicas would share a VIP using keepalived, and that VIP would be added to the PG's whitelist. (You can have a look at easy-keepalived and either try to use it directly or learn from it how it does things.) This requires high privileges on the cluster: you need to be able to grant the pods of your proxy the NET_ADMIN and NET_RAW capabilities so they can add iptables rules and set up a VIP.
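A minimal sketch of the NAT rule described above (both IPs are placeholders: 198.51.100.10 stands for the PG and 203.0.113.5 for the whitelisted static IP):
# allow the box to forward traffic
sysctl -w net.ipv4.ip_forward=1
# rewrite the source of packets destined for the payment gateway to the whitelisted IP
iptables -t nat -A POSTROUTING -d 198.51.100.10 -j SNAT --to-source 203.0.113.5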
Update:
While waiting for builds and deployments during the last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load-balancers on bare-metal clusters, so now they can also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin: some allow it via special pod annotations. However, most CNI plugins also involve a NAT when talking to the internet, so the pod IP being static on the internal network is somewhat moot; what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT are set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.

Changing Kubernetes cluster IP to internal IP

I created a Kubernetes cluster in Google Cloud a few months ago and configured it to have an external IP address restricted by authorized networks.
I want to change the cluster IP to internal IP. Is this possible without re-creating the cluster?
As documented here, you currently "cannot convert an existing, non-private cluster to a private cluster."
Having said that, you'll need to create a new private cluster from scratch, which will have both an external IP and an internal IP. However, you'll be able to disable access to the external IP or restrict access to it as per your needs. Have a look here for the different settings available.
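For reference, creating such a private cluster from scratch could look roughly like the following sketch (the cluster name, CIDRs and the exact set of flags are placeholders/assumptions of my own; check the linked settings page for what your case actually needs):
gcloud container clusters create my-private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
# add --enable-private-endpoint if you want to disable access to the external endpoint entirely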

External IP of Google Cloud Dataproc cluster changes after cluster restart

Google Cloud Dataproc has an option to stop (not delete) the cluster (master + worker nodes) and start it again, but when we do so, the external IP addresses of the master and worker nodes change, which causes problems for using Hue and other IP-based web UIs on it.
Is there any option to persist the same IP after restart?
Though Dataproc doesn't currently provide a direct option for using static IP addresses, you can use the underlying Compute Engine interfaces to add a static IP address to your master node, possibly removing the previous ephemeral IP address.
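For example (a sketch; the address name, IP and region are my own placeholders), the master's current ephemeral external IP can be promoted to a static address, which then stays reserved for the instance:
gcloud compute addresses create dataproc-master-ip \
  --addresses 203.0.113.25 \
  --region us-central1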
That said, if you're accessing your UIs through external IP addresses, that presumably means you also had to manage your firewall rules to carefully limit the inbound IP ranges. Depending on which UIs you're using, if they're not served over HTTPS/SSL then that's still not ideal, even with firewall rules limiting access from other external sources.
The recommended way to access your Dataproc UIs is through SSH tunnels; you can even add the gcloud compute ssh and browser-launching commands to a shell script for convenience if you don't want to re-type all the SSH flags each time. This approach would also ensure that links work in pages like the YARN ResourceManager, since those will be using GCE internal hostnames which your external IP address would not work for.
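The SSH-tunnel approach could look roughly like the following sketch (the master name, zone, SOCKS port and browser command are assumptions; adapt them to your cluster and browser):
# open a SOCKS proxy through the Dataproc master node
gcloud compute ssh my-cluster-m --zone=us-central1-a -- -D 1080 -N
# in another shell, start a browser that routes its traffic through the tunnel
google-chrome --proxy-server="socks5://localhost:1080" --user-data-dir=/tmp/my-cluster-m http://my-cluster-m:8088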

Allow access to CloudSQL from all GCE instances

Is it possible to grant blanket access to my CloudSQL instance from ALL (current and future) GCE instances? I've tried adding the /16 internal network block address for my project's instances (copied from the "networks" tab under "Compute Engine": 10.240.0.0/16) but that won't save - it appears that I can only add single-machine (/32) IP addresses.
You need to use the external IP of your machine; although they are both (GCE and Cloud SQL) in Google's datacenters, you cannot communicate between the two using internal IPs.
I do not think there is a native way to allow access from any instance in your project. The only way would be to make your own app, run it on one of your instances, and use the GCE API to periodically query the running instances, get their external IPs, and then use the CloudSQL API to modify the security configuration on the CloudSQL instance.
You could improve this slightly by creating a pool of static IPs that you assign to the GCE machines that are going to access your CloudSQL instance; that way the IPs would not change. The side effect is that you would be charged for IPs that you have reserved but not allocated to instances.
Apart from that, you would have to add a rule allowing access from any IP (e.g. 0.0.0.0/0), which would not be a good idea.
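A sketch of the per-IP authorization step such a tool would perform (the instance name and addresses are placeholders; in current gcloud releases this is done with the --authorized-networks flag):
# note: this flag replaces the instance's existing authorized-networks list
gcloud sql instances patch my-sql-instance \
  --authorized-networks=203.0.113.5/32,203.0.113.6/32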