Maximum number of VPCs per AWS region across user accounts - amazon-vpc

How is AWS able to manage so many VPCs within a specific AWS region, considering that there are more VPCs per region than the private IPv4 address ranges can accommodate?


Rename existing k8s static IP address - Static vs Region when creating a static IP

I am trying to rename an existing Kubernetes/Istio Google regional static IP address, attached to an Istio ingress, to a global static IP address.
Confusion points, in connection with the question:
Why use regions in static IP addresses?
DNS zones are about the subdomain level.
Resources are located geographically/physically somewhere, so having regions for resources makes sense, but why do we need to specify a region for a static IP address?
Why have "pools", and how do you manage them?
How it all fits together:
Static IP address
Load balancer
DNS zones
Pools
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
https://cloud.google.com/compute/docs/regions-zones/
I will answer your questions as best I can below:
1 and 2 - Why use regions in static IP addresses, and why do we need to specify a region for a static IP address?
Answer: As mentioned in the documentation you have provided, Compute Engine resources are hosted in multiple locations worldwide. These locations are composed of regions and zones.
Resources that live in a zone, such as virtual machine instances or zonal persistent disks, are referred to as zonal resources. Other resources, like static external IP addresses, are regional.
Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone.
For example, to attach a zonal persistent disk to an instance, both resources must be in the same zone.
Similarly, if you want to assign a static IP address to an instance, the instance must be in the same region as the static IP address.
The underlying point is that the region where the IP address is assigned determines the latency between the end-user's machine and the data center the IP is served from. By specifying the region, you give yourself the best possible connection and reduce latency.
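As a quick illustration of the difference (the names and region below are placeholders, not taken from the question), reserving a regional address versus a global one with gcloud looks roughly like this:
# Reserve a regional static external IP (usable by resources in that region).
gcloud compute addresses create my-regional-ip --region=us-west1
# Reserve a global static external IP (usable by global HTTP(S) load balancers).
gcloud compute addresses create my-global-ip --global
# List reserved addresses and the region (if any) each one belongs to.
gcloud compute addresses list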
3 - Why have "pools", and how do you manage them?
Answer: Looking at our public documentation on Node pools, we can see that a node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification, and each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, whose value is the node pool's name. A node pool can contain only a single node or many nodes.
For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, preemptible VMs, a specific node image, larger instance sizes, or different machine types. Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control of where Pods are scheduled, you can use node taints.
You can learn more about managing node pools in the documentation on managing node pools.
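As a rough sketch (the cluster, pool, zone, and machine-type names below are placeholders), adding a node pool with a different machine type could look like this:
# Add a second node pool with larger machines to an existing cluster.
gcloud container node-pools create high-mem-pool \
--cluster=my-cluster \
--zone=us-west1-a \
--machine-type=n1-highmem-4 \
--num-nodes=2
# List the node pools in the cluster.
gcloud container node-pools list --cluster=my-cluster --zone=us-west1-a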
4 - How do all of these (static IP addresses, load balancers, DNS zones, and pools) fit together?
Answer: As mentioned earlier, all of these things (static IP addresses, load balancers, DNS zones, and pools) need to be in close proximity (the same region) in order to work together. However, depending on which regions your load balancers are set up to reach, you can connect regions as well.
Moreover, I would like to ask you the following questions, just so I can have a better idea of the situation:
1 - When you say that you are trying to rename an existing Kubernetes/Istio Google regional static IP address that is attached to an Istio ingress to a global static IP address, can you explain in more detail? Are we talking about zones, clusters, etc.?
2 - Can you please provide an example of what you are trying to accomplish, just so that I have a better idea of what you would like to be done?

AWS EKS and VPC CloudFormation

I'm creating an EKS cluster and a VPC via CloudFormation. My VPC has four subnets, and from those I am giving two subnets to the EKS cluster. But after giving two subnets, it returns the error Subnets specified must be in at least two different AZs (Service: AmazonEKS; Status Code: 400; Error Code: InvalidParameterException), even though I have already given two subnets. When I give three subnets, the EKS cluster is created successfully.
My EKS cluster has 3 nodes. I tried to create it with 2 nodes as well, but that did not work either.
My VPC info:
Subnet01Block 192.168.0.0/24
Subnet02Block 192.168.64.0/24
Subnet03Block 192.168.128.0/24
Subnet04Block 192.168.192.0/24
VpcBlock 192.168.0.0/16
As per the docs, you must select subnets that belong to different AZs, so you need to update your VPC configuration.
When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. Amazon EKS requires subnets in at least two Availability Zones.
When you select subnets for EKS, next to each subnet you see its Availability Zone letter (a, b, c, etc.). Choose subnets with different letters and you should be good to go.
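One way to double-check this before creating the cluster (the subnet IDs below are placeholders) is to ask the AWS CLI which AZ each subnet is in; in the CloudFormation template, each AWS::EC2::Subnet resource would likewise need a different AvailabilityZone:
# Show the Availability Zone of each candidate subnet; the two you pass to EKS
# must be in different AZs.
aws ec2 describe-subnets \
--subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
--query "Subnets[].[SubnetId,AvailabilityZone,CidrBlock]" \
--output table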

Understanding --master-ipv4-cidr when provisioning private GKE clusters

I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.
Google provides this example here of provisioning a private cluster where the control plane services (e.g. Kubernetes API) live on the 172.16.0.16/28 subnet.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
gcloud beta container clusters create pr-clust-1 \
--private-cluster \
--master-ipv4-cidr 172.16.0.16/28 \
--enable-ip-alias \
--create-subnetwork ""
When I run this command, I see that:
I now have a few GKE subnets in my VPC that serve as the cluster subnets for nodes and services. These are in the 10.x.x.x/8 range.
I don't have any subnets in the 172.16/16 address space.
I do have some new peering rules and routes that seem to be related. For example, there is a new route, peering-route-a08d11779e9a3276, with a destination address range of 172.16.0.16/28 and next hop gke-62d565a060f347e0fba7-3094-3230-peer. This peering then points to gke-62d565a060f347e0fba7-3094-bb01-net.
gcloud compute networks subnets list | grep us-west1
#=>
default us-west1 default 10.138.0.0/20
gke-insti3-subnet-62d565a0 us-west1 default 10.2.56.0/22
gcloud compute networks peerings list
#=>
NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer default gke-prod-us-west1-a-4180 gke-62d565a060f347e0fba7-3094-bb01-net True ACTIVE [2018-08-23T16:42:31.351-07:00]: Connected.
Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC in which the Kubernetes management endpoints live (the control plane stuff in the 172.16/16 range) that Google is managing for the GKE service?
Further - how are my requests making it to the Kubernetes API server?
The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:
The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.
The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the flags --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name.
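For example, a sketch of the "bring your own subnetwork" variant (the subnet name, region, zone, and ranges below are placeholders) might look like this:
# Create a subnetwork with named secondary ranges for pods and services.
gcloud compute networks subnets create my-gke-subnet \
--network=default \
--region=us-west1 \
--range=10.10.0.0/20 \
--secondary-range=pods=10.20.0.0/16,services=10.30.0.0/20
# Create the cluster on that subnetwork, pointing at the named ranges.
gcloud container clusters create my-cluster \
--zone=us-west1-a \
--enable-ip-alias \
--network=default \
--subnetwork=my-gke-subnet \
--cluster-secondary-range-name=pods \
--services-secondary-range-name=services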
The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a Regional Cluster. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe. The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.
Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.
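To pull both addresses from an existing cluster (the cluster name and zone are placeholders, and the field paths assume the describe output mentioned above):
gcloud container clusters describe my-private-cluster \
--zone=us-west1-a \
--format="value(endpoint,privateClusterConfig.privateEndpoint)"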
You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the gcloud CLI.
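A sketch of turning it on for an existing cluster (the cluster name, zone, and CIDR below are placeholders for your own values):
gcloud container clusters update my-private-cluster \
--zone=us-west1-a \
--enable-master-authorized-networks \
--master-authorized-networks=203.0.113.0/29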
Hope this helps!

Google Kubernetes private cluster: Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork

We are attempting to make several private Kubernetes clusters. We can find only limited documentation on the specific settings for private clusters, so we are running into issues related to the subnetwork IP ranges.
Say we have 3 clusters: we set the Master Address Range to 172.16.0.0/28, 172.16.0.16/28, and 172.16.0.32/28, respectively.
We leave Network and Subnet set to "default". We are able to create 2 clusters that way; however, upon spin-up of the 3rd cluster, we receive the error "Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5." We suspect that we are setting up the subnetwork IP ranges incorrectly, but we are not sure what we are doing wrong, or why there is more than 1 secondary range per subnetwork to begin with.
Here is a screenshot of the configuration for one of the clusters:
We are setting these clusters up through the UI.
This cluster has VPC-native (alias IP) enabled, which uses 2 secondary ranges per cluster.
See https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#secondary_ranges
According to the error message ("Exceeded maximum supported number of secondary ranges per subnetwork: 5"), the max is 5. That's why the 3rd cluster failed to create.
The best approach is to create a new subnetwork for each cluster. This way, each subnetwork only requires 2 secondary ranges, and you won't hit the limit of 5.
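To see how many secondary ranges the "default" subnetwork has already accumulated (the region below is a placeholder), you can inspect it directly:
gcloud compute networks subnets describe default \
--region=us-west1 \
--format="yaml(secondaryIpRanges)"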
For anyone who lands here from Google and is wondering how to list / see the subnet names that have been created using GKE as described in OP's question:
To list subnets for a region (and potentially modify or delete a subnet, since you won't know the name), use the beta gcloud command:
gcloud beta container subnets list-usable
I landed here while looking for the answer and figured others trying to determine the best way to structure their subnets / ranges might be able to use the above command (which took me forever to track down).
(Can't comment on Alan's answer due to low reputation)
You can create a new subnetwork:
go to the "VPC network"
click on "default" (under name)
click on "Add subnet"
define the subnet range / zone
Then on GKE when you create a new cluster, select your new subnetwork.
This should allow you to create more clusters without running into the error Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5.
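Roughly the same flow as the UI steps above, done with gcloud (the subnet name, region, zone, and range below are placeholders):
# Add a subnet to the default network.
gcloud compute networks subnets create gke-cluster-3-subnet \
--network=default \
--region=us-west1 \
--range=10.50.0.0/20
# Create the cluster on the new subnet; GKE adds its own secondary ranges there.
gcloud container clusters create cluster-3 \
--zone=us-west1-a \
--enable-ip-alias \
--network=default \
--subnetwork=gke-cluster-3-subnet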
You can create clusters via gcloud and add --create-subnetwork "".
See: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--create-subnetwork
This will create a new subnet with each new cluster so the "5 Secondary IP ranges per subnet" quota won't be reached.
I ran into the same issue, with the error message below:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=This operation will exceed max secondary ranges per subnetwork (30) for subnet "default", consider reusing existing secondary ranges or use a different subnetwork.
I think the problem is that we create GKE clusters that share the same subnet, "default", and eventually they exceed the maximum number of secondary ranges.
I think the best practice would be to create a dedicated subnet for each cluster.
You can create a cluster and a subnet simultaneously:
gcloud container clusters create cluster-name \
--region=region \
--enable-ip-alias \
--create-subnetwork name=subnet-name
See the link below for more subnet configuration options:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#creating_cluster_and_subnet
NOTE: GKE tries to clean up the created subnetwork when the cluster is deleted. But if the subnetwork is being used by other resources, GKE does not delete the subnetwork, and you must manage the life of the subnetwork yourself.

Max /28 subnets in a 10.0.0.0/16 VPC

AWS documentation states I can have 200 subnets per VPC without requesting additional capacity. I'm looking to create /28 CIDR subnets that can each provide 16 (or 11 usable) IP addresses, and I want to create the maximum number of subnets in the VPC. What CIDR should I assign to the VPC itself? (Maybe 10.0.0.0/16?) And what are some example CIDRs to define the subnets?
You could use a /20 network, which would give 4096 addresses.
That would give 256 subnets * 16 addresses (/28) = 4096, which is already more than the 200-subnet limit lets you create.
But feel free to use something larger, such as the 10.0.0.0/16 you suggested.
You might want to automate the creation of that many subnets.
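A rough sketch of that automation with the AWS CLI (the VPC ID is a placeholder; the loop only carves out the first 16 consecutive /28 blocks, so example CIDRs are 10.0.0.0/28, 10.0.0.16/28, 10.0.0.32/28, and so on):
# Each /28 advances the last octet by 16; to go past 10.0.0.240/28 you would
# also step the third octet (10.0.1.0/28, 10.0.1.16/28, ...).
for i in $(seq 0 15); do
  aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block "10.0.0.$((i * 16))/28"
done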
See: CIDR, Subnet Masks, and Usable IP Addresses Quick Reference Guide (Cheat Sheet)