AWS documentation states I can have 200 subnets per VPC without requesting additional capacity. I'm looking to create /28 CIDR subnets that each provide 16 IP addresses (11 usable, since AWS reserves 5 addresses per subnet), and I want to create the maximum number of subnets in the VPC. What CIDR should I assign to the VPC itself? (maybe 10.0.0.0/16?) And what are some example CIDRs to define the subnets?
You could use a /20 network, which gives 4096 addresses.
That works out to 4096 / 16 = 256 possible /28 subnets, comfortably above the 200-subnet limit.
For example, with a VPC CIDR of 10.0.0.0/20, the subnets would be 10.0.0.0/28, 10.0.0.16/28, 10.0.0.32/28, and so on up to 10.0.15.240/28.
But feel free to use something larger, such as the 10.0.0.0/16 you suggested.
You might want to automate the creation of that many subnets.
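As a rough sketch of that automation with the AWS CLI and bash (the VPC ID is a placeholder, and this assumes a 10.0.0.0/20 VPC as above):
VPC_ID="vpc-0123456789abcdef0"   # placeholder: your VPC's ID
for i in $(seq 0 199); do
  third=$((i / 16))              # 16 /28 blocks fit under each third-octet value
  fourth=$(( (i % 16) * 16 ))    # /28 blocks step by 16 in the fourth octet
  aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block "10.0.${third}.${fourth}/28"
done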
See: CIDR, Subnet Masks, and Usable IP Addresses Quick Reference Guide (Cheat Sheet)
I have the CIDR range for my subnet set to 10.0.0.0/9. When setting up Kubernetes, Google asks for a master IP range. I set this to 10.0.0.0/28, but I have no idea if this is correct or how these two things are related. Is there any info on that?
Also, did I do that right? I assume Kubernetes has to be using the IPs of the subnet.
Thanks,
Dean
"Master IP Range" is only relevant in GKE when you enable Private Network.
When creating a private cluster, the Master IP Range has the following information message:
Master IP range is a private RFC 1918 range for the master's VPC. The master range must not overlap with any subnet in your cluster's VPC. The master and your cluster use VPC peering to communicate privately.
This setting is permanent.
Since 10.0.0.0/28 is a range inside 10.0.0.0/9, it overlaps your subnet, so it cannot be used as the master range.
I created a VPC subnet with 10.0.0.0/9 and tried to create the cluster with a Master IP Range of 10.0.0.0/28; the console rejected it with an overlap error during creation.
If you look at Creating a Private GKE Cluster you can find many configuration examples for different access types.
Example:
If your subnet is 10.0.0.0/9, you must use a Master IP Range outside of that range.
Since 10.0.0.0/9 spans 10.0.0.0 through 10.127.255.255, you can set the master range to any /28 inside 10.128.0.0/9, 172.16.0.0/12, or 192.168.0.0/16, as long as it does not overlap any other VPC or subnet in your project.
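As a sketch, creating a private cluster with a non-overlapping master range via gcloud would look something like this (the cluster name and zone are placeholders):
gcloud container clusters create private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28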
Here you can learn more about GKE Networking.
If you have any doubts let me know in the comments.
Rename an existing Kubernetes/Istio static IP address
I am trying to rename an existing Kubernetes/Istio Google regional static IP address, attached to an Istio ingress, to a global static IP address.
Confusion points, in connection with the question:
Why use regions in static IP addresses?
DNS zones are about the subdomain level.
Resources are physically located somewhere geographically, so having regions for resources makes sense, but why do we need to specify a region for a static IP address?
Why have "pools", and how do you manage them?
How it all fits together:
Static IP address
Load balancer
DNS zones
Pools
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
https://cloud.google.com/compute/docs/regions-zones/
I will answer your questions as best I can below:
1 and 2 - Why use Regions in Static IP addresses? And Why do we need to specify a Region for a Static IP address?
Answer: As mentioned in the documentation you have provided, Compute Engine resources are hosted in multiple locations worldwide. These locations are composed of regions and zones.
Resources that live in a zone, such as virtual machine instances or zonal persistent disks, are referred to as zonal resources. Other resources, like static external IP addresses, are regional.
Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone.
For example, to attach a zonal persistent disk to an instance, both resources must be in the same zone.
Similarly, if you want to assign a static IP address to an instance, the instance must be in the same region as the static IP address.
The underlying point is that the region where the IP address is assigned determines the latency between the end-user's machine and the data center the IP is served from. By specifying the region, you give yourself the best possible connection and reduce latency.
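As an illustrative sketch (the address names and region are placeholders), reserving a regional versus a global static IP with gcloud looks like this:
gcloud compute addresses create my-regional-ip --region us-central1
gcloud compute addresses create my-global-ip --global
A global address can then be used with a global HTTP(S) load balancer, while a regional address is tied to resources in its region.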
3 - Why have "pools", and how do you manage them?
Answer: Looking at our public documentation on node pools, we can see that a node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification, and each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, whose value is the node pool's name. A node pool can contain a single node or many nodes.
For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, preemptible VMs, a specific node image, larger instance sizes, or different machine types. Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control of where Pods are scheduled, you can use node taints.
You can learn more about managing node pools by looking into this documentation here.
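For instance, a minimal sketch of adding a dedicated node pool with gcloud (the pool name, cluster name, zone, and machine type are placeholders):
gcloud container node-pools create high-mem-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-highmem-8 \
  --num-nodes 2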
4 - How do static IP addresses, load balancers, DNS zones, and pools all fit together?
Answer: As mentioned earlier, these resources generally need to be in the same region in order to work together. However, depending on which regions you wire up behind your load balancers, you can connect multiple regions as well.
Moreover, I would like to ask you the following questions, just so I can get a better idea of the situation:
1 - When you say that you are trying to rename an existing Kubernetes/Istio Google regional static IP address that is attached to an Istio ingress to a global static IP address, can you explain in more detail? Are we talking about zones, clusters, etc.?
2 - Can you please provide an example on what you are trying to accomplish? Just so that I can have a better idea on what you would like to be done.
I'm creating an EKS cluster and a VPC via CloudFormation. My VPC has four subnets, and from those I am giving two subnets to the EKS cluster. But with two subnets it fails with the error Subnets specified must be in at least two different AZs (Service: AmazonEKS; Status Code: 400; Error Code: InvalidParameterException), even though I have given two subnets. When I give three subnets, the EKS cluster is created successfully.
My EKS cluster has 3 nodes. I also tried creating it with 2 nodes, but that did not work either.
My VPC info.
Subnet01Block 192.168.0.0/24
Subnet02Block 192.168.64.0/24
Subnet03Block 192.168.128.0/24
Subnet04Block 192.168.192.0/24
VpcBlock 192.168.0.0/16
As per the docs, you must select subnets that belong to different AZs, so you need to update your VPC configuration:
When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. Amazon EKS requires subnets in at least two Availability Zones
When you select subnets for EKS, next to each subnet you see its Availability Zone letter (a, b, c, etc.). Choose subnets with different letters and you should be good to go.
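A minimal sketch with the AWS CLI (the VPC ID and AZ names are placeholders; in a CloudFormation template you would pin each subnet's AvailabilityZone property the same way):
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 192.168.0.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 192.168.64.0/24 --availability-zone us-east-1b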
We are attempting to make several private Kubernetes clusters. We can find limited documentation on specific settings for the private cluster, therefore we are running into issues related to the subnetwork IP ranges.
Say we have 3 clusters: We set the Master Address Range to 172.16.0.0/28, 172.16.0.16/28 and 172.16.0.32/28 respectively.
We leave Network and Subnet set to "default". We are able to create 2 clusters that way; however, upon spin-up of the 3rd cluster, we receive the error "Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5." We suspect that we are setting up the subnetwork IP ranges incorrectly, but we are not sure what we are doing wrong, or why there is more than 1 secondary range per subnetwork to begin with.
Here is a screenshot of the configuration for one of the clusters:
We are setting these clusters up through the UI.
This cluster has VPC-native (alias IP) enabled, which uses 2 secondary ranges per cluster: one for Pods and one for Services.
See https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#secondary_ranges
According to the error message,
Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5.
the max is 5. Two clusters on the default subnetwork already consume 4 secondary ranges, and a third needs 2 more. That's why the 3rd one failed to create.
The best approach is to create a new subnetwork for each cluster. This way, each subnetwork only requires 2 secondary ranges, and you won't hit the limit of 5.
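To check how many secondary ranges a subnetwork already carries, one option is a describe call like the following sketch (the region is a placeholder):
gcloud compute networks subnets describe default \
  --region us-central1 \
  --format "value(secondaryIpRanges)"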
For anyone who lands here from Google and is wondering how to list / see the subnet names that have been created using GKE as described in OP's question:
To list the usable subnets for a region (and potentially modify or delete a subnet, since you won't otherwise know its name), use the beta gcloud command:
gcloud beta container subnets list-usable
I landed here while looking for the answer and figured others trying to determine the best way to structure their subnets / ranges might be able to use the above command (which took me forever to track down).
(Can't comment on Alan's answer due to low reputation)
You can create a new subnetwork:
go to the "VPC network"
click on "default" (under name)
click on "Add subnet"
define the subnet range / zone
Then on GKE when you create a new cluster, select your new subnetwork.
This should allow you to create more clusters without running into the error Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5.
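Equivalently, as a sketch via gcloud (the subnet name, region, and range are placeholders):
gcloud compute networks subnets create my-gke-subnet \
  --network default \
  --region us-central1 \
  --range 10.10.0.0/20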
You can create clusters via gcloud and add --create-subnetwork "".
See: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--create-subnetwork
This will create a new subnet with each new cluster so the "5 Secondary IP ranges per subnet" quota won't be reached.
I ran into the same issue, with the error message below:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=This operation will exceed max secondary ranges per subnetwork (30) for subnet "default", consider reusing existing secondary ranges or use a different subnetwork.
I think the problem is that we create GKE clusters that share the same subnet "default", and eventually they exceed the max secondary ranges.
I think the best practice is to create a dedicated subnet for each cluster.
You can create a cluster and a subnet simultaneously:
gcloud container clusters create cluster-name \
--region=region \
--enable-ip-alias \
--create-subnetwork name=subnet-name
See below to find more subnet configuration options:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#creating_cluster_and_subnet
NOTE: GKE tries to clean up the created subnetwork when the cluster is deleted. But if the subnetwork is being used by other resources, GKE does not delete the subnetwork, and you must manage the life of the subnetwork yourself.