GCP Kubernetes Autopilot mode: what is the free tier? - kubernetes

I'm trying to use the free tier (Autopilot mode) to learn k8s on GCP. However, I came across the question "Is it possible to have a Google Cloud Kubernetes cluster in the free tier?". When I checked the link given in that question, I could not find the stated limitation "f1-micro machines are not supported due to insufficient memory". Is this still valid? Can I use k8s on GCP in the free tier without incurring any cost?

There is no way to get a completely free GKE cluster on GCP, but you can get a very cheap one by following the instructions at
https://github.com/Neutrollized/free-tier-gke.
Using a combination of GKE's free management tier and a low-cost machine type, the estimated cost is less than $5 per month.
More details on what is available as part of the free tier can be found here: https://cloud.google.com/free.
As for your question regarding the limitation of f1-micro machines in GKE: the documentation's limitations section states that a minimum CPU platform cannot be used with shared-core machine types. Since f1-micro machines are shared-core machine types, the limitation is still valid and they cannot be used as GKE nodes.
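As a sketch of what a low-cost Standard cluster could look like (the cluster name, zone, and disk size below are hypothetical choices, and running this requires the gcloud CLI plus an existing GCP project with billing enabled):

```shell
# One zonal Standard cluster's control plane falls under the GKE free
# management tier; Spot VMs keep the node cost low. Note that GKE
# rejects f1-micro nodes, so a small non-shared-capacity-limited type
# such as e2-small is used here instead.
gcloud container clusters create learn-k8s \
  --zone us-central1-a \
  --machine-type e2-small \
  --spot \
  --num-nodes 1 \
  --disk-size 32
```

Remember to delete the cluster when you are done experimenting, since the nodes themselves are still billed.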

As described in the documentation, there is no management fee for one Autopilot cluster or one zonal Standard cluster.
So you don't pay for the control plane, but you do pay for your workloads: billed per second at the pod level for Autopilot, and per second at the node level (Compute Engine instances) for Standard.
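For example, an Autopilot cluster (with a hypothetical name and region) can be created like this; the control plane is free for your first such cluster, and you are then billed only for the resources your pods request:

```shell
# Autopilot clusters are regional and fully managed; there are no
# node pools to size, so billing follows pod-level resource requests.
gcloud container clusters create-auto my-autopilot \
  --region us-central1
```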

For study purposes, I believe it's better to have a classic (Standard) GKE cluster rather than Autopilot, where you have fewer management options.
When it comes to pricing, using preemptible nodes in the GKE cluster is a better option, since their pricing is really low.
You can enable this by selecting Preemptible nodes when creating the node pool.
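The same option is available from the CLI; a sketch with hypothetical cluster and pool names:

```shell
# --preemptible provisions heavily discounted VMs that Google can
# reclaim at any time (max 24h lifetime) - fine for learning, not
# for production workloads.
gcloud container node-pools create cheap-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type e2-medium \
  --preemptible \
  --num-nodes 1
```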

Hi. If you want to learn k8s on GCP, you can try Qwiklabs, where you get some initial credits with which you can access labs to learn and practice: https://go.qwiklabs.com/qwiklabs-free

Related

Is there any cloud provider where one can run a managed k8s cluster in the free tier indefinitely?

I'm trying to run an open-source project with minimal costs on the cloud and would love to run it on k8s without the hassle of managing the cluster myself (a managed k8s cluster). Is there a free tier option for a small-scale project with any cloud provider?
If there is one, which parameters should I choose to get the free tier?
You can use IBM Cloud, which provides a single-worker-node Kubernetes cluster along with a container registry, like other cloud providers. This is more than enough for a beginner to try the concepts of Kubernetes.
You can also use Tryk8s, which provides a playground for trying Kubernetes for free. Play with Kubernetes is a labs site provided by Docker and created by Tutorius; it allows users to run k8s clusters in a matter of seconds and gives you the experience of a free Alpine Linux virtual machine in the browser. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.
If you want to use more services and resources, based on your use case you can try other cloud providers, they may not provide an indefinitely free trial but have no restriction on the resources.
For example, Google Kubernetes Engine (GKE) provides $300 of credit to fully explore and assess Google Cloud. You won't be charged until you upgrade, and the credit can be used for a 3-month period from account creation. There is no restriction on the resources or the number of nodes for creating a cluster. You can add Istio and try Cloud Run (Knative) as well.
Refer to Free Kubernetes, which lists the free trials/credits for managed Kubernetes services.

How to setup MetricBalancingThresholds and MetricActivityThresholds on Azure Service Fabric cluster?

I have a Service Fabric cluster with 7 nodes.
I am using Microsoft Azure Service Fabric version 8.2.1571.9590.
The problem is that the cluster is balanced neither by CPU nor by memory usage.
It is balanced by metrics like PrimaryCount and ReplicaCount, but not by CPU or memory usage.
The result is that, because some of our services are heavy consumers of CPU/RAM (the "noisy neighbour" issue), they consume more resources and starve other services in the process.
I know I can set MetricBalancingThresholds and MetricActivityThresholds for our cluster, but don't know metrics names.
Based on the article https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-balancing
I figured that I can set up MetricBalancingThresholds and MetricActivityThresholds
(https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-fabric-settings).
I know that we can set those values in Azure portal / Service fabric resource / Custom fabric settings.
The problem is, I don't know what parameter names, or metrics names to use to set thresholds on CPU and Memory.
The documentation says "PropertyGroup", but I don't know what the possible values are here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-fabric-settings#metricactivitythresholds
Earlier, CpuPercentageNodeCapacity and MemoryPercentageNodeCapacity were used in the PlacementAndLoadBalancing section; those were straightforward, but it seems they are deprecated.
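As a sketch of what the custom fabric settings could look like: if the services use Service Fabric's resource governance, the built-in metric names are (to my understanding, verify against your cluster's load reports) servicefabric:/_CpuCores and servicefabric:/_MemoryInMB, and the PropertyGroup entries are name/value pairs keyed by metric name. In an ARM template this might look like the following; the threshold values here are illustrative assumptions, not recommendations:

```json
"fabricSettings": [
  {
    "name": "MetricActivityThresholds",
    "parameters": [
      { "name": "servicefabric:/_CpuCores", "value": "0" },
      { "name": "servicefabric:/_MemoryInMB", "value": "0" }
    ]
  },
  {
    "name": "MetricBalancingThresholds",
    "parameters": [
      { "name": "servicefabric:/_CpuCores", "value": "1.2" },
      { "name": "servicefabric:/_MemoryInMB", "value": "1.2" }
    ]
  }
]
```

The same name/value pairs can be entered in Azure portal under Custom fabric settings, with MetricActivityThresholds / MetricBalancingThresholds as the section names.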

How to manage a resource-hungry Istio default/SDS installation?

I'm using Istio at the moment, combined with cert-manager. Because I need multiple certificates, I'm using SDS instead of the volume mount approach.
But the hardware requirements for this setup are really high. For GKE it is recommended to use a node pool of 4x n1-standard-2 machines, which sums to about $200 per month just for Istio. The recommendation for EKS is 2x m5.large machines, so a little cheaper but still around $150. What confuses me is that Minikube "just" needs 4 vCPUs and 16 GB of memory in total, which is roughly half of the requirements for GKE and EKS.
You'll see the resource-hungry components by looking at the istio-system namespace, especially at the limits. For me it is:
istio-telemetry > 1100m / 6800m (requested / limits)
istio-policy (I have 5 of them) > 110m / 2000m
My question is:
Did you manage to reduce the limits without facing issues in production?
What node-pool size / machine type are you running your Istio plane on?
Has someone tried auto-scaling for this node pool? Did it reduce the costs?
Kind regards from Berlin.
Managed Istio for GKE is offered by Google as a pre-configured bundle. 4x n1-standard-2 is recommended to provide enough resources for all Istio components being installed.
Downsizing a cluster below the recommended size does not make sense. Installation of managed Istio onto a standard GKE cluster (3x n1-standard-1) will fail due to lack of resources, and besides that you wouldn't have free computing capacity for your workloads. The recommended cluster size seems reasonable.
Apart from the recommended hardware configuration (4x n1-standard-2), managed Istio can also be installed and run on a cluster with an 8x n1-standard-1 configuration.
Taking the previous point into account, autoscaling could be beneficial mostly for volatile workloads, but won't help much with saving the resources allocated to Istio.
If managed Istio for GKE seems too resource-consuming, you could install the open-source version of Istio and select an installation profile with only the components you actually need, as described here:
Customizable Install with Helm
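Before lowering any limits, it may also help to compare actual consumption against the configured values; a sketch using standard kubectl (the first command needs a metrics source such as metrics-server installed in the cluster):

```shell
# Live CPU/memory usage per pod in the Istio namespace
kubectl top pods -n istio-system

# Configured CPU requests vs. limits per pod
kubectl get pods -n istio-system \
  -o custom-columns='NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,CPU_LIM:.spec.containers[*].resources.limits.cpu'
```

If usage sits far below the limits for your traffic, that is at least a data point for experimenting with smaller values in a staging cluster first.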

How to check if my Kubernetes cluster has enough resources to deploy all my software

I want to deploy many applications in a Kubernetes cluster. For each application I have its configuration: the number of pods and the CPU and RAM requests and limits.
My requirement is that either all of the applications are provisioned successfully or none of them is, even if only one fails. A failure can happen because the Kubernetes cluster does not have enough resources.
How do I check whether my cluster has sufficient resources to provision all of the applications before actually deploying them?
AFAIK Kubernetes does not support deploying a set of applications all-or-nothing.
I think you have to do the math yourself.
You said you have all the information you need (the requirements for all the services); this should help you plan your cluster's dimensions.
You should calculate this on a per-node basis. Let's say you need 16 GB of memory and your nodes provide 8 GB per machine: your cluster should then offer at least 24 GB (3 nodes) of memory for your applications (besides monitoring tools etc.).
Always calculate with some headroom on top, because the OS and monitoring tools will take a bit of each node's resources.
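That calculation can be sketched in a few lines of bash; the 20% headroom used in the example is an assumption for OS and monitoring overhead, not a Kubernetes constant:

```shell
#!/usr/bin/env bash
# required_nodes TOTAL_REQUEST_MIB NODE_CAPACITY_MIB HEADROOM_PCT
# Pads the total requested memory by a headroom percentage, then
# ceiling-divides by per-node capacity to get a node count.
required_nodes() {
  local need=$(( $1 * (100 + $3) / 100 ))   # total + headroom
  echo $(( (need + $2 - 1) / $2 ))          # ceiling division
}

required_nodes 16384 8192 20   # 16 GiB requested on 8 GiB nodes -> 3
```

The same arithmetic applies to CPU (in millicores); take the larger of the two node counts.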

GCP: Kubernetes engine allocatable resources

According to the documentation, Kubernetes reserves a significant amount of resources on the nodes in the cluster in order to run itself. Are the numbers in the documentation correct or is Google trying to sell me bigger nodes?
Aside: taking kube-system pods and other reserved resources into account, am I right in saying it's better resource-wise to rent one machine equipped with 15 GB of RAM instead of two with 7.5 GB of RAM each?
Yes, Kubernetes reserves a significant amount of resources on the nodes, so consider that before renting the machines.
You can also deploy custom machine types in GCP. For pricing you can use Google's pricing calculator.
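As a rough sketch of why one bigger node leaves more allocatable memory: the GKE docs describe a tiered memory reservation (25% of the first 4 GiB, 20% of the next 4, 10% of the next 8, 6% of the next 112, 2% beyond that). The integer-MiB approximation below ignores the additional eviction threshold and OS reservations, so treat it as an estimate only:

```shell
#!/usr/bin/env bash
# Approximate GKE's kube-reserved memory (MiB) for a node with the
# given memory, using the tiered percentages from the documentation.
reserved_mib() {
  local mem=$1 r=0 t size pct
  for tier in 4096:25 4096:20 8192:10 114688:6 999999999:2; do
    size=${tier%%:*} pct=${tier##*:}
    t=$(( mem < size ? mem : size ))   # portion falling in this tier
    r=$(( r + t * pct / 100 ))
    mem=$(( mem - t ))
  done
  echo "$r"
}

reserved_mib 7680    # one 7.5 GiB node -> 1740 MiB reserved
reserved_mib 15360   # one 15 GiB node  -> 2559 MiB reserved
```

Two 7.5 GiB nodes would reserve about 3480 MiB in total versus roughly 2559 MiB for a single 15 GiB node, so by this estimate the bigger machine does leave more allocatable memory, at the cost of less scheduling flexibility and redundancy.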