I want to limit each user's usage of the RGW object gateway (radosgw), i.e. the numbers reported by radosgw-admin usage show --uid=johndoe --start-date=2012-03-01 --end-date=2012-04-01.
Is there any way to set limits on the parameters reported by the usage show command?
For example, user johndoe may only be allowed 1000 ops per month, or 1,000,000 bytes of put_object per month.
It's okay if there is a solution in nginx or in other layers of the Ceph object gateway stack.
Ceph doesn't have any built-in tool for limiting requests to RGW. You should use the reverse proxy module of nginx, for two reasons:
1- To add a caching layer in front of RGW and improve performance.
2- To implement limits on the requests reaching RGW.
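As a minimal sketch of both points (the upstream address, zone sizes and rates below are placeholders, not values from the question), the standard limit_req and proxy_cache directives can throttle and cache traffic in front of RGW:

# in the http {} block; the rate limit is keyed on the client address here
# (a per-user key would need to be derived from the auth header instead)
limit_req_zone $binary_remote_addr zone=rgw_limit:10m rate=10r/s;
proxy_cache_path /var/cache/nginx/rgw levels=1:2 keys_zone=rgw_cache:100m max_size=10g inactive=60m;

server {
    listen 80;
    location / {
        limit_req zone=rgw_limit burst=20 nodelay;   # throttle requests per client
        proxy_cache rgw_cache;                       # cache responses from RGW
        proxy_pass http://127.0.0.1:7480;            # RGW endpoint (placeholder)
    }
}

Note that limit_req works per second or per minute; a per-month operations budget would still need accounting on top of this.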
You can now use quotas on buckets and users to limit their usage. You can also set default quotas in your Ceph config file /etc/ceph/ceph.conf, so that the quotas are applied automatically when new users are created. See ceph rgw quotas.
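As a sketch of how this is done with radosgw-admin (the user id and values are placeholders; note that quotas cap stored objects and bytes rather than operations per period):

radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1073741824
radosgw-admin quota enable --quota-scope=user --uid=johndoe

And, assuming a reasonably recent Ceph release, defaults for newly created users can go into ceph.conf:

rgw user default quota max objects = 1024
rgw user default quota max size = 1073741824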
I have an EKS cluster in AWS where a specific service will work as a live feed to clients. I expect it to use very low resources, but it will be using server-sent events, which require long-lived HTTP connections. Given that I will be serving thousands of users, I believe the best metric for autoscaling would be open connections.
My understanding is that k8s only has CPU and memory usage as out-of-the-box metrics for scaling; I may be wrong here. I looked into custom metrics, but the k8s documentation on that is extremely shallow.
Any suggestions or guidance on this are very welcome.
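For illustration, this is roughly what a HorizontalPodAutoscaler driven by a per-pod custom metric would look like, assuming a metrics adapter (e.g. prometheus-adapter) already exposes a hypothetical open_connections metric for the pods; all names and values here are placeholders, not taken from the question:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: live-feed
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: live-feed              # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: open_connections   # hypothetical per-pod custom metric
      target:
        type: AverageValue
        averageValue: "500"      # target ~500 open connections per pod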
Is there a way to prevent a Pod from deploying onto Kubernetes if it does not have memory resource requests & limits set?
Yes, you can apply Limit Ranges. See e.g. Configure Minimum and Maximum CPU Constraints for a Namespace for an example with CPU resources; the same mechanism applies to e.g. memory and storage as well.
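As a rough sketch adapting that CPU example to memory (the concrete values are placeholders), a LimitRange enforcing memory bounds in a namespace could look like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: memory-constraints        # placeholder name
  namespace: my-namespace         # placeholder namespace
spec:
  limits:
  - type: Container
    min:
      memory: 64Mi                # containers may not request less than this
    max:
      memory: 1Gi                 # containers may not set a limit above this
    defaultRequest:
      memory: 128Mi               # applied when a container omits its memory request
    default:
      memory: 256Mi               # applied when a container omits its memory limit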
For this you could enable the Policy addon for AKS:
az aks enable-addons --addons azure-policy --name MyAKSCluster --resource-group MyResourceGroup
This installs a managed Gatekeeper instance in your cluster. With this enabled, you can apply Azure built-in policies or your own Gatekeeper policies to the AKS cluster. Here is a list of built-in policies from Azure specifically for Kubernetes.
Here is the built-in policy to enforce limits, and here you will find a sample ConstraintTemplate for the use case described above. As those templates are CRDs, you need to activate them with a Constraint (a sketch follows below). You may need to tweak them to also enforce memory & CPU requests.
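Purely for illustration, assuming the K8sContainerLimits ConstraintTemplate from the Gatekeeper policy library is installed, a Constraint capping container limits could look roughly like this (names and values are placeholders):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-must-have-limits   # placeholder name
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    cpu: "2"       # maximum allowed CPU limit per container
    memory: 2Gi    # maximum allowed memory limit per container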
Another policy tool is Kyverno. The downside is that it is not Azure-managed, so you have to update it yourself, and there are no built-in policies from Microsoft. Here are some example policies (a minimal sketch follows the list):
Require Limits and Requests
Memory Requests Equal Limits
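A minimal sketch of such a Kyverno policy (the name and message are placeholders; see the linked examples for the maintained versions):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits      # placeholder name
spec:
  validationFailureAction: Enforce   # reject non-compliant Pods instead of only auditing
  rules:
  - name: validate-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "CPU and memory requests and a memory limit are required."
      pattern:
        spec:
          containers:
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"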
Hope that helped in addition to the LimitRange hint from Jonas :)
I need some understanding of the sizing considerations for the k8s cluster master components: in order to handle a maximum of 1000 pods, how many masters are needed to do the job, especially in a multi-master setup with a load balancer in front routing requests to the API servers?
Will 3 master nodes (etcd, apiserver, controller, scheduler) be enough to handle the load, or are more required?
There is no strict answer for this. As per the documentation, in Kubernetes v1.15 you can create your cluster in many ways, but you must stay within the following limits:
No more than 5000 nodes
No more than 150000 total pods
No more than 300000 total containers
No more than 100 pods per node
You did not provide any information about your infrastructure, i.e. whether you want to deploy locally or in the cloud.
One advantage of the cloud is that kube-up automatically configures the proper VM size for your master depending on the number of nodes in your cluster.
You also cannot forget to provide a proper quota for CPU, memory, etc.
Please check this documentation for more detailed information.
We are working on provisioning our service using Kubernetes, and the service needs to register/unregister some data for scaling purposes. Let's say the service handles long-held transactions, so when it starts or scales out, it needs to store the starting and ending transaction ids somewhere. When it scales out further, it needs to find the next transaction id and save it along with the ending transaction id that is covered. When it scales in, it needs to delete the transaction ids, and so on. etcd seems to make the cut: it is used by Kubernetes to store deployment data, and not only is it close to Kubernetes, it actually lives inside the cluster and is maintained by Kubernetes; thus we'd like to find out whether it is open for our use. I'd like to ask the question for EKS, AKS, and self-installed clusters alike. Any advice welcome. Thanks.
Do not use the kubernetes etcd directly for an application.
Access to read/write data in the kubernetes etcd store is effectively root access to every node in your cluster. Even if you are well versed in etcd v3's role-based security model, avoid sharing that specific etcd instance so you don't increase your cluster's attack surface.
For EKS and GKE, the etcd cluster is hidden inside the provided cluster service, so you can't break things. I would assume AKS takes a similar approach, unless they expose to you the instances that run the management nodes.
If the data is small and not heavily updated, you might be able to reuse the kubernetes etcd store via the kubernetes API. Create a ConfigMap or a custom resource definition for your data and edit it via the easily securable and namespaced functionality in the kubernetes API.
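As a rough sketch of that ConfigMap approach (the names and keys below are made up for illustration), the service could keep its transaction-id bookkeeping in a ConfigMap and read or patch it through the Kubernetes API:

apiVersion: v1
kind: ConfigMap
metadata:
  name: txn-ranges            # hypothetical name
  namespace: my-service       # hypothetical namespace
data:
  replica-0: "1000:1999"      # hypothetical start/end transaction ids covered by replica 0
  replica-1: "2000:2999"
  replica-2: "3000:3999"

A new replica could then claim a range with a patch, e.g.:

kubectl -n my-service patch configmap txn-ranges --type merge -p '{"data":{"replica-3":"4000:4999"}}'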
For most application uses, run your own etcd cluster (or whatever service fits) to keep Kubernetes free to do its workload scheduling. The coreos etcd operator will let you define and create new etcd clusters easily.
Is there a maximum number of namespaces supported by a Kubernetes cluster? My team is designing a system to run user workloads via K8s and we are considering using one namespace per user to offer logical segmentation in the cluster, but we don't want to hit a ceiling with the number of users who can use our service.
We are using Amazon's EKS managed Kubernetes service and Kubernetes v1.11.
This is quite difficult to answer because it depends on a lot of factors. Here are some figures gathered on a k8s 1.7 cluster in the kubernetes-thresholds document: the number of namespaces (ns) is 10000, with a few assumptions.
There are no limits from the code point of view, because a namespace is just a Go type that gets instantiated as a variable.
In addition to the link that #SureshVishnoi posted, the limits will depend on your setup, but some of the factors that contribute to how your namespaces (and resources in a cluster) scale are:
Physical or VM hardware size where your masters are running
Unfortunately, EKS doesn't provide that yet (it's a managed service after all)
The number of nodes your cluster is handling.
The number of pods in each namespace
The number of overall K8s resources (deployments, secrets, service accounts, etc)
The hardware size of your etcd database.
Storage: how many resources can you persist.
Raw performance: how much memory and CPU you have.
The network connectivity between your master components and etcd store if they are on different nodes.
If they are on the same nodes then you are bound by the server's memory, CPU and storage.
There is no hard limit on the number of namespaces; you can create as many as you want. A namespace itself doesn't actually consume cluster resources like CPU, memory, etc.
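As a tiny illustration of the namespace-per-user pattern from the question (names are placeholders), onboarding a user is just:

kubectl create namespace user-johndoe
kubectl label namespace user-johndoe tenant=johndoe   # optional label for housekeeping

kubectl get namespaces --no-headers | wc -l           # quick check of how many namespaces exist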