Is there a way we can share a GPU between multiple pods, or do we need a specific model of NVIDIA GPU?
Short answer, yes :)
Long answer below :)
There is no built-in solution to achieve that, but you can use various tools (plugins) to control GPU usage. First, look at the Kubernetes official site:
Kubernetes includes experimental support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes.
This page describes how users can consume GPUs across different Kubernetes versions and the current limitations.
Also look at the limitations:
GPUs are only supposed to be specified in the limits section, which means:
- You can specify GPU limits without specifying requests because Kubernetes will use the limit as the request value by default.
- You can specify GPU in both limits and requests but these two values must be equal.
- You cannot specify GPU requests without specifying limits.
Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.
Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.
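To make the limits section concrete, here is a minimal sketch of a pod that requests one whole GPU through the NVIDIA device plugin; the pod name and image are just placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-example              # placeholder name
spec:
  containers:
    - name: cuda-container       # placeholder name
      image: nvidia/cuda:11.0-base   # placeholder: any GPU-capable image
      resources:
        limits:
          nvidia.com/gpu: 1      # whole GPUs only; fractions are not allowed

Because the request defaults to the limit, specifying only the limit is enough.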
As you can see from the documentation, this supports GPUs across several nodes. You can find the deployment guide there.
Additionally, if you don't specify the GPU in the resource requests/limits at all, the containers from all pods will have full access to the GPU as if they were normal processes. There is no need to do anything in this case.
For more information, also take a look at this GitHub topic.
We’re in the process of migrating our aging monolith to a more robust solution and landed on Kubernetes as the most appropriate platform to achieve what we’re looking for. At the same time, we’re looking to split out and isolate our client data for security and improved privacy.
What we’re considering is ultimately having one database per customer, and embedding those connection details into a deployment for each of them. We’d then build a routing service of some kind that would link a client’s request to their respective deployment/service.
Because our individual clients vary wildly in size (we have some that generate thousands of requests per minute, and others only hundreds per day), we like having the ability to scale them independently through ReplicaSets on the deployments.
However, I have some concerns regarding the upper limits of how many deployments can exist and be successfully managed within a cluster, as we'd be looking at potentially hundreds of different clients, a number that will continue to grow. I also have concerns about costs, and how having dedicated resources (essentially an entire VM) for our smaller clients might impact our budget.
So my questions are:
Is this a good idea at all? Why or why not, and if not, are there alternative architectures we could look at to achieve the same thing?
Is this solution more expensive than it needs to be?
I’d appreciate any insights you could offer, thank you!
I can think of a couple of options for this situation:
Deploying separate clusters for each customer. This also allows you to size your clusters properly for each customer and configure autoscaling accordingly for each of them. The drawback is that each cluster has a management fee of $0.10 per hour, but you get a full guarantee that everything is isolated, and you can use the cluster autoscaler to make sure that only the VMs actually needed for each customer are running. For smaller customers, you may want to use this with small (and cheap) machine types.
Another option would be to, as mentioned in the comments, use namespaces. However, you would have to configure the cluster properly, as there are ways of accessing services in other namespaces; see the NetworkPolicy sketch after this list.
Implement customer isolation in your own software running on the cluster. This would imply forcing your software to access only the database for a given customer, but I would not recommend going this route.
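As a rough sketch of the namespace approach, a default-deny style NetworkPolicy per customer namespace blocks traffic coming from other namespaces (this assumes your CNI plugin enforces NetworkPolicies; the namespace name is a placeholder):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: customer-a          # placeholder: one namespace per customer
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}        # only pods from the same namespace may connect

You would typically pair this with a ResourceQuota per namespace so a large customer cannot starve a small one.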
I'm running a fairly resource-intensive service on a Kubernetes cluster to support CI activities. Only a single replica is needed, but it uses a lot of resources (16 CPUs), and it's generally only needed during work hours (weekdays, roughly 8am-6pm). My cluster runs in a cloud and is set up with instance autoscaling, so if this service is scaled to zero, that instance can be terminated.
The service is third-party code that cannot be modified (well, not easily). It's a fairly typical HTTP service other than that its work is fairly CPU intensive.
What options exist to automatically scale this Deployment down to zero when idle?
I'd rather not set up a schedule to scale it up/down during working hours because occasionally CI activities are performed outside of the normal hours. I'd like the scaling to be dynamic (for example, scale to zero when idle for >30 minutes, or scale to one when an incoming connection arrives).
Actually, Kubernetes supports scaling to zero only by means of an API call, since the Horizontal Pod Autoscaler only supports scaling down to a minimum of 1 replica.
However, there are a few Operators which allow you to overcome that limitation by intercepting the requests coming to your pods or by inspecting some metrics.
You can take a look at Knative or Keda.
They enable your application to be serverless and they do so in different ways.
Knative, by means of Istio, intercepts the requests: if there's an active pod serving them, it redirects the incoming request to that one; otherwise it triggers a scale-up.
By contrast, Keda best fits event-driven architectures, because it is able to inspect predefined metrics, such as lag, queue length or custom metrics (collected from Prometheus, for example), and trigger the scaling.
Both support scale to zero when predefined conditions are met within an equally predefined window.
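For illustration, here is a minimal sketch of a Keda ScaledObject that scales a Deployment between 0 and 1 replicas based on a Prometheus query; the Deployment name, Prometheus address and query are placeholders for your setup:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ci-service-scaler            # placeholder name
spec:
  scaleTargetRef:
    name: ci-service                 # placeholder: your Deployment
  minReplicaCount: 0                 # allow scale to zero
  maxReplicaCount: 1
  cooldownPeriod: 1800               # scale down after ~30 minutes of inactivity
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # placeholder address
        query: sum(rate(http_requests_total{service="ci-service"}[5m]))   # placeholder query
        threshold: "1"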
Hope it helped.
I ended up implementing a custom solution: https://github.com/greenkeytech/zero-pod-autoscaler
Compared to Knative, it's more of a "toy" project, fairly small, and has no dependency on Istio. It's been working well for my use case, though I do not recommend others use it without being willing to adopt the code as their own.
There are a few ways this can be achieved; possibly the most "native" way is using Knative with Istio. Kubernetes by default allows you to scale a Deployment to zero, but you need something that can broker the scale-up events based on an "input event", essentially something that supports an event-driven architecture.
You can take a look at the official documentation here: https://knative.dev/docs/serving/configuring-autoscaling/
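As a sketch, a Knative Service with its minimum scale annotation set to zero looks roughly like this (the service name and image are placeholders):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ci-service                    # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero
        autoscaling.knative.dev/max-scale: "1"   # single replica is enough
    spec:
      containers:
        - image: registry.example.com/ci-service:latest   # placeholder image
          resources:
            requests:
              cpu: "16"

Knative's activator then holds incoming requests while a new pod starts, which matches the "scale to one when an incoming connection arrives" requirement.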
The horizontal pod autoscaler currently doesn’t allow setting the minReplicas field to 0, so the autoscaler will never scale down to zero, even if the pods aren’t doing anything. Allowing the number of pods to be scaled down to zero can dramatically increase the utilization of your hardware.
When you run services that get requests only once every few hours or even days, it doesn’t make sense to have them running all the time, eating up resources that could be used by other pods.
But you still want to have those services available immediately when a client request comes in.
This is known as idling and un-idling. It allows pods that provide a certain service to be scaled down to zero. When a new request comes in, the request is blocked until the pod is brought up and then the request is finally forwarded to the pod.
Kubernetes currently doesn’t provide this feature yet, but it will eventually.
Based on the documentation, the HPA does not support minReplicas=0 so far; read this thread: https://github.com/kubernetes/kubernetes/issues/69687. To set up the HPA properly, you can use this formula to work out the required number of pods:
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
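For example (numbers purely illustrative): with currentReplicas = 2, a currentMetricValue of 200m CPU per pod and a desiredMetricValue of 100m, you get desiredReplicas = ceil[2 * (200 / 100)] = 4.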
You can also set up the HPA based on Prometheus metrics; follow this link:
https://itnext.io/horizontal-pod-autoscale-with-custom-metrics-8cb13e9d475
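A rough sketch of such an HPA, assuming the Prometheus Adapter already exposes a per-pod custom metric (the metric name http_requests_per_second and the Deployment name are hypothetical):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ci-service-hpa               # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ci-service                 # placeholder Deployment
  minReplicas: 1                     # the HPA itself cannot go below 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second    # hypothetical adapter metric
        target:
          type: AverageValue
          averageValue: "10"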
How can I request kubernetes to allocate all of the available huge-pages on a node to my app pod?
My application uses huge pages. Before deploying to Kubernetes, I configured the Kubernetes nodes for huge pages, and I know that by specifying the limits & requests as mentioned in the k8s docs, my app pod can make use of huge pages. But this somehow tightly couples the node specs to my app configuration. If my app were to run on different nodes with different amounts of huge pages, I would have to keep overriding these values based on the target environment.
resources:
  limits:
    hugepages-2Mi: 100Mi
But then as per k8s doc, "Huge page requests must equal the limits. This is the default if limits are specified, but requests are not."
Is there a way I can somehow request k8s to allocate all available huge pages to my app pod, or just keep it dynamic as in the case of unspecified memory or CPU requests/limits?
From the design proposal, it looks like the huge pages resource request is fixed and not designed for scenarios where there can be multiple sizes:
We believe it is rare for applications to attempt to use multiple huge page sizes.
Although you're not trying to use multiple values but rather to set them dynamically (once the pod is deployed), the sizes must be consistent; they are used for pod pre-allocation and to determine how reserved resources are treated in the scheduler.
This means it is mostly used to assess resources in a fixed way, expecting a somewhat uniform scenario (where node-level page sizes were previously set).
It looks like you're going to have to roll out different pod specs depending on your nodes' settings. For that, maybe some "traditional" tainting of the nodes would help identify specific resources in a heterogeneous cluster.
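A sketch of that idea, assuming you label your node pools yourself (the hugepages-profile label and its values are hypothetical), would be to keep one pod spec variant per profile and select nodes accordingly:

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-app-large          # placeholder: the variant for "large" nodes
spec:
  nodeSelector:
    hugepages-profile: large         # hypothetical label you put on the nodes
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      resources:
        limits:
          hugepages-2Mi: 1Gi         # sized to match the "large" node profile
          memory: 2Gi                # memory limit alongside hugepages, as in the k8s docs example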
Is there a possibility to share a single GPU between Kubernetes pods?
As the official docs say:
GPUs are only supposed to be specified in the limits section, which means:
You can specify GPU limits without specifying requests because Kubernetes will use the limit as the request value by default.
You can specify GPU in both limits and requests but these two values must be equal.
You cannot specify GPU requests without specifying limits.
Containers (and pods) do not share GPUs. There’s no overcommitting of GPUs.
Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.
Also, you can follow this discussion to get a little bit more information.
Yes, it is possible - at least with Nvidia GPUs.
Just don't specify it in the resource limits/requests. This way containers from all pods will have full access to the GPU as if they were normal processes.
Yes, it's possible by making some changes to the scheduler; someone on GitHub kindly open-sourced their solution, take a look here: https://github.com/AliyunContainerService/gpushare-scheduler-extender
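With that extender installed, pods request a share of GPU memory instead of whole GPUs; if I recall the project's convention correctly, the resource name is aliyun.com/gpu-mem and the value is in GiB. A rough sketch (the amount and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-example            # placeholder name
spec:
  containers:
    - name: app
      image: nvidia/cuda:11.0-base   # placeholder GPU image
      resources:
        limits:
          aliyun.com/gpu-mem: 3      # share of GPU memory instead of a whole GPU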
Yes, you can use nano gpu for sharing NVIDIA GPUs.
The official docs say pods can't request a fraction of a GPU. If you are running machine learning applications in multiple pods, then you should look into Kubeflow. Those guys have solved this issue.
I have a set of services. Every service contains some components.
Some of them are stateless, some of them are stateful, some are synchronous, some are asynchronous.
I used different approaches to monitoring and alerting.
Log-based alerting and metrics gathering, New Relic based, and our own homegrown solution.
Basically, at the moment I am looking for a way to generalize and aggregate important metrics for all services in a single place. One of the things I want is to monitor products rather than separate services.
As an end result, I picture a single dashboard with a small number of widgets, but by looking at those widgets I would be able to say for sure whether the services are usable to the end customer.
Perhaps someone can recommend an approach/methodology, or point me to some best practices.
I like what you're trying to achieve! A service is not production-ready unless it's thoroughly monitored.
I believe what you're describing falls under the topics of health-checking and metrics.
... I would be able to say for sure, if services are usable to end-customer.
That, however, will require a little of both ;-) To ensure you're currently fulfilling your SLA, you have to make sure that your services are all a) running and b) performing as requested. For both problems I suggest looking at the StatsD toolchain. Initially developed by Etsy, it has become the de-facto standard for gathering metrics.
To ensure all your services are running, we're relying on Kubernetes. It takes our description of what should run, what should be reachable from outside, etc., and hosts that on our infrastructure. It also makes sure that, should things die, they will be restarted. It helps with things like auto-scaling as well! Awesome tooling and kudos to Google!
The way it ensures that is with health checks. There are multiple ways you can ensure a service node booted by Kubernetes is alive and kicking (namely HTTP calls and CLI scripts, but this is modular should you need anything else!). If Kubernetes detects unhealthy nodes, it will immediately phase them out and start another node instead.
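For example, a minimal sketch of HTTP liveness and readiness probes on a container (the paths, port and image are placeholders for whatever health endpoints your service exposes):

apiVersion: v1
kind: Pod
metadata:
  name: my-service                   # placeholder name
spec:
  containers:
    - name: my-service
      image: registry.example.com/my-service:latest   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz             # placeholder health endpoint
          port: 8080
        initialDelaySeconds: 10      # give the service time to boot
        periodSeconds: 15            # probe every 15 seconds
      readinessProbe:
        httpGet:
          path: /ready               # placeholder readiness endpoint
          port: 8080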
Now, to make sure all your services perform as expected, you'll need to gather some metrics. For all of our services (and all individual endpoints), we gather a few metrics via StatsD, like:
Requests/sec
Number of errors returned (404, etc.)
Response times (average, median, percentiles depending on the service's SLA)
Payload size (average)
Sometimes the number of concurrent requests per endpoint and the number of instances currently running
General metrics like the host's current CPU and memory usage and uptime
We gather a lot more metrics, but that's about the bottom line. Since StatsD has become more of a "protocol specification" than a concrete product, there is a myriad of collectors, front-ends and back-ends to choose from. They help you visualize your system's state, and many of them feature alerts if some metric or combination of metrics goes beyond its threshold.
Let me know if this was helpful!
There are at least 3 types of things you will need to monitor: the host where the service is deployed, the component itself, and the SLAs; some of them depend on the software stack you're using as well as the architecture.
With that said, you could for example use Nagios to monitor the hardware where the services are deployed, and Splunk for the services' metrics/SLAs as well as for any errors that might occur. You can also use SNMP packages in case something goes wrong and you have a more sophisticated support structure; these would be your triggers. Without knowing how your infrastructure/services are set up, it is hard to go into deeper detail.