Invoking CloudRun endpoint from within itself - kubernetes

Assuming there is a Flask web server with two routes, deployed as a Cloud Run service on GKE.
from flask import Flask
app = Flask(__name__)

@app.route('/cpu_intensive', methods=['POST'], endpoint='cpu_intensive')
def cpu_intensive():
    ...  # TODO: some actions, CPU intensive

@app.route('/batch_request', methods=['POST'], endpoint='batch_request')
def batch_request():
    ...  # TODO: invoke cpu_intensive
A "batch_request" is a batch of many same structured requests - each one is highly CPU intensive and handled by the function "cpu_intensive". No reasonable machine can handle a large batch and thus it needs to be paralleled across multiple replicas.
The deployment is configured that every instance can handle only 1 request at a time, so when multiple requests arrive CloudRun will replicate the instance.
I would like to have a service with these two endpoints, one to accept "batch_requests" and only break them down to smaller requests and another endpoint to actually handle a single "cpu_intensive" request. What is the best way for "batch_request" break down the batch to smaller requests and invoke "cpu_intensive" so that CloudRun will scale the number of instances?
make http request to localhost - doesn't work since the load balancer is not aware of these calls.
keep the deployment URL in a conf file and make a network call to it?
Other suggestions?

With more detail, it's now clearer!
You have two responsibilities:
One to split -> many requests can be handled in parallel; not compute intensive.
One to process -> each request must be processed on a dedicated instance because of the compute-intensive work.
If your split performs internal calls (to localhost, for example) you will stay on the same instance and you will parallelize nothing (you just multi-thread the same request on the same instance).
So, for this, you need 2 services:
one to split, which can accept several concurrent requests;
the second to process, and this time you need to set the concurrency param to 1 to be sure to accept only one request at a time.
To improve your design, and if the batch processing can be asynchronous (I mean, the split process doesn't need to know when the batch process is over), you can add Pub/Sub or Cloud Tasks in the middle to decouple the 2 parts.
And if the processing requires more than 4 CPUs or 4 GB of memory, or takes more than 1 hour, use Cloud Run on GKE and not Cloud Run managed.
Last word: if you don't use Pub/Sub, the best way is to set the batch-process service URL in an environment variable of your split service so it knows where to send requests.
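To make that last point concrete, here is a minimal sketch of the split service, assuming the process service URL is injected as an environment variable named PROCESS_URL and that the process service was deployed with concurrency set to 1; the variable name and the fan-out helper are assumptions, not part of the original answer:

import os
from concurrent.futures import ThreadPoolExecutor

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
# Assumed env var holding the URL of the deployed cpu_intensive service,
# e.g. https://cpu-intensive-xxxxxxxx-uc.a.run.app (service-to-service auth omitted).
PROCESS_URL = os.environ["PROCESS_URL"]

@app.route('/batch_request', methods=['POST'])
def batch_request():
    items = request.get_json()  # assumed: a JSON list of single requests
    # Fan the items out as individual HTTP calls; because each process instance
    # accepts only one request at a time, Cloud Run scales out the replicas.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(
            lambda item: requests.post(f"{PROCESS_URL}/cpu_intensive", json=item).json(),
            items))
    return jsonify(results)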

I believe for this use case it's much better to use GKE rather than Cloud Run. You can create two Kubernetes Deployments, one for the batch_request app and one for the cpu_intensive app. The second one will be used as a worker for the batch_request app and will scale on demand when there are more requests to the batch_request app. I believe this is called a master-worker architecture, in which you separate your app frontend from intensive work or batch jobs.
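As a rough illustration of that setup (not from the original answer), the batch_request app could reach the worker Deployment through its cluster-internal Service; the Service name cpu-intensive, the default namespace and port 8080 are assumptions:

import requests

# Sketch only: the Service name "cpu-intensive", the "default" namespace and
# port 8080 are assumptions. The Service load-balances each call across the
# worker replicas, and a HorizontalPodAutoscaler on the worker Deployment
# can add replicas on demand.
WORKER_URL = "http://cpu-intensive.default.svc.cluster.local:8080/cpu_intensive"

def dispatch(item: dict) -> dict:
    # One HTTP call per single item from the batch.
    return requests.post(WORKER_URL, json=item, timeout=600).json()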

Related

Limiting the number of times an endpoint of Kubernetes pod can be accessed?

I have a machine learning model inside a Docker image. I pushed the image to Google Container Registry and then deployed it inside a Kubernetes pod. There is a FastAPI application that runs on port 8000, and this FastAPI endpoint is public (call it mymodel:8000).
The structure of the FastAPI app is:
from fastapi import FastAPI, Form
app = FastAPI()

@app.get("/homepage")
async def get_homepage(): ...

@app.get("/model")
async def get_modelpage(): ...

@app.post("/model")
async def get_results(query: str = Form(...)): ...
Users can submit queries and get results from the machine learning model running inside the container. I want to limit the number of times a query can be made by all the users combined. So if the query limit is 100, all the users combined can make only 100 queries in total.
I thought of a way to do this:
Keep a database that stores the number of times the GET and POST methods have been called. As soon as the total number of POST calls crosses the limit, stop accepting any more queries.
Is there an alternative way of doing this using Kubernetes limits? For example, can I define a limit_api_calls such that the total number of times mymodel:8000 is accessed is at most equal to limit_api_calls?
I looked at the documentation and I could only find setting limits for CPUs, Memory and rateLimits.
There are several approaches that could satisfy your needs.
Custom implementation: as you mentioned, keep the number of API calls received in a persistence layer and deny requests once the limit has been reached (a minimal sketch is given at the end of this answer).
Use a service mesh: Istio (for instance) will let you limit the number of requests received and act as a circuit breaker.
Use an external API manager: Apigee will also let you limit and even charge your users; however, if it is only for internal use (not pay per use) I definitely wouldn't recommend it.
The tricky part is what you want to happen after the limit has been reached. If it is just a pod, you may exit the application to finish and clear it.
Otherwise, if you have a Deployment with its ReplicaSet and several resources associated with it (like ConfigMaps), you probably want to use some kind of asynchronous alert or polling check to clean up everything related to your deployment. You may want to have a deep look at orchestrators like Airflow (Composer) and use tools such as Helm for keeping deployments easy.
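For the first option, here is a minimal sketch of such a shared counter, using Redis as the persistence layer; Redis itself, the key name post_calls, the limit of 100 and the run_model helper are all assumptions made for illustration:

from fastapi import FastAPI, Form, HTTPException
import redis

app = FastAPI()
r = redis.Redis(host="redis", port=6379)  # assumed shared Redis instance
LIMIT = 100

@app.post("/model")
async def get_results(query: str = Form(...)):
    # INCR is atomic, so the count stays correct across pods/replicas.
    if r.incr("post_calls") > LIMIT:
        raise HTTPException(status_code=429, detail="Query limit reached")
    return run_model(query)  # hypothetical helper that calls the ML model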

FastAPI scale-up multi-tenant application

I am trying to understand how to scale up FastAPI in our app. We currently have the application developed like the code snippet below, so we don't use async calls. Our application is multi-tenant and we expect to load big requests (~10 MB) per request.
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    # psycopg2 queries: SELECT ... (the query takes 2-3 minutes, or runs an ML model)
    return {"message": "Hello World"}
When one API call is being handled, another user has to wait before they can start making requests, which is what we don't want. I can increase from 1 worker to 4-6 workers (gunicorn), so that 4-6 users can use the app independently. Does that mean we can handle 4-6x more, or is it less?
We were thinking of changing to async and using an async Postgres driver (asyncio) so we could get more throughput. I assume the database will become the bottleneck soon? We also did some performance testing, and this approach would cut the time in half according to our tests.
How can we scale up our application further if we want to handle 1000 users at the same time during peak times? What should we take into consideration?
First of all: Does this processing need to be sync? I mean, is the user waiting for the response of this processing that takes 2-3 minutes? It is not recommended that you have APIs that take that long to respond.
If your user doesn't need to wait until it finishes, you have a few options:
You can use Celery and make this processing async using a background task. Celery is commonly used for this kind of thing, where you have huge queries or heavy processing that takes a while and can be done asynchronously.
You can also use the background tasks from FastAPI, which allow you to run things in the background (see the sketch below).
If you do it this way you will be able to easily scale your application. Note that Celery currently doesn't support async, so you would not be able to use async there unless you implement a few tweaks yourself.
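A minimal sketch of that FastAPI background-task option; process_query is a hypothetical stand-in for the 2-3 minute query / ML step:

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def process_query(tenant_id: str) -> None:
    # Run the long psycopg2 query / ML model here.
    ...

@app.get("/")
async def root(tenant_id: str, background_tasks: BackgroundTasks):
    # The response is returned immediately; the heavy work runs afterwards.
    background_tasks.add_task(process_query, tenant_id)
    return {"message": "Processing started"}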
About scaling the number of workers: FastAPI recommends that you use your container structure to manage the number of replicas running, so instead of using gunicorn workers you could simply scale the number of replicas of your service. If you are not using containers, you can use a gunicorn setup that automatically spins up new workers based on the number of requests you are receiving.
If none of the options above makes sense for you, I'd suggest:
Use the async driver for Postgres so that, while it is running and processing your query, FastAPI will be able to receive requests from other users (sketched below). Note that if your query is huge, you might need a lot of memory to do what you are describing.
Create some sort of autoscaling based on response time / requests per second so you can scale your application as you receive more requests.
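And a rough sketch of the async-driver option, here using asyncpg; the DSN and the query are placeholders:

import asyncpg
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def startup():
    # Placeholder DSN; a pool lets concurrent requests share connections.
    app.state.pool = await asyncpg.create_pool(dsn="postgresql://user:pass@db/app")

@app.get("/")
async def root():
    async with app.state.pool.acquire() as conn:
        # While this await runs, the worker can serve other users.
        rows = await conn.fetch("SELECT 1")  # placeholder for the long query
    return {"rows": len(rows)}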

Reliably running hundreds of scheduled functions every minute

I am building an application that will need to run hundreds of short-running tasks every minute. These functions are not doing anything special other than making calls to an HTTP endpoint. I need a reliable mechanism for scheduling these invocations every minute, indefinitely. Failures to run at the scheduled time cannot be tolerated. I have considered the following options for the scheduler:
AWS Lambda
Mesosphere Chronos
Cron
Python Celery
Obviously there is a trade off between cost, maintainability (I will need to update the logic of these functions every once in a while), and reliability.
My question is, which of these options would be the most appropriate if I am most concerned about consistency/reliability? Are there options I'm missing that I should consider?
As you already mentioned, there are multiple technologies that could help you do this; I would say the trick is more about finding the logic flow/model to use.
For example, if the number of tasks is not fixed, a publish/subscribe pattern could apply; for this, something like RabbitMQ or AWS SQS could be used.
There are multiple ways to submit a task to the queue and to dequeue it: you could have multiple workers reading/waiting for events, where they read one at a time or in chunks (based on the number of cores per server), all bound to the speed and precision you want.
Scaling, I would say, is easier, since if you need more speed (precision to complete all tasks every minute) you just need to add more workers.
For more ideas, check the article Using AWS Lambda with Amazon DynamoDB; it covers a stream-based model / event sourcing.
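To illustrate the worker side of that pattern, here is a minimal sketch assuming AWS SQS and boto3; the queue URL and the call_endpoint helper are placeholders, not something from the question:

import json

import boto3
import requests

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/scheduled-tasks"  # placeholder

def call_endpoint(task: dict) -> None:
    # Each queued task is assumed to carry the HTTP endpoint it should hit.
    requests.post(task["url"], json=task.get("payload"), timeout=30)

while True:
    # Long-poll for up to 10 messages; add more workers to increase throughput.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        call_endpoint(json.loads(msg["Body"]))
        # Delete only after successful processing so failed tasks are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])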

Biztalk - How to throttle a streaming disassemble pipeline

I need to limit the number of orchestration instances spawned while debatching a large message in a streaming disassemble receive pipeline. Let's say that I have a large XML coming in that contains 100 000 separate "Order" messages. The receive pipeline would then debatch it and create 100 000 "ProcessOrder" orchestrations. This is too much and I need to limit that.
Requirements
The debatching needs to be done in a streaming manner so that I only load one "Order" message in memory at a time before sending it to the messagebox;
The debatching needs to be throttled based on the number of currently running "ProcessOrder" orchestration instances (say, if I already have 100 running instances, the debatching would wait until one finishes before sending another "Order" message to the messagebox).
Where I'm at
I have the receive pipeline that does the debatching and functional modifications to my messages. It does what it should in a streaming manner and puts individual messages in VirtualStreams;
I have an orchestration and helper methods that can limit the number of “ProcessOrder” orchestration instances.
The problem
I know that I can run a receive pipeline inside an orchestration (and that would solve my problem since, on every "getnext" call to the pipeline, I could just hold on if there are too many running orchestration instances) but, digging into the BizTalk DLLs, I noticed that Microsoft.XLANGs.Pipeline.XLANGPipelineManager still loads all the messages into memory instead of enumerating them like Microsoft.BizTalk.PipelineOM.PipelineManager does. I know it puts every message in a VirtualStream, but this is still inadequate, memory-wise, for such a large number of messages.
Question
My next step would be to run the receive pipeline directly in the receive port (so it would use Microsoft.BizTalk.PipelineOM.PipelineManager) without the orchestration that limits the number of "ProcessOrder" instances, but to meet the requirements I would need to add delay logic to my pipeline. Is this a viable option? If not, why not, and what other alternatives do I have?
You should debatch all messages once in the pipeline and store the individual messages in MSMQ before they are even processed by an orchestration. Use the standard pipeline to debatch the messages, as it handles debatching of large files efficiently. MSMQ is available for free through Turn On Windows Features; using it is very easy and does not require any development. Sending to MSMQ will be very fast: 100K messages is not an issue at all.
Then have a receive location that reads from MSMQ. Depending on your orchestration throughput, you can control the message flow by using BizTalk receive host throttling, by receiving the messages from MSMQ in order, or by using a combination of both. Make sure you have separate host instances for the MSMQ receive and send handlers and for your orchestration processing.
This can all be done through configuration, without any extra code, simplifying your design. Make sure your orchestration has a minimum number of persistence points.

Service with background jobs, how to ensure jobs only run periodically ONCE per cluster

I have a Play Framework based service that is stateless and intended to be deployed across many machines for horizontal scaling.
This service handles HTTP JSON requests and responses, and uses CouchDB as its data store, again for maximum scalability.
We have a small number of background jobs that need to be run every X seconds across the whole cluster. It is vital that the jobs do not execute concurrently on multiple machines.
To execute the jobs we're using Actors and the Akka Scheduler (since we're using Scala):
Akka.system().scheduler.schedule(
    Duration.create(0, TimeUnit.MILLISECONDS),
    Duration.create(10, TimeUnit.SECONDS),
    Akka.system().actorOf(LoggingJob.props),
    "tick")
(etc)

object LoggingJob {
  def props = Props[LoggingJob]
}

class LoggingJob extends UntypedActor {
  override def onReceive(message: Any) {
    Logger.info("Job executed! " + message.toString())
  }
}
Is there:
any built-in trickery in Akka/Actors/Play that I've missed that will do this for me?
OR a recognised algorithm that I can put on top of Couchbase (distributed mutex? not quite?) to do this?
I do not want to make any of the instances 'special' as it needs to be very simple to deploy and manage.
Check out Akka's Cluster Singleton Pattern.
For some use cases it is convenient and sometimes also mandatory to ensure that you have exactly one actor of a certain type running somewhere in the cluster.
Some examples:
single point of responsibility for certain cluster-wide consistent decisions, or coordination of actions across the cluster system
single entry point to an external system
single master, many workers
centralized naming service, or routing logic
Using a singleton should not be the first design choice. It has several drawbacks, such as single-point of bottleneck. Single-point of failure is also a relevant concern, but for some cases this feature takes care of that by making sure that another singleton instance will eventually be started.