I have an e-commerce app. When a user adds or removes items, I need the change logged to my remote database. Every transaction is important, and I have to make sure the transactions are logged in the same order they happened. However, this slows my app down, because the round trip to and from the database takes time.
I found that persisting the transactions locally is a good way to hide the delay. Using Hive in Flutter, my local persistence works fine and has improved the app's performance.
Now I need your recommendation for sending these transactions to my back end as they happen. The remote-DB side of the code should run in the background, without making the user wait for confirmation. Any help, guidelines, or pointers would be highly appreciated.
You can try the workmanager package:
void callbackDispatcher() {
  Workmanager().executeTask((task, inputData) {
    // Perform the required background work here,
    // e.g. push pending transactions to your back end.
    return Future.value(true);
  });
}
Workmanager().initialize(
  callbackDispatcher, // The top-level function.
  isInDebugMode: true, // Posts a notification whenever a job runs; handy for debugging.
);
For periodic tasks you can use:
// Periodic task registration.
Workmanager().registerPeriodicTask(
  "periodic-task-identifier",
  "simplePeriodicTask",
  // When no frequency is provided, the default of 15 minutes is used.
  // 15 minutes is also the minimum: Android automatically raises any lower frequency to 15 minutes.
  frequency: Duration(hours: 1),
);
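Since you want each transaction uploaded as it happens rather than on a fixed schedule, you can also register a one-off task right after persisting to Hive. Here is a minimal sketch, assuming a hypothetical task name "syncTransaction" that your callbackDispatcher knows how to handle:

import 'package:workmanager/workmanager.dart';

// Hypothetical helper: call this right after writing the transaction to Hive.
// "syncTransaction" is an assumed task name; handle it in callbackDispatcher.
Future<void> scheduleTransactionSync(String transactionId) {
  return Workmanager().registerOneOffTask(
    "sync-$transactionId", // Unique name, so each transaction gets its own task.
    "syncTransaction",
    inputData: {"transactionId": transactionId},
    constraints: Constraints(
      networkType: NetworkType.connected, // Only run while the device is online.
    ),
  );
}

Note that WorkManager does not guarantee execution order across separate one-off tasks, so given your ordering requirement it may be safer to have the background task drain the Hive queue in insertion order rather than upload a single transaction per task.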
I am trying to understand how to scale up FastAPI for our app. Our application is currently developed like the code snippet below, so we don't use async calls. It is multi-tenant, and we expect large requests (~10 MB each).
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    # psycopg2 queries (SELECT ...) or an ML model; takes 2-3 minutes
    return {"message": "Hello World"}
While one API call is being processed, another user has to wait to start their request, which is what we don't want. I can increase from 1 worker to 4-6 workers (gunicorn), so that 4-6 users can use the app independently. Does that mean we can handle 4-6x the load, or is it less?
We were thinking that by changing to async and using an async Postgres driver (asyncio-based) we could get more throughput, although I assume the database will become the bottleneck soon. We also did some performance testing, and according to our tests this approach would cut response times in half.
How can we scale up our application further if we want to handle 1,000 simultaneous users at peak times? What should we take into consideration?
First of all: does this processing need to be synchronous? I mean, is the user actually waiting for the response of this processing that takes 2-3 minutes? Having APIs that take that long to respond is not recommended.
If your user doesn't need to wait until it finishes, you have a few options:
You can use Celery and make this processing asynchronous as a background task. Celery is commonly used for exactly this kind of thing, where you have huge queries or heavy processing that takes a while and can be done asynchronously.
You can also use FastAPI's built-in background tasks, which let you run work after the response is returned (see the sketch after this list).
If you do it this way, you will be able to scale your application easily. Note that Celery currently doesn't support async, so you would not be able to use async there unless you implement a few tweaks yourself.
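A minimal sketch of the FastAPI background-task approach, assuming a hypothetical run_heavy_query function that wraps your psycopg2 query or ML model:

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def run_heavy_query(tenant_id: str) -> None:
    # Hypothetical stand-in for the psycopg2 query / ML model that
    # takes 2-3 minutes. Runs after the response has been sent.
    ...

@app.get("/")
def root(tenant_id: str, background_tasks: BackgroundTasks):
    # Schedule the heavy work and return immediately.
    background_tasks.add_task(run_heavy_query, tenant_id)
    return {"message": "Processing started"}

The client gets the response right away instead of holding a worker for the full 2-3 minutes.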
About scaling the number of workers: FastAPI recommends using your container infrastructure to manage the number of replicas, so instead of relying on gunicorn you could simply scale the number of replicas of your service. If you are not using containers, you can use a gunicorn setup that automatically spins up new workers based on the number of requests you are receiving.
If none of my answers above make sense for you, I'd suggest:
Use an async Postgres driver so that while a query is running and being processed, FastAPI can keep receiving requests from other users (see the sketch after these suggestions). Note that if your queries are huge, you might need a lot of memory to do what you are describing.
Set up some sort of autoscaling based on response time or requests per second, so your application scales out as you receive more requests.
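As a rough illustration of the async-driver suggestion, here is a minimal sketch using asyncpg (one possible asyncio-based driver; the DSN and query are placeholders):

import asyncpg
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def startup():
    # Placeholder DSN; a connection pool lets concurrent requests share connections.
    app.state.pool = await asyncpg.create_pool(dsn="postgresql://user:pass@localhost/db")

@app.get("/report")
async def report():
    async with app.state.pool.acquire() as conn:
        # Stand-in for your heavy query: while it awaits, the event loop
        # is free to serve other requests.
        rows = await conn.fetch("SELECT 1")
    return {"rows": len(rows)}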
Once a day we create a Firestore document (~4 kB) that is updated at regular intervals (every 10 seconds) over a timeframe of 15 minutes. Our iOS clients subscribe to changes on the document using addSnapshotListener and subsequently write changes to a subcollection of the document.
This setup had been running smoothly for about a year. Since we experienced a significant increase in clients (from 300 to ~1.5k), once during the 15 minutes most of the clients stop receiving snapshot updates for about 20-30 seconds. Before and after this occurrence, snapshots are received as expected.
So far we haven't been able to reproduce this issue using only a handful of clients. We're still working on implementing a stress test to try to reproduce the observed behavior.
Could this in some form be caused by automatic scaling to accommodate the number of clients listening for snapshots?
Any suggestions would be greatly appreciated.
Versions
Xcode version: 13.0 (13A233)
Firebase SDK version: 7.11.0
Listener Initialization
init(subscriber: S, ref: DocumentReference) {
    self.subscriber = subscriber
    self.listener = ref.addSnapshotListener(includeMetadataChanges: true) { [weak self] snapshot, error in
        self?.receive(snapshot: snapshot, error: error)
    }
}
Update
After further testing, we don't think the issue is caused by unexpected behavior in our client code or the Firebase iOS SDK.
Rather, it seems that what our clients are experiencing is caused by spikes in write operations exceeding the recommended limit of 1,000 writes per second on a subcollection of the document the snapshot listeners are attached to.
The behavior of snapshot listeners not receiving updates for ~25 s does not always occur, and so far we have not been able to reproduce it in testing with the same document structure and 2,000 writes per second.
The guidelines state:
Keep the rate of write operations for an individual collection under 1,000 operations/second.
Our plan to fix the issue is to manually shard the collection so that write operations never exceed the limit for a single collection (a rough client-side sketch follows). This solution should remain viable until we reach 10,000 clients, and with that the limit of 10,000 write operations per second for the whole database.
Whether we are handling the 500/50/5 rule incorrectly is also something we're still monitoring.
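For illustration, a minimal sketch of what manual sharding could look like on the client (the shard naming is hypothetical; docRef is the document the listeners are attached to):

import FirebaseFirestore

// Spread writes across N shard subcollections so that no single
// collection exceeds the ~1,000 writes/sec guideline.
func writeSharded(docRef: DocumentReference, payload: [String: Any], shardCount: Int = 4) {
    let shardId = Int.random(in: 0..<shardCount)
    docRef.collection("updates-shard-\(shardId)").addDocument(data: payload)
}

A reader would then need to listen to (or query) all shard subcollections and merge the results.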
In my project I am using the Quartz.NET scheduler (3.0.7). There are some automated verification processes that read the DB, process the data, and generate output based on a few conditions (think of an email-sending mechanism that reads addresses from the DB and sends mail to each one). Now assume there are 300 requests to be processed and each takes a long time to complete. I need a feature that pauses the current execution of the job: if 25 of the 300 requests are completed and the 26th is currently running, the job should finish the 26th execution but stop the remaining requests.
What I have tried are the Pause and Interrupt methods of Quartz.NET, i.e.:
await scheduler.PauseJob(jobKey);
await scheduler.Interrupt(jobKey);
These can pause the upcoming executions. If I could get an event or token inside the job execution class, I could achieve what I want.
IInterruptableJob has been removed from Quartz.NET.
Any help on this would be appreciated.
From the migration guide:
IInterruptableJob interface has been removed. You need to check for IJobExecutionContext’s CancellationToken.IsCancellationRequested to determine whether job interruption has been requested.
So combining the pause and observing the token should work.
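For example, a minimal sketch of a job that observes the token between items (LoadRequestsAsync and ProcessRequestAsync are hypothetical stand-ins for your DB read and per-request work):

using System.Threading.Tasks;
using Quartz;

public class VerificationJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        var requests = await LoadRequestsAsync(); // read the 300 requests from the DB

        foreach (var request in requests)
        {
            // Checked between items: the in-flight item always runs to
            // completion, but nothing further starts after Interrupt().
            if (context.CancellationToken.IsCancellationRequested)
            {
                return;
            }

            await ProcessRequestAsync(request);
        }
    }

    // Hypothetical stubs so the sketch compiles.
    private Task<string[]> LoadRequestsAsync() => Task.FromResult(new string[0]);
    private Task ProcessRequestAsync(string request) => Task.CompletedTask;
}

Calling await scheduler.Interrupt(jobKey); then cancels the token for the running execution, while PauseJob stops new triggers from firing.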
Last week we encountered a rate-limit-exceeded error (4003) in our nightly batch process for the first time. This batch process synchronises Smartsheet objects with our time-tracking application, 4TT.
This process has worked fine since 2016, but now this rate-limit error occurs and stops the synchronisation. With the help of the API docs (and the blog post about rate limiting) I managed to change the code to pause when this error occurs. That took quite a lot of time, as the error occurred in a different part of the synchronisation process every time.
Is there, or will there be, a way to let the API automatically pause when the rate limit is about to be exceeded, instead of having to change the code every time? And for those who don't want this feature, it could for example be an optional boolean argument 'AutomaticallyPauseWhenRateLimitExceeds' (default false) when making the connection to the Smartsheet API.
You'll need to include logic in your code to effectively handle the rate limiting error -- there's no mechanism by which the Smartsheet API can automatically handle this situation for you.
A simple approach would be to include logic in your code such that when a rate limiting error is thrown, your code pauses execution for 60 seconds before continuing. Alternatively, a more sophisticated approach would be to implement exponential backoff logic (an error-handling strategy whereby you periodically retry a failed request with progressively longer wait times between retries, until either the request succeeds or a certain number of retry attempts is reached); see the sketch at the end of this answer.
Implementing this type of error handling logic should not be difficult or tedious, provided that your code is structured in an efficient manner (i.e., error handling logic is encapsulated in a single location).
Additional note: The Smartsheet API Best Practices blog post (specifically the Be practical: Adhere to rate limiting guidelines section) contains info about this topic.
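To illustrate the exponential backoff idea, here is a minimal C# sketch; the UpdateSheet call and the generic catch are placeholders, since ideally you would catch the SDK's specific rate-limit (4003) exception:

using System;
using System.Threading;

public static class SmartsheetRetry
{
    // Retries `action` with exponentially growing waits: 1 s, 2 s, 4 s, 8 s, ...
    public static void WithBackoff(Action action, int maxAttempts = 5)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception) // placeholder: catch the rate-limit error here
            {
                if (attempt >= maxAttempts - 1) throw;
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}

// Usage, with a hypothetical UpdateSheet method:
// SmartsheetRetry.WithBackoff(() => UpdateSheet(sheet));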
All our SDKs include error retry. So that's the easiest way to handle this situation. http://smartsheet-platform.github.io/api-docs/#sdks-and-sample-code
I ran into this and other interesting problems (in my lab) while updating sheets, including poor internet connection/bandwidth issues.
If you can't adapt your code to process chunks of data, my suggestion is simple try/catch logic that pauses the thread/task for 60 seconds and then tries again.
using System;
using System.Threading;

// ... all your code goes here ...

try
{
    // Your code to save/update the sheet goes here.
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Thread.Sleep(60000); // Pause for 60 seconds before trying again.
}
The next step is to wire up notifications for when those errors happen.
I'm trying to understand some best practices for Service Fabric.
If I have a queue that is added to by a web service (or some other mechanism) and a back-end task that processes that queue, what is the best approach to handling long-running operations in the background?
1. Use TryPeekAsync in one transaction, process the item, and then, if successful, use TryDequeueAsync to finally dequeue it.
2. Use TryDequeueAsync to remove an item, put it into a dictionary, and then remove it from the dictionary when complete. On startup of the service, check the dictionary for anything pending before reading the queue.
Both ways feel slightly wrong, but I can't work out if there is a better way.
One option is to process the queue in RunAsync, something along the lines of this:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var store = await StateManager.GetOrAddAsync<IReliableQueue<T>>("MyStore").ConfigureAwait(false);
    while (!cancellationToken.IsCancellationRequested)
    {
        using (var tx = StateManager.CreateTransaction())
        {
            var itemFromQueue = await store.TryDequeueAsync(tx).ConfigureAwait(false);
            if (!itemFromQueue.HasValue)
            {
                await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken).ConfigureAwait(false);
                continue;
            }

            // Process the item here.
            // Remember to clone the dequeued item if it is a custom type and you are going to mutate it.
            // On success: await tx.CommitAsync();
            // On failure: either let the transaction run out of the using scope, or call tx.Abort();
        }
    }
}
Regarding the comment about cloning the dequeued item if you are to mutate it, look under the "Recommendations" part here:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-reliable-collections/
One limitation of Reliable Collections (both Queue and Dictionary) is that you only get a parallelism of 1 per partition, so for high-activity queues they might not be the best solution. This might be the issue you're running into.
What we've been doing is using ReliableQueues for situations where the write volume is very low. For higher-throughput queues, where we need durability and scale, we use Service Bus topics. That also gives us the advantage that a service that was stateful only due to having the ReliableQueue can now be made stateless. It does add a dependency on a third-party service (in this case Service Bus), which might not be an option for you.
Another option would be to create a durable pub/sub implementation to act as the queue. I've done tests with using actors for this, and it seemed to be a viable option without spending too much time on it, since we didn't have any issues depending on Service Bus. Here is another SO question about that: Pub/sub pattern in Azure Service Fabric
If processing is very slow, use two queues: a fast one where you store incoming work without interruption, and a slow one where it is processed. RunAsync is then used to move messages from the fast queue to the slow one, as in the sketch below.
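A minimal sketch of that hand-off, reusing the RunAsync pattern from the answer above (the queue names are illustrative, and a separate worker would drain "SlowQueue" at its own pace):

protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var fast = await StateManager.GetOrAddAsync<IReliableQueue<T>>("FastQueue");
    var slow = await StateManager.GetOrAddAsync<IReliableQueue<T>>("SlowQueue");

    while (!cancellationToken.IsCancellationRequested)
    {
        using (var tx = StateManager.CreateTransaction())
        {
            var item = await fast.TryDequeueAsync(tx);
            if (item.HasValue)
            {
                // Dequeue and enqueue in one transaction so the item is never lost.
                await slow.EnqueueAsync(tx, item.Value);
                await tx.CommitAsync();
                continue;
            }
        }
        await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
    }
}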