Is there a way to limit the number of simultaneous calls in Redux Toolkit Query? - redux-toolkit

I'm building a web app with Redux Toolkit Query. I notice that on my landing page a lot of queries get triggered. Is there a way to tell Redux Toolkit Query to execute only 2 queries at any given time, so it has to wait until one finishes before starting the next query?
Thanks

When you execute the queries you want to delay, you can specify a skip option:
const { data, error, status } = useYourQuery(name, {
  skip: yourCondition,
});
If skip is true, the query will not be triggered.

You can limit the number of concurrent executions by using useLazyQuery and checking how many queries are currently running in your store each time a new request is needed.
Let's say you have a query named FetchData and an API slice named FetchApi:
const [trigger, result] = useLazyFetchDataQuery();
The code above gives you a trigger function and the query result object (initially uninitialized).
Now, before executing the trigger and initializing the query, we need to check how many queries are running inside the Redux store:
const store = useStore();
const queriesBeingExecuted = Object.values(
  store.getState().fetchApi.queries
).filter((request) => request.status === 'pending').length;
And then finally, trigger the request:
const maxConcurrentQueries = 2;
if (queriesBeingExecuted < maxConcurrentQueries && result.isUninitialized) {
  trigger(yourQueryArguments, true);
}
This way, at most two queries will be running concurrently.
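Putting the pieces together, here is a minimal sketch of how this could look in a component. It assumes the fetchApi slice and FetchData query from the example above; the component name, props, and import path are hypothetical:
import { useEffect } from 'react';
import { useStore } from 'react-redux';
// Hypothetical import path -- adjust to wherever your API slice lives.
import { useLazyFetchDataQuery } from './fetchApi';

const MAX_CONCURRENT_QUERIES = 2;

function ThrottledData({ queryArgs }) {
  const store = useStore();
  const [trigger, result] = useLazyFetchDataQuery();

  useEffect(() => {
    // Count the queries currently in flight in the fetchApi slice.
    const pending = Object.values(store.getState().fetchApi.queries).filter(
      (request) => request?.status === 'pending'
    ).length;
    // Only fire if we are under the limit and this query has not started yet.
    if (pending < MAX_CONCURRENT_QUERIES && result.isUninitialized) {
      trigger(queryArgs, true); // second argument: preferCacheValue
    }
  });

  return result.data ? <div>{JSON.stringify(result.data)}</div> : null;
}
Note that nothing here automatically retries a query that was held back: the check only runs on render, so in practice you would also subscribe to the query statuses (for example via a selector) so that a held-back component re-renders and re-checks when a slot frees up.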

Related

Flutter - SQflite : Performance issues while bulk insertion

I'm using the approach below for bulk insertions and running into performance issues in my Flutter app:
Future<List<Object?>> bulkInsert(
    String tableName, List<Map<String, dynamic>> rowList) async {
  final db = await instance;
  if (db != null) {
    await db.transaction((txn) async {
      var batch = txn.batch();
      for (var rowData in rowList) {
        try {
          batch.insert(tableName, rowData,
              conflictAlgorithm: ConflictAlgorithm.replace);
        } catch (exception) {
          throw "some error while insertion";
        }
      }
      await batch.commit(continueOnError: false);
    });
  }
  return [];
}
Each map contains 4 key/value pairs and the list length is around 13,032, so the total execution time of bulkInsert() is 3.274 seconds in Flutter.
With the same approach and the same data in native code (transaction support using the SQLite C library) for insert/update purposes, it takes only around 210 milliseconds.
Is there a reason why the Flutter-based solution takes so long, or is anything wrong with the given code?
Please point me to a better approach if I'm missing something.
There is nothing wrong with your code.
Possible optimizations:
Use the noResult: true option during commit. It avoids an extra query after each insertion; you will likely get a ~50% gain (see the sketch after this list).
You can try using sqflite_common_ffi along with sqlite3_flutter_libs (instructions). In my experiment it is about twice as fast.
You can run the solution above without executing the SQLite statements in a separate isolate (but that could hang the UI).
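For the first point, here is a minimal sketch of the noResult variant, assuming the same instance database handle and row-list shape as in the question:
Future<void> bulkInsertNoResult(
    String tableName, List<Map<String, dynamic>> rowList) async {
  final db = await instance;
  if (db != null) {
    await db.transaction((txn) async {
      var batch = txn.batch();
      for (var rowData in rowList) {
        batch.insert(tableName, rowData,
            conflictAlgorithm: ConflictAlgorithm.replace);
      }
      // noResult: true skips collecting each statement's result
      // (e.g. the inserted row id), avoiding an extra query per insert.
      await batch.commit(noResult: true, continueOnError: false);
    });
  }
}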
A quick benchmark I tried (13000 records, 4 fields, Pixel 4a) gives this result:
sqflite io
insert: 0:00:04.074015
insert (noResult): 0:00:02.133279
sqflite_ffi io
insert: 0:00:01.915051
insert (noResult): 0:00:01.319466
sqflite_ffi (no isolate) io
insert: 0:00:01.420478
insert (noResult): 0:00:01.058417
Flutter services (for sqflite) and cross-isolate communication (for SQLite FFI) are likely the main bottleneck. You could try using the sqlite3 package directly (i.e. without the sqflite package) for even better performance.

Prisma: very slow nested writes when connecting to a remote PostgreSQL DB on AWS

I am writing a very basic query with Prisma:
async createContext(contextData: CreateContextDto): Promise<ContextRO> {
  const statements = contextData.body
    .split('\n')
    .filter((statement) => statement !== '')
    .map((statement) => ({ content: statement }));
  const context = await this.prisma.context.create({
    data: {
      contextName: contextData.name,
      userId: contextData.user,
      statements: {
        create: statements,
      },
    },
    include: {
      statements: true,
    },
  });
  return { context };
}
With local PostgreSQL the same query takes around 4s. When connecting to PostgreSQL on AWS it goes up to 90 seconds.
Any ideas why is it taking so long?
Please find an example repo reproducing this issue.
And here is the CLI output when running Prisma with DEBUG=*.
P.S. If I run the same query with TypeORM against PostgreSQL on AWS, it takes 1-2 seconds, so it is not a problem with the deployment (check the "typeorm" branch to see the comparison).
You should use createMany instead of create. create uses a separate insert under the hood for every single nested write. If there are a lot of statements connected to one context record, you're making a lot of separate queries to the remote database, which is quite slow.
What you can do is:
Use create to create one context record, without the nested statement records.
Use a separate createMany for the statement records, manually specifying the contextId using the id you got from step 1.
You could also wrap queries 1 and 2 in a transaction, if you think that's appropriate.
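A minimal sketch of those steps, assuming the context/statement model names and contextId foreign key from the question's schema, with both writes wrapped in an interactive $transaction:
async createContext(contextData: CreateContextDto): Promise<ContextRO> {
  const statements = contextData.body
    .split('\n')
    .filter((statement) => statement !== '')
    .map((statement) => ({ content: statement }));

  const context = await this.prisma.$transaction(async (tx) => {
    // Step 1: create the context record on its own.
    const created = await tx.context.create({
      data: {
        contextName: contextData.name,
        userId: contextData.user,
      },
    });
    // Step 2: insert all statements in a single createMany,
    // wiring up the foreign key manually.
    await tx.statement.createMany({
      data: statements.map((s) => ({ ...s, contextId: created.id })),
    });
    // Re-read with statements included so the return shape matches
    // the original code.
    return tx.context.findUnique({
      where: { id: created.id },
      include: { statements: true },
    });
  });

  return { context };
}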

How to lock a row while updating it in SqlAlchemy Core?

It is incredibly difficult to unit test race conditions like this. I was hoping to verify this with experts here.
I have the following scenario where I would like to obtain the first VPN Profile that is not assigned to any device.
In the meantime, any other process trying to obtain the same profile (since it's the next in line) should wait until my transaction has completed.
I would then update this vpn profile and assign the given device id to it and finish the transaction.
At this point any process that was waiting on the select().first() statement shouldn't be obtaining this particular VPN profile, because it has already been assigned to a device_id. Instead it should obtain the next available one.
After some digging, this is the code I have come up with. I'm using with_for_update() in the select statement and keeping everything within the same engine.begin() transaction. I'm using SQLAlchemy Core without the ORM.
async with engine.begin() as conn:
    query = (
        VpnProfileTable.select()
        .where(
            VpnProfileTable.c.device_id == None,
        )
        .with_for_update(nowait=False)
    )
    record = (await conn.execute(query)).first()
    if record:
        query = (
            VpnProfileTable.update()
            .where(VpnProfileTable.c.id == record.id)
            .values(device_id=device_id)
        )
        await conn.execute(query)
        await conn.commit()
Is my code reflecting what I'm trying to achieve?
I'm not sure if I need to commit(), since everything is already inside a with engine.begin() block; the commit should happen automatically at the end.
Many thanks

How to optimise this EF Core query?

I'm using EF Core 3.0 code first with an MSSQL database. I have a big table with ~5 million records, and indexes on ProfileId, EventId and UnitId. This query takes ~25-30 seconds to execute. Is that normal, or is there a way to optimize it?
await (from x in _dbContext.EventTable
       where x.EventId == request.EventId
       group x by new { x.ProfileId, x.UnitId } into grouped
       select new
       {
           ProfileId = grouped.Key.ProfileId,
           UnitId = grouped.Key.UnitId,
           Sum = grouped.Sum(a => a.Count * a.Price)
       }).AsNoTracking().ToListAsync();
I tried looping through profileIds, adding another WHERE clause and removing ProfileId from the grouping key, but that was slower.
Capture the SQL being executed with a profiling tool (SSMS has one, or Express Profiler), then run it within SSMS with the execution plan enabled. This may highlight an indexing improvement (one possibility is sketched below). If the execution time in SSMS roughly correlates to what you're seeing in EF, then the only real avenue of improvement will be hardware on the SQL box. You are running a query that will touch 5m rows any way you look at it.
Operations like this are not that uncommon, just not something that a user would expect to sit and wait for. This is more of a reporting-type request so when faced with requirements like this I would look at options to have users queue up a request where they can receive a notification when the operation completes to fetch the results. This would be set up to prevent users from repeatedly requesting updates ("not sure if I clicked" type spams) or also considerations to ensure too many requests from multiple users aren't kicked off simultaneously. Ideally this would be a candidate to run off a read-only reporting replica rather than the read-write production DB to avoid locks slowing/interfering with regular operations.
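If the plan shows key lookups on those 5 million rows, one possible indexing improvement (a sketch, assuming the table and column names behind the EF model) is a covering index that seeks on EventId and includes the grouped and summed columns, so the query can be satisfied from the index alone:
CREATE NONCLUSTERED INDEX IX_EventTable_EventId_Covering
    ON dbo.EventTable (EventId)
    INCLUDE (ProfileId, UnitId, [Count], Price);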
Try removing ToListAsync(), or keep the result as an IQueryable (e.g. via AsQueryable()) so that execution is deferred until the results are actually consumed. Calling ToList materializes the whole result set immediately, which slows things down:
var query = (from x in _dbContext.EventTable
             where x.EventId == request.EventId
             group x by new { x.ProfileId, x.UnitId } into grouped
             select new
             {
                 ProfileId = grouped.Key.ProfileId,
                 UnitId = grouped.Key.UnitId,
                 Sum = grouped.Sum(a => a.Count * a.Price)
             });

Row-Level Update Lock using System.Transactions

I have an MSSQL procedure with the following code in it:
SELECT Id, Role, JurisdictionType, JurisdictionKey
FROM dbo.SecurityAssignment WITH (UPDLOCK, ROWLOCK)
WHERE Id = @UserIdentity
I'm trying to move that same behavior into a component that uses OleDb connections, commands, and transactions to achieve the same result. (It's a security component that uses the SecurityAssignment table shown above. I want it to work whether that table is in MSSQL, Oracle, or Db2)
Given the above SQL, if I run a test using the following code
Thread backgroundThread = new Thread(
    delegate()
    {
        using (var transactionScope = new TransactionScope())
        {
            Subject.GetAssignmentsHavingUser(userIdentity);
            Thread.Sleep(5000);
            backgroundWork();
            transactionScope.Complete();
        }
    });
backgroundThread.Start();
Thread.Sleep(3000);
var foregroundResults = Subject.GetAssignmentsHavingUser(userIdentity);
Where Subject.GetAssignmentsHavingUser runs the SQL above and returns a collection of results, and backgroundWork is an Action that updates rows in the table, like this:
delegate
{
    Subject.UpdateAssignment(newAssignment(user1, role1));
}
Then the foregroundResults returned by the test should reflect the changes made in the backgroundWork action.
That is, I retrieve a list of SecurityAssignment table rows that have UPDLOCK, ROWLOCK applied by the SQL, and subsequent queries against those rows don't return until that update lock is released - thus the foregroundResults in the test include the updates made in the backgroundThread.
This all works fine.
Now, I want to do the same with database-agnostic SQL, using OleDb transactions and isolation levels to achieve the same result. And I can't, for the life of me, figure out how to do it. Is it even possible, or does this row-level locking only apply at the DB level?