I have the following Prisma query that returns all users whose campaign is one of the campaigns in the array I provide and who were added to the system within the defined time range. There is also another entity, Click, related to each user, which should be included in the response.
const users = await this.prisma.user.findMany({
  where: {
    campaign: {
      in: [
        ...campaigns.map((campaign) => campaign.id),
        ...campaigns.map((campaign) => campaign.name),
      ],
    },
    createdAt: {
      gte: dateRange.since,
      lt: dateRange.until,
    },
  },
  include: {
    clicks: true,
  },
});
The problem is that this query runs fine on localhost, where I don't have much data, but the production database has nearly 500,000 users and 250,000 clicks in total. I am not sure whether that is the root cause, but the query fails with the following exception:
Error:
Invalid `this.prisma.user.findMany()` invocation in
/usr/src/app/dist/xyx/xyx.service.js:135:58
132 }
133 async getUsers(campaigns, dateRange) {
134 try {
→ 135 const users = await this.prisma.user.findMany(
Can't reach database server at `xyz`:`25060`
Please make sure your database server is running at `xyz`:`25060`.
Prisma error code is P1001.
xyz replaces the real values in the paths and the database connection string, for obvious reasons.
The only solution we found is to check what the limit is for your query and then use the pagination params (skip, take) in a loop to download the data part by part and glue it back together afterwards. Not optimal, but it works. See this existing bug report for an example:
https://github.com/prisma/prisma/issues/8832
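A rough sketch of that workaround against the query above; the page size is an arbitrary assumption you would tune to whatever your connection can handle, and the loop belongs inside the same async service method:

const pageSize = 10000; // assumed batch size, not a Prisma default
const where = {
  campaign: {
    in: [
      ...campaigns.map((campaign) => campaign.id),
      ...campaigns.map((campaign) => campaign.name),
    ],
  },
  createdAt: { gte: dateRange.since, lt: dateRange.until },
};

const users = [];
for (let skip = 0; ; skip += pageSize) {
  // Fetch one page at a time so no single query has to move the whole result set
  const page = await this.prisma.user.findMany({
    where,
    include: { clicks: true },
    orderBy: { id: 'asc' }, // stable order so pages don't overlap
    skip,
    take: pageSize,
  });
  users.push(...page);
  if (page.length < pageSize) break; // last page reached
}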
Using Prisma, I am trying to write some tests, but the findFirstOrThrow method (https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#findfirstorthrow) does not seem to throw when nothing should be found. Instead it returns the first database record it finds.
The following piece of code is what I am testing
console.log('sessionId', this.ctx.session?.user?.id);
const author = await this.db.author.findFirstOrThrow({
  where: { userId: this.ctx.session?.user?.id },
  select: { id: true },
});
console.log({ author });
And in my test output I get the following logs
RERUN src/api/service.ts x31
stdout | src/api/service.test.ts > BlockService > block creation > without an author
sessionId undefined
stdout | src/api/service.test.ts > BlockService > block creation > without an author
{ author: { id: 'cle79pisg007hb2b8rhpii1ws' } }
So even though this.ctx.session?.user?.id is undefined, Prisma still returns the first author in the table.
What I've tried so far:
When I don't populate the authors table in the test, it throws.
When I populate the authors table with a single author, it returns that author.
When I pass an explicit undefined as the userId, it still returns the first record.
Edit: I use Prisma ^4.8.0.
This is the expected behaviour.
If you pass undefined, it is equivalent to not passing any userId at all. So the query is equivalent to the following:
const author = await this.db.author.findFirstOrThrow({
  where: {},
  select: { id: true },
});
And this query returns the first record in the database.
For reference, here is the section that defines this behaviour.
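If you want the call to fail when there is no session, one option is to guard before querying. A minimal sketch, assuming you simply want to throw on a missing id (the error message is mine, not something Prisma produces):

const userId = this.ctx.session?.user?.id;
if (userId === undefined) {
  // Refuse to run the query rather than silently matching every row
  throw new Error('No authenticated user in session');
}

const author = await this.db.author.findFirstOrThrow({
  where: { userId },
  select: { id: true },
});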
I need to add a column to my table of riders, allowing us to store the name of the image that will display on that rider's card. I then need to update all of the records with the auto-generated image names.
I've done a bunch of searching, and all roads seem to lead back to this thread or this one. I've tried the code from both of these threads, swapping in my own table and column names, but I still can't get it to work.
This is the latest version of the code:
export async function up(knex, Promise) {
  return knex.transaction(trx => {
    const riders = [
      {
        name: 'Fabio Quartararo',
        card: 'rider_card_FabioQuartararo'
      },
      ...24 other riders here...
      {
        name: 'Garrett Gerloff',
        card: 'rider_card_GarrettGerloff'
      },
    ];
    return knex.schema.table('riders', (table) => table.string('card')).transacting(trx)
      .then(() => {
        const queries = [];
        riders.forEach(rider => {
          const query = knex('riders')
            .update({
              card: rider.card
            })
            .where('name', rider.name)
            .transacting(trx); // This makes every update be in the same transaction
          queries.push(query);
        });
        Promise.all(queries) // Once every query is written
          .then(() => trx.commit) // We try to execute all of them
          .catch(() => trx.rollback); // And rollback in case any of them goes wrong
      });
  });
}
When I run the migration, however, it fails with the following error:
migration file "20211202225332_update_rider_card_imgs.js" failed
migration failed with error: Cannot read properties of undefined (reading 'all')
Error running migrations: TypeError: Cannot read properties of undefined (reading 'all')
at D:\Users\seona\Documents\_Blowfish\repos\MotoGP\dist\database\migrations\20211202225332_update_rider_card_imgs.js:134:25
at processTicksAndRejections (node:internal/process/task_queues:96:5)
So it's clearly having some sort of problem with Promise.all(), but I can't for the life of me figure out what. Searching has not turned up any useful results.
Does anyone have any ideas about how I can get this working? Thanks in advance.
I think you might be following some older documentation and/or examples (at least that's what I was doing).
The Promise argument is no longer passed into the migration up and down functions.
So, the signature should be something like this:
function up(knex) {
  // Use the built-in Promise class
  Promise.all(<ARRAY_OF_QUERY_PROMISES>);
  ...
}
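For what it's worth, here is a rough sketch of how the original migration could look against the current signature, using the global Promise and async/await (the rider list is shortened, and wrapping the schema change and the updates in one transaction mirrors the original intent rather than anything the docs mandate):

export async function up(knex) {
  const riders = [
    { name: 'Fabio Quartararo', card: 'rider_card_FabioQuartararo' },
    { name: 'Garrett Gerloff', card: 'rider_card_GarrettGerloff' },
    // ...the other riders...
  ];

  await knex.transaction(async (trx) => {
    // Add the new column first
    await trx.schema.table('riders', (table) => table.string('card'));

    // Then run every update inside the same transaction
    await Promise.all(
      riders.map((rider) =>
        trx('riders').update({ card: rider.card }).where('name', rider.name)
      )
    );
  });
}

export async function down(knex) {
  await knex.schema.table('riders', (table) => table.dropColumn('card'));
}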
I'm new to Rust and I'm using the default MongoDB driver
https://docs.rs/mongodb/2.0.0/mongodb/
I remember that when coding with Node.js, it was possible to send a transaction's operations with Promise.all() so they all executed at the same time for optimization purposes, and to commit the transaction if none of them errored.
(Node.js example here: https://medium.com/#alkor_shikyaro/transactions-and-promises-in-node-js-ca5a3aeb6b74)
I'm trying to implement the same logic in Rust now, using try_join!, but I keep running into this error:
error: cannot borrow session as mutable more than once at a time;
label: first mutable borrow occurs here
use mongodb::{bson::oid::ObjectId, Client, Database, options};
use async_graphql::{
    validators::{Email, StringMaxLength, StringMinLength},
    Context, ErrorExtensions, Object, Result,
};
use futures::try_join;
//use tokio::try_join; -> same thing

#[derive(Default)]
pub struct UserMutations;

#[Object]
impl UserMutations {
    async fn user_followed<'ctx>(
        &self,
        ctx: &Context<'ctx>,
        other_user_id: ObjectId,
        current_user_id: ObjectId,
    ) -> Result<bool> {
        let mut session = Client::with_uri_str(dotenv!("URI"))
            .await
            .expect("DB not accessible!")
            .start_session(Some(session_options))
            .await?;

        session.start_transaction(Some(options::TransactionOptions::builder()
            .read_concern(Some(options::ReadConcern::majority()))
            .write_concern(Some(
                options::WriteConcern::builder()
                    .w(Some(options::Acknowledgment::Majority))
                    .w_timeout(Some(Duration::new(3, 0)))
                    .journal(Some(false))
                    .build(),
            ))
            .selection_criteria(Some(options::SelectionCriteria::ReadPreference(
                options::ReadPreference::Primary
            )))
            .max_commit_time(Some(Duration::new(3, 0)))
            .build())).await?;

        let db = Client::with_uri_str(dotenv!("URI"))
            .await
            .expect("DB not accessible!")
            .database("database")
            .collection::<Document>("collection");

        try_join!(
            db.update_one_with_session(
                doc! { "_id": other_user_id },
                doc! { "$inc": { "following_number": -1 } },
                None,
                &mut session,
            ),
            db.update_one_with_session(
                doc! { "_id": current_user_id },
                doc! { "$inc": { "followers_number": -1 } },
                None,
                &mut session,
            )
        )?;

        Ok(true)
    }
}
849 | | &mut session,
| | ------------ first mutable borrow occurs here
... |
859 | | &mut session,
| | ^^^^^^^^^^^^ second mutable borrow occurs here
860 | | )
861 | | )?;
| |_____________- first borrow later captured here by closure
Is there any way to send these transactional operations concurrently, so as not to lose time on independent mutations? Does anyone have any ideas?
Thanks in advance!
Thanks, Patrick and Zeppi, for your answers. I did some more research on this topic and also did my own testing. So, let's start.
First, my goal was to optimize transactional writes as much as possible, since I want the full rollback capability that my code logic requires.
In case you missed my comments to Patrick, I'll restate them here to better reflect my way of thinking about this:
I understand why this would be a limitation for multiple reads, but if
all actions are on separate collections (or are independent atomic
writes to multiple documents with different payloads) I don't see why
it's impossible to retain causal consistency while executing them
concurrently. This kind of transaction should never create race
conditions / conflicts / weird lock behaviour, and in case of error
the entire transaction is rolled back before being committed anyways.
Making an analogy with Git (which might be wrong), no merge conflicts
are created when separate files / folders are updated. Sorry for being
meticulous, this just sounds like a major speed boost opportunity.
But after further digging I came across this documentation:
https://github.com/mongodb/specifications/blob/master/source/sessions/driver-sessions.rst#why-does-a-network-error-cause-the-serversession-to-be-discarded-from-the-pool
An otherwise unrelated operation that just happens to use that same
server session will potentially block waiting for the previous
operation to complete. For example, a transactional write will block a
subsequent transactional write.
Basically, this means that even if you send transactional writes concurrently, you won't gain much efficiency, because MongoDB itself is the blocker. I decided to check whether this was true, and since the Node.js driver setup allows sending transactions concurrently (as per https://medium.com/#alkor_shikyaro/transactions-and-promises-in-node-js-ca5a3aeb6b74), I did a quick setup with Node.js pointing to the same database hosted by Atlas on the free tier.
Second, statistics and code. This is the Node.js mutation I used for the tests (each test has 4 transactional writes). I enabled GraphQL tracing to benchmark this; the mutation is below and the results follow it.
export const testMutFollowUser = async (_parent, _args, _context, _info) => {
  try {
    const { user, dbClient } = _context;
    isLoggedIn(user);
    const { _id } = _args;

    const session = dbClient.startSession();
    const db = dbClient.db("DB");

    await verifyObjectId().required().validateAsync(_id);

    // making sure asked user exists
    const otherUser = await db.collection("users").findOne(
      { _id: _id },
      {
        projection: { _id: 1 }
      });
    if (!otherUser)
      throw new Error("User was not found");

    const transactionResult = session.withTransaction(async () => {
      //-----using this part when doing concurrency test------
      await Promise.all([
        await createObjectIdLink({ db_name: 'links', from: user._id, to: _id, db }),
        await db.collection('users').updateOne(
          { _id: user._id },
          { $inc: { following_number: 1 } },
        ),
        await db.collection('users').updateOne(
          { _id },
          {
            $inc: { followers_number: 1, unread_notifications_number: 1 }
          },
        ),
        await createNotification({
          action: 'USER_FOLLOWED',
          to: _id
        }, _context)
      ]);
      //-----------end of concurrency part--------------------

      //------using this part when doing sync test--------
      // this as a helper for db.insertOne(...)
      const insertedId = await createObjectIdLink({ db_name: 'links', from: user._id, to: _id, db });
      const updDocMe = await db.collection('users').updateOne(
        { _id: user._id },
        { $inc: { following_number: 1 } },
      );
      const updDocOther = await db.collection('users').updateOne(
        { _id },
        {
          $inc: { followers_number: 1, unread_notifications_number: 1 }
        },
      );
      // this as another helper for db.insertOne(...)
      await createNotification({
        action: 'USER_FOLLOWED',
        to: _id
      }, _context);
      //-----------end of sync part---------------------------

      return true;
    }, transactionOptions);

    if (transactionResult) {
      console.log("The reservation was successfully created.");
    } else {
      console.log("The transaction was intentionally aborted.");
    }

    await session.endSession();
    return true;
  }
And related performance results:
format:
Request/Mutation/Response = Total (all in ms)
1) For sync writes in the transaction:
4/91/32 = 127
4/77/30 = 111
7/71/7 = 85
6/66/8 = 80
2/74/9 = 85
4/70/8 = 82
4/70/11 = 85
--waiting more time (~10secs)
9/73/34 = 116
totals/8 = **96.375 ms on average**
//---------------------------------
2) For concurrent writes in transaction:
3/85/7 = 95
2/81/14 = 97
2/70/10 = 82
5/81/11 = 97
5/73/15 = 93
2/82/27 = 111
5/69/7 = 81
--waiting more time (~10secs)
6/80/32 = 118
totals/8 = **96.75 ms on average**
Conclusion: the difference between the two is within the margin of error (and, if anything, slightly in favour of the sync side).
My assumption is that with the sync approach you spend the time waiting for each DB request/response, while with the concurrent approach you wait for MongoDB to order the requests and then execute them all, which at the end of the day costs about the same.
So with the current MongoDB policies, I guess the answer to my question is: there is no need for concurrency, because it won't affect performance anyway. However, it would be great if future MongoDB releases allowed parallelizing writes within a transaction, with locks at the document level (at least for the WiredTiger engine) instead of at the database level, as is currently the case for transactions (where you wait for the whole write to finish before the next one starts).
Feel free to correct me if I missed/misinterpreted something. Thanks!
This limitation is actually by design. In MongoDB, client sessions cannot be used concurrently (see here and here), and so the Rust driver accepts them as &mut to prevent this from happening at compile time. The Node example is only working by chance and is definitely not recommended or supported behavior. If you would like to perform both updates as part of a transaction, you'll have to run one update after the other. If you'd like to run them concurrently, you'll need to execute them without a session or transaction.
As a side note, a client session can only be used with the client it was created from. In the provided example, the session is being used with a different client, which will cause an error.
I'm working with a MEAN stack and I have three GET requests for the same URL/route. One gets a generalised summary of long-term emotions, another gets a summary of emotions for dates entered, and the last gets a summary of emotions related to a user-entered tag associated with individual emotion entries.
My first GET request throws no issues, but the second GET request throws an error: Cannot read property 'length' of undefined
The error points to the following line:
48| each emotion in dateEmotions
Below is the relative code associated with the error:
Jade
each emotion in dateEmotions
  .side-emotions-group
    .side-emotions-label
      p.emotion-left= emotion.emotionName
      p.pull-right(class= emotion.emotionLevel) (#{emotion.emotionLevel}%)
    .side-emotions-emotion.emotion-left
GET Request
module.exports.emotionsListByDates = function (req, res) {
  Emo.aggregate([
    { $match:
      { "date": { $gte: ISODate("2018-04-09T00:00:00.000Z"), $lt: ISODate("2018-04-13T00:00:00.000Z") } }
    }, { "$group": {
      "_id": null,
      "averageHappiness": { "$avg": "$happiness" },
      "averageSadness": { "$avg": "$sadness" },
      "averageAnger": { "$avg": "$anger" },
      "averageSurprise": { "$avg": "$surprise" },
      "averageContempt": { "$avg": "$contempt" },
      "averageDisgust": { "$avg": "$disgust" },
      "averageFear": { "$avg": "$fear" },
    }}
  ], function (e, docs) {
    if (e) {
      res.send(e);
    } else {
      res.render('dashboard', {
        title: "ReacTrack - User Dashboard",
        pageHeader: {
          title: "User Dashboard",
          strapline: "View your emotional data here."
        },
        dateEmotions: docs
      })
    }
  });
};
This question is already getting pretty long, but I have another GET request pointed at that URL which is not throwing any errors; the only difference is that I am not matching the db records by date in that query. I can post the working code if need be.
Edit
After some experimenting, I am able to get each of the three routes working individually if I comment out the other two. The issues arise when multiple routes handle the requests at the same time. For example, here are the routes at present, where ctrlDashboard.emotionsListByDates is the one working:
// Dashboard Routes
//router.get('/dashboard', ctrlDashboard.emotionsListGeneralised);
router.get('/dashboard', ctrlDashboard.emotionsListByDates);
//router.get('/dashboard', ctrlDashboard.emotionsListByTag);
If I comment out two routes and leave one running, and comment out the corresponding each emotion in emotions / each emotion in dateEmotions / each emotion in tagEmotions blocks in the Jade file, leaving only the correct one uncommented, then that route works. The problem seems to appear when I fire multiple routes. Is this bad practice, or incorrect? Should all queries be in the one GET request if they share the same URL?
Thanks.
Apologies, I'm new to routing and RESTful APIs, but after some research into the topic I now understand the fault.
I assumed that the URL used in routing had to be the URL of the page you want the data to populate... which it still kind of is, but I thought that if I wanted to populate the dashboard page I had to use that exact route. I did not realise I could serve the data from different URL routes and pull the data from those URLs to populate the one page.
Fixed it by adding /date and /tag to those routes and using AJAX to perform those requests and populate the main page.
Thanks all.
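For anyone curious, here is a rough sketch of what that ends up looking like; the new paths match what I described, but the handler split and the client-side fetch call are only illustrative (renderDateEmotions is a hypothetical helper):

// Dashboard routes: one URL per data set instead of three handlers on '/dashboard'
router.get('/dashboard', ctrlDashboard.emotionsListGeneralised); // renders the page
router.get('/dashboard/date', ctrlDashboard.emotionsListByDates); // returns the date summary
router.get('/dashboard/tag', ctrlDashboard.emotionsListByTag); // returns the tag summary

// Client side, the page then pulls the extra summaries with AJAX, e.g.:
fetch('/dashboard/date')
  .then((res) => res.json())
  .then((dateEmotions) => renderDateEmotions(dateEmotions));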
I have the same problem, but I'm using React + Redux + Fetch. Is it bad practice to dispatch more than one request at the same time, from the same page, to a specific URL?
I would like to know what causes that problem. I've found some discussions suggesting it could be a mongoose issue.
My code:
MymongooObject.find(query_specifiers, function (err, data) {
  for (let i = 0; i < data.length; ++i) {
    ...
  }
});
Error:
TypeError: Cannot read property 'length' of undefined
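For what it's worth, a defensive check like the sketch below avoids the crash when data comes back undefined, though it doesn't explain the root cause (the names mirror my snippet above):

MymongooObject.find(query_specifiers, function (err, data) {
  if (err || !data) {
    // Either the query failed or nothing usable came back
    console.error('find failed or returned no data:', err);
    return;
  }
  for (let i = 0; i < data.length; ++i) {
    // ... process data[i] ...
  }
});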
I'm doing data visualizations using Meteor, running React and D3 for the view. Today I decided to populate the MongoDB server with more documents (a total of 30 documents, netting ~50k lines each). There was no issue before, when the database held 'only' 4 documents, but now I'm seeing:
Exception while polling query {"collectionName":"Metrics","selector":{},"options":{"transform":null}}: MongoError: Query exceeded the maximum allowed memory usage of 40 MB. Please consider adding more filters to reduce the query response size.
This is my collections.js file, because autopublish is off.
if (Meteor.isServer) {
  const remoteCollectionString = "notForHumanConsumption";
  database = new MongoInternals.RemoteCollectionDriver(remoteCollectionString);

  Meteor.publish('metricsDB', function() {
    return Metrics.find({})
  });
}

Metrics = new Mongo.Collection("Metrics", { _driver: database });

if (Meteor.isClient) {
  Meteor.startup(function() {
    Session.set('data_loaded', false);
    console.log(Session.get('data_loaded'))
  });

  Meteor.subscribe('metricsDB', function() {
    // Set the reactive session as true to indicate that the data have been loaded
    Session.set('data_loaded', true);
    console.log(Session.get('data_loaded'))
  });
}
Adding any kind of sort to the publish function seems to at least get the console to log true, but it does so almost instantly. The terminal shows no error, but I'm stuck and no data reaches the app.
Update: I decided to remove 10 entries from the collection, limiting it to 20, and now the collection is ~28 MB instead of ~42 MB for the 30 items. The app is loading, albeit slowly.
The data structure looks pretty much like this:
{
  _id: BLAKSBFLUyiy79a6fs9Pjhadkh&SA86886Daksh,
  DateQueried: dateString,
  DataSet1: [
    {
      ID,
      Name,
      Properties: [
        {
          ID,
          Name,
          Nr,
          Status: [0, 1, 2, 3],
          LocationData: {
            City,
            CountryCode,
            PostalCode,
            Street,
            StreetNumber
          }
        }
      ]
    }
  ],
  DataSet2: [
    {
      ID,
      Name,
      Nr,
      Obj: {
        NrR,
        NrGS,
        LengthOfR
      }
    }
  ]
}
In each document, DataSet1 is usually 9 items long. Properties inside it can contain up to ~1900 items (they average around 500). DataSet2 is normally around 49 items.
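Since the error message suggests adding more filters to reduce the query response size, one thing I'm considering is trimming the publication itself. A minimal sketch, assuming the client only needs DateQueried and DataSet2 (the field choice and limit are just an example based on the structure above):

Meteor.publish('metricsDB', function () {
  // Publish only the fields the client actually uses and cap the result set,
  // so the polled query stays under the 40 MB limit.
  return Metrics.find({}, {
    fields: { DateQueried: 1, DataSet2: 1 },
    limit: 20,
    sort: { DateQueried: -1 }
  });
});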