What can potentially go wrong if you attempt to execute a query that is not part of the transaction inside of a transaction?
const session = client.startSession();
await session.withTransaction(async () => {
  const coll1 = client.db('mydb1').collection('foo');
  const coll2 = client.db('mydb2').collection('bar');
  // Important: you must pass the session to every operation
  await coll1.insertOne({ abc: 1 }); // BUG: no session object, so this write runs outside the transaction
  await coll2.insertOne({ xyz: 999 }, { session });
}, transactionOptions);
const client = new MongoClient(uri);
await client.connect();
// Note: db() and collection() are synchronous accessors and do not
// need to be awaited.
const coll = client.db('mydb1').collection('foo');
const session = client.startSession();
const transactionOptions = {
  readPreference: 'primary',
  readConcern: { level: 'local' },
  writeConcern: { w: 'majority' }
};
try {
  await session.withTransaction(async () => {
    const coll1 = client.db('mydb1').collection('foo');
    await coll1.updateOne({ user_id: 12344, paid: false }, { $set: { paid: true } }, { session });
    // Long-running computation after this line.
    // What if another query deletes the document updated above
    // before this transaction completes?
    await calls_third_party_payment_vendor_api_to_process_payment();
  }, transactionOptions);
} finally {
  await session.endSession();
  await client.close();
}
What if the document updated inside the transaction is simultaneously updated by an outside query before the transaction is committed?
What you have described is a transaction/operation write conflict. The outside operation blocks behind the transaction's commit, retrying with backoff, until either the transaction completes or the operation's own maxTimeMS is reached.
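To make that failure mode concrete, here is a hedged sketch (the URI parameter, the document filter, and the 5-second budget are my assumptions, not from the thread): a write issued outside the transaction against the same document blocks behind the open transaction, and setting maxTimeMS on that operation bounds how long it will wait before failing with a MaxTimeMSExpired error.

```javascript
// Sketch of the outside, non-transactional write. Because no { session }
// is passed, this update conflicts with the open transaction touching the
// same document and blocks (with internal retry/backoff) until the
// transaction commits or aborts, or until maxTimeMS expires.
async function updateOutsideTransaction(uri) {
  // Lazy require so the sketch stands alone; assumes the `mongodb` driver is installed.
  const { MongoClient } = require('mongodb');
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const coll = client.db('mydb1').collection('foo');
    return await coll.updateOne(
      { user_id: 12344 },
      { $set: { paid: false } },
      { maxTimeMS: 5000 } // fail with MaxTimeMSExpired rather than wait longer
    );
  } finally {
    await client.close();
  }
}
```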
I wrote a report on MongoDB transactions, containing some examples in NodeJS too. If you are interested in the subject, I recommend the sections "WiredTiger Cache" and "What happens when we create a multi-document transaction?".
I am developing a chat app with Flutter and want to know the best practice: when creating a new room, should I create a new topic and subscribe to it from the client app, or use a message trigger with device tokens via the sendToDevice/sendMulticast methods?
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.messageTrigger = functions.firestore.document('/Messages/{messageID}').onCreate(
  async (snapshot, context) => {
    const currentRoomUsers = snapshot.data().members;
    // Await all sends; an unawaited forEach lets the function exit
    // before the notifications go out.
    await Promise.all(currentRoomUsers.map(async (userID) => {
      try {
        const doc = await db.collection('Users').doc(userID).get();
        if (!doc.exists) {
          console.log('No such document!');
          return;
        }
        const message = {
          notification: {
            title: `New message from ${snapshot.data().room}`,
            body: snapshot.data().body
          },
          data: {
            click_action: 'FLUTTER_NOTIFICATION_CLICK'
          },
          // sendMulticast expects an array of registration tokens.
          tokens: doc.data()['Device_token'],
          android: {
            priority: 'high'
          }
          // Note: a top-level numeric `priority` field is not valid here;
          // platform priority belongs under `android`/`apns`.
        };
        await admin.messaging().sendMulticast(message);
      } catch (error) {
        console.log('Error getting document:', error);
      }
    }));
  }
);
I think there is a better way than what I am currently doing, maybe subscribing to topics, but how do I create a new topic and subscribe users to it when creating a new room?
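The topic-based alternative can be sketched roughly as follows (the helper names `roomTopic`, `subscribeMembersToRoom`, and `notifyRoom`, and passing `admin` in as a parameter, are my assumptions, not part of your code): when a room is created, subscribe each member's device token to a per-room topic; each new message then needs only one send() to the topic instead of a per-token fan-out.

```javascript
// Derive one topic name per room. FCM topic names are restricted to the
// character class [a-zA-Z0-9-_.~%].
function roomTopic(roomId) {
  return `room_${roomId}`;
}

// Call this when the room is created (or when a member joins).
// deviceTokens: array of FCM registration tokens for the room's members.
async function subscribeMembersToRoom(admin, roomId, deviceTokens) {
  return admin.messaging().subscribeToTopic(deviceTokens, roomTopic(roomId));
}

// One send() call reaches every subscriber of the topic.
async function notifyRoom(admin, roomId, title, body) {
  return admin.messaging().send({
    topic: roomTopic(roomId),
    notification: { title, body },
    data: { click_action: 'FLUTTER_NOTIFICATION_CLICK' },
    android: { priority: 'high' }
  });
}
```

A trade-off to be aware of: topics are public in the sense that any client holding a valid token can subscribe itself if it learns the topic name, so per-token sends give you tighter server-side control over who receives room messages.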
I have a problem with my transaction timeouts. The timeout always takes around 9 seconds, independent of how I set the timeout parameter.
final db = FirebaseFirestore.instance;
final docref = db.collection('appdata').doc(docID);
final data = {'rueckruf': FieldValue.serverTimestamp()};
db.runTransaction((transaction) async {
  transaction.update(docref, data);
}, timeout: Duration(seconds: 3)).then((value) {
  // Transaction succeeded; do something.
}, onError: (e) {
  // Transaction failed; do something.
});
I am trying to find a solution similar to what node-rdkafka offers for committing individual messages on success.
In node-rdkafka I was able to call consumer.commit(message); after message processing succeeded. What is the equivalent in KafkaJS?
I have so far tried calling consumer.commitOffsets(...) inside the eachMessage handler, but that didn't seem to commit.
I have code like this:
import { Kafka, logLevel } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'qa-topic',
  brokers: [process.env.KAFKA_BOOTSTRAP_SERVER],
  ssl: true,
  logLevel: logLevel.INFO,
  sasl: {
    mechanism: 'plain',
    username: process.env.KAFKA_CONSUMER_SASL_USERNAME,
    password: process.env.KAFKA_CONSUMER_SASL_PASSWORD
  }
});

const consumer = kafka.consumer({
  groupId: process.env.KAFKA_CONSUMER_GROUP_ID
});

const run = async () => {
  // Consuming
  await consumer.connect();
  await consumer.subscribe({ topic: 'my-topic', fromBeginning: true });
  await consumer.run({
    autoCommit: false,
    eachMessage: async ({ topic, partition, message }) => {
      try {
        await processMyMessage(message);
        // HOW DO I COMMIT THIS MESSAGE?
        // The below doesn't seem to commit
        // await consumer.commitOffsets([{ topic: 'my-topic', partition, offset: message.offset }]);
      } catch (e) {
        // log error, but do not commit message
      }
    },
  });
};
I figured out how to do it. You can't use the eachMessage handler; use eachBatch instead, which gives you more control over how messages are committed:
const run = async () => {
  await consumer.connect();
  await consumer.subscribe({ topic: 'my-topic', fromBeginning: true });
  await consumer.run({
    eachBatchAutoResolve: false,
    eachBatch: async ({ batch, resolveOffset, isRunning, isStale }) => {
      const promises = [];
      logger.log(`Starting to process ${batch.messages?.length || 0} messages`);
      for (const message of batch.messages) {
        if (!isRunning() || isStale()) break;
        promises.push(handleMessage(batch.topic, batch.partition, message, resolveOffset));
      }
      await Promise.all(promises);
    },
  });
};
Then, inside handleMessage, commit only the messages that succeeded:
const handleMessage = async (topic, partition, message, resolveOffset) => {
  try {
    ....
    // Commit the message if processing succeeded
    resolveOffset(message.offset);
  } catch (e) {
    ...
    // Do not commit
  }
};
As the documentation states, you can call consumer.commitOffsets only after consumer.run. You are trying to call it from within the run method, which is why it's not working for you.
Keep in mind that committing after each message increases the network traffic.
If that is a price you are willing to pay, you can let auto-commit take care of it for you by setting autoCommitThreshold to 1.
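A minimal sketch of that configuration, assuming the `consumer` and `processMyMessage` from the question (the `runConfig` name is mine); the object shape follows the KafkaJS consumer.run options:

```javascript
// With autoCommit on, KafkaJS commits resolved offsets automatically;
// autoCommitThreshold: 1 flushes a commit after every resolved message.
const runConfig = {
  autoCommit: true,
  autoCommitThreshold: 1, // commit after each message
  eachMessage: async ({ topic, partition, message }) => {
    // If this throws, the offset is not resolved, so it will not be
    // committed and the consumer will reprocess from that offset.
    await processMyMessage(message);
  },
};
// Usage: await consumer.run(runConfig);
```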