google-cloud/firestore: Error: 4 DEADLINE_EXCEEDED: Deadline Exceeded while creating document - google-cloud-firestore

I'm trying to create a document using the following code and I get "Error: 4 DEADLINE_EXCEEDED: Deadline Exceeded" while running the ref.set() call. I'm using "ws" and "fastify", and a small number (100-150) of sockets are being handled. The error does not appear in the first few days; it starts showing up 4-5 days after restarting the process on the server.
create: async (id) => {
  // store socket
  return await socketRef.doc(id).set({ id: id, connectedAt: fastify.fsTimestamp() })
},
socketRef = Firestore collection reference
Following is the complete error. I don't understand why this is happening.
{ Error: 4 DEADLINE_EXCEEDED: Deadline Exceeded
at Object.exports.createStatusError (/root/airsniper.api/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/root/airsniper.api/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/root/airsniper.api/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/root/airsniper.api/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/root/airsniper.api/node_modules/grpc/src/client_interceptors.js:841:24)
code: 4,
metadata: Metadata { _internal_repr: {} },
details: 'Deadline Exceeded' }

As #Ashish mentioned, the answer can be found in another thread and is as follows:
The Deadline Exceeded error occurs because of Firestore's limit on the maximum sustained write rate to a single document - 1 write per second.
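One common mitigation is to back off and retry the write when the deadline error comes back. Below is a minimal sketch only, reusing socketRef and fastify.fsTimestamp() from the question; the helper name and backoff numbers are made up for the example:
// Hypothetical helper: retry a write a few times with exponential backoff.
// Error code 4 is gRPC DEADLINE_EXCEEDED, as seen in the stack trace above.
async function setWithRetry(docRef, data, attempts = 5, delayMs = 200) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await docRef.set(data);
    } catch (err) {
      if (err.code !== 4 || attempt === attempts) throw err;
      // wait a bit longer after every failed attempt before retrying
      await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** attempt));
    }
  }
}

// Usage inside create():
// return setWithRetry(socketRef.doc(id), { id: id, connectedAt: fastify.fsTimestamp() });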

Related

Trying to use Knex onConflict times out my Cloud Function

I am trying to insert geoJSON data into a PostGIS instance on a regular schedule, and there is usually duplicate data each time it runs. I am looping through this geoJSON data and trying to use the Knex.js onConflict modifier to ignore rows when a duplicate key field is found, but it times out my cloud function.
async function insertFeatures() {
  try {
    const results = await getGeoJSON();
    pool = pool || (await createPool());
    const st = knexPostgis(pool);
    for (const feature of results.features) {
      const { geometry, properties } = feature;
      const { region, date, type, name, url } = properties;
      const point = st.geomFromGeoJSON(geometry);
      await pool('observations')
        .insert({
          region: region,
          url: url,
          date: date,
          name: name,
          type: type,
          geom: point,
        })
        .onConflict('url')
        .ignore()
    }
  } catch (error) {
    console.log(error)
    return res.status(500).json({
      message: error + "Poop"
    });
  }
}
The timeout error could be caused by a variety of reasons: the batch size your function is processing, the connection pool size, or database server limitations. In your cloud function, check how the pool is set up: knex lets you optionally register an afterCreate callback, and if you add this callback you must make sure it calls the done callback that is passed as its last parameter, otherwise no connection will ever be acquired, which leads to a timeout.
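As a rough sketch of that (the SET timezone query is only a placeholder for whatever per-connection setup you actually need, and the connection settings are assumptions), the afterCreate callback must always hand the connection back through done:
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL, // assumed connection string
  pool: {
    min: 0,
    max: 10,
    afterCreate: (conn, done) => {
      // run any per-connection setup, then ALWAYS call done(),
      // otherwise the connection is never released to the pool
      conn.query("SET timezone = 'UTC';", (err) => {
        done(err, conn);
      });
    },
  },
});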
Another way to see what knex is doing internally is to set the DEBUG=knex:* environment variable before running the code, so that knex prints information about queries, transactions and pool connections while the code executes. It is also advisable to set batch sizes, the connection pool size and the connection limits on the database server to match the workload you are pushing to the server; this avoids the most basic timeout issues.
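For example, instead of awaiting one INSERT per feature inside the loop, the rows could be built first and sent in a few batched inserts. This is only a sketch reusing getGeoJSON, createPool and knexPostgis from the question; the chunk size is an arbitrary choice:
async function insertFeaturesBatched() {
  const results = await getGeoJSON();
  pool = pool || (await createPool());
  const st = knexPostgis(pool);

  // map every feature to a row object up front
  const rows = results.features.map(({ geometry, properties }) => ({
    region: properties.region,
    url: properties.url,
    date: properties.date,
    name: properties.name,
    type: properties.type,
    geom: st.geomFromGeoJSON(geometry),
  }));

  // insert in chunks so a single statement does not grow unbounded
  const chunkSize = 500;
  for (let i = 0; i < rows.length; i += chunkSize) {
    await pool('observations')
      .insert(rows.slice(i, i + chunkSize))
      .onConflict('url')
      .ignore();
  }
}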
Also check for similar examples here:
Knex timeout error acquiring connection
When trying to mass insert timeout occurs for knexjs error
Having timeout error after upgrading knex
Knex timeout acquiring a connection

Sequelize transaction retry doesn't work as expected

I don't understand how transaction retry works in Sequelize.
I am using a managed transaction, though I also tried an unmanaged one with the same outcome:
await sequelize.transaction({ isolationLevel: Sequelize.Transaction.ISOLATION_LEVELS.REPEATABLE_READ }, async (t) => {
  user = await User.findOne({
    where: { id: authenticatedUser.id },
    transaction: t,
    lock: t.LOCK.UPDATE,
  });
  user.activationCodeCreatedAt = new Date();
  user.activationCode = activationCode;
  await user.save({ transaction: t });
});
Now if I run this when the row is already locked, I am getting
DatabaseError [SequelizeDatabaseError]: could not serialize access due to concurrent update
which is normal. This is my retry configuration:
retry: {
  match: [
    /concurrent update/,
  ],
  max: 5
}
At this point I want Sequelize to retry the transaction. But instead I see that, right after the SELECT ... FOR UPDATE, it immediately issues another SELECT ... FOR UPDATE, which causes another error:
DatabaseError [SequelizeDatabaseError]: current transaction is aborted, commands ignored until end of transaction block
How can I use Sequelize's internal retry mechanism to retry the whole transaction?
Manual retry workaround function
Since Sequelize devs simply aren't interested in patching this for some reason after many years, here's my workaround:
async function transactionWithRetry(sequelize, transactionArgs, cb) {
  let done = false
  while (!done) {
    try {
      await sequelize.transaction(transactionArgs, cb)
      done = true
    } catch (e) {
      if (
        sequelize.options.dialect === 'postgres' &&
        e instanceof Sequelize.DatabaseError &&
        e.original.code === '40001'
      ) {
        await sequelize.query(`ROLLBACK`)
      } else {
        // Error that we don't know how to handle.
        throw e;
      }
    }
  }
}
Sample usage:
const { Transaction } = require('sequelize');
await transactionWithRetry(sequelize,
  { isolationLevel: Transaction.ISOLATION_LEVELS.SERIALIZABLE },
  async t => {
    const rows = await sequelize.models.MyInt.findAll({ transaction: t })
    await sequelize.models.MyInt.update({ i: newI }, { where: {}, transaction: t })
  }
)
The error code 40001 is documented at https://www.postgresql.org/docs/13/errcodes-appendix.html, and it is the only one I've managed to observe so far on serialization failures (see: What are the conditions for encountering a serialization failure?). Let me know if you find any others that should be auto-retried and I'll patch them in.
Here's a full runnable test for it which seems to indicate that it is working fine: https://github.com/cirosantilli/cirosantilli.github.io/blob/dbb2ec61bdee17d42fe7e915823df37c4af2da25/sequelize/parallel_select_and_update.js
Tested on:
"pg": "8.5.1",
"pg-hstore": "2.3.3",
"sequelize": "6.5.1",
PostgreSQL 13.5, Ubuntu 21.10.
Infinite list of related requests
https://github.com/sequelize/sequelize/issues/1478 from 2014. Original issue was MySQL but thread diverged everywhere.
https://github.com/sequelize/sequelize/issues/8294 from 2017. Also asked on Stack Overflow, but it got the Tumbleweed badge and the question appears to have been auto-deleted; I can't find it in search. Mentions MySQL. It is a bit of a mess, as it also includes connection errors, which are not as clearly retryable as PostgreSQL serialization failures.
https://github.com/sequelize/sequelize/issues/12608 mentions Postgres
https://github.com/sequelize/sequelize/issues/13380 by the OP of this question
Meaning of current transaction is aborted, commands ignored until end of transaction block
The error is pretty explicit, but just to clarify for other PostgreSQL newbies: in PostgreSQL, when a query fails in the middle of a transaction, Postgres automatically errors out every following query until a ROLLBACK or COMMIT ends the transaction.
The DB client code is then expected to notice that and simply re-run the transaction.
These errors are therefore benign, and ideally Sequelize would not raise on them. They are actually expected when using ISOLATION LEVEL SERIALIZABLE or ISOLATION LEVEL REPEATABLE READ, and they are what prevents concurrency anomalies from happening.
But unfortunately Sequelize raises them just like any other error, so our workaround inevitably needs the while/try/catch loop.
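To see the behaviour in isolation, here is a minimal standalone sketch using the plain pg driver (not the Sequelize code above); the failing SELECT 1/0 stands in for any mid-transaction error:
const { Client } = require('pg');

async function demoAbortedTransaction() {
  const client = new Client(); // connection settings assumed to come from PG* env vars
  await client.connect();
  await client.query('BEGIN');
  try {
    await client.query('SELECT 1/0'); // fails: division by zero
  } catch (e) {
    console.log(e.message);
  }
  try {
    await client.query('SELECT 1'); // rejected, even though the query itself is fine
  } catch (e) {
    console.log(e.message); // "current transaction is aborted, commands ignored until end of transaction block"
  }
  await client.query('ROLLBACK'); // ends the aborted transaction
  await client.query('SELECT 1'); // works again
  await client.end();
}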

How to execute different error messages depending on where a query failed in a transaction in pg-promise?

How can I return different error messages depending on which query failed and triggered the rollback in my transaction?
I'll be using the sample code from the documentation:
db.tx(t => {
  // creating a sequence of transaction queries:
  const q1 = t.none(query);
  const q2 = t.one(query);
  const q3 = t.one(query);
  // returning a promise that determines a successful transaction:
  return t.batch([q1, q2, q3]); // all of the queries are to be resolved;
})
  .then(data => {
    // success, COMMIT was executed
  })
  .catch(error => {
    // failure, ROLLBACK was executed
  });
Preferred output is the following:
if the transaction failed in q1:
res.json({error: true, message:"q1 failed"})
if the transaction failed in q2:
res.json({error: true, message:"q2 failed"})
if the transaction failed in q3:
res.json({error: true, message:"q2 failed"}), etc.
What I'm thinking of is using a switch statement to decide which error message to send, although I don't have an idea of how to tell which query failed in the transaction.
Thank you for your help!
P.S. I recently migrated from node-pg to pg-promise (which is why I'm a bit new to the API) because I was having a hard time with transactions, as recommended in my previous posts. And yes, pg-promise made a lot of things easier; the one day's worth of refactoring was worth it.
Since you are using method batch, a BatchError is thrown when the method fails, and it has the useful property data, among others:
.catch(err => {
  // find index of the first failed query:
  const errIdx = err.data.findIndex(e => !e.success);
  // do what you want here, based on the index;
});
Note that inside such error handler, err.data[errIdx].result is the same as err.first, representing the first error that occurred.
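For example (just a sketch that assumes an Express-style res object, matching the desired output from the question), the failed index can be mapped to the message you want to return:
.catch(err => {
  const errIdx = err.data.findIndex(e => !e.success);
  let message;
  switch (errIdx) {
    case 0:
      message = 'q1 failed';
      break;
    case 1:
      message = 'q2 failed';
      break;
    case 2:
      message = 'q3 failed';
      break;
    default:
      message = 'transaction failed';
  }
  res.json({ error: true, message: message });
});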

CloudKit Batch Error: Previous Error in Atomic Zone

I am attempting to save a CKRecord using a CKModifyRecordsOperation and every time I try it, I get this initial error:
["CKErrorDescription": Failed to modify some records,
"CKPartialErrors": {
"CKRecordID: 0x60c000034000; recordName=ABC, zoneID=workspaceZone:DEF" = "CKError 0x60c000257340: \"Batch Request Failed\" (22/2024); \"Record CKRecordID: 0x7fb2f6998a60; recordName=ABC, zoneID=workspaceZone:DEF will not be saved because of previous error in atomic zone\"";
},
"NSDebugDescription": CKInternalErrorDomain: 1011, "NSUnderlyingError": CKError 0x60c000248af0: "Partial Failure" (1011); "Failed to modify some records"; partial errors: {
... 1 "Batch Request Failed" CKError's omited ...
},
"NSLocalizedDescription": Failed to modify some records]
I then parse the individual errors of the batch like this:
if let errorItems = error.partialErrorsByItemID {
    for item in errorItems {
        if let itemError = item.value as? CKError {
            print("::: Individual Error in Batch :::")
            print(itemError)
            print(":::::")
        }
    }
}
But all the individual error says is:
CKError(_nsError: CKError 0x60c000257340: "Batch Request Failed" (22/2024); "Record CKRecordID: 0x7fb2f6998a60; recordName=GHI, zoneID=workspaceZone:JKL will not be saved because of previous error in atomic zone")
The CloudKit server log just says it's a BAD_REQUEST which isn't very helpful either.
Is there a way to get more details as to what's wrong with my record?
This just means one of your requests failed. You're doing a batch request with one or more requests. If one fails, CloudKit fails all of the requests to keep things atomic.
So, you should subscribe to errors on each record with perRecordCompletionBlock. Then, you can see which record is failing and why. You should print out the userInfo dictionary of the error for more detailed information.

node.js with PostgreSQL and socket.io Error: Socket is not writable

I've been playing around with socket.io and it seems to have been working nicely. Lately, though, I installed PostgreSQL to insert a row into the database every time an event happens (and the event happens 2 or 3 times per second!). Here's a snippet:
pg.connect(conString, function(err, dbClient) {
  io.sockets.on("connection", function(client) {
    client.on("clientSendingPlayerData", function(playerData) {
      dbClient.query("insert into actions values (1)", function() {
        console.log("INSERTED ROW");
      });
    });
  });
});
We get the error:
net.js:391
throw new Error('Socket is not writable');
^
Error: Socket is not writable
at Socket._writeOut (net.js:391:11)
at Socket.write (net.js:377:17)
at [object Object].query (/homes/jjl310/node_modules/pg/lib/connection.js:109:15)
at [object Object].submit (/homes/jjl310/node_modules/pg/lib/query.js:99:16)
at [object Object]._pulseQueryQueue (/homes/jjl310/node_modules/pg/lib/client.js:166:24)
at [object Object].query (/homes/jjl310/node_modules/pg/lib/client.js:193:8)
at Socket.<anonymous> (/homes/jjl310/tradersgame/server.js:182:16)
at Socket.$emit (events.js:64:17)
at SocketNamespace.handlePacket (/homes/jjl310/node_modules/socket.io/lib/namespace.js:335:22)
at Manager.onClientMessage (/homes/jjl310/node_modules/socket.io/lib/manager.js:469:38)
Any ideas as to what's causing the problem and how it could be fixed?
I'm using the following library for PostgreSQL:
https://github.com/brianc/node-postgres
pg.connect() gets a connection from the pool, so dbClient is an active connection. Long-running connections can time out, error, or be dropped for inactivity. You want to move pg.connect() into the clientSendingPlayerData callback. That way a DB connection gets pulled from the pool only when needed and is returned to the pool when finished.
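A rough sketch of that change, assuming a node-postgres version whose pg.connect callback also passes the done function used to release the client back to the pool:
io.sockets.on("connection", function(client) {
  client.on("clientSendingPlayerData", function(playerData) {
    // acquire a pooled connection only when an event actually arrives
    pg.connect(conString, function(err, dbClient, done) {
      if (err) {
        console.error("could not acquire connection", err);
        return;
      }
      dbClient.query("insert into actions values (1)", function(queryErr) {
        done(); // release the connection back to the pool
        if (queryErr) {
          console.error("insert failed", queryErr);
        } else {
          console.log("INSERTED ROW");
        }
      });
    });
  });
});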