I have been using an mLab MongoDB instance and the mongoose library to create a DB connection inside a serverless (Lambda) handler. It works smoothly on my local machine, but after deployment it only works intermittently: sometimes a request succeeds, and sometimes it returns an internal server error. If I remove the database connection code, the handler works every time. The serverless log just says Process exited before completing request. There are no real errors, so I have no idea what to do.
The db connection looks like this:
handler.js
// Connect to database
mongoose.connect(process.env.DATABASE_URL, {
  useMongoClient: false
}).then((ee) => {
  console.log('------------------------invoke db ', ee);
}).catch(err => console.error('-----------error db ', err));
No error shows up here either. Any idea what's happening?
When you get Process exited before completing request, it means that the Node process crashed before Lambda was able to call callback. If you go to the CloudWatch logs, you should find an error and a stack trace of what happened.
You should connect to the MongoDB instance inside your handler, and disconnect before you call callback().
It would be like this...
exports.handler = (event, context, callback) => {
  let response;
  return mongoose.connect(process.env.DATABASE_URL, {
    useMongoClient: false
  }).then(() => {
    // prepare your response
    response = { hello: 'world' };
  }).then(() => {
    // return the promise so disconnect finishes before callback
    return mongoose.disconnect();
  }).then(() => {
    // Success
    callback(null, response);
  }).catch((err) => {
    console.error(err);
    callback(err);
  });
};
Here is an article explaining in detail how Lambda works with Node, with an example of how to implement a DB connection.
Contrary to what @dashmug suggested, you should NOT disconnect your DB, since reconnecting on every invocation will hurt your performance.
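To sketch that advice (with hypothetical helper names, not code from either answer): keep the connection in module scope so warm Lambda invocations reuse it instead of reconnecting. The connect function is injected here only so the caching logic can be shown without a real database; in a real handler it would be something like `() => mongoose.connect(process.env.DATABASE_URL)`.

```javascript
// Minimal sketch of connection reuse across warm Lambda invocations.
// makeConnectionCache is a hypothetical helper, not a library API.
function makeConnectionCache(connectFn) {
  let cached = null; // module scope survives between warm invocations
  return async function getConnection() {
    if (cached === null) {
      cached = await connectFn(); // connect only on the first (cold) call
    }
    return cached; // later calls reuse the same connection
  };
}

module.exports = { makeConnectionCache };
```

With a callback-style handler you would typically also set `context.callbackWaitsForEmptyEventLoop = false`, so the still-open connection does not keep the invocation from returning.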
Related
THE PROBLEM
I have a Node.js server where I'm using Jest for testing. In the case of integration tests, I get the following message after all tests have passed:
"Jest has detected the following X (15-20) open handles potentially keeping Jest from exiting"
I know what this means; I had already seen it when I was using Sequelize as the ORM, but now I'm using Slonik.
I found this issue, which was really useful:
https://github.com/gajus/slonik/issues/63
When I set the idleTimeout as advised there, the problem is solved:
test("foo", async () => {
  const slonik = createPool(
    `postgres://postgres:password@127.0.0.1:7002/postgres`,
    {
      maximumPoolSize: 1,
      minimumPoolSize: 1,
      idleTimeout: 1 // milliseconds!
    }
  );
  await slonik.many(
    sql`SELECT table_name FROM information_schema.tables WHERE table_schema='public';`
  );
});
I tried to solve this problem from another perspective. My idea was to close the connection in Jest's afterAll block. The test setup is:
const db = container.resolve(DbConnectionPool).connection;
let appServer;
let api;
beforeEach(() => {
  appServer = app.listen(config.port);
  api = supertest(appServer);
});
afterEach(async () => {
  appServer.close();
  await db.query(sql`TRUNCATE reports`);
});
What I tried in the afterAll block:
afterAll(async () => {
  await db.end();
});
It does not solve my problem, as the documentation says:
'Note: pool.end() does not terminate active connections/transactions.'
So far I have not found anything about how to force a pool to close.
So I thought I could be clever and close the connections using SQL:
afterAll(async () => {
  await db.query(sql`DISCONNECT ALL`);
});
That does not work either.
I still had one idea to play with. The Slonik documentation says that the default idleTimeout for a connection is 5000 ms.
So I tried adding a 6000 ms wait in the afterAll block, but I still get the warning from Jest.
So, does anyone have any idea how to force-close the connections for my tests?
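For reference, the wait described above can be sketched with a generic delay helper (the helper name is mine; the 6000 ms value is simply chosen to exceed Slonik's 5000 ms default idleTimeout):

```javascript
// Generic promise-based delay, as used in the attempt described above.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// The afterAll block from the attempt would then look like:
// afterAll(async () => {
//   await delay(6000); // longer than the 5000 ms default idleTimeout
// });

module.exports = { delay };
```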
I'm working with the HttpService module from Nest.js to make HTTP calls. I'm able to download an image from https://unsplash.com; when there are no network interruptions, the code works as expected.
This is the code I have for making the download call and starting to write into the desired file:
const urlDownload = 'https://unsplash.com/photos/xiie4XeSzTU/download?force=true';
let response = await this.httpService.get(urlDownload, {
  responseType: 'stream'
}).toPromise();
response.data.pipe(writer);
And this is the code where I'm trying to handle the possible writer events and return a response:
let downloadFile = path.resolve(__dirname,'../../files/landscape.jpg');
let writer = fs.createWriteStream(downloadFile);
return new Promise((resolve, reject) => {
  writer.on('finish', () => {
    resolve('Image downloaded');
  });
  writer.on('error', () => {
    reject('Image downloaded failed');
  });
});
I'm deliberately turning off the Wi-Fi during the download to test the server response with Image downloaded failed (the message in my writer error handler), but instead I'm getting a 500 status code, internal server error. When I go to the Nest console to watch the error, it shows:
[Nest] 11220 - 2020-05-22 18:16:45 [ExceptionsHandler] getaddrinfo ENOTFOUND unsplash.com +439536ms
Error: getaddrinfo ENOTFOUND unsplash.com
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:64:26)
How can I solve this and correctly catch the network error from Nest, so I can return a friendly message?
I managed to solve it. I'm leaving this here in the hope of helping somebody in the future.
The error handler is not firing because it is attached to the writer, and there is no writing error: the stream simply stops writing when the connection is cut, and that is not a writer error.
I rewrote the response variable to stop being a promise and instead started treating it as an observable:
let response = this.httpService.get(urlDownload, {
  responseType: 'stream',
});
And then the response handling, replacing the previous Promise-based version:
return new Promise((resolve, reject) => {
  writer.on('error', () => {
    resolve('error due to, possibly, a nonexistent file path');
  });
  response.subscribe({
    next(response) { response.data.pipe(writer); },
    error(err) {
      console.error('More details: ' + err);
      resolve('Error in the download :/');
    },
    complete() { resolve('Completed'); }
  });
});
I'm not using the promise's reject function here, but that is perfectly doable.
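If you do want the rejecting variant, a sketch could look like this (downloadToWriter is a hypothetical wrapper of my own naming; response$ stands for any observable-like object with a subscribe method, such as the one HttpService returns):

```javascript
// Sketch: same subscribe pattern as above, but rejecting on failure so the
// caller can wrap the await in try/catch (or chain .catch).
function downloadToWriter(response$, writer) {
  return new Promise((resolve, reject) => {
    writer.on('error', (err) => reject(err)); // e.g. bad file path
    response$.subscribe({
      next: (response) => response.data.pipe(writer),
      error: (err) => reject(new Error('Download failed: ' + err.message)),
      complete: () => resolve('Completed'),
    });
  });
}

module.exports = { downloadToWriter };
```

The caller can then distinguish a clean completion from a network failure with ordinary promise error handling instead of inspecting sentinel strings.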
I have been doing some research and am not able to find a good answer about using Knex.js within a Lambda function:
How do I use Knex with AWS Lambda? #1875
Serverless URL Shortener with Apex and AWS Lambda
Use Promise.all() in AWS lambda
Here is what I have in my index.js:
const knex = require('knex')({
  client: 'pg',
  connection: {...},
});
exports.handler = (event, context, callback) => {
  console.log('event received: ', event);
  console.log('knex connection: ', knex);
  knex('goals')
    .then((goals) => {
      console.log('received goals: ', goals);
      knex.client.destroy();
      return callback(null, goals);
    })
    .catch((err) => {
      console.log('error occurred: ', err);
      knex.client.destroy();
      return callback(err);
    });
};
I am able to connect and execute my code fine locally, but I'm running into an interesting error once it's deployed to AWS: the first call always succeeds, but every call after that fails. I think this is related to the knex client being destroyed and then reused on the next call. If I re-upload my index.js, it goes back to working for one call and then failing.
I believe this can somehow be resolved using promises, but this is my first time working with Lambda, so I'm not familiar with how it manages the connection to RDS on subsequent calls. Thanks in advance for any suggestions!
For me, it also worked on my local machine but not after deploying, which had me misled for a while.
It turned out the RDS inbound source was not open to my Lambda function. I found the solution at AWS Lambda can't connect to RDS instance, but I can locally?: either change the RDS inbound source to 0.0.0.0/0 or use a VPC.
After updating the RDS inbound source, I was able to use Lambda with Knex successfully.
The Lambda runtime I am using is Node.js 8.10 with packages:
knex: 0.17.0
pg: 7.11.0
The code below, using async, also just works for me:
const Knex = require('knex');
const pg = Knex({ ... });

module.exports.submitForm = async (event) => {
  const {
    fields,
  } = event['body-json'] || {};
  return pg('surveys')
    .insert(fields)
    .then(() => {
      return {
        status: 200
      };
    })
    .catch(err => {
      return {
        status: 500
      };
    });
};
Hopefully this will help people who run into the same issue in the future.
The most reliable way of handling database connections in AWS Lambda is to connect and disconnect from the database within the invocation itself.
In your code above, since you already destroyed the connection during the first invocation, the second one no longer has a connection.
To fix it, move your instantiation of knex inside the handler:
exports.handler = (event, context, callback) => {
  console.log('event received: ', event);
  // Connect
  const knex = require('knex')({
    client: 'pg',
    connection: {...},
  });
  console.log('knex connection: ', knex);
  knex('goals')
    .then((goals) => {
      console.log('received goals: ', goals);
      // Disconnect
      knex.client.destroy();
      return callback(null, goals);
    })
    .catch((err) => {
      console.log('error occurred: ', err);
      // Disconnect
      knex.client.destroy();
      return callback(err);
    });
};
There are ways to reuse an existing connection, but success rates for that vary widely depending on database server configuration and production load.
I ran into exactly the issue you describe: I used destroy() in an AWS Lambda function (like this: await knex.destroy() at the bottom), and suddenly all my Lambdas were failing.
Because I did not suspect it, I spent hours searching for the cause and even started investigating Lambda + VPC + NAT, etc. It turns out that AWS freezes the Lambda container, so if you destroy the connection, the next handler invocation will try to reuse the destroyed connection.
Solution: do not call .destroy() at the end of the Lambda, and redeploy.
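The frozen-container behavior can be illustrated with a small guard (hypothetical names of my own, a defensive variant of "just don't destroy"): keep the pool in module scope and recreate it only if something destroyed it. The `destroyed` flag and `createFn` stand in for a real knex instance and a real `require('knex')({...})` call.

```javascript
// Module-scope pool reused across warm invocations; recreated only if a
// previous invocation destroyed it.
let pool = null;

function getPool(createFn) {
  if (pool === null || pool.destroyed) {
    pool = createFn(); // cold start, or the pool was destroyed earlier
  }
  return pool;
}

module.exports = { getPool };
```

In the simplest case you skip the guard entirely and just never call destroy(), which is what the answer above recommends.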
I am using sails-mongo with Sails v0.12.13 and have noticed that if MongoDB stops, a request to the controller (using curl) gets stuck for a while and then returns with no error in the application logs. I just get this message on the curl console:
curl: (52) Empty reply from server
Here is my code snippet:
model.find()
  .then((result) => {
    options.status = true;
    return callback(null, getResponse(null, name, options));
  })
  .catch((err) => {
    options.status = false;
    return callback(err, getResponse(err, name, options));
  });
I tried putting logs into the catch blocks, but nothing was printed.
Am I missing something, or is there a problem with how the sails-mongo adapter deals with a connection loss?
I have a mongoose schema with a unique field, and I am trying to write a backend (Express) integration test which checks that POSTing the same entity twice results in HTTP 400. When testing manually, the behaviour is as expected. Automated testing, however, requires a wait:
it('should not accept two projects with the same name', function(done) {
  var project = // ...
  postProjectExpect201(project,
    () => {
      setTimeout(() => {
        postProjectExpect400(project, done);
      }, 100);
    }
  );
});
The two post... methods do as their names suggest, and the code above works fine, but if the timeout is removed, BOTH requests receive HTTP 200 (though only one entity is created in the database).
I'm new to these technologies and not sure what's going on. Could this be a MongoDB-related concurrency issue, and if so, how should I deal with it?
The database call looks like this:
Project.create(req.body)
.then(respondWithResult(res, 201))
.catch(next);
I already tried connecting to MongoDB with the ?w=1 option, by the way.
Update:
To be more specific: Project is a mongoose model, and next is my Express error handler, which catches the duplicate error.
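A sketch of the kind of error handler described (an assumed shape, not the poster's actual code): map MongoDB's duplicate-key error code 11000 to HTTP 400, and everything else to 500.

```javascript
// Hypothetical Express error handler: duplicate-key (E11000) -> 400.
function errorHandler(err, req, res, next) {
  if (err && err.code === 11000) {
    return res.status(400).json({ error: 'Duplicate value for unique field' });
  }
  return res.status(500).json({ error: 'Internal server error' });
}

module.exports = { errorHandler };
```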
The test functions:
var postProjectExpect201 = function(project, done, validateProject) {
  request(app)
    .post('/api/projects')
    .send(project)
    .expect(201)
    .expect('Content-Type', /json/)
    .end((err, res) => {
      if (err) {
        return done(err);
      }
      validateProject && validateProject(res.body);
      done();
    });
};
var postProjectExpect400 = function(project, done) {
  request(app)
    .post('/api/projects')
    .send(project)
    .expect(400)
    .end((err, res) => {
      if (err) {
        return done(err);
      }
      done();
    });
};