@hapi/lab: how to prevent creating multiple DB connections when testing the server?

I develop an API server with Hapi. I use @hapi/lab for testing. I have different test files for different API modules/routes.
Each test file initializes the server. A DB connection is created when the server is initialized; consequently, multiple DB connections are created at the same time when I run the tests. And I get a warning like this:
WARNING: Creating a duplicate database object for the same connection.
    at Object.register (/home/.../node_modules/hapi-pg-promise/index.js:19:20)
    at internals.Server.register (/home/.../node_modules/@hapi/hapi/lib/server.js:453:35)
    at async Object.exports.compose (/home/.../node_modules/@hapi/glue/lib/index.js:46:9)
    at async createServer (/home/.../server.js:10:115)
    at async Immediate.<anonymous> (/home/.../node_modules/@hapi/lab/lib/runner.js:662:17)
So, is there an approach to testing a Hapi server across multiple files without creating multiple DB connections?

You will have to lazy load the server. Let's assume you have a function serverFactory which returns the server. You could do this:
// Create the server once and reuse it across all test files
let serverFactoryObj;

function getServerFactory() {
    if (!serverFactoryObj) {
        serverFactoryObj = serverFactory();
    }
    return serverFactoryObj;
}
This way, no matter how many times you test the endpoints, you will always have a single instance of the server. Ideally you should always test with a single instance rather than creating/stopping a server for each test.
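For example, with @hapi/lab each test file can require the same helper instead of composing its own server. A minimal sketch, assuming the helper above lives in a hypothetical test/helpers.js and that serverFactory() resolves to an initialized Hapi server exposing a GET /users route (file name and route are illustrative only):

// test/users.js (hypothetical file name)
'use strict';

const Lab = require('@hapi/lab');
const { expect } = require('@hapi/code');
const { getServerFactory } = require('./helpers'); // the lazy factory shown above

const { describe, it } = exports.lab = Lab.script();

describe('GET /users', () => {

    it('responds with 200', async () => {

        // Every test file calls the same helper, so only one server
        // (and therefore one DB connection) is ever created.
        const server = await getServerFactory();
        const res = await server.inject({ method: 'GET', url: '/users' });
        expect(res.statusCode).to.equal(200);
    });
});

Because the instance is shared, avoid calling server.stop() in individual test files; if you need teardown, do it once after the whole run.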

Related

How to handle multiple mongo database connections

I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the lib nestjs-tenancy, which scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (bull). So in the queue consumer I create a new connection to the database I need to access, extracting the database info from the queue job data:
try {
    await mongoose.connect(
        `${this.configService.get('database.host')}/${
            job.data.tenant
        }?authSource=admin`,
    );
    const messageModel = mongoose.model(
        Message.name,
        MessageSchema,
    );
    // do a lot of stuffs with messageModel
    await messageModel.find()
    await messageModel.save()
    // ...
} finally {
    await mongoose.connection.close();
}
I have two different jobs that may run at the same time and for the same database.
I'm noticing that sometimes I get errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So for sure I'm doing something wrong. PS: in all my operations I use await.
Any clue what it could be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection or leave it "open"?
EDIT:
After some testing, I'm sure that I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still confused about whether I need to close the connection. If I close it, I'm sure I'm not overloading Mongo with a lot of connections, but at the same time every job will create a new connection, and I have hundreds of jobs running at once against different databases.
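One common pattern here (a sketch only, not from the question; the helper name and connection-string format are assumptions mirroring the code above) is to cache one connection per tenant with mongoose.createConnection() and reuse it across jobs, instead of opening and closing a connection inside every job:

// tenant-connections.js (hypothetical helper)
const mongoose = require('mongoose');

const connections = new Map();

function getTenantConnection(host, tenant) {
    if (!connections.has(tenant)) {
        // createConnection() gives each tenant an isolated connection with its
        // own model registry, unlike the single global one behind mongoose.connect().
        const conn = mongoose.createConnection(`${host}/${tenant}?authSource=admin`);
        connections.set(tenant, conn);
    }
    return connections.get(tenant);
}

// In the queue consumer:
// const conn = getTenantConnection(this.configService.get('database.host'), job.data.tenant);
// const MessageModel = conn.model(Message.name, MessageSchema);
// await MessageModel.find();

With this approach there is usually no need to close the connection after every job: each cached connection keeps its own socket pool, so hundreds of jobs for the same tenant can reuse it, and you only pay for one connection per active tenant.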

Azure SQL Server (elastic pool) connect slow

We are running a web application in Azure Web Apps using a database per customer (multiple accounts per customer). When logging in, we connect the user to the correct customer database. This database is also hosted in Azure (an elastic pool), in the same region (West Europe) as the Web App.
Once the connection is pooled, request times are fast, but the first time a user logs in, the connection still needs to be created and this takes (quite) a long time.
The connection string is built up using a SqlConnectionStringBuilder.
var csb = new System.Data.SqlClient.SqlConnectionStringBuilder();
csb.DataSource = "tcp:******.database.windows.net,1433";
csb.InitialCatalog = "***-***-***";
csb.UserID = "**-**";
csb.Password = "**********";
csb.MultipleActiveResultSets = true;
csb.Encrypt = true;
csb.PersistSecurityInfo = false;
csb.TrustServerCertificate = false;
csb.ConnectTimeout = 30;
_connectionString = csb.ConnectionString;
// Data Source=tcp:******.database.windows.net,1433;Initial Catalog=***-***-***;Persist Security Info=False;User ID=**-***;Password=******;MultipleActiveResultSets=True;Connect Timeout=30;Encrypt=True;TrustServerCertificate=False
Am I doing anything wrong? Or are there some settings in Azure to speed up the connect process?
The above shows the first request to the application for a customer. It therefore includes the EF Migration Seed, resulting in the first 2 queries not actually going to the database itself, and quite a lot of queries (not all shown here) to the database.
Well, I eventually solved my problem. It seems I was matching the wrong queries within Application Insights. I installed Stackify and this gave me just the little bit more information I needed.
It seems Entity Framework does some things with the 'master' database. As the user in the connection string did not have access to the 'master' database, it throws an error. Handling that error takes up quite some time on the App Service, and therefore the request returns slowly. It just doesn't fail.
What EF tries to do is determine whether the database exists by querying the master database, which is faster than connecting to a non-existent database. If that fails because it cannot connect to the master database, EF just tries to connect to the database itself. If connecting to the database works, it continues normal execution, like the seed method.

Server Swift with mongodb manager singleton

I am working on a project using Vapor and MongoDB.
Let's say that at a specific route
drop.get("user", String.self) { request, user in
// ... query Mongodb
}
I want to query the database and see if an input user already exists.
Is it wise to have a singleton MongoManager class that handles all the connection with the database?
drop.get("user", String.self) { request, user in
MongoManager.sharedInstance.findUser(user)
}
Do I create a bottleneck with this implementation?
No, you will not create a bottleneck unless you have a single-threaded mechanism that stands between your Vapor Handler and MongoDB.
MongoKitten (the underlying driver for Swift + MongoDB projects) manages the connection pool internally. You can blindly fire queries at MongoKitten and it'll figure out what connection to use or will create a new one if necessary.
Users of MongoKitten 3 will use a single connection per request. If multiple requests are being handled simultaneously, additional connections will be opened.
Users of MongoKitten 4 will use a single connection for 3 requests; this is configurable. The connection pool will expand by opening more connections if too many requests are being made.
Users of the upcoming Meow ORM (which works similarly to what you're building) will use a single connection per thread. The connection pool will expand if all connections are reserved.

How to create a single instance of Jersey Client for multiple requests

Hi, I am using the Jersey 1.19.1 API Client for a RESTful web service.
I came to know that creating an instance of Client is expensive.
As I am creating an instance of it every time I call the web service, the response is delayed, which affects performance.
So, is there any other way of creating an instance of Client for multiple requests? And how do I overcome the delay of creating even a single instance of a Client?
Is it the right approach to create a pool of Client objects, as with connection pooling, or one Client object per user? Is there any other, better way of creating the Client object?
As there is not much info provided to get a clue about your structure, something along these lines will help.
Create an instance variable for the Jersey client:
private Client client = null;
In a method that returns the client, check whether that field is null; if it is, initialise it, otherwise return the same instance:
private Client getClient() {
    if (client == null) {
        client = ClientBuilder.newClient();
    }
    return client;
}

Re-using sessions in ScalaQuery?

I need to do small (but frequent) operations on my database from one of my API methods. When I try wrapping each of them in "withSession", I get terrible performance.
db withSession {
    SomeTable.insert(a,b)
}
Running the above example 100 times takes 22 seconds. Running them all in a single session is instantaneous.
Is there a way to re-use the session in subsequent function invocations?
Do you have some type of connection pooling (see JDBC Connection Pooling: Connection Reuse?)? If not, you'll be using a new connection for every withSession(...), and that is a very slow approach. See http://groups.google.com/group/scalaquery/browse_thread/thread/9c32a2211aa8cea9 for a description of how to use c3p0 with ScalaQuery.
If you use a managed resource from an application server you'll usually get this for "free", but in stand-alone servers (for example Jetty) you'll have to configure this yourself.
I'm probably stating the obvious, but you could just put more calls inside the withSession block, like:
db withSession {
    SomeTable.insert(a,b)
    SomeOtherTable.insert(a,b)
}
Alternatively, you can create an implicit session, do your business, then close it when you're done:
implicit val session = db.createSession
SomeTable.insert(a,b)
SomeOtherTable.insert(a,b)
session.close