What are the differences (CPU, runtime, or otherwise) between using pg Pool and pg-promise (pgp) as a database connection in an Express server?

I've created a few apps that use a Postgres database, but in all of those projects I've used either the Pool or Client class from the pg npm package. Recently I came across the pg-promise node package, and I was wondering whether there are any drawbacks to using pg-promise over Pool or Client. In particular, I'm worried about runtime changes that would affect how many clients the app could service at one time.

pg-promise is "built on top of node-postgres", so you're still using the same pools and clients underneath.
Nothing changes regarding the number of connections your database will be able to handle. And unless you take a different approach to building your application (say, using transactions where you previously didn't, or using individual clients instead of pooling), nothing will change regarding the number of clients your app can serve.
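As a minimal sketch (connection details and table names here are placeholders, not from the question), both styles end up on the same node-postgres pool:

const { Pool } = require('pg');
const pgPromise = require('pg-promise');

const connection = { host: 'localhost', database: 'appdb' }; // hypothetical

// Plain pg: you hold the pool and issue queries against it yourself.
const pool = new Pool(connection);
async function getUserWithPg(id) {
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return rows[0];
}

// pg-promise: the same pool underneath, wrapped in a promise-based API
// that acquires and releases clients for you.
const pgp = pgPromise();
const db = pgp(connection);
function getUserWithPgPromise(id) {
  return db.oneOrNone('SELECT * FROM users WHERE id = $1', [id]);
}

Either way the pool caps concurrent connections (10 by default in node-postgres), so throughput under load is governed by that setting, not by which wrapper you call it through.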

Related

Should I be worried about this number of connections?

Recently I hosted my database on MongoDB Atlas. My API is hosted on Vercel and built with Next.js, so my API routes are serverless functions. I'm using this code for the database connection (it's the official code suggested by the Next.js team).
However, at peak times when a lot of users use the website at the same time, I can see up to 35 active database connections (many read/write operations). Is this normal? Shouldn't the connection count always be 1?
Thank you so much for your help!
[Image: connection count graph]
As you're dealing with a serverless solution, any concurrent requests will be served from different instances, so you'll have more connections than you would with a well-configured server-based solution.
As a result, you will not be sharing the database connection between concurrent requests.
How sequential requests are handled can vary; I'm not certain how it works with Vercel, but with AWS Lambda you can reuse a connection in a subsequent request.
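For reference, the reuse described here usually comes from caching the client at module scope, roughly what the official Next.js MongoDB example does (MONGODB_URI is an assumed environment variable):

const { MongoClient } = require('mongodb');

let cachedClient = null; // survives warm invocations of this instance

async function getClient() {
  if (cachedClient) return cachedClient; // warm start: reuse the connection
  cachedClient = await MongoClient.connect(process.env.MONGODB_URI);
  return cachedClient; // cold start: one new connection per instance
}

Each live instance still holds its own client, so 35 concurrent instances at peak means roughly 35 connections even with caching, which matches what you're seeing.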

MongoDB connection overhead on client side

We are evaluating different alternatives for multi-tenancy in our platform. We think that one database per customer is the way to go as data structure and requirements are completely different from one customer to another, and we want to keep them as isolated as possible.
However we are facing the question of how to manage the connection to multiple databases. We don't want to have one app instance per customer. Instead we want to have a pool of app instances handling requests for all our customers and use the correct database depending on the customer.
Our concern is whether keeping connections open to many (maybe thousands of) databases will cause a performance issue. We are mainly worried about memory usage, so we are wondering what the client-side overhead is when opening a connection to a MongoDB server.
We are also thinking about moving database access to a separate service that would handle the database connections for all customers. In that case, is there an existing tool that allows that kind of "multiplexing" of MongoDB databases?
Some additional notes:
We discarded sharding. It won't fit our needs. We need different databases.
Databases will be on different servers with reserved resources. This means each database runs its own mongod process, so we need separate connections.
We use the Java driver.
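For illustration only (sketched with the Node driver rather than the Java driver mentioned above, and with hypothetical tenant names), a lazy per-tenant client cache keeps memory proportional to the tenants actually in use:

const { MongoClient } = require('mongodb');

// Hypothetical mapping of tenant -> database server URI.
const tenantHosts = new Map([
  ['acme', 'mongodb://db1.internal:27017'],
  ['globex', 'mongodb://db2.internal:27017'],
]);

const clients = new Map(); // tenantId -> Promise<MongoClient>

function dbForTenant(tenantId) {
  // Cache the connection promise so concurrent requests share one client.
  if (!clients.has(tenantId)) {
    clients.set(tenantId, MongoClient.connect(tenantHosts.get(tenantId)));
  }
  return clients.get(tenantId).then((client) => client.db('appdb'));
}

Because each tenant's database lives on its own server, one client (and connection pool) per active tenant is unavoidable; lazy creation plus an idle-eviction policy bounds the number of open connections.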

convert always connected to occasionally connected application

I have an existing client-server 3-tier application with the following stack :
Smart client (WinForms)
IIS/ASP.NET
SQL Server
Some of the data is stored in Entity–attribute–value (EAV) model.
All primary keys are integer identity columns.
Database operations are mostly performed using Stored procedures.
I am tasked with converting this application into an occasionally connected application (OCA).
There should be no issues with installation and resources limitation on the clients.
This is the first such project for me.
I have done some reading about
Microsoft Sync Framework
Enterprise Library / occasionally connected smart clients
SQL Server replication
In order to preserve existing code and limit the impact of the change, I am considering installing the full 3-tier application on each client and using the Sync Framework on the web service tier to handle synchronization, with one master server to which all clients synchronize.
Does this solution look feasible?
Are there any other resources regarding converting an always-connected 3-tier application to an occasionally connected application?
Thank you.
Should be feasible, with not much change in your app. You just have to install a local database on your clients.
However, you're using identity columns. Unless you partition your identity values (client 1 gets 1-1000, client 2 gets 1001-2000, etc.), you will get duplicate IDs when clients upload their changes.
have a look at this: Database Sync:SQL Server and SQL Express N-Tier with WCF
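To make the identity-partitioning idea concrete (table and range values are hypothetical), each client's local SQL Server database can be seeded into its own range so offline inserts don't collide on upload:

-- Hypothetical example: give each client's local database its own range.
-- Client 1 gets 1..999999, client 2 starts at 1000000, and so on.
CREATE TABLE Orders (
    OrderId INT IDENTITY(1000000, 1) PRIMARY KEY, -- seed differs per client
    CustomerName NVARCHAR(100) NOT NULL
);

-- Or reseed an existing table on each client after installation:
DBCC CHECKIDENT ('Orders', RESEED, 2000000); -- client 3's range

Common alternatives are GUID keys or a composite key that includes a client identifier.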

How to connect meteor to an existing backend?

I recently discovered Meteor, and I really love the simplicity that it brings to programming new apps. My question is: how do you connect it to an existing back-end? We have a substantial amount of existing Clojure code, also running with MongoDB. What I would like to do is use Meteor to build the front-end of my app. I guess I could connect my Meteor app directly to the MongoDB instance of the back-end, but this does not seem like a good practice... or is it?
Another option I imagined was to access the DB from either the webapp or the Clojure code and create a separate way of communication between the two with a queue mechanism, or sockets. Any hint or pointer to relevant documentation would be helpful!
Take a look at Meteor's environment variable settings. By setting these variables you can easily point Meteor at an external MongoDB instance. In particular:
export MONGO_URL="mongodb://yourmongodbserver/your-db"
There is a screencast on eventedmind.com covering this specific topic, which is quite helpful: https://eventedmind.com/feed/sg3ejYnmhxpBNoWan
Regarding how to point them at the same database, @Michael's answer is spot on: just point your Meteor web servers at the same MongoDB instance.
Regarding whether or not you should, that depends on your situation. Having everything run off the same DB certainly simplifies things.
Having separate DBs can potentially reduce the load on your DB tier, as you could selectively choose which writes/updates to replicate between the Clojure and Meteor DBs.
One issue with either method is the speed of change notification. Currently, Meteor servers poll the DB every 10 seconds to recognize changes. Happily, once the oplog branch gets merged into master, external changes made in the DB (as opposed to changes made through a Meteor server) will be reflected in Meteor clients much more quickly: oplog support lets Meteor servers behave like a replica-set member, tailing the oplog for practically instant notification of DB changes.
Using a queue as a middle-ware layer introduces complexity and adds another point of failure. It also increases latency of notification. These issues can be mitigated, though, and there may be other pieces of your infrastructure in the future that would benefit from such a middle-ware queue. For example, other interested systems could register with the queue to receive notification of changes without querying or needing to know about your db. You can also scale your MongoDB instances independently and tune the queue to determine what "eventually" means in the "eventually consistent" guarantee.
I think the questions to ask are:
how much overlap is there between the Clojure dataset and the Meteor dataset
how quickly do you need changes to be reflected between the two
will a middle-ware queue be useful in other circumstances as you grow
Regarding possible queue technologies to look into, I've heard very good things about RabbitMQ. The Oct. 2013 talk at the Clojure NYC meetup included a description of switching from Amazon SQS to RabbitMQ due to latency issues with SQS, and anecdotally RabbitMQ has been rock-solid for them.
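If a queue does turn out to be useful, here is a minimal sketch of the publish side using RabbitMQ via the amqplib package (the exchange name and message shape are illustrative, not from the answers above):

const amqp = require('amqplib');

async function publishChange(doc) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange('db.changes', 'fanout', { durable: false });
  // Any interested system (Meteor, Clojure, ...) binds its own queue to
  // this exchange and receives change notifications without polling.
  ch.publish('db.changes', '', Buffer.from(JSON.stringify(doc)));
  await ch.close();
  await conn.close();
}

In practice you would keep the connection open rather than reconnecting per message; this just shows the moving parts.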

Pros and cons of external database services (mongohq, etc.)

For putting together a site from scratch, what are the advantages and disadvantages of using external database services, e.g. MongoHQ, Amazon RDS?
Advantage: you don't have to fix it yourself when it breaks.
Disadvantage: you can't fix it yourself when it breaks.
My take on this is simple:
If your application is hosted on Amazon, then you should go for Amazon RDS or MongoHQ (which is also hosted on Amazon). The rationale: since both your application and the database are on the same internal network, you get a significant performance advantage.
If your application is hosted elsewhere, then go for a local install.
A couple more points.
Pros:
You do not have to administer the hardware.
Presumably they also take care of security and software updates on the server.
It saves room: you do not have to find space in your building for a database cluster.
Cons:
Depending on your internet speed, data transfer can suffer. If the application and data are on the same network you might have a 1 Gbit link versus a 50 Mbit internet connection; multiply that by 1000 concurrent users.
You have to work to their release schedule. If the provider updates the database version with a breaking change, you will be forced to update; if you host it yourself, the upgrade happens on your terms.