Server-side Swift with a MongoDB manager singleton

I am working on a project using Vapor and MongoDB.
Let's say that at a specific route:
drop.get("user", String.self) { request, user in
    // ... query MongoDB
}
I want to query the database and see if an input user already exists.
Is it wise to have a singleton MongoManager class that handles all the connection with the database?
drop.get("user", String.self) { request, user in
    MongoManager.sharedInstance.findUser(user)
}
Do I create a bottleneck with this implementation?

No, you will not create a bottleneck unless you have a single-threaded mechanism standing between your Vapor handlers and MongoDB.
MongoKitten (the underlying driver for Swift + MongoDB projects) manages the connection pool internally. You can blindly fire queries at MongoKitten and it'll figure out which connection to use, or create a new one if necessary.
Users of MongoKitten 3 will use a single connection per request. If multiple requests are being handled simultaneously, additional connections will be opened.
Users of MongoKitten 4 will use a single connection for every 3 requests; this is configurable. The connection pool will expand by opening more connections if too many requests are in flight.
Users of the upcoming Meow ORM (which works similarly to what you're building) will use a single connection per thread. The connection pool will expand if all connections are reserved.
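To make the pattern concrete, a singleton along the lines the question sketches could look like the following. This is a minimal sketch only: the Server/findOne calls are written from memory of the MongoKitten 3 API, and the URI, database and collection names are illustrative, so treat the exact signatures as assumptions.
import MongoKitten

final class MongoManager {
    static let sharedInstance = MongoManager()

    private let database: MongoKitten.Database

    private init() {
        // The pool lives inside MongoKitten, so one shared Database handle
        // can safely be used from many request handlers at once.
        // (Initializer shape assumed; error handling elided for brevity.)
        let server = try! Server("mongodb://localhost:27017")
        database = server["mydb"]
    }

    func findUser(_ username: String) throws -> Document? {
        // "username" == username builds a MongoKitten query document.
        return try database["users"].findOne("username" == username)
    }
}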


Can I reuse a connection in MongoDB? How do these connections actually work?

Trying to do some simple things with MongoDB, my mind got stuck on something that feels kind of strange to me.
client = MongoClient(connection_string)
db = client.database
print(db)
client.close()
I thought that when you make a connection, that single connection is used for the rest of the code until the close() method is called. But it doesn't seem to work that way... I don't know how I ended up with 9 connections when it was supposed to be a single one, and even if each 'request' is a connection, there are too many of them.
For now it's not a big problem; it just bothers me that I don't know exactly how this works!
When you create a MongoClient, you are not establishing just one connection. You are creating the client, which maintains a connection pool. When you make one or more requests, the driver uses an available connection from the pool; when the operation completes, the connection goes back to the pool.
Calling the MongoClient constructor every time you need to talk to the db is very bad practice and incurs a handshake penalty each time. Use dependency injection or a singleton to hold the MongoClient.
According to the documentation, you should create one client per process.
Your code is essentially correct for a single-threaded process. If you don't need any more connections to the server, you can limit the pool size by explicitly specifying the number:
client = MongoClient(host, port, maxPoolSize=<num>)
On the other hand, if the code might later reuse the same connection, it is better to create the client once at the beginning and use it across the code.
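A minimal sketch of that "create once, reuse everywhere" pattern (the file, database and collection names are illustrative, not from the question):
# db.py - the client is created once at import time; PyMongo then
# maintains the connection pool behind this single object.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017", maxPoolSize=10)
db = client["mydatabase"]

# elsewhere in the code base:
#   from db import db
#   user = db.users.find_one({"name": "Mike"})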

Meteor - using synchronised non-persistent / in-memory MongoDB on the server

In a Meteor app, real-time reactive updates between all connected clients are achieved by writing to collections and publishing and subscribing to the right data. In the normal case this also means database writes.
But what if I want to sync particular data that does not need to be persistent, and I would like to avoid the overhead of writing to the database? Is it possible to use Minimongo or another in-memory cache on the server while still preserving DDP synchronisation to all clients?
Example
In my app I have multiple collapsed threads, and I want to show which users have currently expanded a particular thread:
Viewed by: Mike, Johny, Steven ...
I can store the information in the threads collection or make a separate viewers collection and publish the information to the clients. But there is actually no point in making this information persistent and having the overhead of database writes.
I am confused by the collection documentation, which states:
OPTIONS
connection Object
The server connection that will manage this collection. Uses the default connection if not specified. Pass the return value of calling DDP.connect to specify a different server. Pass null to specify no connection.
and
... when you pass a name, here’s what happens:
...
On the client (and on the server if you specify a connection), a Minimongo instance is created.
But if I create a new collection and pass the options object with connection: null
// Creates a new Mongo collection and exports it
export const Presentations = new Mongo.Collection('presentations', { connection: null });

/**
 * Publications
 */
if (Meteor.isServer) {
  // This code only runs on the server
  Meteor.publish(PRESENTATION_BY_MAP_ID, (mapId) => {
    check(mapId, nonEmptyString);
    return Presentations.find({ matchingMapId: mapId });
  });
}
no data is being published to the clients.
TL;DR: it's not possible.
There is no magic in Meteor that allows data to be synced between clients without the data transiting through the MongoDB database. The whole sync process through publications and subscriptions is triggered by MongoDB writes. Hence, if you don't write to the database, you cannot sync data between clients (using the native pub/sub system available in Meteor).
After countless hours of trying everything possible, I found a way to do what I wanted:
export const Presentations = new Mongo.Collection('presentations', Meteor.isServer ? {connection: null} : {});
I checked MongoDB and no presentations collection is created. Also, on every server restart the collection is empty. There is a small downside on the client: even when collectionHandle.ready() is truthy, findOne() first returns undefined, and the data is synced afterwards.
I don't know if this is the right/preferable way, but it was the only one that worked for me so far. I tried to leave {connection: null} in the client code, but wasn't able to achieve any sync even though I implemented the added/changed/removed methods (the usual shape of that low-level API is sketched below).
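For reference, the low-level publish API mentioned above is normally used roughly like this (a sketch; the publication name, document id and fields are illustrative). On the client, a plain new Mongo.Collection('presentations') that subscribes to this publication receives the documents into Minimongo even though they never touch MongoDB:
if (Meteor.isServer) {
  Meteor.publish('presentationByMapId', function (mapId) {
    // Push a document straight into the subscribing client's Minimongo;
    // nothing is written to the MongoDB database.
    this.added('presentations', 'doc-1', { matchingMapId: mapId, viewers: [] });
    this.ready();
    // later, e.g. from an observer or a method:
    // this.changed('presentations', 'doc-1', { viewers: ['Mike'] });
  });
}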
Sadly, I wasn't able to get any further help, even in the Meteor forums here and here.

Mongoose connection that is rarely used (keep alive?)

In my application I have a DB to which the code will connect periodically, but it will be used quite rarely (maybe once a day/week).
Can I just create the connection at module (app) init and then use it across the module for the application's whole run lifecycle?
var conn = mongoose.createConnection(process.env.SOME_DB)
I'm not sure whether I should set the keepAlive option as suggested in the Mongoose docs:
options.server.socketOptions = options.replset.socketOptions = { keepAlive: 1 };
mongoose.connect(uri, options);
or will the standard auto-reconnect feature be enough?
I'm also not sure what counts as a "long running application". Actually, any real-time service is a long running application; should keepAlive be enabled for all such services in production?
I'm also not sure what connection pools are and how they affect this.
There is a reference to this in the Mongoose documentation:
http://mongoosejs.com/docs/connections.html
And yes, it's generally a good idea.
Connection pools are also explained in that document. Generally speaking, Mongoose keeps several socket connections open to the server/replica-set/mongos instances, rather than just one, to allow concurrent processing of requests. Even with async callbacks on IO there is wait time, so connection pools provide another channel to talk on while one is busy.
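Putting the two together, a minimal sketch of the "connect once at module init" approach (the options shape follows the Mongoose docs quoted in the question; the file name is illustrative):
// db.js - created once when the module is first required, then reused
var mongoose = require('mongoose');

var options = {
  server:  { socketOptions: { keepAlive: 1 } },
  replset: { socketOptions: { keepAlive: 1 } }
};

// One connection (really: one pool) for the whole application lifecycle.
var conn = mongoose.createConnection(process.env.SOME_DB, options);

module.exports = conn;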

What things should I consider when using System.Transactions in my EF project?

Background
I have both an MVC app and a Windows service that access the same data access library, which uses Entity Framework. The Windows service monitors certain activity on several tables and performs some calculations.
We are using the DAL project against several hundred databases, generating the connection string for the context at runtime.
We have a number of functions (both stored procedures and .NET methods that call on EF entities) which, because of the scope of data we are using, are VERY db-intensive and have the potential to block one another.
The problem
The Windows service is not so important that it can't wait; if something must be blocked, the Windows service can be. Earlier I found a number of SO questions stating that System.Transactions is the way to go when setting your transaction isolation level to READ UNCOMMITTED to minimize locks.
I tried this, and I may be misunderstanding what is going on, so I need some clarification.
The method in the windows service is structured like so:
private bool _stopMe = false;

public void Update()
{
    EntContext context = new EntContext();
    do
    {
        var transactionOptions = new System.Transactions.TransactionOptions();
        transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
        using (var transactionScope = new System.Transactions.TransactionScope(
                   System.Transactions.TransactionScopeOption.Required, transactionOptions))
        {
            List<Ent1> myEnts = (from e ... /* complicated query here */).ToList();
            SomeCalculations(myEnts);
            List<Ent2> myOtherEnts = (from e ... /* complicated query using entities from the previous query */).ToList();
            MakeSomeChanges(myOtherEnts);
            context.SaveChanges();
            transactionScope.Complete(); // marks the scope complete so it can commit
        }
        Thread.Sleep(5000); // wait 5 seconds before allowing the do block to continue
    } while (!_stopMe);
}
When I execute my second query, an exception gets thrown:
The underlying provider failed on Open.
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please
enable DTC for network access in the security configuration for MSDTC using the
Component Services Administrative tool.
The transaction manager has disabled its support for remote/network
transactions. (Exception from HRESULT: 0x8004D024)
Am I right to assume that I should not be calling more than one query in that using block? The first query returned just fine. This is being performed on one database at a time (other instances are run in different threads, and nothing from this thread touches the others).
My question is, is this how it should be used or is there more to this that I should know?
Of Note: This is a monitoring function, so it must be run repeatedly.
In your code you are using a transaction scope. The first query uses a lightweight local db transaction; when the second query comes, the transaction scope escalates the transaction to a distributed transaction.
The distributed transaction uses MSDTC.
That is where the error comes from: by default MSDTC is not enabled. Even when it is enabled and started, it needs to be configured to allow a remote client to create distributed transactions.
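One common way to keep the transaction local is to open the context's connection yourself before any query runs, so both queries share one physical connection. This is a sketch, assuming EntContext is an ObjectContext-style context exposing Connection (an EF6 DbContext exposes it as context.Database.Connection instead):
var transactionOptions = new System.Transactions.TransactionOptions();
transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;

using (EntContext context = new EntContext())
{
    // With the connection opened up front, EF will not open and close a
    // connection per query, so the scope can stay a lightweight local
    // transaction instead of escalating to MSDTC.
    context.Connection.Open();

    using (var transactionScope = new System.Transactions.TransactionScope(
               System.Transactions.TransactionScopeOption.Required, transactionOptions))
    {
        // ... run both queries and context.SaveChanges() here ...
        transactionScope.Complete();
    }
}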

ADO.NET SqlClient connections never go away

An ASP.NET application I am working on may have a couple hundred users trying to connect. We get an error that the maximum number of connections in the pool has been reached. I understand the concept of connection pools in ADO.NET, although in testing I've found that a connection is left "sleeping" on the MS SQL 2005 server days after the connection was made and the browser was closed. I have tried to limit the connection lifetime in the connection string, but this has no effect. Should I push up the max number of connections? Have I completely misdiagnosed the real problem?
All of your database connections must either be wrapped in a try...finally:
SqlConnection myConnection = new SqlConnection(connString);
myConnection.Open();
try
{
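    // ... execute commands and read results here ...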
}
finally
{
    myConnection.Close();
}
Or...much better yet, create a DAL (Data Access Layer) that has the Close() call in its Dispose method and wrap all of your DB calls with a using:
using (MyQueryClass myQueryClass = new MyQueryClass())
{
    // DB Stuff here...
}
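A minimal sketch of such a DAL class (MyQueryClass is the illustrative name from the snippet above; where the connection string comes from is an assumption):
using System;
using System.Configuration;
using System.Data.SqlClient;

public class MyQueryClass : IDisposable
{
    private readonly SqlConnection _connection;

    public MyQueryClass()
    {
        _connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Default"].ConnectionString);
        _connection.Open();
    }

    // ... query methods that use _connection go here ...

    public void Dispose()
    {
        // Close() returns the connection to the pool rather than destroying it.
        _connection.Close();
    }
}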
A few notes: ASP.NET does rely on you to release your connections. They will be released through GC after they've gone out of scope, but you cannot count on this behavior, as it may take a very long time to kick in - much longer than you may be able to afford before your connection pool runs out. Speaking of which, ADO.NET actually pools your connections to the database as you request them (recycling old connections rather than releasing them completely and then requesting them anew from the database). This doesn't really matter as far as you are concerned: you still must call Close()!
Also, you can control the pooling via your connection string (e.g. Min/Max Pool Size, etc.). See this article on MSDN for more information.
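For example, pool limits can be set directly in the connection string (the server, database and numbers here are illustrative):
// Min Pool Size / Max Pool Size are standard SqlClient keywords.
string connString = "Data Source=myServer;Initial Catalog=myDb;" +
                    "Integrated Security=True;Min Pool Size=5;Max Pool Size=200;";

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
    // ... commands here; Dispose/Close returns the connection to the pool.
}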