When my server starts up, it creates a connection to a Mongo database, grabs that `*mgo.Session`, and stores it in a "server" struct so that handlers defined on that struct can use it to serve requests.
I see two ways to actually execute this.
1) Save the `*mgo.Session` in the struct.
The upside of this is that you can call `Session.Copy` in each handler before using the session, to get connection isolation. The downside is that each handler needs to call `.DB("x").C("y")` for a specific x and y. If you want to change those, you need to find every place you use them and change it there. That's less than ideal.
2) Store the `*mgo.Database` or even `*mgo.Collection` object on the server struct.
The upside is that you can configure it in one place and use it. The downside is that there appears to be no way to call `Copy()` on either of those for connection isolation.
How do you recommend storing the mongo session so that you can cleanly use the connection and provide isolation between requests?
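For reference, a minimal sketch of option 1 as described above (the connection string, route, database, and collection names are hypothetical):

package main

import (
	"net/http"

	"gopkg.in/mgo.v2"
)

type server struct {
	session *mgo.Session // master session created at startup
}

func (s *server) handleThing(w http.ResponseWriter, r *http.Request) {
	// Copy gives this request its own socket, providing connection isolation.
	sess := s.session.Copy()
	defer sess.Close()

	// The .DB("x").C("y") call is repeated in every handler (the downside noted above).
	c := sess.DB("x").C("y")
	_ = c // ... use the collection to serve the request ...
}

func main() {
	session, err := mgo.Dial("localhost") // hypothetical connection string
	if err != nil {
		panic(err)
	}
	defer session.Close()

	s := &server{session: session}
	http.HandleFunc("/thing", s.handleThing)
	http.ListenAndServe(":8080", nil)
}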
Related
I have a Golang-based microservice with an in-memory cache that works as follows:
Create object -> Put it in cache -> Persist
Update object -> Update the cache -> Persist
Get -> Get it from the cache
Delete -> Delete cache entry -> Remove from data store.
On a service re-start, the cache is populated from the data store.
The cache organizes the data in different ways that match my access patterns.
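For context, a minimal sketch of that write-through pattern (the Store interface and types are illustrative):

package cache

import "sync"

// Store is the persistent data store behind the cache (illustrative).
type Store interface {
	Save(id string, obj []byte) error
	Delete(id string) error
}

type Cache struct {
	mu    sync.RWMutex
	items map[string][]byte
	store Store
}

func New(store Store) *Cache {
	return &Cache{items: make(map[string][]byte), store: store}
}

// Put covers create and update: write the cache first, then persist.
func (c *Cache) Put(id string, obj []byte) error {
	c.mu.Lock()
	c.items[id] = obj
	c.mu.Unlock()
	return c.store.Save(id, obj)
}

// Get is served entirely from memory.
func (c *Cache) Get(id string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	obj, ok := c.items[id]
	return obj, ok
}

// Delete drops the cache entry, then removes the object from the data store.
func (c *Cache) Delete(id string) error {
	c.mu.Lock()
	delete(c.items, id)
	c.mu.Unlock()
	return c.store.Delete(id)
}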
Note that one client can create the object, and other clients can update it at a later point in time.
Everything works fine as long as I have one replica, but this pattern breaks when I increase the replica count in my deployment.
If I have to go to the DB for each Get, it defeats the purpose of the cache. The first thought is to move the cache out. But this seems like a fairly common problem when moving to multi-replica microservices, so I'm curious to understand the alternatives.
Thanks for your time.
Many things depend on how you structure your application.
One common solution is to use Redis or another distributed cache. The advantage is that all of your services go to the same cache to manage an object, which gives you more consistent data.
Another approach you can take, which is somewhat more complex, is sharding.
For a Get operation, route the request to a specific instance based on the object's ID. That instance will have the object in its cache; if it doesn't, it reads the object from the DB and puts it in its own cache. Every subsequent access to that object then goes to the same instance. The same applies to Update and Delete operations.
For a Create operation:
If you want the DB to generate the ID automatically, the object is first created in the DB, which returns the ID that you then route on. The first access after creation will therefore come from the DB, but after that the object will be in that instance's cache.
If IDs can be generated manually, you can prefix the ID during creation with something that maps to an instance.
Note: in distributed systems there is no single solution. You always have to decide which approach works for your scenario.
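A minimal sketch of the ID-based routing idea described above (replica addresses and names are hypothetical):

package main

import (
	"fmt"
	"hash/fnv"
)

// The set of service replicas (hypothetical addresses).
var replicas = []string{"cache-0:8080", "cache-1:8080", "cache-2:8080"}

// replicaFor hashes the object ID onto one of the replicas, so every
// request for the same object lands on the same instance.
func replicaFor(objectID string) string {
	h := fnv.New32a()
	h.Write([]byte(objectID))
	return replicas[int(h.Sum32()%uint32(len(replicas)))]
}

func main() {
	// All Get/Update/Delete requests for "order-42" route to one instance,
	// so that instance's cache entry stays authoritative for the object.
	fmt.Println(replicaFor("order-42"))
}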
This question is about pg-promise and its recommended usage pattern, and is based on the following assumption:
It does not make sense to create more than a single pgp instance when connecting to the same DB (also enforced by the helpful warning message "Creating a duplicate database object for the same connection.").
Given:
I have 2 individual packages which need a DB connection. Currently they take a connection string in their constructor from outside and create the connection object internally, which leads to the warning about a duplicate connection object. That is fair, since they both talk to the same DB, and there is a possibility for optimisation here (since I am in control of those packages).
Then: To prevent this, I thought of implementing dependency injection, where I pass a resolve function into each library's constructor that gives it the DB connection object.
Issue: There are some settings at the top level, like parsers, helpers, and transaction modes, which may be different for each of these packages. What is the recommendation for such settings, or is there a better pattern to address these issues?
E.g.:
const pg = require('pg-promise');
const instance = pg({schema: 'public'});
instance.pg.types.setTypeParser(1114, str => str); // UTC date parser that one library requires but the other doesn't
const constring = "";
const resolveFunctionPackage1 = () => instance(constring);
const resolveFunctionPackage2 = () => instance(constring);
To sum up: What is the best way to implement dependency injection for pg-promise?
I have 2 individual packages which need a DB connection. Currently they take a connection string in their constructor from outside and create the connection object internally.
That is a serious design flaw, and it is never going to work well. Any independent package that uses a database must be able to reuse an existing connection pool, which is the most valuable resource when it comes to connection usage. Blindly duplicating a connection pool inside an independent module will use up the available physical connections and hinder the performance of all other modules that need to use the same physical connections.
If a third-party library supports pg-promise, it should be able to accept an instantiated db object for accessing the database.
And if the third-party library supports only the base driver, it should at least accept an instantiated Pool object. In pg-promise, the db object exposes the underlying Pool object via db.$pool.
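For example, a minimal sketch of that injection pattern (the module, class, and query are hypothetical):

// my-module.js: a reusable module that accepts a shared db object
// instead of creating its own connection pool.
class MyModule {
    constructor(db) {
        this.db = db; // an instantiated pg-promise database object
    }

    getUser(id) {
        return this.db.oneOrNone('SELECT * FROM users WHERE id = $1', [id]);
    }
}

module.exports = MyModule;

// app.js: the client application owns the single pgp instance and its pool.
// const pgp = require('pg-promise')();
// const db = pgp(connectionString);
// const myModule = new MyModule(db);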
What happens when they want to set conflicting type parsers?
There will be a conflict, because pg.types is a singleton from the underlying driver, so it can only be configured in one way. It is an unfortunate limitation.
The only way to avoid it is for reusable modules to never re-configure the parsers; that should only be done within the actual client application.
UPDATE
Strictly speaking, one should avoid splitting the database-access layer of an application into multiple modules; a number of problems can follow from that.
But specifically for separating type parsers, the library supports setting custom type parsers at the pool level. See the example here. Note that the update is just for TypeScript, i.e. in JavaScript clients it has been working for a while.
So you can still have your separate module create its own db object, but I would advise that you then limit its connection pool size to the minimum, like 1:
const moduleDb = pgp({
    // ...connection details...
    max: 1, // set the pool size to just 1 connection
    types: /* your custom type parsers */
});
In a Meteor app, real-time reactive updates between all connected clients are achieved by writing to collections and publishing and subscribing to the right data. In the normal case this also means database writes.
But what if I want to sync particular data that does not need to be persistent, and I would like to save the overhead of writing to the database? Is it possible to use Minimongo or other in-memory caching on the server while still preserving DDP synchronisation to all clients?
Example
In my app I have multiple collapsed threads, and I want to show which users currently have a particular thread expanded:
Viewed by: Mike, Johny, Steven ...
I could store the information in the threads collection or make a separate viewers collection and publish the information to the clients. But there is actually no point in making this information persistent and taking the overhead of database writes.
I am confused by the collections documentation, which states:
OPTIONS
connection Object
The server connection that will manage this collection. Uses the default connection if not specified. Pass the return value of calling DDP.connect to specify a different server. Pass null to specify no connection.
and
... when you pass a name, here’s what happens:
...
On the client (and on the server if you specify a connection), a Minimongo instance is created.
But if I create a new collection and pass the options object with connection: null
// Creates a new Mongo collection and exports it
export const Presentations = new Mongo.Collection('presentations', {connection: null});

/**
 * Publications
 */
if (Meteor.isServer) {
    // This code only runs on the server
    Meteor.publish(PRESENTATION_BY_MAP_ID, (mapId) => {
        check(mapId, nonEmptyString);
        return Presentations.find({ matchingMapId: mapId });
    });
}
no data is being published to the clients.
TLDR: it's not possible.
There is no magic in Meteor that allows data to be synced between clients without the data transiting through the MongoDB database. The whole sync process through publications and subscriptions is triggered by MongoDB writes. Hence, if you don't write to the database, you cannot sync data between clients (using the native pub/sub system available in Meteor).
After countless hours of trying everything possible, I found a way to do what I wanted:
export const Presentations = new Mongo.Collection('presentations', Meteor.isServer ? {connection: null} : {});
I checked MongoDB and no presentations collection is being created. Also, on every server restart the collection is empty. There is a small downside on the client: even when collectionHandle.ready() is truthy, findOne() first returns undefined, and the data is synced in afterwards.
I don't know if this is the right/preferable way, but it was the only one that worked for me so far. I tried to leave {connection: null} in the client code as well, but wasn't able to achieve any sync even though I implemented the added/changed/removed methods.
Sadly, I wasn't able to get any further help, even in the Meteor forum here and here.
According to the Meteor docs there are 'server-side', 'client-side' and 'local' collections. Is there a way to change that 'status' (i.e. whether a collection is server-side, client-side or local) in a running app?
Use case: a web application where users can register and log in. They can store sensitive data. Depending on the user's personal preferences, they should be able to choose whether that data is stored locally or on the server (a general decision, not case by case).
Current approach: It works fine if I instantiate the collection either locally, CollectionName = new Mongo.Collection(null);, or server side, CollectionName = new Mongo.Collection('collectionName');.
But I can't think of an approach that makes it possible for the user to change the collection's status.
Is there a way to do this?
Or is a workaround needed (e.g. create both a local and a server-side collection and just decide which one to use for insert/update/find, which would mean a lot of duplicate code)?
Edit: To make things clear: I want the user to be able to choose whether their data is stored in a collection that is synced with the server or in a collection without any syncing.
No, you can't change the type of a collection on a running app.
I think you are confused about what these terms mean. "Client-side" collections aren't permanently stored in local storage; it just means the collection lives in the browser's memory. Likewise, "server-side" collections are those that reside in the server's memory. The difference is not how a collection is defined, but where the code runs. Most collections have a client-side and a server-side counterpart, and they are kept synchronized via pub/sub. Server-side collections are also synchronized with MongoDB (using the oplog).
Local collections can live in both places, but "local" means they aren't synchronized with anything.
I probably don't fully understand what you are trying to do, but local collections do not persist data.
If you pass null as the name, then you’re creating a local collection. It’s not synchronized anywhere; it’s just a local scratchpad that supports Mongo-style find, insert, update, and remove operations. (On both the client and the server, this scratchpad is implemented using Minimongo.)
This means any data added to them on the client will be blown away when the user closes their browser (unless you are also using one of the local-collection persistence Meteor packages), and any data added to them on the server will be blown away when the Meteor app is restarted. So I don't think you really want to use local collections.
Instead, I would use a regular collection (where a name is passed to the constructor) and control who can change data, and what data is allowed to change, via either the standard allow/deny options (not really recommended anymore, but still a valid approach) or Meteor methods (the preferred approach).
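For illustration, a minimal sketch of the methods approach (the method name, collection, and fields are hypothetical):

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

// A named collection: synced with the server and persisted in MongoDB.
export const UserData = new Mongo.Collection('userData');

Meteor.methods({
    // Clients call this method instead of writing to the collection directly,
    // so the server decides who may change which data.
    'userData.update'(docId, changes) {
        check(docId, String);
        check(changes, Object);
        if (!this.userId) {
            throw new Meteor.Error('not-authorized');
        }
        // Only update documents owned by the calling user.
        return UserData.update({ _id: docId, user_id: this.userId }, { $set: changes });
    },
});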
Or, another option could be to pass your publication function a list of fields that the user wishes to see on the client for that given session. To do this, you define a new publication that receives a displayFields argument, which you then use as the fields specifier option in your collection's .find().
Meteor.publish("userData", function (userId, displayFields) {
// validate the structure and contents of displayFields
// retrieve the data but only use the fields that the user requested
return UserData.find({user_id: userId}, {fields: displayFields});
});
Then on the client side you would subscribe to this and pass in the fields the user wishes to make visible on the client.
var displayFields = {
    // MongoDB field specifiers cannot mix inclusion and exclusion
    // (except for _id), so list only the fields to include.
    firstname: 1,
    lastname: 1,
    //...
};

// userId comes from your app's state; the arguments must match the
// publication's (userId, displayFields) signature.
this.subscribe("userData", userId, displayFields);
I have a classic spray + slick HTTP server which is my database access layer, and I'd like to have a healthcheck route to ensure my server is still able to reach my DB.
I could do it with a generic SQL query, but I was wondering whether there is a better way to check that the connection is alive and usable without actually adding load to the database (or at least with the minimum possible load).
So, pretty much:
val db = Database.forConfig("app.mydb")
[...]
db.???? // Do the check here
Why do you want to avoid executing a query against the database?
I think the best health check is to actually use the database as your application would (actually connecting and running a query). With that in mind, you can run a SELECT 1 against your DB and verify that it responds accordingly.
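A minimal sketch of that check with Slick (assuming Slick 3.x and the PostgreSQL profile; adapt the profile import to your database):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.PostgresProfile.api._

object HealthCheck {
  val db = Database.forConfig("app.mydb")

  // Runs the cheapest possible query and maps the result to an alive/dead flag.
  def isAlive(): Future[Boolean] =
    db.run(sql"SELECT 1".as[Int].head)
      .map(_ == 1)
      .recover { case _ => false }
}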