I started working on a Symfony project that uses MongoDB for session storage. There are no explicit calls in the code to save session data, but in production the session database has grown to 150 MB, which seems far too big to me. The problem is that the expiry queries are really slow.
I have some questions:
What kind of information is stored in the session database if our code never explicitly saves anything? Presumably Symfony itself uses it to store some internal data.
My second question is how to improve the slow queries so that requests don't get stuck waiting on session lookups.
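For context, Symfony's MongoDbSessionHandler writes one document per session, including the serialized session payload and an expiry field, which explains the data even without explicit save calls. A minimal sketch of indexing that field in the mongo shell, assuming a collection named sessions and the handler's default expiry field expires_at (check your handler configuration for the real names):
// Plain index: speeds up lookups and garbage collection on the expiry field.
db.sessions.createIndex({ expires_at: 1 })
// Alternative: a TTL index makes MongoDB delete expired sessions on its own.
db.sessions.createIndex({ expires_at: 1 }, { expireAfterSeconds: 0 })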
Hi, I am using MongoDB deployed on AWS, but the data is not being saved properly on the server.
I created many collections, but the data is not present inside them.
Do I need any other setting? Please let me know.
The database named READ_ME_TO_RECOVER_YOUR_DATA suggests that you created the mongod server without authentication, and some hackers were able to steal/delete all of your data, and are probably now expecting you to pay some bitcoin to get it back.
I doubt they actually made a backup of your data before deleting it, since they don't actually care about you or your data.
There was a blog post from the MongoDB folks a couple of years ago about how to avoid this: https://www.mongodb.com/blog/post/update-how-to-avoid-a-malicious-attack-that-ransoms-your-data
The #1 recommendation is to enable authentication.
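A minimal sketch of that first step in the mongo shell (names and password are placeholders): create an administrative user, then restart mongod with authentication enabled, either with the --auth flag or with security.authorization: enabled in mongod.conf.
use admin
db.createUser({
  user: "admin",
  pwd: "choose-a-strong-password",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})
It also helps to bind mongod to a private address rather than exposing it to the whole internet, unless you genuinely need remote access.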
I am using a web application for data entry that stores the data-entry form (an HTML form) in the browser's IndexedDB cache.
I can see the form in the browser dev tools.
I want to know how long IndexedDB will keep the form in the browser. Is it possible that the cached entry has been the same for months? Will closing the browser clear the keys, or is this storage persistent enough to last a few months?
Is it possible to find out when (the exact date and time) the entry was written to IndexedDB?
I am asking because I suspect a discrepancy in the form for some of our users, as the data being sent is slightly different from what we expect.
Any help is appreciated.
Thanks
DHIS2, the application you are referring to, ships with an app that you and other users can use to clear any cached data. It is named "Browser Cache Cleaner" and gives you a list of different things to clear. I would try it and see whether your users still have these issues.
Databases don't expose a timestamp of when a record was last modified. That is something the developer has to make the application store in the records themselves. For example, you could keep created_at and modified_at fields to track when each record was created and when it was last modified.
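A minimal sketch of that idea applied to IndexedDB (the database, store, and field names here are made up for illustration): each record carries its own createdAt, which you could later inspect in dev tools.
const request = indexedDB.open('formCacheDb', 1);
request.onupgradeneeded = () => {
  // Created once, when the database is first opened at this version.
  request.result.createObjectStore('forms', { keyPath: 'id' });
};
request.onsuccess = () => {
  const tx = request.result.transaction('forms', 'readwrite');
  tx.objectStore('forms').put({
    id: 'entry-form',
    html: '<form>...</form>',
    createdAt: new Date().toISOString() // timestamp you can read back later
  });
};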
IndexedDB is a persistent client-side storage API, so yes, the data survives browser restarts and generally stays until the user clears the browser's storage.
If there is some discrepancy in the form being sent, I would look at the caching strategy. Offline data caching is a pretty broad topic (and I don't know much about your application), but Google's Offline Cookbook is a good place to start digging into it, along with caching strategies for your use case.
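One caveat worth knowing: by default browsers treat this storage as "best effort" and may evict it under disk pressure; a page can explicitly request persistent storage (a sketch; browser support varies):
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then((granted) => {
    console.log(granted
      ? 'Storage will only be cleared by explicit user action'
      : 'Storage may still be evicted under pressure');
  });
}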
I am developing a multi-tenant application where each tenant has its own MongoDB database.
All tenants share the same UI.
I need one MongoDB database for all user accounts, and a separate MongoDB database per tenant for data.
I'm new to Meteor and would like to know how I can dynamically select the database when I publish the collections.
import { DDP } from 'meteor/ddp-client';
import { MongoObservable } from 'meteor-rxjs';
// 'urltomongodb' is a placeholder for the remote server URL.
export const collects = new MongoObservable.Collection('collectionname', {
  connection: DDP.connect('urltomongodb')
});
Any help would be appreciated.
As far as I know, the DDP utilities are there for people who wish to connect to a Meteor server from a non-Meteor platform, either front end or server.
There is, of course, nothing to stop you using DDP.connect() to connect to another server, but you will also need to manage that connection, including any retries if it becomes unavailable.
I would suggest that an easier path is to manage all of your data in one database; trying to separate it is non-trivial, because it means doing something Meteor doesn't normally do. If you structure your data accordingly, it should be quite feasible to keep everything in one database, as in the sketch below.
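For illustration, a minimal sketch of that single-database approach (the documents collection and the tenantId field are assumptions, not something from the question): every document carries a tenant identifier, and each publication filters on it.
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

// Hypothetical shared collection; every document stores a tenantId field.
const Documents = new Mongo.Collection('documents');

Meteor.publish('tenantDocuments', function (tenantId) {
  check(tenantId, String);
  // Each client only ever receives its own tenant's documents.
  return Documents.find({ tenantId });
});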
This question has been asked multiple times, here and here, and the answer to get this working is fairly straightforward: add an environment variable to your bash_profile and all Meteor instances on your localhost will share that MONGO_URL.
What I've noticed, however, is that while this may be the case, there's quite a bit of latency in Meteor's "reactivity". I've tested this with two very lean Meteor apps with empty collections. Inserting a document into a collection from one Meteor app, while the second app queries that same collection and prints out a field from the documents, does work, but there's a noticeable lag before it updates. I've ruled out the insertion itself as the source of the lag (a simple console.log callback on the client of the first app, logging the id of the newly inserted document).
My purpose for having multiple apps (two to be precise) sharing the same MongoDB is to separate an admin panel from a mobile app without going crazy regarding name-spacing and bloat. This configuration works, but I'm not sure it's the "proper" way of accomplishing the task, and it certainly seems to be causing a performance hit.
Any insight into this matter would be appreciated. Thank you!
EDIT: To clarify, the db URL I'm using is on my localhost, and isn't something hosted online.
When you use an external database, by default Meteor uses periodic polling (every few seconds) to observe any changes. The delay you are experiencing is a result of this polling process. You can remove the delay and reduce your app's CPU usage by taking advantage of Meteor's oplog tailing feature. To use it, you will need to:
Get access to a MongoDB instance with the oplog turned on.
Set the environment variable MONGO_OPLOG_URL so your app(s) can read the oplog.
Personally, I'd recommend compose.io for this. They provide exactly this as part of their basic elastic deployment. See this post for detailed instructions.
If you want to connect to the oplog that Meteor creates locally for you in development, you can obtain its URL via:
MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle._oplogUrl
It should end up looking something like mongodb://127.0.0.1:3001/local
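With that URL in hand, you would start your app with both variables set. A hypothetical local example, using Meteor's development defaults (MongoDB on port 3001, app data in the meteor database, oplog in local):
MONGO_URL=mongodb://127.0.0.1:3001/meteor MONGO_OPLOG_URL=mongodb://127.0.0.1:3001/local meteor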
My site runs on MongoDB and I'm using MongoLab as a host for the database. I have 10-15 people using the site at any given time and would rather not 'switch them off' if at all possible.
If I have a MongoDB dump that I'd like to restore, it takes a very long time (the file is around 360 MB, which takes a good while to upload at my connection speed). What's more, it appears that MongoDB wants me to delete my collection before doing a restore, so users would have no data to look through while it's updating.
Is there any way around this, besides, say, having two MongoLab accounts, one "active" and one for uploading backups, and switching between the two when I need to do a restore?
Is there a general recommended strategy for this sort of thing?