Hi, I am using MongoDB and deploying it on AWS, but the data is not being saved properly on the server.
I created many collections, but the data is not present inside them.
Do I need any other settings? Please let me know.
The database named READ_ME_TO_RECOVER_YOUR_DATA suggests that you created the mongod server without authentication, and some hackers were able to steal/delete all of your data, and are probably now expecting you to pay some bitcoin to get it back.
I doubt they actually made a backup of your data before deleting it, since they don't actually care about you or your data.
There was a blog post from the MongoDB folks a couple of years ago about how to avoid this: https://www.mongodb.com/blog/post/update-how-to-avoid-a-malicious-attack-that-ransoms-your-data
Their #1 recommendation is to enable authentication.
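As a minimal sketch (the user name and password below are placeholders): create an admin user from the mongo shell while the server is still unauthenticated, then turn authorization on in the config file and restart mongod:

    // run in the mongo shell; user name and password are placeholders
    use admin
    db.createUser({
      user: "admin",
      pwd: "choose-a-strong-password",
      roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    })

Then in /etc/mongod.conf:

    security:
      authorization: enabled
    net:
      bindIp: 127.0.0.1   # only bind a public interface if you truly need remote access

Restricting bindIp matters too: a mongod that is only reachable from localhost (or from inside your VPC) can't be hit by these drive-by attacks in the first place.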
Now that I have a real device (LWM2M, using the Wakaama implementation) running and sending data to Orion (I can confirm from the server log: 'Observers created successfully'), I want to proceed with historical data storage.
I am not sure how to start. I am using a docker-compose file to start all services, and I already have the postgres image pulled and running; I would like to use it for persistence.
I guess I need to create the DB schema to use for storage; any link to a cygnus-postgres installation/persistence guide would be appreciated.
Should anyone be looking for how to handle a situation similar to mine (above), I found the explanations here, in particular the customisations section, very helpful; please read it.
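In case it helps anyone else, here is a minimal sketch of a Cygnus service with the PostgreSQL sink in a docker-compose file. The environment variable names are taken from the fiware/cygnus-ngsi image documentation, so verify them against the docs for your Cygnus version:

    cygnus:
      image: fiware/cygnus-ngsi
      depends_on:
        - postgres
      ports:
        - "5055:5055"   # PostgreSQL sink service port
        - "5080:5080"   # Cygnus management API
      environment:
        - CYGNUS_POSTGRESQL_HOST=postgres   # service name of your postgres container
        - CYGNUS_POSTGRESQL_PORT=5432
        - CYGNUS_POSTGRESQL_USER=postgres
        - CYGNUS_POSTGRESQL_PASS=example    # placeholder credential
        - CYGNUS_POSTGRESQL_SERVICE_PORT=5055
        - CYGNUS_POSTGRESQL_ENABLE_CACHE=true

As far as I can tell from the docs, you should not need to create the schema by hand: the PostgreSQL sink creates its schemas and tables automatically, based on the FIWARE service and service-path headers of the notifications it receives.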
I've been working with AEM for over a year now, and lately I've been trying to move to a high-availability setup for author.
My problem is that whenever I spin up a server, add sites, and then spin up another server, the data doesn't persist to the new instance. I know why this doesn't work in the traditional setup (the repository is stored locally on the file system). However, I've attempted using the S3 backend, and it results in the same problem: the data doesn't persist onto the new instance.
I've read about using MongoMK (https://helpx.adobe.com/experience-manager/6-3/sites/deploying/using/recommended-deploys.html), i.e. MongoDB as a store, but they also recommend using S3 as the backend.
My question is: does anyone have experience with multiple AEM author instances sharing the same data and node stores? If so, do you have any suggestions on how to get this working, or resources where I can read about it?
After further research it seems the only option for backend clustering is to use MongoDB. My attempts to use MongoDB as a backend for AEM have failed. When I attempt to use the crx3 and crx3mongo run modes, AEM appears to hang after opening a connection to Mongo. I have verified that nothing is being placed into the DB: show dbs returns 0.000GB for the corresponding database.
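For anyone attempting the same setup: the documented way to point a quickstart at MongoDB is via the oak.mongo.* system properties together with the crx3,crx3mongo run modes. Roughly (the jar name, heap size, URI, and database name here are placeholders):

    java -Xmx4096m \
      -Doak.mongo.uri=mongodb://mongo-host:27017 \
      -Doak.mongo.db=aem-author \
      -jar aem-author-4502.jar -r crx3,crx3mongo

Two things worth checking if startup stalls like mine did: Adobe recommends running MongoDB as a replica set rather than a standalone mongod, and each AEM release supports only specific MongoDB versions, so an unsupported version can cause trouble at startup.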
I started working on a Symfony project, and we use MongoDB for session storage. In the code there are no calls to save the session, yet in production we have a MongoDB database of 150MB, which in my opinion is far too big. The problem is that the expiry queries are really slow.
I have some questions:
What kind of information is stored in the session database if there are no calls in our code to save any kind of information? Probably it is used by Symfony to store some internal info.
Second, how can I improve the slow queries so requests don't get stuck on session queries?
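On the second question: expiry lookups are slow when the expiry field has no index. A TTL index on that field lets MongoDB expire old sessions by itself and keeps the collection small. A sketch, assuming the default expires_at field name of Symfony's MongoDbSessionHandler and a collection called sessions (use whatever collection you configured for the handler):

    // run in the mongo shell against the session database
    db.sessions.createIndex(
      { "expires_at": 1 },
      { expireAfterSeconds: 0 }   // remove each document once its expires_at date passes
    )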
I want to dockerize my production application. I've got MongoDB set up on the server, and I want to remove it and replace it with a Docker container running MongoDB against the existing data. I have already tested this approach: I created a Docker container whose storage points at the host storage holding the existing data. Basically it's a new MongoDB instance working on the data created by the previous MongoDB that existed on the host. And it works: I can query the data, my application can connect to the database, and so on. My question is: what are the threats of this approach? Is it even a good approach, or should I instead import a dump of the data from the previous instance into the new one?
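For reference, the approach described above boils down to a bind mount; a sketch, where /var/lib/mongodb stands in for wherever your existing mongod keeps its dbPath:

    # stop the host mongod first: two mongod processes must not share a data directory
    docker run -d --name mongo \
      -p 27017:27017 \
      -v /var/lib/mongodb:/data/db \
      mongo:4.4   # pin to the same major version that created the data files

One concrete risk with reusing the files directly: MongoDB's on-disk format is only compatible across adjacent major versions, so the container image must match (or be a supported upgrade step from) the version that wrote the data. A mongodump/mongorestore into a fresh instance avoids that coupling, at the cost of some downtime.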
I guess there's no right and wrong in this case. It depends on how you want to have it working.
Let's say you left MongoDB running in the cloud.
Is it a development database? If yes, how would you keep coding / testing without access to that?
This question has been asked multiple times, here and here, and the answer to getting this working is fairly straightforward: add an environment variable to your bash_profile, and all Meteor instances on your localhost will share that MONGO_URL.
What I've noticed, however, is that while this may be the case, there's quite a bit of latency in Meteor's "reactivity". I've tested this with two very lean Meteor apps with empty collections. Inserting a document into a collection from one Meteor app, while my second app queries that same collection and prints out a field from its documents, does work, but there's a noticeable lag before the update shows. I've ruled out the collection insertion as the source of the lag (a simple console.log callback on the client of the first app logs the id of the newly inserted document immediately).
My purpose for having multiple apps (two to be precise) sharing the same MongoDB is to separate an admin panel from a mobile app without going crazy regarding name-spacing and bloat. This configuration works, but I'm not sure it's the "proper" way of accomplishing the task, and it certainly seems to be causing a performance hit.
Any insight into this matter would be appreciated. Thank you!
EDIT: To clarify, the db URL I'm using is on my localhost, and isn't something hosted online.
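For context, the setup from those linked answers boils down to one line in your bash_profile; 3001 is the port Meteor's bundled mongod listens on when the first app runs on the default port 3000, so adjust to your setup:

    # every Meteor app started from this shell will share the same database
    export MONGO_URL="mongodb://127.0.0.1:3001/meteor"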
When you use an external database, Meteor will by default use periodic polling (every few seconds) to observe any changes. The delay you are experiencing is a result of this polling process. You can remove the delay and reduce your app's CPU usage by taking advantage of Meteor's oplog tailing feature. In order to use it you will:
Get access to a mongodb instance with the oplog turned on.
Set the environment variable MONGO_OPLOG_URL so your app(s) can read the oplog.
Personally, I'd recommend compose.io for this. They provide exactly this as part of their basic elastic deployment. See this post for detailed instructions.
For users who wish to connect to the oplog Meteor creates locally for you, you can obtain the URL via:
MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle._oplogUrl
It should end up looking something like mongodb://127.0.0.1:3001/local
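Putting the two variables together, starting a second app against the first app's database and oplog would look roughly like this (the ports and database names are the defaults from a local meteor run, so adjust as needed):

    # second app on its own port, sharing the db and oplog of the app on port 3000
    MONGO_URL=mongodb://127.0.0.1:3001/meteor \
    MONGO_OPLOG_URL=mongodb://127.0.0.1:3001/local \
    meteor run --port 4000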