I’ve been working with AEM for over a year now and lately I’ve been trying to move into a high availability setup for author.
My problem is that whenever I spin up a server, add sites, and spin up another server, the data doesn’t persist to the new instance. I know why this doesn’t work in the traditional setup (the repository is stored locally on the file system). However, I’ve attempted using the S3 backend, and it results in the same problem: the data doesn’t persist onto the new instance.
I've read about using MongoMK (https://helpx.adobe.com/experience-manager/6-3/sites/deploying/using/recommended-deploys.html), i.e. MongoDB as the store, but they also recommend using S3 as the backend.
My question is: does anyone have experience with multiple AEM author instances sharing the same data and node stores? If so, do you have any suggestions on how to get this working, or resources where I can read about it?
After further research it seems the only option for backend clustering is to use MongoDB. My attempts to use MongoDB as a backend for AEM have failed. When I attempt to use the crx3 and crx3mongo run modes, it looks like AEM hangs after opening a connection to Mongo. I have verified that nothing is getting placed into the DB: show dbs returns 0.000GB for the corresponding database.
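For anyone hitting the same hang, a quick sanity check from the mongo shell helps confirm whether the repository ever started writing anything. This is only a sketch; the database name "aem" is an assumption (use whatever name you passed to the quickstart), and the collection names are the ones Oak's DocumentNodeStore typically creates:

// Connect first: mongo <mongo-host>:27017
use aem                  // assumption: the Oak database name configured at startup
show collections         // Oak's DocumentNodeStore normally creates "nodes", "clusterNodes", etc.
db.nodes.count()         // should grow once AEM actually writes to the repository
rs.status()              // production MongoMK setups typically run against a replica set, not a bare standalone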
Related
I'm looking to deploy Moodle in the cloud; however, I have some 50-odd sites which require access to this Moodle, possibly even while temporarily offline. So I'm looking into replicating Moodle down onto each site. From what I understand there are two data stores that require replication: moodledata and the database, PostgreSQL in our case. moodledata, if I'm not mistaken, contains the multimedia data, and the database contains, among other things, all the user records. Luckily the multimedia data will be centralized and is thus synced only one way, down to the nodes; that seems doable. Where I'm stuck is how to handle the Postgres database, where the sync will need to be bidirectional.
What I mean by writable is that you can CRUD on each database, and it automatically syncs with the others so that all of them are in sync all the time (as much as possible).
I want to start a project for a company with some tricky requirements.
The company is present in many locations (at least 5) and wants the app to run locally (with a local database), but when there's a change (create, update, or delete), the change is propagated to the other databases.
The goal is to have them all in sync at every moment, but with the possibility that if the internet connection is lost at one site, that site can continue to use the app properly, since it is actually connected to the local database. That's why they don't want a totally online database.
They use MongoDB.
I've looked at replica set technology, but since it has a single primary, it seems complicated for this use case.
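For context, my understanding is that a replica set would look roughly like the sketch below (the host names are placeholders), with all writes still going through the single primary, which is exactly what worries me for the offline case:

// Sketch only: a three-member replica set, one member per site.
// Host names are placeholders; writes made at the other sites still have to
// reach the current primary over the network.
rs.initiate({
  _id: "companyReplSet",
  members: [
    { _id: 0, host: "site-a.example.com:27017", priority: 2 },
    { _id: 1, host: "site-b.example.com:27017", priority: 1 },
    { _id: 2, host: "site-c.example.com:27017", priority: 1 }
  ]
})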
Please can you share solutions to such a situation?
Hi, I am using MongoDB and deploying it in AWS, but the data is not being saved properly on the server.
I created many collections, but the data is not present inside the collections.
Do I need any other settings? Please let me know.
The database named READ_ME_TO_RECOVER_YOUR_DATA suggests that you created the mongod server without authentication, and some hackers were able to steal/delete all of your data, and are probably now expecting you to pay some bitcoin to get it back.
I doubt they actually made a backup of your data before deleting it, since they don't actually care about you or your data.
There was a blog post from the MongoDB folks a couple of years ago about how to avoid this: https://www.mongodb.com/blog/post/update-how-to-avoid-a-malicious-attack-that-ransoms-your-data
The #1 recommendation is to enable authentication.
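A minimal sketch of what that looks like (the user name and password below are placeholders, not values from your setup): create an administrative user first, then restart mongod with authentication turned on.

// In the mongo shell, against your own server:
use admin
db.createUser({
  user: "siteAdmin",                    // placeholder name
  pwd: "choose-a-strong-password",      // placeholder password
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
// Then restart mongod with authentication enabled
// (security.authorization: enabled in mongod.conf, or the --auth flag).

Also make sure the server is not exposed to the public internet; in AWS that usually means restricting the security group so only your application servers can reach port 27017.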
I’m building a new web application which needs to work seamlessly even when there is no internet connection. I’ve selected Angular and am building a PWA, as it comes with built-in functionality to make the application work offline. So far I have the service worker working perfectly; driven by the manifest file, it very nicely caches the static content, and I’ve set it to cache a bunch of API requests which I want to use whilst the application is offline.
In addition to this, I’ve used localStorage to store attempts to invoke put, post and delete API requests when the user is offline. Once the internet connection is re-established, the requests stored in localStorage are sent to the server.
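For what it’s worth, the queue-and-replay part is roughly this shape (the key and function names below are mine, not from Angular or any library):

const QUEUE_KEY = 'offline-request-queue';

// Store a write attempted while offline so it can be replayed later.
function queueRequest(method, url, body) {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  queue.push({ method, url, body, queuedAt: Date.now() });
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Replay queued writes once the browser reports connectivity again.
window.addEventListener('online', async () => {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  for (const req of queue) {
    await fetch(req.url, {
      method: req.method,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(req.body)
    });
  }
  localStorage.removeItem(QUEUE_KEY);
});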
So far in my proof of concept, the user can access content whilst offline, edit data, and the data gets synced with the server once the user’s internet connection is re-established. This is where my quandary begins though. There is API request data cached automatically by the service worker as defined in the manifest file, and there is a separate store of data for edits made whilst offline. This leads to a situation where the user edits some data, saves it, refreshes the page, and the stale data is served back from the service worker’s cached API response.
Is there a built in mechanism to update API data cached automatically by the service worker? I don’t fancy trying to unpick this manually as it seems hacky and I can’t imagine it’ll be future proof as service workers evolve.
If there isn’t a standard way to achieve what I need to do, is it common for developers to take full control of offline data by storing it all in IndexedDB/localStorage manually? i.e. I could invoke API requests and write some code which caches the results in a structured format in IndexedDB to form an offline database, then writes back to the offline database whenever the user edits some data, and uploads any data edits when the user is back online. I don’t envisage any technical problems with doing this, it just seems like a lot of effort to achieve something which I was hoping to be standard functionality.
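To illustrate what I mean by a manual approach (purely a sketch; the database and store names are invented), it would be something along these lines:

// Hand-rolled offline cache in IndexedDB (all names invented for illustration).
const openRequest = indexedDB.open('offline-db', 1);

openRequest.onupgradeneeded = () => {
  openRequest.result.createObjectStore('apiCache', { keyPath: 'url' });
};

openRequest.onsuccess = () => {
  const db = openRequest.result;

  // Cache an API response so it can be read while offline.
  function cacheResponse(url, data) {
    db.transaction('apiCache', 'readwrite')
      .objectStore('apiCache')
      .put({ url, data, cachedAt: Date.now() });
  }

  // Read the cached copy back when the network is unavailable.
  function readCached(url, callback) {
    const request = db.transaction('apiCache', 'readonly')
      .objectStore('apiCache')
      .get(url);
    request.onsuccess = () => callback(request.result && request.result.data);
  }
};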
I’m fairly new to developing with Angular, but have many years of other development experience. So please forgive me if I’m asking obvious questions, I just can’t seem to find a good article on best practices for working with data storage with service workers.
Thanks
I have a project where my users can edit local data when they are offline, and I use Cloud Firestore to have a cached local database available. If I understood you correctly, this is exactly your requirement.
The benefit of this solution is that with just one line of code you get not only a local DB, but also automatic synchronisation of all changes made offline with the server once the client gets online again.
firebase.firestore().enablePersistence()
  .catch(function (err) {
    // Persistence could not be enabled, e.g. multiple tabs are open or the
    // browser does not support it; log the error and carry on without the cache.
    console.error(err);
  });
// Subsequent queries will use persistence, if it was enabled successfully.
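For example, once persistence is enabled, the usual reads and writes work against the local cache transparently (the collection and field names below are only examples):

var db = firebase.firestore();

// A write made while offline is queued locally and pushed to the server later.
db.collection('notes').add({ text: 'written while offline', createdAt: Date.now() });

// A read while offline is served from the local cache.
db.collection('notes').get().then(function (snapshot) {
  snapshot.forEach(function (doc) {
    console.log(doc.id, doc.data());
  });
});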
If using this NoSQL database is an option for you, I would go with it; otherwise you need to implement the local updates yourself, as there is no built-in solution for that.
I want to dockerize my production application. I've got MongoDB set up on the server and I want to remove it and create a Docker container with MongoDB that will work on the existing data. I have already tested this approach: I created a Docker container whose storage points at the host storage with the existing data. Basically, it's a new MongoDB instance working on data created by the previous MongoDB that existed on the host. And it works: I can query data, my application can connect to this database, and so on. My question is, what are the risks of this approach? Is it even a good approach, or, when I create the new MongoDB instance, should I instead import a dump of the data from the previous one?
I guess there's no right and wrong in this case. It depends on how you want to have it working.
Let's say you left MongoDB running in the cloud.
Is it a development database? If yes, how would you keep coding / testing without access to that?