I'm trying to sync data between a real MongoDB database on a remote server and local storage on the client, and I've come across a library called Minimongo.
Here is what we're aiming for:
We're trying to sync only a small portion of the data, e.g. a single document, so that the user has something to work with on the client without pinging Mongo at every moment.
Then, if the user closes the browser and logs back in, they should still have the cached part of the document, and we should be able to connect to Mongo to check whether the document has changed. If it has, we restore it from the remote MongoDB instance.
This question is also relevant and similar to what we're trying to achieve.
So how can we implement this workflow using Minimongo, or is there another library/tool like it that can help with this process?
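To make the workflow concrete, here is a rough sketch of the caching/revalidation logic we have in mind; the /api/docs endpoint and the updatedAt field are just placeholders, not something Minimongo provides:

// Rough sketch of the intended flow (placeholder endpoint and field names).
const CACHE_KEY = "cachedDoc";

async function loadDocument(docId) {
  const cached = JSON.parse(localStorage.getItem(CACHE_KEY) || "null");

  // Ask the server only for the document's last-modified marker.
  const res = await fetch(`/api/docs/${docId}?fields=updatedAt`);
  const { updatedAt } = await res.json();

  if (cached && cached.updatedAt === updatedAt) {
    // Nothing changed on the remote MongoDB: keep working with the cached copy.
    return cached;
  }

  // Document changed (or no cache yet): fetch the portion we need and re-cache it.
  const fresh = await (await fetch(`/api/docs/${docId}?fields=summary,updatedAt`)).json();
  localStorage.setItem(CACHE_KEY, JSON.stringify(fresh));
  return fresh;
}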
Related
I have a large Mongo database (5M documents). I edit the database from an offline application, so I store the database on my local computer. However, I want to be able to maintain an online copy of the database, so that my website can access it.
How can I update the online copy regularly, without having to upload multiple GBs of data every time?
Is there some way to "track changes" and upload only the diff, like in Git?
Following up on my comment:
Can't you store the commands you used on your offline db, and then apply them on the online db, through a script running over SSH for instance? Or even better, upload a file with all the commands you ran on your offline base to your server and then execute them with a cron job or a bash script? (The only requirement would be for your bases to have the same starting point and the same state when you execute the script.)
I would recommend storing all the queries you execute on your offline database. There are several ways to do this; the one I can think of is setting the profiling level to log all your queries.
(Here is a more detailed thread on the matter: MongoDB logging all queries)
Then you would have to extract them somehow (grep?), or store them directly in another file on the fly as they are executed.
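For example, something along these lines in the mongo shell on the offline database (level 2 profiles all operations into the system.profile collection; the database name is just a placeholder):

// placeholder database name
use mydb
// 2 = profile every operation into db.system.profile
db.setProfilingLevel(2)
// later, pull out the write operations you want to replay on the online copy
db.system.profile.find({ op: { $in: ["insert", "update", "remove"] } }).sort({ ts: 1 })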
For uploading the script, it depends on what you would like to use, but I suppose you would need to do it during low-usage hours, and you could automate the task with a cron job and an SSH tunnel.
I guess it all depends on your constraints (security, downtime, etc.).
I moved a MongoDB database from one Atlas cluster to a different account/different cluster.
To do this I did a dump from the source db and a restore to the new account's cluster.
I did NOT have a problem restoring the db - that went fine - I can visually confirm that the hashes in the new db ARE the same as the old.
When I try to log in to my app (pointed at the source) I get in fine; when I change my db setting and point to the new db, the login fails.
The API code is the same, running locally; the only thing that is different is the connection string.
I am using bcrypt to hash the passwords, but because the API is sitting on my local machine, that pretty much takes any application-layer variable off my list of suspects.
The exception is the connection string: I was using the 3.1 driver connection string to connect to the 'old' cluster, and I decided to try the 3.6 driver version to connect to the 'new' one.
Can someone confirm that moving a db from one cluster to another using the dump and restore method should NOT affect hashed password matching?
And maybe offer suggestions on where to look for answers?
So the only difference in the code is this:
// Old
DB_URI=mongodb://u***:p***@dev0-shard-00-00-1xxx.mongodb.net:27017,dev0-shard-00-01-1xxx.mongodb.net:27017,dev0-shard-00-02-1xxx.mongodb.net:27017/db?ssl=true&replicaSet=Dev0-shard-0&authSource=admin
// New
DB_URI=mongodb+srv://n***:h***#prod-xxx.mongodb.net/test?retryWrites=true
OK, so I finally got around to playing with this, and since the URI was the only change, I switched back to the 3.4 driver syntax (that long ungodly string), and it works fine.
For the record, all my "open" (non-authenticated) API calls, such as signup, requesting a forgotten password, and a slew of drop-down lookups, all processed through the API with the 3.6 driver. I also signed up and logged in fine; the only issue is logging in with an account that was created in the previous cluster while using the new driver connection string.
And as confirmation: now that I have switched the connection string back to 3.4, I cannot log into the account I created with the 3.6 connection string.
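One detail worth noting in the two strings above is that the old URI targets the db database while the new SRV URI targets test. A quick way to take the driver and URI out of the equation is to pull the stored hash from each cluster and compare it with bcrypt directly; this is only a sketch, and the collection/field names (users, email, password) are assumptions:

// Sketch: fetch the same user from each cluster and compare the stored hash directly.
// Collection and field names (users, email, password) are assumptions.
const { MongoClient } = require("mongodb");
const bcrypt = require("bcrypt");

async function checkHash(uri, dbName, email, plaintext) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const user = await client.db(dbName).collection("users").findOne({ email });
    console.log(dbName, "stored hash:", user && user.password);
    console.log("bcrypt match:", user ? await bcrypt.compare(plaintext, user.password) : "no user found");
  } finally {
    await client.close();
  }
}

// checkHash(OLD_URI, "db", "someone@example.com", "their-password");
// checkHash(NEW_URI, "db", "someone@example.com", "their-password");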
I have been using Wowza Streaming Engine for content streaming, and I used MySQL for storing the logs coming from Wowza with the help of the log4j MySQL definitions. To set up MySQL, I followed the instructions on the official Wowza website. The link is below:
https://www.wowza.com/forums/content.php?130-How-to-log-to-a-mySQL-database
However, because MySQL became slower day by day (sometimes even crashing) as the Wowza streaming logs kept coming in and accumulating in the DB (millions of rows), I decided to move the log storage to MongoDB. Accordingly, I used the log4j MongoDB statements below so that it would work just like the MySQL setup.
# log4mongo-java appender (the log4mongo-java jar and the MongoDB Java driver must be on Wowza's classpath;
# the JDBCAppender/MySQL driver lines are not needed for MongoDB)
log4j.appender.MongoDB=org.log4mongo.MongoDbAppender
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port=27017
log4j.appender.MongoDB.databaseName=primarydb
log4j.appender.MongoDB.collectionName=wowza_log
log4j.appender.MongoDB.layout=org.log4mongo.MongoDbPatternLayout
# server_ip, date, time, ... come from the log event; adjust the pattern to however your layout exposes them
log4j.appender.MongoDB.layout.ConversionPattern={"server_ip":"%X{server_ip}","date":"%d{yyyy-MM-dd}","time":"%d{HH:mm:ss}","message":"%m"}
# the appender also has to be added to the relevant logger, e.g.:
# log4j.rootCategory=INFO, stdout, serverAccess, MongoDB
Moreover, the required MongoDB installation and service setup have also been completed correctly.
Finally, I set up Robomongo to observe the collection ('wowza_log') that should be created by the Wowza logging. However, after streaming a sample MP3 with Wowza, the connection seems to be established, but no collection named wowza_log is created; nothing happens in MongoDB as far as I can see from Robomongo. I am stuck at this point and would appreciate help from anyone who can point me in the right direction.
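For reference, this is the kind of check I have been running from the mongo shell (the names primarydb and wowza_log come from the configuration above):

// connect with the mongo shell and look for the collection the appender should create
use primarydb
show collections
// most recent entries, if any were written
db.wowza_log.find().sort({ $natural: -1 }).limit(5)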
As some of you might be aware, the Parse service is shutting down in about a year, so I am following the migration process as per their tutorials. However, I am not able to migrate the data from Parse to a local database (i.e. MongoDB).
I've started the MongoDB instance locally on port 27017 and also created an admin user as part of the migration, based on these tutorials: Reference-1 & Reference-2.
But when I try to migrate the data from the Parse developer console, I get 'No Reachable Servers' or a network error, and I don't understand why. I suspect the connection string I am using for this, but I am not sure; please see the following image.
I am new to MongoDB so I don't know much about this; any help would be greatly appreciated.
Since the migration tool runs at parse.com, the tool needs to be able to access your MongoDB instance over the Internet.
Since you're using a local IP (192.168.1.101), parse.com cannot connect to your IP and the transfer will time out.
Either you need to make your MongoDB reachable from the Internet, or you can - as they do in their guide - use an existing MongoDB service.
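A quick sanity check is to try the same credentials from a machine outside your network (for example a small cloud VM); if that cannot connect, parse.com cannot either. The host, credentials and database name below are placeholders:

// run this from OUTSIDE your LAN; if it times out, parse.com will time out too
// host, credentials and database name are placeholders
const { MongoClient } = require("mongodb");

async function main() {
  const uri = "mongodb://admin:secret@your-public-hostname-or-ip:27017/parse_migration?authSource=admin";
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  await client.connect(); // throws if the server cannot be reached
  console.log("ping:", await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(err => console.error("Not reachable from the Internet:", err.message));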
I am working on MongoDB authorization.
I added users and started mongod with --auth so that only authorized users are able to see the database.
Right now, MongoDB can only be accessed through a VPN.
Suppose a hacker breaks into the server machine. He can kill the existing mongod process (which was running securely with --auth) and start a new one without authentication, after which he can see all the data in the database.
How can we secure the database so that it always requires a username/password?
Or are there other ways to prevent this?
Thanks.
If he breaks into the server machine, he won't restart mongo. He would simply copy the mongo database and open it on his own machine, without using mongo at all.
If the attacker has control of a server running processes P1, P2, ..., each Pi has to be considered breached, including its data.
The exceptions are strong isolation (e.g. virtual machines) and crypto: if the application encrypts all its data with a key whose generation is not fully automated (e.g. a passphrase entered at startup, a challenge/response the administrator has to pass during boot, etc.), this may prevent the attacker from getting all the bits needed to decrypt it. Otherwise, if the application is able to encrypt and decrypt without any human help, the attacker is able to do so as well.
Those things do not apply to Mongo, which does not have support for anything like that. Good old SQL databases have it, but they are not trendy any more ;)
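To make the idea concrete, here is a rough sketch of application-level encryption in Node, where the key is derived from a passphrase an operator types in at startup; the field names and salt handling are made up for illustration:

// Sketch: encrypt sensitive fields in the application before they ever reach mongod.
// The passphrase is entered by a human at startup, so the key is never stored on disk.
const crypto = require("crypto");

function deriveKey(passphrase) {
  // static salt for brevity; a real deployment would generate and store a random salt
  return crypto.scryptSync(passphrase, "app-salt", 32);
}

function encryptField(key, plaintext) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv: iv.toString("hex"), tag: cipher.getAuthTag().toString("hex"), data: data.toString("hex") };
}

// An attacker who copies the raw database files or dumps the collection only sees
// { iv, tag, data } blobs; without the passphrase-derived key they cannot be decrypted.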
On the specific user: are you afraid they will break in as the mongodb user or as another user? Because if they get in as the user foo, they may still have trouble accessing MongoDB (its data or the process) if local permissions are set properly. But again, people tend to treat local privilege escalation (i.e. moving from foo to root and then to mongodb) as something that simply happens once someone breaches. In roughly 100 pentests in which I managed to get access to a machine, there were probably only one or two where I could not escalate.