How to back up a really huge MongoDB database [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I have to back up a huge MongoDB database - it's around 400 GB of data.
When I tried to copy the db from the console it seemed to take forever (I understand it is a huge amount of data, but after a day of waiting it looked like a hung process).
Has anyone tried to back up or migrate a MongoDB database this big? Any tips?
At the moment I don't need to worry about slowing down the server etc., so every kind of tip is welcome.

There are various alternatives provided by MongoDB for backing up Mongo data.
1. The mongodump tool
mongodump can be used to back up data from a live server. It can dump an entire cluster, server, database or collection even while the database is up and running, and it is a reasonable choice when write activity is low (a minimal example invocation is sketched after the pros and cons below).
Pros:
Simple and quite easy to use.
No need to shut down the server instance.
Cons:
Can slow down the server while the dump is running.
The dump reflects the data as it is read, so it is not a point-in-time backup.
Not a good option if your schema relies on linked collections.
There is a chance of missing related data written to another linked collection while the dump runs.
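For illustration, a minimal mongodump invocation might look like the following sketch, driven from a small Python script; the host, database name and output directory are assumptions you would replace with your own values.

```python
import subprocess

# Assumed connection details and paths -- adjust to your own deployment.
cmd = [
    "mongodump",
    "--host", "localhost",
    "--port", "27017",
    "--db", "mydb",                     # omit --db to dump every database
    "--out", "/backups/mongodump-full", # target directory for the BSON dump
    # "--oplog",                        # on a replica set, gives a consistent point-in-time dump
]

# Run the dump and fail loudly if mongodump exits with a non-zero status.
subprocess.run(cmd, check=True)
```

Bear in mind that for ~400 GB of data the dump itself will also be large, so check that the output disk has enough free space.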
2. Shutdown and backup
This is a very simple approach: you shut down the mongo server and copy the MongoDB data directory to some other backup location. It is the fastest way to take a backup; the only disadvantage is that you have to stop the server instance (a rough sketch of the procedure follows below).
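A rough sketch of that procedure, assuming a Linux host where mongod runs under systemd and the data directory is /var/lib/mongodb (both assumptions; check your own mongod.conf and service name):

```python
import shutil
import subprocess

DATA_DIR = "/var/lib/mongodb"          # assumed dbPath -- verify against your mongod.conf
BACKUP_DIR = "/backups/mongodb-copy"   # placeholder destination with enough free space

# Stop mongod so the data files on disk are in a consistent state.
subprocess.run(["sudo", "systemctl", "stop", "mongod"], check=True)
try:
    # Copy the whole data directory; for ~400 GB this is limited mainly by disk speed.
    shutil.copytree(DATA_DIR, BACKUP_DIR)
finally:
    # Bring the server back up even if the copy fails.
    subprocess.run(["sudo", "systemctl", "start", "mongod"], check=True)
```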
For more details, refer to the documentation: http://docs.mongodb.org/manual/core/backups/

Related

Export outlook data to local drive and then delete off exchange server [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 9 days ago.
A user of mine has a disk that is 99% full. They need their files for future reference but need to get rid of them somehow.
I want to export all Outlook data from 1/1/2022 and older, save it to their OneDrive, and then delete what I just exported from the Exchange server.
What's the most efficient way of doing that?
I tried archiving, but I learned that it makes a copy of the data and keeps the original on the server.
I tried doing an export, but that appears to be almost the same as archiving, just not a "live" version.
I tried manually searching a date range, moving the results to a folder, and then deleting them, but that was going to take forever because of how long it took to load.
A user of mine has a disk that is 99% full.
In that case you can limit the size of cached data in Outlook, or just consider using non-cached mode, where all the data is retrieved from the Exchange server online.
You can read more about that in the Managing Outlook Cached Mode and OST File Sizes article.

Share same disk with different servers [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I am not very experienced with PostgreSQL and I have not found a solution to this question yet.
I have data in a database stored in a cloud space (OneDrive). I can manage it from my Windows 10 installation of Postgres 12. I also have a laptop running Linux (Manjaro) with a PostgreSQL server installed.
I would like to understand whether I can access the same cloud data with both servers. Consider that, since I am a single user, data access is never concurrent; I only use one server at a time.
I read that it is possible to share the same data in:
https://www.postgresql.org/docs/12/different-replication-solutions.html
However, I cannot find a detailed procedure. Any suggestion? Is there perhaps a better solution? Thanks
Your question is pretty unclear; I'll try my best to straighten you out.
First, a database doesn't access data, it holds data. So your desire to have two database servers access the same data does not make a lot of sense.
There are three options that come to mind:
You want to hold only one copy of the data and want to access it from different points: in that case you don't need a server running on your laptop, only a database client that accesses the database in the cloud.
This is the normal way to operate.
You want a database on your laptop to hold a copy of the data from the other database, but you want to be able to modify the copy, so that the two databases will diverge: use pg_dump to export the definitions and data from the cloud database and restore the dump locally (see the sketch after this list).
You want the database on the laptop to be and remain a faithful copy of the cloud database: if your cloud provider allows replication connections, set up your laptop database as a streaming replication standby of the cloud database.
The consequence is that you only have read access to the local copy of the database.
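For the second option, the dump-and-restore round trip can look roughly like the sketch below, assuming pg_dump and pg_restore are on the PATH and using placeholder connection details:

```python
import subprocess

# Placeholder connection strings -- substitute your real cloud host, user and database names.
CLOUD_DSN = "host=cloud.example.com port=5432 user=me dbname=mydb"
LOCAL_DSN = "host=localhost port=5432 user=me dbname=mydb"

# Export schema and data from the cloud database into a custom-format archive...
subprocess.run(
    ["pg_dump", "--format=custom", "--file=mydb.dump", f"--dbname={CLOUD_DSN}"],
    check=True,
)

# ...then restore that archive into the local Postgres 12 instance on the laptop.
subprocess.run(
    ["pg_restore", "--clean", "--if-exists", f"--dbname={LOCAL_DSN}", "mydb.dump"],
    check=True,
)
```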

Is combining MongoDB with Neo4J a good practice? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I already have a .Net Web project running on MongoDB where I store some news/feed data.
After a while I needed a faster way to track "who shared what" and to find relationships based on that information.
Then I came up with an idea to use graphDB to track related feeds and users.
Since the system is already running on MongoDB, I am thinking of leaving the data in Mongo and creating the graph representation in Neo4J for applying a graph search.
I do not want to migrate all my data to Neo4j, because many people have told me MongoDB's I/O performance is way better than Neo4j's, and they also pointed out MongoDB's sharding feature.
What would you suggest in this situation?
And If I follow my idea, will it be a good practice?
Personally I think there is no single answer or universal best practice here. Polyglot persistence is a common approach.
Everything depends on your context, and there are points we can't just answer for you:
How much time do you have? (Learning a new technology is not a matter of days before you can use it in production and sleep well.)
How much money can you invest in the project? Sharding is, AFAIK, a Neo4j Enterprise feature, and licenses have a cost unless you're an open-source (rather than commercial) company. Also factor in hosting costs for running Neo4j in cluster mode.
How much data? As long as your graph fits in memory, you won't run into I/O issues.
Those points aside: yes, as a first step you can try to map Neo4j on top of MongoDB (a rough sketch of keeping the two in sync follows below).
Maybe try to do incremental migrations, and at the end of the process ask yourself the following question: WHY do you need MongoDB to handle graph structures?
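To make the "Neo4j on top of MongoDB" idea concrete, here is a rough sketch of keeping the two stores in sync on a share event. The project itself is .NET, so treat this Python version purely as an illustration of the pattern; the collection, labels and credentials are made up:

```python
from pymongo import MongoClient
from neo4j import GraphDatabase

# Hypothetical connection details and names -- adjust to your own setup.
mongo = MongoClient("mongodb://localhost:27017")["newsapp"]
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def record_share(user_id: str, feed_id: str) -> None:
    # Keep the full document in MongoDB, which stays the system of record...
    mongo.shares.insert_one({"user_id": user_id, "feed_id": feed_id})
    # ...and mirror only the relationship into Neo4j for graph queries
    # such as "who shared what" and "which users are connected through shares".
    with neo.session() as session:
        session.run(
            "MERGE (u:User {id: $user}) "
            "MERGE (f:Feed {id: $feed}) "
            "MERGE (u)-[:SHARED]->(f)",
            user=user_id, feed=feed_id,
        )
```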

Incrementally upload data from SQL Server to Amazon Redshift [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
Our raw data is in SQL Server. The data keeps growing, and the growth rate is high. We need to load the data incrementally into Redshift for analytics. Can you please point me to a good practice for loading the data? How feasible is it to use SSIS to load directly into Redshift (without S3)?
It's going to be impractical to load from SQL Server to Redshift without landing the data on S3. You can try loading via SSH but, of course, SSH on Windows is not well supported.
http://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-remote-hosts.html
I did a presentation a while back on what we discovered when migrating from SQL Server to Redshift. One of the things we discovered was that SSIS was not very useful for interacting with AWS services.
http://blog.joeharris76.com/2013/09/migrating-from-sql-server-to-redshift.html
Finally, you could look into some of the commercial "replication" tools that automate the process of incrementally updating Redshift from an on-premise database. I hear good things about Attunity.
http://www.attunity.com/products/attunity-cloudbeam/amazon-redshift
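If you do end up landing the data on S3 (the usual route), the core of an incremental load looks roughly like the sketch below. The bucket, table, IAM role ARN and cluster endpoint are placeholders, and the CSV extract is assumed to have already been produced from SQL Server (for example with bcp or an SSIS flat-file export):

```python
import boto3
import psycopg2

# Placeholder names -- replace bucket, key, table, role and cluster details with your own.
BUCKET = "my-etl-bucket"
KEY = "incremental/orders_batch_001.csv"
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

# 1. Stage the extract on S3.
boto3.client("s3").upload_file("orders_batch_001.csv", BUCKET, KEY)

# 2. Load it into Redshift with COPY, which reads from S3 in parallel across the cluster.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute(f"""
        COPY staging.orders
        FROM 's3://{BUCKET}/{KEY}'
        IAM_ROLE '{IAM_ROLE}'
        FORMAT AS CSV;
    """)
```

From there, the incremental part is usually handled by loading into a staging table and then merging (delete + insert) into the target table.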

Are there any good tutorials on linking an iPhone app to a database? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I have built an iPhone app that needs to pull data from a server. First I need to figure out what kind of server I will need. I'm a bit familiar with MySQL, but I was wondering if anyone had better suggestions for a backend. My app has two tables that need to be populated with data residing on a server. Also, a user can submit data to this database, which in turn populates the previously mentioned tables.
If you have any tips or tutorials, code snippets, etc. it would be really appreciated!
*Edit: I should mention this IS a remote database
Well, if it's a remote server you're thinking about, then you should be looking into implementing some kind of service architecture, such as web services over SOAP or REST.
If the data stays on the iPhone, then by all means use SQLite, as it's fast and lightweight.
I would recommend starting with SQLite and local data first. Then, when you have your main program logic, flow and UI complete, replace the SQLite calls with web service calls (a rough sketch of the server side follows below).
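If you go the web-service route, the server side can stay very small. Here is a rough sketch using Flask and SQLite (the endpoint and table names are invented, since the question doesn't describe the schema); the iPhone app would then fetch and post JSON over HTTP instead of talking to a database directly:

```python
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "app.db"  # assumed SQLite file containing an "items" table (id INTEGER, name TEXT)

@app.route("/items", methods=["GET"])
def list_items():
    # Return every row of the hypothetical "items" table as JSON for the app to consume.
    with sqlite3.connect(DB) as conn:
        rows = conn.execute("SELECT id, name FROM items").fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])

@app.route("/items", methods=["POST"])
def add_item():
    # Accept user-submitted JSON from the iPhone app and persist it.
    payload = request.get_json()
    with sqlite3.connect(DB) as conn:
        conn.execute("INSERT INTO items (name) VALUES (?)", (payload["name"],))
    return "", 201

if __name__ == "__main__":
    app.run()
```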
Your database sounds really simple with just two tables. Perhaps you don't even need a SQL database; a Berkeley DB might be sufficient. It has atomic commit, transactions, rollback, etc., but you store data as key-value pairs.
However, I am not sure whether you can use it as a server. You could consider accessing the data through something like distributed objects, which is done very easily in Cocoa: you could build a Berkeley DB on your server that exposes the data as distributed objects that your iPhone app connects to.