Restore MongoDB data if a collection or database was dropped

I need to know: is there any possibility of restoring data in a collection or database after it was dropped?

The OS will not, by default (or, on Windows, in any case), allow you to restore deleted data. You will need a third-party program that can read the raw disk sectors. It is also worth noting that while database drops delete the underlying files, collection drops do not; instead, the files get nulled.
Dropping a collection should make it nearly impossible to retrieve the data, since the hard drive sectors that were used have now been overwritten with new data (essentially a single zero pass).
So the files may be recoverable after a database drop, but even that is questionable.

Best way to back up and restore data in PostgreSQL for testing

I'm trying to migrate our database engine from MsSql to PostgreSQL. In our automated tests, we restore the database to a "clean" state at the start of every test. We do this by computing the "diff" between the working copy of the database and the clean copy (table by table), then copying over any records that have changed and deleting any records that have been added. So far this strategy seems to be the best approach for us, because not much data changes per test and the database is not very big.
Now I'm looking for a way to do essentially the same thing with PostgreSQL. I'm considering the exact same approach, but before doing so I wanted to ask whether anyone else has done something similar, and what method you used to restore data in your automated tests.
On a side note: I considered using MsSql's snapshot or backup/restore strategy. The main problem with those methods is that I would have to re-establish the DB connection from the app after every test, which is not possible at the moment.
If you're okay with some extra storage, and if you (like me) are not particularly interested in reinventing the wheel by checking for diffs in your own code, you should try creating a new DB per run via the template feature of the createdb command (or the CREATE DATABASE statement) in PostgreSQL.
For example:
(from bash) createdb todayDB -T snapshotDB
or
(from psql) CREATE DATABASE todayDB TEMPLATE snapshotDB;
Pros:
In theory, always the exact same DB by design (no custom logic)
Replication is a file copy (not a DB restore), so it takes far less time (it doesn't re-run SQL, recreate indexes, restore tables, etc.)
Cons:
Takes 2x the disk space (although the template could live on low-performance NFS, etc.)
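
If you go this route, one caveat worth knowing: CREATE DATABASE ... TEMPLATE fails if anyone is still connected to the template database, and it cannot run inside a transaction block. A minimal per-run cycle, using the database names from the example above, might look like:

-- drop the previous run's copy, then clone a fresh one from the template;
-- this fails if any session is still connected to snapshotDB
DROP DATABASE IF EXISTS todayDB;
CREATE DATABASE todayDB TEMPLATE snapshotDB;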
For my specific situation, I decided to go back to the original solution, which is to compare the "working" copy of the database with the "clean" copy.
There are three types of changes to handle:
For INSERTed records - find max(id) in the clean table and delete any record in the working table with a higher ID
For UPDATEd or DELETEd records - find all records in the clean table EXCEPT those found in the working table, then UPSERT those records into the working table (a SQL sketch of both rules follows)
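
A minimal SQL sketch of those two rules, assuming each table has a serial primary key id; the clean and working schema names, the table t, and the payload column are hypothetical stand-ins (ON CONFLICT requires PostgreSQL 9.5+):

-- Rule 1: remove rows INSERTed during the test
DELETE FROM working.t
WHERE id > (SELECT max(id) FROM clean.t);

-- Rule 2: put back rows that were UPDATEd or DELETEd, i.e. rows present
-- in the clean copy but not present in identical form in the working copy
INSERT INTO working.t (id, payload)
(SELECT id, payload FROM clean.t
 EXCEPT
 SELECT id, payload FROM working.t)
ON CONFLICT (id) DO UPDATE
  SET payload = EXCLUDED.payload;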

Using Data Compare to copy one database over another

I've used the Data Compare tool to update the schema between the same DBs on different servers, but what if so many things have changed (including data) that I simply want to REPLACE the target database?
In the past I've just used T-SQL: take a backup, then restore it onto the target with the REPLACE option, and/or MOVE if the data and log files are on different drives. I'd rather have an easier way to do this.
You can use Schema Compare (also by Red Gate) to compare the schema of your source database to a blank target database (and update), then use Data Compare to compare the data in them (and update). This should leave you with the target the same as the source. However, it may well be easier to use the backup/restore method in that instance.
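For reference, the T-SQL backup/restore route mentioned above looks roughly like this; the database names, paths, and logical file names are hypothetical (run RESTORE FILELISTONLY against the backup to get the actual logical names):

BACKUP DATABASE SourceDb
TO DISK = 'D:\Backups\SourceDb.bak'
WITH FORMAT;

-- overwrite the target, relocating the data and log files to different drives
RESTORE DATABASE TargetDb
FROM DISK = 'D:\Backups\SourceDb.bak'
WITH REPLACE,
     MOVE 'SourceDb' TO 'E:\Data\TargetDb.mdf',
     MOVE 'SourceDb_log' TO 'F:\Logs\TargetDb.ldf';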

Postgres - Is it necessary to create tablespace in my case?

I have a mobile/web project using PostgreSQL 9.3 as the database and Linux as the server.
The data won't be huge, but it will keep growing as time goes on.
Thinking long term, I have a few questions:
1. Is it necessary for me to create a tablespace for my database, or should I just use the default one?
2. If I create a new tablespace, what is the proper location on Linux for its folder, and why?
3. If I don't create one now and wait until I have to, will it be easy to migrate the DB and its data to a new tablespace then?
Just use the default tablespace; do not create new tablespaces. Tablespaces are only useful if you have multiple physical disks and want to control which data is stored on which disk. The directory where your data is located is not that important to the workings of Postgres, so if you only have one disk there is no point in using tablespaces.
Should your data grow beyond the capacity of one disk, you will have to perform a full data migration anyway to move it to another physical disk, and you can configure tablespaces at that time.
The idea behind pinning data to particular disks (with tablespaces) is that you can do things like putting a big, rarely used table on a slow disk and a very intensively used table on a separate, faster disk. But I assume you're not there yet, so don't overcomplicate things.
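If you do reach that point, the mechanics are small. A sketch, with a hypothetical mount point and table names; the directory must already exist, be empty, and be owned by the postgres OS user:

-- define a tablespace on the new physical disk
CREATE TABLESPACE fast_disk LOCATION '/mnt/ssd1/pg_tblspc';

-- create a hot table there, or move an existing table onto it
CREATE TABLE hot_events (id serial PRIMARY KEY, payload text) TABLESPACE fast_disk;
ALTER TABLE big_archive SET TABLESPACE fast_disk;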

@@DBTS and BinaryFormatter

I have written a client that uses the Sync Framework to coordinate the consolidation of data in a hub-and-spoke warehousing application.
When sync transactions are processed, the Sync Framework updates a specified anchor table with the value of @@DBTS, indicating when the last sync was processed and uploaded to the server.
I would like to offer as part of this scenario the ability to allow one client to relay the data on behalf of one of the others.
This would be used in cases where one client may not be able to make contact with the warehouse; its database could be retrieved and synchronized by a client that does have access to the warehouse (Exchanged as a database backup on DVD or USB flash media).
The problem with this theory is that without the SentAnchor being set on the client database when the snapshot is retrieved, the next time this process is performed the whole database is replicated a second time.
What I would like to do is, when I grab a snapshot of the client database, update its SentAnchor so that the next time I grab a copy, the Sync Framework treats it as if the client had actually communicated with the server.
My first impulse was to simply update the anchor table, setting the SentAnchor to @@DBTS. The problem with that is the Sync Framework inserts the same value in a different format: it runs it through the BinaryFormatter first.
So, same intrinsic value, different headers; when I try just updating with the raw value of @@DBTS, the Sync Framework errors out trying to convert it back from the format it expects to have written itself.
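For illustration, a query like this shows the difference between the two formats (Anchor and SentAnchor are the table and column from my setup; the stored anchor is longer than the 8-byte rowversion because of the BinaryFormatter envelope):

SELECT @@DBTS AS raw_timestamp,
       SentAnchor AS formatted_anchor,
       DATALENGTH(SentAnchor) AS formatted_length
FROM Anchor;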
What I would like to do is set, via a T-SQL statement, the same format for @@DBTS that the Sync Framework uses; I do not want to have to write an application to execute a single SQL statement if this can be done in the statement already being executed to create the backup.
Something like...
USE MyDb
GO
BACKUP DATABASE MyDb
TO DISK = 'F:\01032012MyDb.bak'
WITH FORMAT,
NAME = '20120103 Full Backup of MyDb'
GO
UPDATE Anchor SET SentAnchor = @@DBTS
GO
Essentially, I want to replace @@DBTS above with whatever is needed to get the same value into the correct format that the Sync Framework expects.
The servers are SQL Server 2008 R2 Express.
The problem with setting the SentAnchor yourself is that you might actually miss uploading changes: by setting the value, you have effectively told the Sync Framework that it has already sent all changes up to that value of @@DBTS.
I suggest you explore using the SqlSyncProvider instead.

Restoring default records to a Core Data database

I have an iPhone app with a SQLite Core Data store that is pre-loaded with default data. I want to enable the user to restore this default data if they have modified or deleted records, while retaining any new records the user has added.
The SQLite database is copied to the user's Documents directory on first run, so the untouched original database is available in the app package. What is the easiest way to copy records between the two databases? I assume it involves setting up an additional persistentStoreCoordinator, or adding the original DB to the coordinator as an additional persistentStore, but the docs are skimpy on how to do this.
Thanks,
Jk
If you do not want to delete the destination store but just overwrite its contents, then the workflow is as follows (a Swift sketch appears after the list):
Stand up a second Core Data stack with the source persistent store.
Fetch each entity from the source.
Look for the object in the destination.
If it exists, update it.
If it doesn't, create it.
Save the destination store.
Depending on how much data you have, this can be a very expensive operation.
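
A minimal Swift sketch of that workflow, assuming a single entity named Item with a unique identifier attribute (both names are hypothetical), where sourceContext and destContext belong to two separate Core Data stacks:

import CoreData

// Copy every default record from the source (bundled) store into the
// destination (user) store, overwriting edits but leaving user-added
// records untouched.
func restoreDefaults(from sourceContext: NSManagedObjectContext,
                     into destContext: NSManagedObjectContext) throws {
    let sourceFetch = NSFetchRequest<NSManagedObject>(entityName: "Item")
    for sourceItem in try sourceContext.fetch(sourceFetch) {
        guard let id = sourceItem.value(forKey: "identifier") as? String else { continue }

        // Look for the matching object in the destination store.
        let destFetch = NSFetchRequest<NSManagedObject>(entityName: "Item")
        destFetch.predicate = NSPredicate(format: "identifier == %@", id)
        destFetch.fetchLimit = 1

        // Update it if it exists; create it if it doesn't.
        let destItem = try destContext.fetch(destFetch).first
            ?? NSEntityDescription.insertNewObject(forEntityName: "Item", into: destContext)

        // Copy every attribute from the default record.
        for key in sourceItem.entity.attributesByName.keys {
            destItem.setValue(sourceItem.value(forKey: key), forKey: key)
        }
    }
    try destContext.save()
}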