When I export and re-import my OrientDB database, the IDs aren't retained (the import command seems to just assign new IDs), which makes the snapshotting kind of useless. How do I ensure that the IDs are kept consistent?
Use `-preserveClusterIDs=true`: https://orientdb.com/docs/last/Console-Command-Import.html
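For example, from the OrientDB console (the paths and credentials here are placeholders; note the docs state this option is honored for plocal storage):

```
orientdb> CONNECT plocal:/path/to/db admin admin
orientdb> IMPORT DATABASE /path/to/export.json.gz -preserveClusterIDs=true
```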
Related
I want to copy a specified collection from one database into a new database.
I searched and found the trigger technique, which updates the copied database whenever any modification happens in the original database, but it costs too much, so I am looking for an alternative solution.
I would also like rules for the copying, e.g. copying only a few fields of a particular collection. That part is not very important, though; the main task is copying the original database's collection into the new database in real time.
You could call it something like a backup.
You can create another collection as a history and, in your REST API, save a copy of every record to it at save time, like a backup.
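A sketch of that idea, with plain Python objects standing in for the two collections (the names `save_with_history`, `main`, and `history` are made up for illustration; in a real app these would be writes to actual MongoDB collections):

```python
from datetime import datetime, timezone

def save_with_history(main, history, record):
    """Upsert into the main collection and append a timestamped copy to history."""
    main[record["_id"]] = record
    history.append({**record, "saved_at": datetime.now(timezone.utc)})

main, history = {}, []
save_with_history(main, history, {"_id": 1, "name": "alice"})
save_with_history(main, history, {"_id": 1, "name": "alice-renamed"})
# main holds only the latest version; history holds both versions with timestamps
```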
I have a bunch of collections (12) and I need to rename many of their fields. I can't do it live in the database; all I can do is download and re-upload a dump of it.
So I've downloaded the collections with mongodump, manipulated the data, and I'm planning to use mongorestore to push it back to the database.
I'm wondering what will happen with the ObjectIds. I know that an ObjectId is unique throughout the database, so I'm thinking about deleting all the old data right before running mongorestore. Is that OK, or will I still have problems with the IDs?
You can specify any value you want for the `_id`; you can even use a string instead of an ObjectId.
If you have a production app, you need to perform the upgrade and migrate the data through the application itself, step by step.
If you have a single-process, single-threaded application, or you can run your app that way, that is the simplest case. Otherwise you need a synchronization service.
Be careful with async/await, promises, and other asynchronous code: it may read data at one point in time and continue processing that data at a later time, and you need to keep that in mind.
You need to:
modify the service so it is ready for both data formats
create migration code that goes through all the data and migrates it
once all the data has been migrated, modify the service so it accepts only the new data format
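The three steps above can be sketched like this (Python, with a hypothetical rename of `fullname` to `name`; the field names and the in-memory "collection" are made up for illustration):

```python
def read_name(doc):
    # Step 1: the service tolerates both the old and the new field name.
    return doc.get("name", doc.get("fullname"))

def migrate(collection):
    # Step 2: walk all documents and move the data to the new format.
    for doc in collection:
        if "fullname" in doc:
            doc["name"] = doc.pop("fullname")

docs = [{"fullname": "alice"}, {"name": "bob"}]
assert read_name(docs[0]) == "alice"   # old format still readable
migrate(docs)
# Step 3: once migration is done, the fallback in read_name can be dropped.
assert all("name" in d and "fullname" not in d for d in docs)
```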
I need to know: is there any possibility of restoring data in a collection or database after it has been dropped?
The OS will not, by default (and in the case of Windows, not in any case), allow you to restore deleted data. You will need a third-party program that can read the raw disk sectors. It is also good to note that while database drops delete the files, collection drops do not; instead, the data is nulled out.
Dropping a collection should therefore make it nearly impossible to retrieve the data, since the hard drive sectors that were used have been overwritten with new data (basically a one-pass zero).
So the files may be recoverable after a database drop, but even that is questionable.
I'd like to create a read-only snapshot of a database at the end of each day, and keep them around for a couple of months.
I'd then like to be able to run queries against a specific (named) snapshot.
Is this possible to achieve elegantly and with minimal resource usage? (The database changes only very slowly, but holds a few GB of data, so almost all of the data is common to all snapshots.)
The usual way to create a snapshot in PostgreSQL is to use pg_dump/pg_restore.
A much quicker method is to simply use CREATE DATABASE to clone your database.
CREATE DATABASE my_copy_db TEMPLATE my_production_db;
which will be much faster than a dump/restore. The only drawback of this solution is that the source database must not have any open connections while the copy is being made.
The copy will not be read-only by default, but you can simply revoke the respective privileges from the users to ensure that.
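For example (database names are placeholders; instead of per-user REVOKEs, setting `default_transaction_read_only` at the database level makes all sessions in the snapshot read-only by default):

```sql
CREATE DATABASE my_snapshot_db TEMPLATE my_production_db;
-- make every session in the snapshot read-only by default
ALTER DATABASE my_snapshot_db SET default_transaction_read_only = on;
```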
I have an iPhone app with a SQLite Core Data model that is pre-loaded with default data. I want to enable the user to restore this default data if they have modified or deleted records from the model, while retaining any new records the user has added to the model.
The SQLite database is copied to the user's Documents directory on first run, so the untouched original database is available in the app package. What is the easiest way to copy records between the two databases? I assume it involves setting up an additional persistent store coordinator, or adding the original DB to the coordinator as an additional persistent store, but the docs are skimpy on how to do this.
Thanks,
Jk
If you do not want to simply delete the destination store and overwrite it, then the workflow is:
Stand up a second Core Data stack with the source persistent store.
Fetch each entity from the source.
Look for the object in the destination.
If it exists, update it.
If it doesn't, create it.
Save the destination store.
Depending on how much data you have, this can be a very expensive operation.
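Abstractly, the per-entity loop above looks like this (sketched in Python, with plain dicts standing in for the two persistent stores; in the real app these would be fetches and saves against the two Core Data contexts, and `sync_stores` is a made-up name):

```python
def sync_stores(source, destination):
    """Upsert every source record into the destination, keyed by a unique id.
    Records that exist only in the destination (user-added) are left alone."""
    for key, record in source.items():
        if key in destination:
            destination[key].update(record)   # it exists: update it
        else:
            destination[key] = dict(record)   # it doesn't: create it
    # in Core Data you would now save the destination context

source = {"a": {"title": "default A"}, "b": {"title": "default B"}}
destination = {"a": {"title": "edited by user"}, "c": {"title": "user-added"}}
sync_stores(source, destination)
# destination now has the restored defaults for "a" and "b", and keeps "c"
```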