Taking an export backup of a database only up to a certain date - Oracle 10g

I have an Oracle database (10g edition). I want to take an export backup of the database up to a particular date; the backup should contain all data up to that specific date. Is there any way to perform this operation?
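One possible approach (a sketch under assumptions, not something stated in the question) is a Data Pump export with the FLASHBACK_TIME parameter, which makes the dump consistent as of a given timestamp; it can only look back as far as your undo retention allows, and every name below (credentials, directory object, file names, the timestamp itself) is a placeholder:

```python
import subprocess
import textwrap

# Hedged sketch: build a Data Pump parameter file and run expdp with FLASHBACK_TIME
# so that the export is consistent as of the requested point in time. This only
# works if that point in time is within the undo retention window; all names,
# credentials, and paths here are placeholders.
parfile = "/tmp/exp_asof.par"
with open(parfile, "w") as f:
    f.write(textwrap.dedent("""\
        FULL=Y
        DIRECTORY=DATA_PUMP_DIR
        DUMPFILE=full_asof.dmp
        LOGFILE=full_asof.log
        FLASHBACK_TIME="TO_TIMESTAMP('2010-06-30 23:59:59', 'YYYY-MM-DD HH24:MI:SS')"
    """))

# Run the export; expdp must be on PATH and the directory object must exist.
subprocess.run(["expdp", "system/password", f"PARFILE={parfile}"], check=True)
```

If "up to a certain date" simply means "everything as of now", a plain consistent export taken on that date (without FLASHBACK_TIME) does the same job.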

Related

Efficient monthly archiving of data older than 1 year for a distributed MongoDB

We have a distributed/shared MongoDB deployment (5 to 6 replicas). I want to back up data from huge collections once it is older than 1 year, and delete it after it has been backed up.
(I don't want any data loss, so I don't want to drop a collection entirely.)
The source collection is customer (for example).
The target collection is customer_archive.
Note: archived data must not be lost and must not contain duplicates, so I assume we need a merge/upsert mechanism, but I don't know the best way to do it.
What is the most efficient way to archive 200-300 GB of data? Millions of documents arrive every day, so the data is growing fast and querying is becoming slow.
The archive collection might be in a different database; how would I write that script?
How can I schedule that script monthly or daily?
How would you do this if the backup goes to a different database? (My first thought was to export and then move the data to the other database.)
My plan was to first dump the collection under a different name, then copy and delete a year's worth of data with bulk inserts and bulk deletes, reading and committing in batches of 10,000. Some people do not recommend iterating document by document for efficiency reasons, but I assume it is the only way to delete partially; a sketch of this batched copy-and-delete is shown below.
I don't know whether I can turn this into a batch file so that I can share it with our BT team for task scheduling.
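A minimal PyMongo sketch of that batched copy-and-delete, assuming a createdAt date field, the batch size of 10,000 described above, and an archive collection that may live on a different server; the hosts, database names, and field name are all assumptions:

```python
from datetime import datetime, timedelta
from pymongo import MongoClient, ReplaceOne

# Assumed names: source "customer" with a "createdAt" date field, target
# "customer_archive" possibly living in a different database/cluster.
source = MongoClient("mongodb://source-host:27017")["crm"]["customer"]
archive = MongoClient("mongodb://archive-host:27017")["crm_archive"]["customer_archive"]

cutoff = datetime.utcnow() - timedelta(days=365)
batch_size = 10_000

while True:
    docs = list(source.find({"createdAt": {"$lt": cutoff}}).limit(batch_size))
    if not docs:
        break
    # Upsert keyed on _id so re-running the job never duplicates archived documents.
    archive.bulk_write(
        [ReplaceOne({"_id": d["_id"]}, d, upsert=True) for d in docs],
        ordered=False,
    )
    # Delete from the source only after the archive write succeeded.
    source.delete_many({"_id": {"$in": [d["_id"] for d in docs]}})
```

Scheduling it monthly or daily is then just a cron entry (or a Windows scheduled task) that runs this script.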

DB2: appending backups into a single database

I am using restore db to import a backup into a test environment database, and that works fine, but I need to extend this import process so that several backups from several dates go into a single test environment database. What is the command to append backups into a single database?
Thanks,
Phil
You write "I get only full offline database backup files from several time date (2 weeks of production) that I need to restore in a test database for analysis... example 4 backups files of 2 weeks of data = 2 months of data ...."
and you also write "What is the command to append backups into a unique database..."
While there is no explicit method for full-offline-backup images to be combined for Db2-LUW, there's always another way to get what you need...given the right skills and tools.
If you have a FULL backup image, it can either be restored to a new database or fully overwrite an existing database. If you have 4 FULL backup images, each can be restored into its own (uniquely named) database, or each can overwrite one of 4 existing databases.
You can also restore specific tablespaces from a backup image, if properly configured. Some sites design discrete tablespaces for specific time periods (one per day/week/month) to help with such activities. Some sites design their tables to be range partitioned, with each time period having its own partition (and sometimes dedicated tablespaces as well), which makes subsequent merging of content easier with the right skills.
If you are competent with scripting, you can restore the first (earliest) image, export the relevant table contents to flat files, restore the next backup image and export the relevant tables to new flat files (repeat as needed), then load these flat files into a table for analysis. If your database is small, this can be considered a keep-it-simple approach.
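A rough sketch of such a script, driving the Db2 CLP from Python; the backup timestamps, paths, database names, and table names are placeholders, the instance environment (db2profile) is assumed to be sourced, and the exact RESTORE/EXPORT/LOAD options will vary by site:

```python
import subprocess

# Placeholder values: adjust backup timestamps, paths, and schema/table/database names.
BACKUP_TIMESTAMPS = ["20240101120000", "20240108120000"]   # one per full backup image
BKPDIR = "/backups"
exported_files = []

for ts in BACKUP_TIMESTAMPS:
    out = f"/tmp/orders_{ts}.del"
    # Restore each image into a scratch database, export the table of interest, disconnect.
    script = (
        f"db2 restore database PRODDB from {BKPDIR} taken at {ts} "
        f"into WORKDB replace existing without prompting && "
        "db2 connect to WORKDB && "
        f'db2 "export to {out} of del select * from SALES.ORDERS" && '
        "db2 connect reset"
    )
    subprocess.run(script, shell=True, check=True)
    exported_files.append(out)

# Load all exported flat files into one analysis table.
loads = " && ".join(
    f'db2 "load from {f} of del insert into SALES.ORDERS_HISTORY"' for f in exported_files
)
subprocess.run(f"db2 connect to ANALYSISDB && {loads} && db2 connect reset",
               shell=True, check=True)
```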
You can also do clever things with federation if you restore to discrete databases.
Separately purchasable tools exist that let you extract selected content from a backup image (which can then be loaded into a Db2 database) without needing to perform a restore. They are not included with the Db2 product, so you could extract specific table contents from a backup image if you pay for the right tools and learn how to use them. Speak with your IBM salesperson. Such tools may, however, require currently supported versions of Db2.

How can I automatically maintain a dump of modified rows in PostgreSQL?

So, I have a PostgreSQL DB. For some chosen tables in that DB I want to maintain a plain dump of rows as they are modified. Note that this dump is not a recovery or backup dump; it is just a file containing the incremental rows. That is, whenever a row is inserted or updated, I want it appended to this file, or to a file in a folder. The idea is to periodically load that folder into something like Hive so that I can run queries to check previous states of certain rows and columns. These are very high-transaction tables, and the dump does not need to be real time; it can be produced in batches, every hour. I want to avoid a trigger firing hundreds of times every minute. I am looking for something off the shelf - already available in PostgreSQL. I did some research, but everything I found relates to PostgreSQL backup, which is not this use case.
I have read links like https://clarkdave.net/2015/02/historical-records-with-postgresql-and-temporal-tables-and-sql-2011/ and "Implementing history of PostgreSQL table", but these are based on insert/update triggers and create the history table in PostgreSQL itself. I want to avoid both: I cannot keep the history in PostgreSQL because it would soon become huge, and I do not want to keep writing to files through a trigger firing constantly.
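One off-the-shelf mechanism that fits this description (my suggestion, not something from the question) is logical decoding (PostgreSQL 9.4+ with wal_level=logical): changes are pulled from the WAL in batches, so no triggers fire on the tables. A hedged psycopg2 sketch, with made-up slot, connection, and file names:

```python
import psycopg2

# Assumes PostgreSQL 9.4+, wal_level=logical, and max_replication_slots > 0.
# Slot name, connection string, and output path are placeholders.
conn = psycopg2.connect("dbname=mydb user=etl")
conn.autocommit = True
cur = conn.cursor()

# One-time setup (run once, then comment out): create a slot that uses the
# built-in test_decoding output plugin.
# cur.execute("SELECT pg_create_logical_replication_slot('hive_feed', 'test_decoding')")

# Hourly batch: consume everything that accumulated since the previous run and
# append it to a file the Hive loader can pick up later.
cur.execute("SELECT lsn, xid, data FROM pg_logical_slot_get_changes('hive_feed', NULL, NULL)")
with open("/var/dumps/changes_latest.txt", "a") as out:
    for lsn, xid, data in cur:
        # 'data' is a text line such as:
        #   table public.orders: INSERT: id[integer]:42 amount[numeric]:10.5
        # Keep only the chosen tables by filtering on the "table schema.name:" prefix.
        if data.startswith("table public.orders:"):
            out.write(f"{lsn}\t{xid}\t{data}\n")
conn.close()
```

Once fetched, the changes are gone from the slot, so each hourly run naturally gets only the new batch; a plugin such as wal2json can replace test_decoding if a more structured format is easier to load into Hive.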

How do we create a dump file of daily inserted, updated, and deleted records in Postgres?

We are creating dump files of multiple databases every hour, but the issue is storage space and restoration time. Multiple locations create dump files, and we use a VPN to retrieve them; the files grow larger day by day. Please suggest a solution.
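pg_dump has no built-in incremental mode, so it cannot dump only the day's changed rows; one mitigation for the size and restore-time problems described here (a sketch with placeholder database names and paths) is to switch to compressed custom-format dumps and restore them with parallel jobs:

```python
import datetime
import subprocess

# Placeholder database names and output directory; run once per hour from cron.
databases = ["sales", "inventory"]
stamp = datetime.datetime.now().strftime("%Y%m%d_%H")

for db in databases:
    outfile = f"/backups/{db}_{stamp}.dump"
    # Custom format is compressed, selectively restorable, and usable with pg_restore -j.
    subprocess.run(
        ["pg_dump", "--format=custom", "--compress=9", "--file", outfile, db],
        check=True,
    )

# On the receiving side of the VPN, restore with parallel workers, e.g.:
#   pg_restore --jobs=4 --dbname=sales_restore /backups/sales_YYYYMMDD_HH.dump
```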

How to bulk-refresh a Postgres database

I've got a Postgres 9.1 database that contains weather information. The dataset consists of approximately 3.1 million rows.
It takes about 2 minutes to load the data from a CSV file, and a little less to create a multicolumn index.
Every 6 hours I need to completely refresh the dataset. My current thinking is I would import the new dataset into a different database name, such as "weather_imported" and once the import and index creation are finished, I would drop the original database and rename the imported database.
In theory, clients would continue to query the database during this operation, though if that has ill effects, I could probably arrange to have the clients silently ignore a few errors.
Questions:
Will that strategy work?
If a client happened to be in the process of running a query at the time of the DB drop, my assumption is that the database would not complete the drop until the query finished - true?
What if a query happened between the time the DB was dropped and the rename? I assume a "database not found" error.
Is there a better strategy?
Consider the following strategy as an alternative:
Include a "dataset version" field in the primary table.
Store the "current dataset version" in some central location, and write your selects to only search for rows which have the current dataset version.
To update the dataset:
Insert all the data with a new dataset version. (You could just use the start time of the update job as a version.)
Update the "current dataset version" atomically to the value you just inserted.
Delete all data with an older version than the version number you just inserted.
Presto -- no need to shuffle databases around.
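A short psycopg2 sketch of that versioned refresh; every identifier below (weather, weather_staging, current_version, the column names) is invented for illustration, not prescribed by the answer:

```python
import time
import psycopg2

# All identifiers (weather, weather_staging, current_version, columns) are illustrative.
conn = psycopg2.connect("dbname=weatherdb user=loader")
new_version = int(time.time())   # the job's start time doubles as the dataset version

with conn:                       # one transaction: load and switch commit together
    with conn.cursor() as cur:
        # 1. Load the fresh CSV into a staging table, then copy it into the main
        #    table tagged with the new version.
        with open("/data/weather_latest.csv") as f:
            cur.copy_expert("COPY weather_staging FROM STDIN WITH (FORMAT csv)", f)
        cur.execute(
            "INSERT INTO weather (dataset_version, station, observed_at, temp_c) "
            "SELECT %s, station, observed_at, temp_c FROM weather_staging",
            (new_version,),
        )
        cur.execute("TRUNCATE weather_staging")

        # 2. Point readers at the new version; this is the atomic switch.
        cur.execute("UPDATE current_version SET version = %s", (new_version,))

# 3. Remove superseded rows after the switch has committed.
with conn:
    with conn.cursor() as cur:
        cur.execute("DELETE FROM weather WHERE dataset_version < %s", (new_version,))
conn.close()
```

Clients then query with WHERE dataset_version = (SELECT version FROM current_version), so the switch in step 2 is the only moment that changes what they see.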