I have a transaction log file that goes back six months. I need to roll back everything that happened after 5/20/2013 in a database. Can anyone enlighten me on how to do this?
First of all, copy the database's MDF and LDF files. Better safe than sorry.
The database can also be restored to a point in time in SQL Server 2008 R2. There's no need to create a transaction log backup first; SQL Server will take it automatically. You can find more about the tail-log backup here: Tail-Log Backups
Select the option to restore the database from the database's context menu.
Leave Database as the source. Click Timeline.
Select Specific date and time. If you drag the time pointer, you'll be able to see how far back your transaction log goes. Note that bright green indicates transactions that have never been backed up.
After all is done, schedule transaction log backups. There's no point in having a database in the Full recovery model and never backing up the online transaction log.
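If you'd rather script it than use the Timeline dialog, a point-in-time restore looks roughly like the sketch below. The database name, the backup file paths and the STOPAT timestamp are placeholders; substitute your own:

    -- Back up the tail of the log first, leaving the database in the restoring state
    BACKUP LOG MyDatabase
        TO DISK = N'C:\Backups\MyDatabase_tail.trn'
        WITH NORECOVERY;

    -- Restore the last full backup without recovering
    RESTORE DATABASE MyDatabase
        FROM DISK = N'C:\Backups\MyDatabase_full.bak'
        WITH NORECOVERY;

    -- Restore the log, stopping just before the unwanted changes
    RESTORE LOG MyDatabase
        FROM DISK = N'C:\Backups\MyDatabase_tail.trn'
        WITH STOPAT = N'2013-05-20T23:59:59', RECOVERY;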
I'm trying to set up an architecture with 2 databases, say preview and live, that have the exact same schemas. The use case is that edits can be made to the preview database and then pushed to the live database after they are vetted and approved. The production application would read from the live database.
What would be the most appropriate way to push all data from the preview database to the live database without bringing the live database down? Ideally the copy from preview to live would be an atomic transaction.
I've worked with this type of setup in MSSQL, but I'm fairly new to Postgres. So I'm open to hearing other ways to architect this (with Schemas perhaps?).
EDIT: The main reason to use separate databases is that I may need more than 1 target database (not just a single "live" database). I also may need to switch target databases on the fly without altering the source database schema.
I think what you're looking for is a "hot standby". This would be a separate instance of PostgreSQL, possibly on the same server but usually not, which is a near-real-time replica of the primary server.
In broad strokes, this is done by shipping the binary transaction logs from the primary server to the backup server, and then "replaying" them there. The exact mechanism for transmitting the logs may vary depending on your requirements.
Fortunately, the docs on this are excellent:
https://www.postgresql.org/docs/9.3/static/warm-standby.html
https://www.postgresql.org/docs/9.0/static/hot-standby.html
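In broad strokes, the docs above come down to a handful of settings. A rough sketch for a 9.x streaming hot standby is below; the host addresses, the replicator role and the password are made up, and the exact parameters vary between versions, so treat the linked docs as authoritative:

    # on the primary, postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3

    # on the primary, pg_hba.conf: let the standby connect for replication
    host  replication  replicator  192.168.1.20/32  md5

    # on the standby, postgresql.conf
    hot_standby = on

    # on the standby, recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.1.10 user=replicator password=secret'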
When I modify or create tables in Liferay 6.1 and deploy to the production server, Liferay automatically makes a backup of each table.
This backup takes a long time when a table has more than 10k records, and an eternity when it has 100k, even though the table hasn't been modified.
What can I do to optimize the deployment to the server?
Many thanks in advance,
At the moment I think only two options are available:
(easy way) Set "build.auto.upgrade=false" in /WEB-INF/src/service.properties to avoid any automatic updates, and perform the DB changes (if any) manually.
(hard way) Rewrite Liferay's ServiceBuilder so that it performs an update only on those tables that were changed. This will require EXT development, as it is a very core change, and for every subsequent Liferay version you will need to carefully review and upgrade it.
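For the easy way, the override is a single property in the plugin's service.properties (path as referenced above; shown here only to make the setting explicit):

    # /WEB-INF/src/service.properties
    build.auto.upgrade=false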
Liferay automatically makes a backup? Where to? This is new to me!
Also, you're describing a "no go" operation: you don't modify any tables in the database. Period. That's what the API is there for. If you do, prepare for disaster, sooner (if you're lucky) or later (according to Murphy, once you've forgotten what you changed manually, at which point you'll blame Liferay for the failure you caused by manipulating the database).
Do you have your own backup routine implemented that runs on every server restart? This is the only thing that I can imagine to happen here - in that case you'll need to modify your backup strategy. Or the database that you use - maybe a transaction log backup makes more sense than duplicating the table content into the same database...
We currently have one publisher and four subscribers using merge replication. Due to a change in the schema, somebody performed a "Reinitialize All Subscriptions" action without checking the "Upload the changes at the subscriber before reinitializing" option. When the replication agent for the first server was started, the database was cleaned out (all tables dropped and recreated) and all of the changes since the last successful synchronization were lost. At this point we decided to disable the replication schedule completely. My question is: is there a way to undo the "Reinitialize All Subscriptions" action? Preferably in such a way that the changes at the subscribers aren't lost.
Thanks in advance,
David
We were able to restore a backup of the publisher database taken prior to the reinitialize action. (This was done after creating a separate backup of the current publisher database.) After this we manually re-applied the changes that had been made since the reinitialize action, from the reinitialized database to the restored backup (we used Red Gate SQL Data Compare). At that point we were able to start the replication process and everything worked as it should. So apparently the snapshot information is stored entirely inside the database to which it applies.
A special thanks to Hilary Cotter for pointing this out.
I am creating an application with SQLite. I am performing all kinds of tasks on the database: insert, update, delete, select.
For each of these I open the database, execute my query using sqlite3_step(), and after getting the result I call sqlite3_finalize() and sqlite3_close(). This works well in most cases, but sometimes the database gets locked even though I follow the same process, and other times it works fine. I can't tell when it happens.
I need a way to unlock the database whenever it gets locked, or please guide me on how to check in code whether the database is locked, so that I can replace it with the resource database.
I am using a web service too, so data loss is not an issue for me.
Does it make sense to replace my database if it gets locked, or is there any way to unlock the database?
Open the database once at the beginning, and close it at the end, in the applicationWillTerminate function. You are only wasting time by opening and closing it in every database function.
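A minimal sketch of that pattern in C is below. The g_db handle, the function names and the users table are made up for illustration; the point is one sqlite3_open at startup, sqlite3_finalize on every statement, and a single sqlite3_close at termination (the busy-timeout call just makes SQLite retry briefly instead of failing immediately when the file is locked):

    #include <sqlite3.h>
    #include <stdio.h>

    static sqlite3 *g_db = NULL;   /* one shared connection for the whole app */

    /* Call once at startup. */
    int db_open(const char *path)
    {
        if (sqlite3_open(path, &g_db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(g_db));
            return -1;
        }
        /* Retry for up to 2 seconds instead of returning SQLITE_BUSY immediately. */
        sqlite3_busy_timeout(g_db, 2000);
        return 0;
    }

    /* Every query reuses g_db; the statement is always finalized. */
    int db_user_count(void)
    {
        sqlite3_stmt *stmt = NULL;
        int count = -1;
        if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM users;", -1, &stmt, NULL) == SQLITE_OK
            && sqlite3_step(stmt) == SQLITE_ROW) {
            count = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);   /* harmless no-op if stmt is NULL */
        return count;
    }

    /* Call once from applicationWillTerminate. */
    void db_close(void)
    {
        sqlite3_close(g_db);
        g_db = NULL;
    }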
As far as the database lock is concerned, the database gets locked when one application is still using it and another application tries to access it.
This could be your app, or possibly the SQLite Manager add-on in your Firefox.
I faced the same problem once, and what I did was:
Disable the option in the SQLite Manager add-on that remembers the previously opened database.
Restart Xcode and the simulator.
Make a copy of the sqlite file (on the desktop), delete it from the project, and then add it to the project again from the desktop.
The last step sounds weird, but I was going mad at the time.
I hope this helps.
Short version
If my process is terminated in the middle of a transaction, or while SQLite is committing a transaction, what are the chances that the database file will be corrupted?
Long version
My application uses an SQLite database for storage (directly, not via Core Data). I'm working on a new version of the application which will require an update to the database schema. On launch, the app will check the database and, if it needs updating, execute a series of SQL statements to do so.
Depending on the amount of data in the database, the update may be long running (on the order of seconds), so I need to consider the possibility that the process may be terminated before the update is completed. (For context, this is on an iPhone, where the processor is slow and the app may be terminated by an incoming phone call.) I will, of course, wrap the upgrade SQL statements in a transaction. Will that be enough to guarantee that the database will not be corrupted?
I'm assuming that transactions work as advertised, and that if the process is terminated in the middle of the transaction, the file will be OK. But I'm also assuming there is a window of time during the COMMIT where something can go wrong.
To play it safe, I could create a backup copy of the database file before starting the update, but if the transactions are safe then that would be overkill. It would also make the update process take longer, which increases the chance it would be interrupted, and then I'd have to consider that the file copy operation might be interrupted... I'd like to keep the code as simple as possible (but no simpler).
In the course of researching this question I've started reading "Atomic Commit In SQLite", which is more detail than I probably need to know, but is giving me faith that I don't need to second-guess SQLite's ability to protect the database file. But I'd still like to hear from Stack Overflow: is a transaction good enough, or should I be more cautious?
I have read the Atomic Commit in SQLite document. It may not be overkill if you really want to understand what's going on, but in a nutshell, a transaction goes like this:
Lock the database file
Create the rollback journal
Determine what portions of the database file are going to be changing
Write copies of those pages to the journal file
Write the journal file header
Write your intended changes to the database file
Delete the rollback journal (THIS IS THE COMMIT)
When the user is done talking to mom and restarts your app, and it tries to open the database file, if there is a rollback journal present, SQLite will write the original data back to the data file using a similarly safe process. Even if you lose the transaction, and even if the rollback itself is interrupted, it will eventually be taken care of once mom's nervous breakdown is properly thwarted and the user can run the app for more than a couple of seconds at a time.
If it were me, I would trust the transactions. With so many users of SQLite, even in embedded apps, I think transaction commit failures would be a very hot topic all over the net if they weren't working properly.
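For what it's worth, the upgrade-in-one-transaction approach from the question can be sketched like this; db is an already-open handle, and the ALTER/UPDATE statements are placeholders for the real migration script. If the process dies anywhere before the COMMIT finishes, SQLite replays the rollback journal the next time the file is opened, exactly as described above:

    #include <sqlite3.h>

    /* Run the whole schema migration as one atomic unit. Returns SQLITE_OK on success. */
    int migrate_schema(sqlite3 *db)
    {
        int rc = sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL);
        if (rc != SQLITE_OK) return rc;

        /* Placeholder migration statements: replace with the real upgrade script. */
        rc = sqlite3_exec(db,
                          "ALTER TABLE notes ADD COLUMN created_at TEXT;"
                          "UPDATE notes SET created_at = datetime('now') WHERE created_at IS NULL;",
                          NULL, NULL, NULL);

        if (rc == SQLITE_OK)
            rc = sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
        else
            sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);

        return rc;
    }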
Are you using CoreData with a SQLite backend? If so, I actually find that the best way to handle this problem is to create two separate NSManagedObjectContexts (a read-only and an editing). When the process completes, just save the "editing" context and then the two contexts will be in sync. If something happens during your operation, the editing context won't get saved, so you'll be fine.