I need something sophisticated - somewhat like Liquibase, but not for a DB. We have many *.bat files that need to be executed for each release on PROD. The problem is that only new and updated files should be executed; the others should be skipped.
Is there a tool that does this? Thanks!
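In case it helps, the mechanism such a tool needs is fairly small: keep a ledger of which script versions have already run, and compare file hashes against it on each release. Here is a minimal sketch of that idea in Python, assuming a hypothetical scripts/ folder and an executed.json ledger (both names are made up for illustration):

```python
# Sketch: run only .bat files that are new or changed since the last release.
# The scripts/ folder and executed.json ledger are hypothetical names.
import hashlib
import json
import pathlib
import subprocess

SCRIPTS_DIR = pathlib.Path("scripts")
LEDGER = pathlib.Path("executed.json")

def file_hash(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def main() -> None:
    executed = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    for bat in sorted(SCRIPTS_DIR.glob("*.bat")):
        digest = file_hash(bat)
        if executed.get(bat.name) == digest:
            continue  # unchanged since the last release - skip it
        subprocess.run(["cmd", "/c", str(bat)], check=True)
        executed[bat.name] = digest
        LEDGER.write_text(json.dumps(executed, indent=2))  # persist after each success

if __name__ == "__main__":
    main()
```

The ledger is rewritten after every successful script, so a failed release can be rerun and will pick up where it stopped.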
In Play (Scala) I have a number of evolutions in conf/evolutions/default called 1.sql, 2.sql, etc.
Some of these are from playing around, and some are from tutorial code I no longer use.
How do I get rid of these evolutions?
The obvious approach of deleting the evolution files does not seem to work: if you remove the file, the evolution is still applied. Altering the file does work, so the current workaround is emptying the .sql files when they are no longer required.
In virtually every migration framework/library/approach it works the same:
If you are using migrations/evolutions seriously (you deploy to production, or at least you cooperate with other people who wouldn't want their environment broken), you simply don't remove a migration. If you want to undo it, write a new migration that reverts the previous one.
If the changes are only on your own branch, you haven't deployed them anywhere, and you haven't shared your code, then remove the file (or the lines from the file) and drop and recreate the database. Executed migrations are recorded in the database they run against (at least by the majority of the tools I've used), so to get rid of a migration it also needs to be removed from the table that stores executed migrations. To keep things consistent, the easiest way is to drop the database and rerun the migrations/evolutions.
I cannot stress this enough: if you have deployed your code anywhere, do not remove the migration. All hell can break loose. But if you haven't deployed it anywhere, because it's e.g. just a tutorial, just drop the database and do whatever you want.
I have a scenario where I have a lot of files listed in a CSV file that I need to do operations on. The script needs to handle being stopped or failing, and then continue from where it left off. In a database scenario this would be fairly simple: I would have an "updated" column and update it when the operation for that line has completed. I have looked at whether I could somehow update the CSV on the fly, but I don't think that is possible. I could start keeping multiple files, but that's not very elegant. Can anyone recommend some kind of simple file-based DB-like framework, where from PowerShell I could create a new database file (maybe JSON), read from it, and update it on the fly?
If your problem is really so complex that you actually need something like a local database, then consider going with SQLite, which was built for such scenarios.
In your case, since you process the CSV row by row, I assume storing the info for the current row only (line number, status, etc.) will be enough.
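To illustrate the checkpoint idea (sketched in Python here rather than PowerShell, purely to keep it short; input.csv and progress.db are hypothetical names): the position of the last processed row is written to SQLite after each row, so a restarted run resumes where it left off.

```python
# Sketch of the checkpoint pattern with SQLite. File names are made up.
import csv
import sqlite3

conn = sqlite3.connect("progress.db")
conn.execute("CREATE TABLE IF NOT EXISTS progress (id INTEGER PRIMARY KEY, last_line INTEGER)")
row = conn.execute("SELECT last_line FROM progress WHERE id = 1").fetchone()
last_done = row[0] if row else 0

with open("input.csv", newline="") as f:
    for line_no, record in enumerate(csv.reader(f), start=1):
        if line_no <= last_done:
            continue  # already handled in a previous run
        # ... do the real work for this record here ...
        conn.execute("INSERT OR REPLACE INTO progress (id, last_line) VALUES (1, ?)", (line_no,))
        conn.commit()  # persist after each row so a crash loses at most one row
conn.close()
```

The same pattern works with a small JSON progress file instead of SQLite if that feels lighter; SQLite just gives you atomic writes for free.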
Trying to find an example or a starting point for a project I have to restore databases into a test environment. I have a list of 40+ SQL instances, databases, and backup locations, and I'd like to use the cmdlet Restore-SqlDatabase but only allow 3 restores to occur at a time. To minimize the impact on our network/storage I don't want to initiate all 40+ restores at once. The list of what needs to be restored is contained in a CSV, and when testing I can get the restores to go, but I'm not sure what options I'd have to run only 3 at a time.
I used the RunspaceFactory example and modified it to use a script block to execute Restore-SqlDatabase. I'm sure there are cleaner or simpler ways of doing this, but so far it seems to work.
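For what it's worth, the throttling part on its own is just "a worker pool of size 3". A rough sketch of that pattern in Python (the restore call is a placeholder and the restores.csv column names are assumptions; in the real setup the work item stays a Restore-SqlDatabase invocation in PowerShell):

```python
# Conceptual sketch of "only 3 at a time": a pool of 3 workers draining a CSV of restore jobs.
import csv
from concurrent.futures import ThreadPoolExecutor

def restore(row: dict) -> None:
    # Placeholder for the real restore of one instance/database/backup triple.
    print(f"restoring {row['database']} on {row['instance']} from {row['backup']}")

with open("restores.csv", newline="") as f:
    jobs = list(csv.DictReader(f))

with ThreadPoolExecutor(max_workers=3) as pool:  # at most 3 restores in flight
    for _ in pool.map(restore, jobs):
        pass
```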
I am working on a system that currently has a number of environments (test, stage, live, etc.), each with its own database. So far these databases have been kept in sync by running the same update scripts on each of them.
We are now moving to using EF6 code first migrations, and would also like to start writing some automated system tests using LocalDB.
I've found https://msdn.microsoft.com/pt-pt/data/dn579398 which describes two options for adding an initial migration.
The first method creates an empty initial migration which will work great for the existing environments but won't help with creating LocalDBs for testing.
The second method creates a migration to bring up the whole database from scratch (minus things EF doesn't care about, such as sprocs and views). This would be acceptable for testing, but not good for actually recreating a database. It also requires you to manually comment out the Up method, run the migration on all existing databases, and then put the Up method back. As it will take a while to get the migration through all the environments, I'm not keen on this. It also violates one of the principles of migrations, which is that they shouldn't be edited once they've been released.
Having some kind of conditionality in migrations would solve my problem (e.g. if (tableExists("A_table_in_the_existing_database")) return;), but there doesn't seem to be anything like that available.
The best I've come up with is to dump the existing database schema from SQL server to a file (which has the advantage of preserving sprocs, views, etc) and then use option 2 above, except instead of using the generated Up method I'll run the SQL file.
However, this still has the drawbacks of option 2 mentioned above, so I'd be very happy to learn of a better way of handling this situation.
Edit:
Would this work: run the commented-out initial migration on one database, then dump out the __MigrationHistory table and insert it into the other databases? That way I wouldn't have to wait for the migration to make it through all the environments before I could safely uncomment it.
EF 6.1.2 has support for running SQL embedded as a resource within the assembly containing the migrations using the SqlResource method.
I think I'd go with scripting out your existing schema and using an initial migration that executes SqlResource in its Up method. If all it's doing is a bunch of IF EXISTS checks, then it shouldn't take too long to run. Otherwise, scripting out __MigrationHistory will also work if you just want to run the migration once locally and apply the history to all your other databases by hand.
I am new to databases and I need one for a project. My problem is as follows: I have 3 scripts that write to a Postgres DB and another script that does updates on it. So far I haven't had any issues with that. However, now I also need to read that data at the same time; more specifically, I need to read the last 1 minute of data from that DB while the writes are happening, and I have another script for that. But when I run this script, I can't see any of the writes from the scripts that are supposed to be writing. Any suggestions?
Chances are your other scripts haven't COMMITed their data yet, which means that their updates aren't visible to your queries yet.
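For example, if the writers are Python scripts using psycopg2 (an assumption, the question doesn't say what they use), nothing is visible to other sessions until the writing transaction commits:

```python
# Assumed setup: a writer using psycopg2. The table name and connection string are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
cur.execute("INSERT INTO readings (ts, value) VALUES (now(), %s)", (42,))
conn.commit()  # without this, other connections never see the new row

# Alternatively, turn autocommit on so every statement commits immediately:
# conn.autocommit = True
```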