Storing data in an array vs. a text file - PowerShell

My database migration automation script used to require the user to copy the database names into a text file, then the script would read in that text file and know which databases to migrate.
I now have a form where the user selects which databases to migrate, then my script automatically inserts those database names into the text file, then reads in that text file later in the script.
Would it be better practice to move away from the text file altogether and just store the data in an array or some other structure?
I'm also using PowerShell.

I'm no expert on this, but I would suggest keeping the text file even if you switch to the array- or form-only approach. You can keep it as a sort of log file: you no longer need to read from it, but you can still write to it so you can quickly determine which databases were being migrated if an error happens.
In a production environment you probably have more sophisticated logging tools, but I'd keep the file anyway for emergencies when you have to debug.
When the migration finishes and the script determines that everything is as it should be, you can either clear the text file or keep it: append the date and time and store it as a quick reference, in case another task comes up and you need quick access to the databases that were migrated on a certain date.
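To make that concrete, here is a rough sketch of the combined approach (the variable supplying the selections and the log path are made up for illustration): work from the array in memory during the run, and only write to a file as a record of what ran.

    # Hypothetical: $selectedDatabases would come from the form's checked items.
    $selectedDatabases = @('SalesDB', 'HRDB', 'ReportingDB')

    foreach ($db in $selectedDatabases) {
        # Invoke-DatabaseMigration is a placeholder for the real migration step.
        # Invoke-DatabaseMigration -Name $db
    }

    # Keep a timestamped audit file so there is still a quick reference if something
    # goes wrong or you later need to check what was migrated on a given date.
    $logDir = 'C:\MigrationLogs'
    if (-not (Test-Path $logDir)) { New-Item -ItemType Directory -Path $logDir | Out-Null }
    $logPath = Join-Path $logDir "migrated_$(Get-Date -Format 'yyyyMMdd_HHmmss').txt"
    $selectedDatabases | Set-Content -Path $logPath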

Related

Working with PowerShell and file based DB operations

I have a scenario where I have a lot of files listed in a CSV file that I need to do operations on. The script needs to handle being stopped or failing, and then continue from where it left off. In a database scenario this would be fairly simple: I would have an "updated" column and update it when the operation for that line has completed. I have looked at whether I could somehow update the CSV on the fly, but I don't think that is possible. I could start keeping multiple files, but that's not very elegant. Can anyone recommend some kind of simple file-based DB-like framework, where from PowerShell I could create a new database file (maybe JSON), read from it, and update it on the fly?
If your problem is really so complex that you actually need something like a local database, then consider SQLite, which was built for such scenarios.
In your case, since you process a CSV row by row, I assume storing the info for the current row only (line number, status, etc.) will be enough.
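If SQLite feels heavier than you need, here is a minimal sketch of the resume idea in PowerShell, tracking progress in a small JSON sidecar file (the file names and the Process-Row step are assumptions for illustration):

    $csvPath      = 'C:\work\items.csv'        # hypothetical input file
    $progressPath = "$csvPath.progress.json"   # sidecar file remembering the last completed row

    # Resume from where a previous run stopped, if there was one.
    $lastDone = -1
    if (Test-Path $progressPath) {
        $lastDone = (Get-Content $progressPath -Raw | ConvertFrom-Json).LastCompletedRow
    }

    $rows = Import-Csv $csvPath
    for ($i = 0; $i -lt $rows.Count; $i++) {
        if ($i -le $lastDone) { continue }     # already handled on a previous run

        # Process-Row is a placeholder for the real per-row operation.
        # Process-Row $rows[$i]

        # Record progress after each row, so a crash loses at most the current row.
        @{ LastCompletedRow = $i } | ConvertTo-Json | Set-Content -Path $progressPath
    }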

Using Data compare to copy one database over another

I've used the Data Compare tool to update schema between the same DBs on different servers, but what if so many things have changed (including data) that I simply want to REPLACE the target database?
In the past I've just used T-SQL: taken a backup, then restored onto the target with the REPLACE option and/or MOVE if the data and log files are on different drives. I'd rather have an easier way to do this.
You can use Schema Compare (also by Red Gate) to compare the schema of your source database to a blank target database (and update), then use Data Compare to compare the data in them (and update). This should leave you with the target the same as the source. However, it may well be easier to use the backup/restore method in that instance.
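If you do fall back to backup/restore, here is a hedged sketch of scripting it with Invoke-Sqlcmd from the SqlServer PowerShell module; the server names, paths, and the logical file names in the MOVE clauses are invented and would need to match your environment.

    # Requires the SqlServer module (Install-Module SqlServer). All names and paths are examples.
    $backupFile = '\\backupshare\SourceDb.bak'

    # 1. Back up the source database.
    Invoke-Sqlcmd -ServerInstance 'SourceServer' -Query "BACKUP DATABASE [SourceDb] TO DISK = N'$backupFile' WITH INIT;"

    # 2. Restore over the target, relocating the data and log files if they live on different drives.
    $restore = "RESTORE DATABASE [TargetDb] FROM DISK = N'$backupFile' " +
               "WITH REPLACE, " +
               "MOVE 'SourceDb'     TO N'D:\Data\TargetDb.mdf', " +
               "MOVE 'SourceDb_log' TO N'L:\Logs\TargetDb_log.ldf';"
    Invoke-Sqlcmd -ServerInstance 'TargetServer' -Query $restore -QueryTimeout 65535  # generous timeout for a long restore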

Multiple scripts accessing common data file in parallel - possible?

I have some Perl scripts on a Unix-based server which access a common text file containing server IPs and login credentials, which are used to log in and perform routine operations on those servers. Currently, these scripts are run manually at different times.
I would like to know: if I cron these scripts to execute at the same time, will it cause any issues with accessing data from the text file (file locking?), since all the scripts will essentially be accessing the data file at the same time?
Also, is there a better way to do it (without using a DB - since I can't, due to some server restrictions) ?
It depends on what kind of access.
There is no problem with reading the data file from multiple processes. If you want to update the data file while it might be being read, it's better to do it atomically (e.g. write a new version under a different name, then rename it).
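The question is about Perl, but the write-then-rename pattern looks much the same anywhere; here is a rough sketch in PowerShell, since that is the language used elsewhere on this page (paths and record format are made up):

    $dataFile = '/opt/scripts/servers.txt'   # the shared file the other scripts read
    $tempFile = "$dataFile.tmp"

    # Whatever the updated records are (format here is just an example).
    $newContent = @('10.0.0.1,user,pass', '10.0.0.2,user,pass')

    # Build the complete new version in a temporary file first...
    $newContent | Set-Content -Path $tempFile

    # ...then swap it into place with a single rename, so readers only ever see
    # either the old complete file or the new complete file, never a half-written one
    # (on the same filesystem a rename is effectively atomic).
    Move-Item -Path $tempFile -Destination $dataFile -Force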

Extract Active Directory into SQL database using VBScript

I have written a VBScript to extract data from Active Directory into a record set. I'm now wondering what the most efficient way is to transfer the data into a SQL database.
I'm torn between:
Writing it to an Excel file and then firing an SSIS package to import it, or...
Within the VBScript, iterating through the dataset in memory and submitting 3000+ INSERT commands to the SQL database.
Would the latter option result in 3000+ round trips communicating with the database and therefore be the slower of the two options?
Sending an insert row by row is always the slowest option. This is what is known as Row By Agonizing Row, or RBAR. You should avoid that if possible and take advantage of set-based operations.
Your other option, writing to an intermediate file, is a good one; I agree with @Remou in the comments that you should probably pick CSV rather than Excel if you go that route.
I would propose a third option. You already have the design in VB contained in your VBScript. You should be able to convert this easily to a Script Component in SSIS. Create an SSIS package, add a Data Flow task, add a Script Component (as a data source {example here}) to the flow, write your fields out to the output buffer, and then add a SQL destination, saving yourself the step of writing to an intermediate file. This is also more secure, as you don't have your AD data on disk in plain text anywhere during the process.
You don't mention how often this will run or if you have to run it within a certain time window, so it isn't clear that performance is even an issue here. "Slow" doesn't mean anything by itself: a process that runs for 30 minutes can be perfectly acceptable if the time window is one hour.
Just write the simplest, most maintainable code you can to get the job done and go from there. If it runs in an acceptable amount of time then you're done. If it doesn't, then at least you have a clean, functioning solution that you can profile and optimize.
If you already have it in a dataset, and if it's SQL Server 2008+, create a user-defined table type and send the whole dataset in as an atomic unit.
And if you go the SSIS route, I have a post covering Active Directory as an SSIS Data Source.
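As a hedged sketch of that table-valued parameter suggestion, shown in PowerShell rather than VBScript (the table type, stored procedure, column names, and connection string below are invented for illustration; they just show the shape of the call):

    # Assumes objects like these already exist on the SQL Server 2008+ side:
    #   CREATE TYPE dbo.AdUserType AS TABLE (SamAccountName nvarchar(256), DisplayName nvarchar(256));
    #   CREATE PROCEDURE dbo.usp_ImportAdUsers @Users dbo.AdUserType READONLY AS
    #       INSERT INTO dbo.AdUsers (SamAccountName, DisplayName)
    #       SELECT SamAccountName, DisplayName FROM @Users;

    $table = New-Object System.Data.DataTable
    [void]$table.Columns.Add('SamAccountName', [string])
    [void]$table.Columns.Add('DisplayName',    [string])
    # ...fill $table from the Active Directory record set instead of this sample row...
    [void]$table.Rows.Add('jdoe', 'Jane Doe')

    $conn = New-Object System.Data.SqlClient.SqlConnection 'Server=.;Database=Staging;Integrated Security=SSPI'
    $cmd  = $conn.CreateCommand()
    $cmd.CommandType = [System.Data.CommandType]::StoredProcedure
    $cmd.CommandText = 'dbo.usp_ImportAdUsers'

    $param = $cmd.Parameters.AddWithValue('@Users', $table)
    $param.SqlDbType = [System.Data.SqlDbType]::Structured
    $param.TypeName  = 'dbo.AdUserType'

    $conn.Open()
    [void]$cmd.ExecuteNonQuery()   # one round trip for the whole dataset, instead of 3000+ inserts
    $conn.Close()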

Archiving text log files in postgresql

We are writing a testing framework from scratch using Perl. Each test case writes a log file and we are planning to archive the resulting log files created by each test case for reporting purposes.
We are using a PostgreSQL database for storing the results. How do I archive the text log files in the PostgreSQL database? I googled and found that the bytea datatype can be used to store files in binary format. If I do so, how do I retrieve them back as text?
Any ideas will be appreciated.
If your log files are text files, then you should use the TEXT datatype to store them. If the log files are binary (or, perhaps, compressed text files), then you'd want to use BYTEA. In either case, you can INSERT and SELECT them just like any other column type when using DBI. If they're really large then you might want to play with the LongReadLen DBI parameter and read the DBI manual section on BLOBs.
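The question uses Perl and DBI, but purely as an illustration of "INSERT and SELECT it like any other column", here is a rough sketch of the same idea using the Npgsql .NET driver from PowerShell (the driver path, connection string, table, and column names are all assumptions):

    # Assumes the Npgsql .NET data provider has been loaded, e.g.:
    #   Add-Type -Path '.\Npgsql.dll'
    # and a table like: CREATE TABLE test_logs (test_name text, log_text text);

    $conn = New-Object Npgsql.NpgsqlConnection 'Host=localhost;Database=results;Username=tester;Password=secret'
    $conn.Open()

    # Store the whole log file in a TEXT column.
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'INSERT INTO test_logs (test_name, log_text) VALUES (@name, @log)'
    [void]$cmd.Parameters.AddWithValue('name', 'test_case_42')
    [void]$cmd.Parameters.AddWithValue('log',  (Get-Content '.\test_case_42.log' -Raw))
    [void]$cmd.ExecuteNonQuery()

    # Reading it back is an ordinary SELECT; the value comes back as a plain string.
    $cmd2 = $conn.CreateCommand()
    $cmd2.CommandText = "SELECT log_text FROM test_logs WHERE test_name = 'test_case_42'"
    $logText = $cmd2.ExecuteScalar()

    $conn.Close()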