How to catch an unexpected error running Liquibase in PowerShell

I have a small CI/CD script written in PowerShell, but I don't know how to stop it when Liquibase hits an unexpected error. All the changelog scripts are in SQL and work (with preconditions in place where needed), but I want more control over the CI/CD script. Currently, if a step throws an exception, the script keeps executing. The script updates several schemas, and some of them depend on each other, so order is important.
# update the first schema - ETL (tables, global temp tables, packages)
.\liquibase update --defaults-file=import.properties
# how do I stop this script if Liquibase hits an unexpected error?
# update the second schema - data (only tables and roles for data)
.\liquibase update --defaults-file=data.properties
# update the third schema - views, tables and other objects for exporting data
.\liquibase update --defaults-file=export.properties

Have you tried this?
# -PassThru is required here; without it Start-Process returns nothing,
# so $result.ExitCode would fail.
$result = Start-Process -FilePath 'path to liquibase' -ArgumentList "your liquibase arguments go here" -Wait -PassThru
if ($result.ExitCode -ne 0) {
    Write-Host 'something went wrong'
}
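Alternatively, since the question's script invokes liquibase directly rather than through Start-Process, you can check the automatic $LASTEXITCODE variable after each native call and stop at the first failure. A minimal sketch reusing the properties files from the question; Invoke-LiquibaseUpdate is a helper name made up for this example:
function Invoke-LiquibaseUpdate([string]$propsFile) {
    .\liquibase update --defaults-file=$propsFile
    # $LASTEXITCODE holds the exit code of the last native command
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Liquibase update failed for $propsFile"
        exit $LASTEXITCODE
    }
}

# Order matters, so stop at the first failing schema update.
Invoke-LiquibaseUpdate 'import.properties'
Invoke-LiquibaseUpdate 'data.properties'
Invoke-LiquibaseUpdate 'export.properties'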

Related

Parent PowerShell script doesn't print messages from child script in Azure Pipeline

We have a PowerShell script (A), which executes another PowerShell script (B) as another user (domain user). The child script (B) executes a range of SQL queries against a database on different servers in the domain. While the child script (B) is executing, it prints several status messages using Write-Host, which are in turn captured by the parent script (A) and printed to the console. This works perfectly when I execute the parent script (A) manually from the PowerShell prompt on my development machine (in the same domain), and I get all the output from the child script (B) as well.
Below is the part of the code in the parent script (A) that executes the child script (B).
try {
    $ProcessInfo = New-Object System.Diagnostics.ProcessStartInfo
    $ProcessInfo.FileName = "powershell.exe"
    $ProcessInfo.Domain = "jmres"
    $ProcessInfo.UserName = $username
    $ProcessInfo.Password = ConvertTo-SecureString $password -AsPlainText -Force
    $ProcessInfo.RedirectStandardError = $true
    $ProcessInfo.RedirectStandardOutput = $true
    $ProcessInfo.UseShellExecute = $false
    $ProcessInfo.Arguments = $file, $fileArguments
    $Process = New-Object System.Diagnostics.Process
    $Process.StartInfo = $ProcessInfo
    $Process.Start() | Out-Null
    $stdOutput = $Process.StandardOutput.ReadToEnd()
    $stdError = $Process.StandardError.ReadToEnd()
    Write-Host $stdOutput
    Write-Host $stdError
    $Process.WaitForExit()
}
catch {
    Write-Host "Could not execute script."
    Write-Error $Error[0]
    Exit
}
Now in our Azure Release Pipeline we execute the parent script (A) automatically, via a PowerShell task, when deploying our application. The child script (B) runs fine as the specified user, and I can verify that SQL queries are executed correctly against the given SQL server.
Okay. So the problem here is that status messages from the parent script (A) are printed to the PowerShell console in the pipeline, but status messages from the child script (B) are not. I have been struggling for days now trying to figure out why.
Things I have tried:
Replacing Write-Host with Echo in the child script (B) to redirect output to the pipeline instead of directly to the console as described here.
Tried to only print the StandardOutput stream, and omit the StandardError stream to avoid a potential deadlock as described on this site.
Tried turning off redirection (set $false) of the StandardOutput and StandardError streams so the output goes to the child script's (B) own console window, and then hoped for it to show up in the Azure Pipeline console.
Nothing works. The output in the Azure Pipeline console is two empty lines where I would expect the output from the child script (B) to be shown. My finding thus far is that the problem is these two lines:
$stdOutput = $Process.StandardOutput.ReadToEnd()
$stdError = $Process.StandardError.ReadToEnd()
The two variables are empty. Why doesn't ReadToEnd() get anything from the streams? And why does it work when executed manually from the Powershell prompt but not in Azure Pipeline? Does anyone have any ideas what to try next?
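For what it's worth, one deadlock-safe variant is to let Start-Process redirect both streams to files instead of reading them in-process, which avoids the ReadToEnd() buffer issue mentioned above. A sketch only, assuming the same $file, $fileArguments, $username and $password variables and the jmres domain from the code above:
$secure = ConvertTo-SecureString $password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("jmres\$username", $secure)
$out = [System.IO.Path]::GetTempFileName()
$err = [System.IO.Path]::GetTempFileName()
# Redirecting to files means neither stream buffer can fill up and block the child.
Start-Process -FilePath powershell.exe -ArgumentList "-File `"$file`" $fileArguments" `
    -Credential $cred -RedirectStandardOutput $out -RedirectStandardError $err -Wait
Write-Host (Get-Content $out -Raw)
Write-Host (Get-Content $err -Raw)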
UPDATE 1 (important):
So this is embarrassing. This morning I discovered a huge brain fart of mine. I thought I had verified that SQL queries were fired against the database. I was wrong. The child script (B) is not executed at all. Hence no SQL queries are fired against the database when the parent script (A) is executed from the Azure Pipeline. That answers why I get no output. The strange thing is that it works when I execute the parent script (A) manually from the PowerShell prompt.
This is clearly an issue regarding user context while executing scripts and not, as I thought, capturing output from the child script (B). I don't know whether to delete this question. An admin must decide.
Please check Logging commands for how to print to the console in a pipeline. For example:
Set the variables:
- pwsh: |
    Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
    Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
    Write-Host "##vso[task.setvariable variable=outputSauce;isoutput=true]canned goods"
  name: SetVars
Read the variables:
- pwsh: |
    Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
    Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
    Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
    Write-Host "Future jobs can also see $env:SETVARS_OUTPUTSAUCE"
    Write-Host "Future jobs can also see $(SetVars.outputSauce)"
Console output:
Non-secrets automatically mapped in, sauce is crushed tomatoes
Secrets are not automatically mapped in, secretSauce is
You can use macro replacement to get secrets, and they'll be masked in the log: ***
Future jobs can also see canned goods
Future jobs can also see canned goods

Scheduling a PowerShell process does not yield the same results as when I run it manually

I wrote a small PowerShell script that I am using to query the server log, clean the return values and use some of the results to perform some server maintenance. However, when I schedule it, the save-to-file piece does not write the whole content to the file; it gets truncated, exactly as posted below. As you can observe, the end of the line is truncated, with three dots added to replace the missing values:
Login failed for user 'sa'. Reason: An error occurred while evaluating the password. [CLIENT: 2...
However, if I run the code manually with Local Admin access, the content gets saved to the local file exactly like this:
Login failed for user 'sa'. Reason: An error occurred while evaluating the password. [CLIENT: 112.103.198.2]
Why is this the case when I run the process or PS file on a schedule? BTW, I tried running it under the SYSTEM context with full or highest privileges, and I even scheduled it with the same admin account that I use to run it manually, and I still do not get the full content of the event that I save.
This is creating an issue and I am not able to use the content to process the IP.
Here is the PS code that I am using to query and save the content to file:
$SQL = 'C:\SQL.txt'
Remove-Item $SQL -ErrorAction Ignore
Get-EventLog -LogName Application | Where-Object {$_.EventID -eq 18456} |
    Select-Object -Property Message | Out-File $SQL
The problem lies with Out-File, because it has a default character limit of 80 per line. You can change it with the -Width parameter and give it a value of, say, 200. Set-Content, however, doesn't have these limits built in, so it might be a more suitable option.
All that being said, I am not sure why it behaves one way when run manually and another way when the system runs it.
Out-File defaults to Unicode when writing files.
Set-Content defaults to ASCII when writing files.
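For example, either variant below avoids the truncation. A sketch based on the question's code; -Width 400 is an arbitrary value chosen to fit the longest expected message:
# Variant 1: keep Out-File but raise the per-line width limit.
Get-EventLog -LogName Application | Where-Object {$_.EventID -eq 18456} |
    Select-Object -Property Message | Out-File $SQL -Width 400

# Variant 2: write the raw message strings with Set-Content, which has no width limit.
Get-EventLog -LogName Application | Where-Object {$_.EventID -eq 18456} |
    Select-Object -ExpandProperty Message | Set-Content -Path $SQL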

How to determine if Write-Host will work for the current host

Is there any sane, reliable contract that dictates whether Write-Host is supported in a given PowerShell host implementation, in a script that could be run against any reasonable host implementation?
(Assume that I understand the difference between Write-Host and Write-Output/Write-Verbose and that I definitely do want Write-Host semantics, if supported, for this specific human-readable text.)
I thought about trying to interrogate the $Host variable, or $Host.UI/$Host.UI.RawUI but the only pertinent differences I am spotting are:
in $Host.Name:
The Windows powershell.exe commandline has $Host.Name = 'ConsoleHost'
ISE has $Host.Name = 'Windows PowerShell ISE Host'
SQL Server Agent job steps have $Host.Name = 'Default Host'
I have none of the non-Windows versions installed, but I expect they are different
in $Host.UI.RawUI:
The Windows powershell.exe commandline returns values for all properties of $Host.UI.RawUI
ISE returns no value (or $null) for some properties of $Host.UI.RawUI, e.g. $Host.UI.RawUI.CursorSize
SQL Server Agent job steps return no values for all of $Host.UI.RawUI
Again, I can't check in any of the other platforms
Maintaining a list of $Host.Name values that support Write-Host seems like it would be a bit of a burden, especially with PowerShell being cross-platform now. I would reasonably want the script to be able to be called from any host and just do the right thing.
Background
I have written a script that can be reasonably run from within the PowerShell command prompt, from within the ISE or from within a SQL Server Agent job. The output of this script is entirely textual, for human reading. When run from the command prompt or ISE, the output is colorized using Write-Host.
SQL Server jobs can be set up in two different ways, and both support capturing the output into the SQL Server Agent log viewer:
via a CmdExec step, which is simple command-line execution, where the Job Step command text is an executable and its arguments, so you invoke the powershell.exe executable. Captured output is the stdout/stderr of the process:
powershell.exe -Command x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
via a PowerShell step, where the Job Step command text is raw PS script interpreted by its own embedded PowerShell host implementation. Captured output is whatever is written via Write-Output or Write-Error:
#whatever
Do-WhateverPowershellCommandYouWant
x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
Due to some other foibles of the SQL Server host implementation, I find that you can emit output using either Write-Output or Write-Error, but not both. If the job step fails (i.e. if you throw or Write-Error 'foo' -EA 'Stop'), you only get the error stream in the log and, if it succeeds, you only get the output stream in the log.
Additionally, the embedded PS implementation does not support Write-Host. Up to at least SQL Server 2016, Write-Host throws a System.Management.Automation.Host.HostException with the message A command that prompts the user failed because the host program or the command type does not support user interaction.
To support all of my use-cases, so far, I took to using a custom function Write-Message which was essentially set up like (simplified):
$script:can_write_host = $true
$script:has_errors = $false
$script:message_stream = New-Object Text.StringBuilder

function Write-Message {
    Param($message, [Switch]$iserror)
    if ($script:can_write_host) {
        $private:color = if ($iserror) { 'Red' } else { 'White' }
        try { Write-Host $message -ForegroundColor $private:color }
        catch [Management.Automation.Host.HostException] { $script:can_write_host = $false }
    }
    if (-not $script:can_write_host) {
        $script:message_stream.AppendLine($message) | Out-Null
    }
    if ($iserror) { $script:has_errors = $true }
}

try {
    <# MAIN SCRIPT BODY RUNS HERE #>
}
catch {
    Write-Message -Message ("Unhandled error: " + ($_ | Format-List | Out-String)) -IsError
}
finally {
    if (-not $script:can_write_host) {
        if ($script:has_errors) { Write-Error ($script:message_stream.ToString()) -EA 'Stop' }
        else { Write-Output ($script:message_stream.ToString()) }
    }
}
As of SQL Server 2019 (perhaps earlier), it appears Write-Host no longer throws an exception in the embedded SQL Server Agent PS host, but is instead a no-op that emits nothing to either output or error streams. Since there is no exception, my script's Write-Message function can no longer reliably detect whether it should use Write-Host or StringBuilder.AppendLine.
The basic workaround for SQL Server Agent jobs is to use the more-mature CmdExec step type (where Write-Output and Write-Host both get captured as stdout), but I do prefer the PowerShell step type for (among other reasons) its ability to split the command reliably across multiple lines, so I am keen to see if there is a more-holistic, PowerShell-based approach to solve the problem of whether Write-Host does anything useful for the host I am in.
Just check whether your host is UserInteractive or a service-type environment.
$script:can_write_host = [Environment]::UserInteractive
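If you want to exclude the SQL Server Agent host explicitly as well, you could combine this with the host name observed in the question. A sketch; 'Default Host' is the name the question reports for SQL Server Agent job steps:
$script:can_write_host = [Environment]::UserInteractive -and ($Host.Name -ne 'Default Host')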
Another way to track the output of a script in real time is to push that output to a log file and then monitor the file using trace32. This is just a workaround, but it might work out for you.
Add-Content -Path "C:\Users\username\Documents\PS_log.log" -Value $variablewithvalue

How to resume PowerShell script after system reboot?

I am using the PowerShell script below to delete SharePoint alerts.
foreach ($site in Get-SPSite -Limit All)
{
    "Site Collection $site"
    foreach ($web in $site.allwebs)
    {
        " Web $web"
        $c = $web.alerts.count
        " Deleting $c alerts"
        for ($i = $c - 1; $i -ge 0; $i--) { $web.alerts.delete($i) }
    }
}
There are around a million alerts in each of the Dev, Test and UAT environments. It takes many hours to delete all the alerts in one go, and since the servers are automatically restarted periodically, the script never gets to execute fully.
I am aware that to resume PowerShell scripts after a reboot we can use PowerShell Workflow with Checkpoint-Workflow, but I am not sure where to place the checkpoints and PSPersist.
Need help to resume deleting of alerts in the above script after system reboot.
Update: After trying to implement it, I realized that SharePoint PowerShell cmdlets cannot be coupled with PowerShell Workflow. It doesn't allow
Add-PSSnapin "Microsoft.SharePoint.PowerShell"
to be added to workflows:
Workflow SPAlerts
{
    # The tweaks below didn't work:
    InlineScript { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }
    Invoke-Command -ScriptBlock { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }
    Invoke-Expression "......"
}
The MSDN documentation states:
You can place checkpoints anywhere in a workflow, including before and after each command or expression, or after each activity in a workflow. Windows PowerShell Workflow allows you to do so.
... Add a checkpoint after each significant part of the workflow completes; a part that you do not want to repeat if the workflow is interrupted.
... When the workflow definition is complete, use the testing process to refine your checkpoints. Add and remove (comment out) checkpoints to achieve both a reasonable running time and a reasonable recovery time.
Looking at your code, I would place the checkpoint right after "Site Collection $site" if deleting the alerts for a given website takes a reasonable time on average. If there are just a few sites, each containing tons of alerts, then I would place it at the start of the inner foreach.
I would definitely not place it inside the worker loop which deletes the individual alerts.
I would also suggest you look at the foreach -Parallel capability of workflows to make the deletion parallel. That way even the last site/website should get its turn, even if the server is restarted often.
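A conceptual sketch of that checkpoint placement (ignoring the snap-in limitation from the update; Clear-SPAlerts and $siteUrls are hypothetical names, and the alert-deletion body is elided):
Workflow Clear-SPAlerts
{
    param([string[]]$siteUrls)   # hypothetical: one URL per site collection
    foreach ($url in $siteUrls)
    {
        "Site Collection $url"
        # ...delete all alerts for this site collection here...
        Checkpoint-Workflow   # persist progress so a reboot resumes at the next site
    }
}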

Only allow 1 invocation of a PowerShell script to run

I have a PowerShell script which starts 2 different Access database applications. This is in a volunteer setting, and when the computer is first turned on it can take a minute or two for the startup to complete. Sometimes the user gets impatient and clicks the shortcut to the PowerShell script more than once, causing the Access databases to start multiple times.
To solve this, I thought that the first thing the script would do is create a file. If the create failed because the file already exists, it would ask the user if they wanted to continue. If yes, run the rest of the script, otherwise exit. The problem is that the "catch" after the "try" isn't catching anything. How do I fix this, and/or what other solutions do people have?
# Create lock_file. If it already exists, another copy of this script is
# probably running and the catch block should run.
try { New-Item ($dbDir + "lock_file") -type file | Out-Null }
catch
{
    $answer = [System.Windows.Forms.MessageBox]::Show("It appears that the database is already starting. Start again?", "Start Again", 4)
    if ($answer -eq "NO")
    { Exit }
}
Catch only works for terminating errors. Cmdlets that raise errors but continue processing (so-called non-terminating errors) will not be caught by catch. One way to change this behavior is to set the cmdlet's -ErrorAction parameter, which is common to most cmdlets. In your case I would do this:
try { New-Item ($dbDir + "lock_file") -type file -ErrorAction Stop | Out-Null }
The catch block should trigger now.
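Putting it together, a minimal sketch of the whole pattern. It assumes $dbDir is already defined, loads the System.Windows.Forms assembly explicitly (the MessageBox call fails if it isn't loaded), and adds a Remove-Item cleanup that is not part of the original script:
Add-Type -AssemblyName System.Windows.Forms

try {
    # -ErrorAction Stop turns the non-terminating error into a terminating one,
    # so an existing lock file lands us in the catch block.
    New-Item ($dbDir + "lock_file") -ItemType File -ErrorAction Stop | Out-Null
}
catch {
    $answer = [System.Windows.Forms.MessageBox]::Show("It appears that the database is already starting. Start again?", "Start Again", 4)
    if ($answer -eq "No") { Exit }
}

# ...start the two Access applications here...

# Remove the lock file when startup is done so the next run starts clean.
Remove-Item ($dbDir + "lock_file") -ErrorAction Ignore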