TFS 2017 Build: Cannot Run PowerShell

We are using TFS 2017 and it has several builds configured. A little while ago we started getting an error on the second step, which is to run a PowerShell Script (first step is Get Sources):
2018-06-28T19:58:59.4326443Z ##[command]. 'K:\_work\3\s\BuildScripts\MainPre.ps1' -env "test"
2018-06-28T19:58:59.6236482Z ##[error]Access is denied
2018-06-28T19:58:59.6266488Z ##[section]Finishing: PowerShell Script
A build 4 hours ago worked just fine. No changes were made to the file, or the filesystem. I am waiting to hear from the network team to see if they did anything to the build account.
What could cause this error suddenly and how do I fix it? Note: I have not yet tried to turn it off and on again.

Based on the error message "##[error]Access is denied", it seems to be a permission issue.
Try the items below to narrow down the problem:
Enable the Clean option in the Get Sources step: set Clean to True and select Sources Directory under Clean options.
Check whether the agent service account has the correct permission to access the script (a quick check is sketched below).
Try changing the service account to another account that has the correct permission to access the agent _work folder, then queue the build again.
Deploy a new agent and try again.
If that still doesn't work, set system.debug to true on the Variables tab to capture verbose logs and share them here for further troubleshooting.
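For the permissions item, a quick check could look like the following, run on the build machine while logged in as the agent service account. This is just a sketch; the script path is taken from the log above, and Get-Acl and Test-Path are standard cmdlets:
whoami                                   # confirm you really are the agent account
$script = 'K:\_work\3\s\BuildScripts\MainPre.ps1'
Test-Path -LiteralPath $script           # should print True
(Get-Acl -Path $script).Access |         # list who has which rights on the file
    Format-Table IdentityReference, FileSystemRights, AccessControlType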

It looks like the PowerShell task runs some sort of security check when executing scripts?
I ran the PowerShell task in DEBUG, and you can see the task doing some security work implicitly here.
This does give me "Access is denied" when I run it:
##[debug]C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
-NoLogo -Sta -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command "try { $null = [System.Security.Cryptography.ProtectedData] } catch { Write-Verbose 'Adding assemly: System.Security' ; Add-Type -AssemblyName 'System.Security' ; $null = [System.Security.Cryptography.ProtectedData] ; $Error.Clear() } ; Invoke-Expression -Command ([System.Text.Encoding]::UTF8.GetString([System.Security.Cryptography.ProtectedData]::Unprotect([System.Convert]::FromBase64String('AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAARs9EULLEBU+ppaGEeISmGgAAAAACAAAAAAADZgAAwAAAABAAAABLYbw0iUTABtaCw2PJ5KrrAAAAAASAAACgAAAAEAAAAOg6VMmANxZJSRmKjPWauqRYAAAAqDSQVtB4LtvBaujeTs1GKn4CPFrW484weBNwtJ7aujcJLWV4wBLHD9n+IEVZ6z13oyIpyxUEceTtiMKnfuO8irwX9l5DoHqlMGU6mx1Q5kou2V6ITEcl0BQAAAD1h7qvkyE8+PcdKmVKLHVpqYO4mA=='), [System.Convert]::FromBase64String('8yTvn1ZlLZGC7M3ewDzbLw=='), [System.Security.Cryptography.DataProtectionScope]::CurrentUser))) ; if (!(Test-Path -LiteralPath variable:\LastExitCode)) { Write-Verbose 'Last exit code is not set.' } else { Write-Verbose ('$LastExitCode: {0}' -f $LastExitCode) ; exit $LastExitCode }"

For us, McAfee was blocking PowerShell. Once an exception was added, we were good.

While checking through the server, I noticed in Event Viewer that Symantec SONAR was blocking the PowerShell scripts. After our network team added an exception for the build processes, our builds were again working as expected.
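If you suspect antivirus interference but nothing obvious appears in the build log, one rough way to look for it is to scan recent event-log entries for the script or product name. A minimal sketch; the Application log and the match pattern are assumptions, since Symantec and McAfee may write to their own product-specific logs instead:
# Search recent Application-log events for AV blocks mentioning PowerShell.
Get-WinEvent -LogName Application -MaxEvents 500 |
    Where-Object { $_.Message -match 'SONAR|PowerShell|\.ps1' } |
    Select-Object TimeCreated, ProviderName, Message |
    Format-List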

Related

Copy file from a Network Drive to a local Drive with a Jenkins Agent

So here is the situation: I am trying to automate copying some files from a network drive into a local folder on one of my servers. The task seems simple, and when I try the code with PowerShell or with xcopy on the command line, both work great.
I've installed a Jenkins agent on this Windows Server 2016 machine and run the agent as a service. When I try to run the same code from the Jenkins agent, it never works.
I tried starting the agent service as Local System and as the Windows network administrator, who has all the rights.
With PowerShell I tried these lines:
Copy-Item -Path "\\server IP\directory\*" -Destination "D:\Directory\" -Verbose
and
Copy-Item -Path "z:\*" -Destination "D:\Directory\" -Verbose
Both return no error but do not copy the files, and when I try the same code with xcopy I just get "file not found" and nothing is copied:
xcopy "\\server IP\directory\*" "D:\Directory\" /f /s /h /y
xcopy "z:\*" "D:\Directory\" /f /s /h /y
With PowerShell, I also tried putting the Copy-Item command into a script and calling only the script from the Jenkins agent; that didn't work either.
I am now running in circles and wonder: how are we supposed to work with network drives from the Jenkins agent? Or what am I doing wrong?
Note that other PowerShell code works great locally.
I tried starting the agent service as Local System and as the Windows network administrator, who has all the rights
Local System doesn't have any network permissions by default. It is the machine account, so you would have to give the machine access to "\\server\share". That is not advisable, though, because permissions should be granted at a finer granularity. Also, Local System has too many local rights, which Jenkins doesn't need.
I don't know what you mean by "Windows network administrator". It sounds like this one would also have too many rights.
Usually, one creates a dedicated Jenkins (domain) user. This user will be granted access to any network paths it needs. Then you have two options:
Always run Jenkins service as this user (easiest way).
Run Jenkins under another local user and connect to network drives as the domain user only on demand: net use \\server\share /user:YourDomain\Jenkins <Password>. This adds some security, as you don't need to give any local permissions to the domain user (a sketch of this option follows below).
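Inside a Jenkins PowerShell build step, the on-demand mapping might look like this. A minimal sketch: the share path, user, and the JENKINS_SHARE_PW environment variable (assumed to be injected via Jenkins credentials) are placeholders:
# Connect to the share as the domain user, copy, then always drop the mapping.
net use \\server\share /user:YourDomain\Jenkins $env:JENKINS_SHARE_PW
try {
    Copy-Item -Path '\\server\share\directory\*' -Destination 'D:\Directory\' -Verbose
}
finally {
    net use \\server\share /delete
}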
Both return no error but did not copy the files
To improve error reporting, I suggest you set $ErrorActionPreference = 'Stop' at the beginning of your PowerShell scripts. This way the script will stop execution and show an error as soon as the first error happens. I usually wrap my PS scripts in a try/catch block to also show a script stack trace, which makes it much easier to pinpoint error locations:
$ErrorActionPreference = 'Stop' # Make the script stop at the 1st error
try {
    Copy-Item -Path "\\server IP\directory\*" -Destination "D:\Directory\" -Verbose
    # Possibly more commands...

    # Indicate success to Jenkins
    exit 0
}
catch {
    Write-Host "SCRIPT ERROR: $($_.Exception)"
    if( $_.ScriptStackTrace ) {
        Write-Host ( "`t$($_.ScriptStackTrace)" -replace '\r?\n', "`n`t" )
    }
    # Indicate failure to Jenkins
    exit 1
}
In this case the stack trace is not very helpful, but it will come in handy when you call functions of other scripts, which in turn may call other scripts too.

PowerShell on Target Machines - TFS task, Security Warning persists after changing execution policy on remote server

I am pulling my hair out, as I cannot figure out what happens in the PowerShell on Target Machines task (v1.0 and 2.0) in my release.
Every time I run the task, it throws this error:
AuthorizationManager check failed. ---> System.Management.Automation.PSSecurityException: AuthorizationManager check failed. ---> System.Management.Automation.Host.HostException: A command that prompts the user failed because the host program or the command type does not support user interaction. The host was attempting to request confirmation with the following message: Run only scripts that you trust. While scripts from the internet can be useful, this script can potentially harm your computer. If you trust this script, use the Unblock-File cmdlet to allow the script to run without this warning message. Do you want to run \\server\c$\Program Files\exampleps.ps1?
I understand this may relate to execution policy, so this is what I have done so far trying to solve the issue:
I went onto the remote server and turned off IE Enhanced Security for admins, as the service account that runs this script is an admin.
Shift+right-clicked PowerShell to run it as the service account and changed the execution policy from RemoteSigned to Bypass. I did this in both the 32-bit and 64-bit PowerShell; Bypass was set for both LocalMachine and CurrentUser.
Added \\server\c$\Program Files\exampleps.ps1 to the trusted sites under Internet Options.
I have searched Google and Stack Overflow for similar questions, and the steps above are what I found.
Update
After trying all 3 methods above, even when I try to run the PS script directly in the console, the security warning still shows up; for some reason, the Bypass execution policy doesn't kick in. -- I was able to run it in the console without warnings; however, the TFS task still failed.
I am really frustrated and I hope anyone in the community can give me some guidance on this.
Much appreciated.
Please try the following ways to see if they work:
Use the "Bypass" Execution Policy Flag
PowerShell.exe -ExecutionPolicy Bypass -File "\\server\c$\Program Files\exampleps.ps1"
Read Script from the File and Pipe to PowerShell Standard In
Get-Content "\\server\c$\Program Files\exampleps.ps1" | PowerShell.exe -NoProfile -
Use the Invoke-Expression Command
Get-Content "\\server\c$\Program Files\exampleps.ps1" | Invoke-Expression
Use the "Unrestricted" Execution Policy Flag
PowerShell.exe -ExecutionPolicy Unrestricted -File "\\server\c$\Program Files\exampleps.ps1"
There are also a few other ways you can try. For more details, see "15 Ways to Bypass the PowerShell Execution Policy".
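Also note that the error message itself points at Unblock-File: the prompt appears because the script carries the "downloaded from the internet" marker (a Zone.Identifier alternate data stream), which makes the host ask for confirmation even under a relaxed execution policy. Clearing the marker on the target machine might look like this (the local path is an assumption, corresponding to the \\server\c$ share above):
# Remove the mark-of-the-web so the host stops prompting for confirmation.
Unblock-File -Path 'C:\Program Files\exampleps.ps1'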

Is it possible to get different results on `$?` on the same command?

I have a PowerShell script where I'm executing a node command, which is meant to be executed by a TFS 2013 build:
node "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\r.js" -o "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\build-styles.js"
$success = $?
if (!$success){
    exit 1
}
When I run this script manually and the command fails $success is false and the script exits 1, but when the build executes the script and the node command fails, $success (and $?) is true.
What can change the behavior of PowerShell? I have no idea what else to try. So far I have eliminated the following:
Changed the Build Service user to the same Admin user that executes the script manually
Tried executing the command with cmd /c node ...
Tried executing the command with Start-Process node...
Ran the Build Service interactively
Ran the build with both VSO Build Controller and an on premise Build Controller
Executed the script manually with the same command used by TFS (per the Build Log)
Thoughts?
Can we restructure this a bit so we have a better feel for what is happening? I tend to avoid $? because it is harder to debug and test with.
try
{
    Write-Host "TF_BUILD_SOURCESDIRECTORY = $Env:TF_BUILD_SOURCESDIRECTORY"
    $result = node "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\r.js" -o "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\build-styles.js"
    Write-Host "Result = $result"
}
catch
{
    Write-Error "Command failed"
    Exit 1
}
Sometimes I wrap my command in a Start-Process -NoNewWindow -Wait just to see if that generates a different error message.
In your case, I would also try Enter-PSSession to get a non-interactive prompt on the TFS server. I have seen cases where PowerShell acts differently when the shell is not interactive.
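Separately, for a native executable like node it may be more reliable to test $LASTEXITCODE than $?: a try/catch will not fire on a non-zero exit code from an external program, and $? only reflects the immediately preceding command, so anything that runs in between can reset it. A minimal sketch:
node "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\r.js" -o "$Env:TF_BUILD_SOURCESDIRECTORY\app\Proj\App_Build\build-styles.js"
if ($LASTEXITCODE -ne 0) {
    Write-Error "r.js build failed with exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}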

PowerShell InitializeDefaultDrives error always stops Team Build from performing a successful build

Whenever we start up PowerShell on any machine on our network, we instantly receive the following error in the console: 'Attempting to perform the InitializeDefaultDrives operation on the 'FileSystem' provider failed.'
The reason for this is that on start-up PowerShell attempts to map all the drives it can find to PowerShell objects; however, for some reason our Ops guys have configured all our network drives to appear as 'disconnected' even though they are fully accessible. PowerShell sees these drives as 'disconnected' and throws an error. I've tried to get the Ops guys to change this, but the behaviour seems to be set quite deeply in our infrastructure.
Under normal usage it isn't too much of an issue; however, we are trying to run a PowerShell script (with Psake) as part of an automated build process via Team Build, and the start-up error is picked up by the build process and causes our build to only partially succeed. It's impossible for us to achieve a nice, green, successful build.
Our Psake-based PowerShell script is kicked off from a simple batch file that looks like this:
cls
powershell -ExecutionPolicy Unrestricted -command "& \"%~dp0psake.ps1\"" %*
echo EXIT CODE - %ERRORLEVEL%
exit /b %ERRORLEVEL%
This batch file is called from an Invoke-Process workflow object in TeamBuild with the standard output and error output mapped to stdout and stderr respectively.
I can see a few potential areas where we might be able to solve this:
Find a way to stop PowerShell from performing the InitializeDefaultDrives operation.
Filter out that specific error in the batch wrapper somehow, but still pass genuine errors back up to the build process.
Parse errors in the Invoke-Process workflow object so that this particular error doesn't cause a failure, but all other errors still reach the build process.
Any help GREATLY appreciated! : )
Eventually I found I was missing some error checking after my call to PowerShell, so my batch file should have looked like this...
cls
powershell -ExecutionPolicy Unrestricted -command "& \"%~dp0psake.ps1\"" %*; if ($psake.build_success -eq $false) { exit 1 } else { exit 0 }
echo EXIT CODE - %ERRORLEVEL%
exit /b %ERRORLEVEL%
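The key piece is the $psake.build_success check: psake records the outcome in the $psake variable rather than setting a process exit code, so the wrapper has to translate it. The same check could also live in a small PowerShell wrapper instead of the batch one-liner. A sketch, assuming psake.ps1 runs the build as in the batch file above:
# Hypothetical wrapper: run the psake entry script, then convert psake's
# recorded result into an exit code the build process can see.
& "$PSScriptRoot\psake.ps1" @args
if ($psake.build_success -eq $false) { exit 1 } else { exit 0 }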

Terminate PowerShell process after script executing

I have a TFS build where I run a PowerShell script. The problem is that PowerShell.exe never stops after running, and does nothing.
The script is signed by a trusted certificate and runs successfully on my build agent from PowerShell, writing logs. But it doesn't do anything from the build or from cmd.exe; PowerShell.exe just starts and does nothing.
P.S. The PS script has Exit commands, but that doesn't help.
Thanks,
Roman
You can use Stop-Process -Id $PID from within the script to end the current PowerShell process.
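For example, as the very last lines of the build script:
# ...build work above...
# Force-terminate the hosting powershell.exe even if an open handle or
# a hidden prompt would otherwise keep the process alive.
Stop-Process -Id $PID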
The issue was resolved.
The problem was security settings on the build agent. When I ran the script manually from the build agent user's account and chose "Run always", builds started working correctly.