I want to run a PowerShell file from a Jenkins Pipeline, using the powershell step. Should be easy, right?
node() {
    stage('Hello World') {
        powershell returnStatus: true, script: 'C:\\HelloWorld.ps1'
    }
}
C:\HelloWorld.ps1 is a one-liner:
Write-Host "Hello World"
But running the job causes the process to hang. Here is the console output:
Started by user Administrator
[Pipeline] node
Running on master in C:\Jenkins\workspace\HelloWorld
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Hello World)
[Pipeline] powershell
[HelloWorld] Running PowerShell script
And then it just hangs until I cancel the job.
What to do?
I had the same issue and managed to solve it by downgrading the Durable Task plugin from 1.17 to 1.15. It seems to be part of the JENKINS-46496 bug. The older 1.15 version of the plugin is available there.
You can install the needed plugin version via Manage Plugins -> Advanced tab -> Upload Plugin.
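If you need the 1.15 .hpi file for the upload, here is a hedged sketch of fetching it with PowerShell (the update-center URL pattern is an assumption; verify it before relying on it):
# Sketch: download the Durable Task 1.15 plugin for manual upload (URL pattern is an assumption)
Invoke-WebRequest -Uri "https://updates.jenkins.io/download/plugins/durable-task/1.15/durable-task.hpi" -OutFile "durable-task-1.15.hpi"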
I had the same issue and the problem was that the job had spaces in its name.
This issue should be fixed in the next release: https://github.com/jenkinsci/durable-task-plugin/pull/51
I have the same problem in about 70% of my cases. I have a pipeline script that kicks off about 20 parallel jobs spread out over different agents. Each of these tries to run a PowerShell script initially. About 30% of the jobs succeed in running the script; the rest hang just like they do for Ola.
The build agents are configured exactly the same way (they are clones in a VM cluster). PowerShell v4 is used. Over several tries, a single build agent sometimes succeeds in running PowerShell and sometimes doesn't.
I've been running this pipeline for 5-6 weeks and this behavior has only appeared in the last week. Fortunately the script isn't in production yet :-)
I haven't had the time to do a full investigation. Could it be that some of the pipeline plugins have been updated and introduced this bug?
Related
What is the best way to execute the Appium server in an Azure Pipelines console using a Microsoft-hosted agent? Or is it even feasible? I'm using vmImage: 'vs2017-win2016'.
I've managed to start the server with a Cmdline and a PowerShell script, but I'm unsure whether the server is actually running in the background.
So far I've tried:
1. A plain Cmdline script: "appium -p 4723". This starts the server but the Cmdline job stays running forever.
2. A plain Cmdline script with a start call: "start appium -p 4723". I believe this starts Appium in another console, but I'm unsure whether the server keeps running in the background. The Cmdline job passes to the next one, but I don't see the standard purple Appium server launch output.
3. The PowerShell Start-Process command: "Start-Process appium -p 4723". Same result as with 2.
Is there a way to verify that the Appium server exists?
I'm trying to achieve automated mobile app testing using Azure Repos & Pipelines, Robot Framework, Appium and an Android Studio emulator. For this purpose, I need the Appium server running in the background. I'm doing the needed tool installations in my pipeline before entering the Appium-running stage.
Is there a way to verify that the Appium server exists?
Try a command like TASKLIST /FI "IMAGENAME eq cmd.exe" /V.
For me, I use three CMD script tasks to do the test: task1 => npm install -g appium, task2 => start appium -p 4723, task3 => TASKLIST /FI "IMAGENAME eq cmd.exe" /V. Looking at the output of task3:
Since each cmd.exe is killed when its task is done, the cmd.exe from task1 and the cmd.exe from task2 no longer exist.
By the third task, only the newly created console (appium -p 4723) and the cmd.exe from task3 exist. And appium -p 4723 is what you want; it does keep running in the background if we don't kill it.
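If you prefer to do the check from PowerShell instead of TASKLIST, here is a minimal sketch, assuming Appium runs under node.exe on a Windows agent:
# Sketch: list node.exe processes whose command line mentions appium (assumes a Windows agent)
Get-CimInstance Win32_Process -Filter "Name = 'node.exe'" |
    Where-Object { $_.CommandLine -like '*appium*' } |
    Select-Object ProcessId, CommandLine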
Note:
1. I used the second command, start appium -p 4723, to test. It's expected behavior that your first command starts the server but stays in the Cmdline job forever: an Azure DevOps pipeline won't move to the next task until the current task has completed, so if the cmd task is a listener that keeps running, it stays in the Cmdline task until the timeout.
The second command is better in an Azure DevOps pipeline environment.
2. You are using a Microsoft-hosted agent, which is recovered when the pipeline is completed, so your Appium listener will be killed after the job/pipeline completes.
3. About Appium testing in Azure DevOps, you can check this document.
With the powershell command I can start Appium; try the code below:
- powershell: Start-Process appium -ArgumentList '-p 4723' -PassThru
  displayName: "start appium process"
After starting the Appium server in Azure DevOps, it can't execute the next task. I want the next task to be executed after the server has started.
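One way to let the task finish while the server stays up is to start Appium detached and poll the port before exiting; this is a hedged sketch (the port, timeout and -ArgumentList value are assumptions, and whether the detached process survives later steps depends on the agent's process cleanup):
# Sketch: start Appium detached, then wait until it listens on port 4723 before letting the task complete
$proc = Start-Process appium -ArgumentList '-p 4723' -PassThru
$deadline = (Get-Date).AddSeconds(60)
do {
    Start-Sleep -Seconds 2
    $up = Test-NetConnection -ComputerName 'localhost' -Port 4723 -InformationLevel Quiet
} until ($up -or (Get-Date) -gt $deadline)
if (-not $up) { Write-Error "Appium did not start listening on port 4723 in time"; exit 1 }
Write-Host "Appium is listening (PID $($proc.Id)); the task can now complete."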
I have an Azure Pipelines build. The *.yaml file correctly executes a Python script (PythonScript@0). This script itself creates the Azure ML pipeline (if it does not exist), executes it and publishes it. It runs well when the build is executed manually or is triggered by commits.
But I want to schedule the automated execution of the ML pipeline (Python script) on a daily basis.
I tried the following approach:
from azureml.pipeline.core import Schedule, ScheduleRecurrence

pipeline_id = published_pipeline.id
recurrence = ScheduleRecurrence(frequency="Day", interval=1)
recurring_schedule = Schedule.create(ws,
                                     name=<schedule_name>,
                                     description="Title",
                                     pipeline_id=pipeline_id,
                                     experiment_name=<experiment_name>,
                                     recurrence=recurrence)
In this case the pipeline runs for 3-4 seconds and terminates successfully. However, the Python script is not executed.
Also, I tried to schedule the execution of a pipeline using Build, but I assume that it is a wrong approach. It rebuilds a pipeline, but I need to execute a previously published pipeline.
schedules:
- cron: "0 0 * * *"
  displayName: Daily build
  always: true
How can I execute my published pipeline daily? Should I use Release (which agents, which tasks?)?
Also, I tried to schedule the execution of a pipeline using Build, but I assume that it is a wrong approach. It rebuilds a pipeline, but I need to execute a previously published pipeline.
Assuming your Python-related task runs after many other tasks, it's not recommended to simply schedule the whole build pipeline; that would rerun everything (the other tasks plus the Python script).
Only a pipeline can be scheduled, not individual tasks, so I suggest creating a new build pipeline just to run the Python script. Also, a private agent is more suitable for this scenario.
Now we have two pipelines: the original A, and B, which is used to run the Python script.
Set B's build-completion trigger to A, so that once A builds successfully, B runs after it.
Add a command-line task or PowerShell task as pipeline A's last task. This task (modify the yml and then push the change) is responsible for updating B's corresponding xx.yml file to schedule B, as sketched below.
This way, if A (the other tasks) builds successfully, B (the pipeline that runs the Python script) will execute, and B will run daily after that successful build.
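A minimal sketch of what that last task in A could run (the file name pipeline-b.yml, the branch and the committed schedule block are assumptions, not part of the original answer):
# Sketch: append a daily schedule to pipeline B's YAML and push the change back (names and branch are assumptions)
$yml = "pipeline-b.yml"
Add-Content -Path $yml -Value @"
schedules:
- cron: "0 0 * * *"
  displayName: Daily run of the published ML pipeline
  always: true
"@
git config user.email "build@example.com"
git config user.name "Pipeline A"
git add $yml
git commit -m "Schedule pipeline B daily"
git push origin HEAD:master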
Hope it helps and if I misunderstand anything, feel free to correct me.
I have a powershell build script that I am executing from Gitlab CI Pipelines.
When run manually (on the build server) the build script runs fine, but when executed by the Gitlab CI runner it:
Times out after an hour (runs for about 20 mins if run manually)
Does not echo Write-Output statements into the build log
So there is something going wrong when executed from GitLab CI. However, as the Write-Output statements aren't displayed in the build log, there is no real way to troubleshoot this.
What do I need to do to get the Write-Output statements to display in the build log? I would have assumed any STDOUT messages would show there, but they're not coming through.
The answer here was to set PowerShell as the shell to use in the GitLab runner.
This is done by adding the following line to the GitLab runner's config.toml file:
shell = "powershell"
Now the script executes correctly and Write-Output statements are echoed in the build log.
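For context, a hedged sketch of where that line sits in config.toml (the runner name, url and executor shown here are placeholders; only the shell line comes from this answer, and a real entry also carries its registration token):
[[runners]]
  name = "windows-build-server"
  url = "https://gitlab.example.com/"
  executor = "shell"
  shell = "powershell"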
I'm completely new to Bamboo, so thank you in advance for the help.
I'm trying to create a Bamboo plan that zips files from a git repo and uploads the archive to Artifactory. Currently my build contains two tasks: a Source Code Checkout and a simple PowerShell script. The first time I run it, it builds perfectly fine, but without any modifications any consecutive runs fail.
The error I'm getting in the log is the following:
Failing task since return code of [powershell -ExecutionPolicy bypass -Command /bin/sh /opt/bamboo/agent/temp/OR-J8U-JOB1-4-ScriptBuildTask-539645121146088515.ps1] was -1 while expected 0
Replacing the PowerShell script with empty space does not resolve the issue; only removing the script completely allows the build to succeed, but I cannot reinsert a new script or it will fail. I read other online questions suggesting that I "merge the user-level PATH environment information into the system-level PATH", but I cannot find the user-level environment information; my environment variables section is completely empty.
Like Vlad, I found that it was more efficient to implement my PowerShell script with batch.
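In case it helps, a hedged sketch of a batch Script task body that simply hands the work to PowerShell explicitly (this assumes a Windows agent; the script name and the Bamboo variable are assumptions):
REM Sketch: call PowerShell from a batch Script task instead of letting Bamboo invoke the .ps1 directly
powershell -NoProfile -ExecutionPolicy Bypass -File "%bamboo_build_working_directory%\build.ps1"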
I have a PowerShell task in my definition that calls another script file, which in turn takes care of running several things on my build agent (it starts several different processes): emulators, Node.js applications, etc.
Everything is fine up until the moment this step is done and the run continues. All of the above-mentioned stuff gets closed, with most of the underlying processes killed, so any further execution (e.g. test runs) is doomed to fail.
My assumption is that these processes are somehow dependent on the outermost (temporary) script that VSTS generates to process the step.
I tried with the -NoExit switch specified in the arguments list of my script, but to no avail. I've also read somewhere a suggestion to set this by default with a registry key for powershell.exe - still nothing.
The very same workflow was okay in Jenkins. How can I fix this?
The last PowerShell task calls a specified PowerShell file, which calls several others on its own. They set up the local dependencies and processes needed to start executing the tests, e.g. a running Node.js application (started in a separate console, for example, and running fine).
When that task is done and successful, the last task with the tests fails, because the Node.js application has been shut down along with everything else that was started within the previous step. It just stops everything. That's why I'm currently running the tests within the same task itself until I find out how to overcome this behavior.
I am not sure how you start the dependencies and applications in your PowerShell script, but I tried the following command in a PowerShell script task to run a Node.js application:
invoke-expression 'cmd /c start powershell -Command {node main.js}'
The application keeps running after the PowerShell script task has passed and finished, which should meet your requirement. Refer to this question for details: PowerShell launch script in new instance.
But you need to remember to close the process after the test is finished.
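For that cleanup, a minimal sketch, assuming the app was started as node main.js (the match string is an assumption):
# Sketch: find and stop the node.exe process whose command line mentions main.js
Get-CimInstance Win32_Process -Filter "Name = 'node.exe'" |
    Where-Object { $_.CommandLine -like '*main.js*' } |
    ForEach-Object { Stop-Process -Id $_.ProcessId -Force }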
There is the Continue on error option (Control Options section). If it is checked, the build process continues when the task fails, but the build result will be "partially succeeded".
You can also output errors or warnings by using PowerShell or VSTS logging commands (uncheck the Fail on Standard Error option in the Advanced section) and terminate the current PowerShell process by using the exit keyword, for example:
Write-Warning "warning"
Write-Error "error"
Write-Host "##vso[task.logissue type=warning;]this is the warning"
Write-Host "##vso[task.logissue type=error;sourcepath=consoleapp/main.cs;linenumber=1;columnnumber=1;code=100;]this is an error"
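Since the answer mentions terminating with the exit keyword, the step can then be failed explicitly with something like the following (the exit code value is an assumption):
exit 1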
For more information about VSTS logging commands, you can refer to: Logging Commands