What is the best way to run the Appium server in an Azure Pipelines console using a Microsoft-hosted agent? Or is it even feasible? I'm using vmImage: 'vs2017-win2016'.
I've managed to start the server with a Cmdline and a PowerShell script, but I'm unsure whether the server is actually running in the background.
So far I've tried:
1. A plain Cmdline script: "appium -p 4723". This starts the server, but the Cmdline job stays running forever.
2. A plain Cmdline script with a start call: "start appium -p 4723". I believe this starts Appium in another console, but I'm unsure whether the server keeps running in the background. The Cmdline job passes to the next one, and I don't see the standard purple Appium server launch output.
3. The PowerShell Start-Process command: "Start-Process appium -p 4723". Same result as with 2.
Is there a way to verify that the Appium server exists?
I'm trying to achieve mobile app test automation using Azure Repos & Pipelines, Robot Framework, Appium, and the Android Studio emulator. For this I need the Appium server running in the background. I do the needed tool installations in my pipeline before reaching the Appium stage.
Is there a way to verify that the Appium server exists?
Try a command like TASKLIST /FI "IMAGENAME eq cmd.exe" /V.
For me, I use three cmd scripts (CMD tasks) to do the test: task1 => npm install -g appium, task2 => start appium -p 4723, task3 => TASKLIST /FI "IMAGENAME eq cmd.exe" /V. Task3's output shows which processes are still alive:
Since each cmd.exe is killed when its task finishes, the cmd.exe processes from task1 and task2 no longer exist.
By the time the third task runs, only the newly created console (appium -p 4723) and the cmd.exe from task3 exist. The appium -p 4723 process is the one you want: it does keep running in the background as long as nothing kills it.
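For reference, a minimal YAML sketch of those three steps (the displayName values are mine, not from the original pipeline):

steps:
- script: npm install -g appium
  displayName: "task1 - install appium"
- script: start appium -p 4723
  displayName: "task2 - start appium in a new console"
- script: TASKLIST /FI "IMAGENAME eq cmd.exe" /V
  displayName: "task3 - list cmd.exe processes"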
Note:
1. I used the second command, start appium -p 4723, to test. It's expected behavior that your first command starts the server but stays in the Cmdline job forever: an Azure DevOps pipeline won't move to the next task until the current task has completed, so if the cmd task is a listener that keeps running, it stays in the Cmdline task forever, until timeout.
The second command is the better fit for the Azure DevOps pipeline environment.
2. You are using a Microsoft-hosted agent, which is recycled when the pipeline completes, so your Appium listener will be killed after the job/pipeline completes.
3. For more about Appium testing in Azure DevOps, you can check this document.
With a PowerShell command I can start Appium; try the code below:
- powershell: Start-Process appium -ArgumentList '-p 4723' -PassThru
  displayName: "start appium process"
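If you also want the pipeline to wait until the server is actually accepting connections before the tests start, a follow-up step along these lines should work (my sketch, assuming the default port 4723):

- powershell: |
    # Poll the Appium port until it answers, or give up after ~60 seconds.
    $deadline = (Get-Date).AddSeconds(60)
    do {
      Start-Sleep -Seconds 5
      $up = (Test-NetConnection -ComputerName localhost -Port 4723).TcpTestSucceeded
    } until ($up -or (Get-Date) -gt $deadline)
    if (-not $up) { Write-Error "Appium did not start listening on port 4723" }
  displayName: "wait for appium"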
After starting the Appium server in Azure DevOps, it can't execute the next task. I want the next task to execute after the server has started.
Related
I have the following setup:
a Windows PC with gitlab-runner installed (working)
a PowerShell script on the same PC that starts an application
a GitLab server that connects to this local PC and starts the PowerShell script
Now, when I start the PowerShell script directly on the local PC, the application starts and terminates after it is done - working as expected. When I start the same PowerShell script via the GitLab server (yml file), I can see that the application has been started (a new process appears in Task Manager), but it is not actually running, and it never terminates.
When I manually end the task, the GitLab job terminates as well.
Question:
What could be the root cause?
Is it possible to run the PowerShell script with gitlab-runner? I think there is a way with the exec command. What does the command look like when calling the PowerShell script?
Is it possible to run the application not in the background, in order to see what's going on?
Anything else?
Thanks in advance
I think there is a bug with the GitLab runner on Windows: no matter which shell you configure in config.toml, the runner will always use cmd.exe for a local exec run.
Specify the --shell argument to override the default cmd.exe shell:
> gitlab-runner exec shell your_job --shell pwsh
If you run this locally in your project, it outputs to .builds/, so add that directory to your .gitignore - otherwise git will see it and think you might want to add a submodule.
I have a powershell build script that I am executing from Gitlab CI Pipelines.
When run manually (on the build server) the build script runs fine, but when executed by the Gitlab CI runner it:
Times out after an hour (runs for about 20 mins if run manually)
Does not echo Write-Output statements into the build log
So there is something going wrong when executed from GitLab CI. However, as the Write-Output statements aren't displayed in the build log, there is no real way to troubleshoot this.
What do I need to do to get the Write-Output statements to display in the build log? I would have assumed any STDOUT messages would show there, but they're not coming through.
The answer here was to set PowerShell as the shell to use in the gitlab runner.
This is done by adding the following line to the gitlab runner's config.toml file:
shell = "powershell"
Now the file executes correctly and Write-Output statements are echoed in the build log.
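For context, the line goes inside the relevant [[runners]] section of config.toml; everything else in this sketch is a placeholder:

[[runners]]
  name = "windows-build-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"
  shell = "powershell"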
Trying to set up a Jenkins v2.46.3 slave on a Windows 2016 server to run a batch file.
It looks like it is working, but the batch file does not actually run: the script does not generate the expected log file, and nothing shows up in Task Manager on the slave.
The console output of the job looks like this:
Started by user xxx
Building remotely on xxx-Windows (windows) in workspace c:\Jenkins\workspace\xxx
[xxx] $ cmd /c call C:\Windows\TEMP\hudson4948156451026881586.bat
c:\Jenkins\workspace\xxx>C:\QA\xxx\Perl\Tests\runxxxTests.bat
c:\Jenkins\workspace\xxx>cd C:\QA\xxx\Perl\Tests
C:\QA\xxx\Perl\Tests>runxxxTests.pl -f test_suite_test.list
Finished: SUCCESS
If I run the batch file manually it works as expected.
There do not seem to be any errors. How can I troubleshoot this further?
The fix was changing the log-on rights of the Jenkins and Jenkins Agent services on the Windows slave from LocalSystem to a privileged account.
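If you prefer to script that change rather than use the Services console, something along these lines should work (the service name, account, and password are placeholders, not taken from the question):

sc config "jenkins-agent" obj= "DOMAIN\BuildUser" password= "..."
net stop "jenkins-agent"
net start "jenkins-agent"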
I have a PowerShell task in my definition that calls another script file, which in turn takes care of running several things on my build agent (it starts several different processes) - emulators, Node.js applications, etc.
Everything is fine up until the moment this step completes and the run continues: everything mentioned above gets closed, with most of the underlying processes killed, so any further execution (e.g. the test run) is doomed to fail.
My assumption is that these processes somehow depend on the outermost (temporary) script that VSTS generates to process the step.
I tried the -NoExit switch in the arguments list of my script, but to no avail. I've also read a suggestion to set this as the default via a registry key for powershell.exe - still nothing.
The very same workflow was okay in Jenkins. How can I fix this?
These are the tasks I have:
The last PowerShell task calls a specified PowerShell file, which in turn calls several others. They set up some local dependencies and processes needed to start executing the tests, e.g. a running Node.js application (started in a separate console, for example, and running fine).
When that task is done and successful, the last task with the tests fails, because the Node.js application has been shut down along with everything else started within the previous step. It just stops everything. That's why I'm currently running the tests within the same task until I find out how to overcome this behavior.
I am not sure how you call the dependencies and applications in your PowerShell script, but I tried the following command in a PowerShell script task to run a Node.js application:
Invoke-Expression 'cmd /c start powershell -Command {node main.js}'
The application keeps running after the PowerShell script task has finished, which should meet your requirement. Refer to this question for details: PowerShell launch script in new instance.
But you need to remember to close the process after the test is finished.
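For that cleanup, a final step along these lines could work (my sketch, assuming the app is the only node process on the agent):

# Stop any node processes left over from the test run.
Get-Process -Name node -ErrorAction SilentlyContinue | Stop-Process -Force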
There is a Continue on error option (Control Options section). If it is checked, the build process continues when the task fails, but the build result will be "partially succeeded".
You can also output errors or warnings by using PowerShell or VSTS task logging commands (uncheck the Fail on Standard Error option in the Advanced section) and terminate the current PowerShell process by using the exit keyword, for example:
Write-Warning "warning"
Write-Error "error"
Write-Host "##vso[task.logissue type=warning;]this is the warning"
Write-Host "##vso[task.logissue type=error;sourcepath=consoleapp/main.cs;linenumber=1;columnnumber=1;code=100;]this is an error"
For more information about VSTS task commands, see: Logging Commands
We use TeamCity, nant and psexec to run a command on a remote machine as part of the release packaging. Everything works fine when I run the nant from the console but when running from teamcity psexec hangs (freezes) 50% of the times.
I looked through many forums and there seem to be workarounds, but they increase the complexity of the call and involve losing the output and the error code of the command.
Does anyone know an easier way to run a command on a remote machine?
I don't mind setting up some application on the remote machine, like a telnet server, any advices on what to do?
Thanks
I have solved this issue with a combination of RemCom and a custom MSBuild task called ExecParse.
RemCom, because it doesn't do odd things with STDOUT (and thus hang the build), and ExecParse to capture the output of the remote task and parse the exit code from it, because the standard MSBuild Exec task does not capture output. Some NAnt equivalent that captures the output would work too.
I've detailed this in a blog post: "Continuous Integration: Executing Remote Tasks with TeamCity, MSBuild, RemCom, and ExecParse"
PsExec does some funky things with the standard input/output, and invoking it from Java (which TeamCity is built on) raises all kinds of problems and stability issues. psexec -d did not work either.
I solved it by using PowerShell in TeamCity.
The script below stops an IIS 7 ApplicationPool on a remote server:
[string]$HostName = "myWebServer"
[string]$Cmd = "C:\Windows\System32\inetsrv\appcmd.exe stop apppool MyMainAppPool"
Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList ($Cmd) -ComputerName $HostName
More about it on my blog: http://blog.degree.no/2012/03/executing-commands-and-programs-on-a-remote-machine-using-powershell/
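If you need the build to fail when the remote start fails, Win32_Process.Create returns a ReturnValue (0 means the process was created), so you can capture and check it like this (my addition, not from the blog post):

$result = Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList ($Cmd) -ComputerName $HostName
if ($result.ReturnValue -ne 0) { Write-Error "Remote command failed to start (code $($result.ReturnValue))" }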
How about putting a (NAnt) time-out on the psexec call and repeating it until no time-out happens?
I use PsExec with the -d option (don't wait for the command to finish) and capture the return code. When you use -d, the return code is the process ID of the process running on the remote system. I then use PsList to poll the remote system for that process ID until it no longer appears.
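Roughly, the loop looks like this (the host, the batch path, and PsList's exit-code behaviour when the process is gone are my assumptions):

# psexec -d returns the remote process ID as its exit code.
psexec \\remotehost -d C:\jobs\run.bat
$remotePid = $LASTEXITCODE
# Poll with pslist until that PID disappears from the remote process list.
do {
    Start-Sleep -Seconds 10
    pslist \\remotehost $remotePid *> $null
} while ($LASTEXITCODE -eq 0)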
What happens if you set up a TeamCity build agent on the remote machine and let it perform the operation locally, passing it the binaries with "Artifact Dependencies"?