In TeamCity, can I run a command-line application for the duration of a build?

I have a command-line application that I want to run in a build configuration for the duration of the build, then shut it down at the end when all other build steps have completed.
The application is best thought of as a stub server, which will have a client run against it, then report its results. After the tests, I shut down the server. That's the theory anyway.
What I'm finding is that running my stub server as a command-line build step causes TeamCity to shut the server down as soon as that step completes, before moving on to the next build step. Since the next build step depends on the server running, the whole thing fails.
I've also tried using the custom script option to run both tools one after another in the same step, but that results in the same thing: the server, launched on the first line, is shut down before invoking the second line of the script.
Is it possible to do what I'm asking in TeamCity? If so, how do I do it? Please list any possibilities, right up to creating a plugin (although the easier, the better).

Yes you can: you can do it in a NAnt script and have TeamCity run the script; look at NAnt's spawn support and the NAntContrib waitforexit task.
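A rough sketch of that NAnt approach (the executable names are hypothetical, and the attribute names are from memory of NAnt 0.85 / NAntContrib, so verify them against your versions):

<target name="integration-tests">
    <!-- spawn the stub server so this task returns immediately instead of blocking -->
    <exec program="StubServer.exe" spawn="true" pidproperty="stub.pid" />
    <!-- run the client tests against the running server -->
    <exec program="TestClient.exe" />
    <!-- shut the server down, then wait for the process to actually exit -->
    <exec program="taskkill" commandline="/PID ${stub.pid}" />
    <waitforexit pid="${stub.pid}" />
</target>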
However, I think you would be much better off creating a mock class that the client uses only when running the tests, instead of round-tripping to the server during the build, as that can be a bit problematic: sometimes ports are closed, sometimes the server hangs from the last run, etc. That way you can run the tests and make sure the code is doing the right thing when the mock returns whatever it needs to return.


Store results of unit test run into variables

I have a TeamCity build configuration that builds a C# project, runs some unit tests, and then does some extra things. My question is: Can I get information about my unit test run stored into build configuration variables (i.e. how many tests were run, how many were successful, how many failed, how many were skipped) so that I can then check these variables in a PowerShell script in later build steps and perform different actions depending on how many tests have passed?
AFAIK the best way is to request this information directly from the TeamCity server using its REST API (pay attention: the build locator can be a little tricky to find if the build is still running).
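A rough PowerShell sketch of the REST approach (the server URL is hypothetical, %teamcity.build.id% is substituted by TeamCity at build time, and PassedTestCount/FailedTestCount/IgnoredTestCount are the statistic keys I'd expect; verify them against your server's /statistics output):

# query the statistics of the current build via the TeamCity REST API
$url = "https://teamcity.example.com/httpAuth/app/rest/builds/id:%teamcity.build.id%/statistics"
# Get-Credential is for illustration; in a real build step use a stored credential
$stats = Invoke-RestMethod -Uri $url -Credential (Get-Credential)
# the response is a <properties> element containing name/value <property> entries
$failed = ($stats.properties.property | Where-Object { $_.name -eq "FailedTestCount" }).value
$passed = ($stats.properties.property | Where-Object { $_.name -eq "PassedTestCount" }).value
if ([int]$failed -gt 0) { Write-Host "$failed test(s) failed, $passed passed" }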
On the other hand, you can parse your NUnit test result file (or files, if you run more than one NUnit test runner step in your build) on the build agent machine.
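For the parsing route, a minimal PowerShell sketch against an NUnit 2.x result file (the file name and the attribute names assume the classic test-results XML format; adjust for your runner):

# read the NUnit result file and pull the counters from the root element
[xml]$results = Get-Content "TestResult.xml"
$root = $results.'test-results'
Write-Host ("total={0} failures={1} errors={2} not-run={3}" -f $root.total, $root.failures, $root.errors, $root.'not-run')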

How to run a foreground task through WinRM / Remote Powershelling

I'm trying to incorporate a test suite that runs nightly on an Azure VM.
As of now, I have a Build process using TFS2015 that publishes my test files, starts the VM, and copies the files.
I'm trying to then use the "PowerShell on Target Machine" task to execute a script that launches a batch file. The reason it executes a batch file is that I can't have the build process wait until that script is finished (the tasks in the batch file take around 3 hours to complete).
My initial logic was to have the PowerShell script create a task using schtasks. This part works, and the task itself is created on the virtual machine; however, it never runs at the scheduled time.
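For reference, the kind of schtasks invocation involved looks roughly like this (task name, path, and account are hypothetical); note that the /it switch only lets the task run interactively, i.e. in the foreground, when the /ru user is actually logged on:

schtasks /create /tn "RunProtractorSuite" /tr "C:\tests\run-suite.bat" /sc once /st 02:00 /ru vmuser /rp Secret123 /it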
The other issue is that if I manually create these tasks, the task is executed, but everything is executed in the background. I need everything to be executed in the foreground.
I'm aware that this is by design since you should not be able to run foreground processes/applications remotely since it isn't "your session".
So the question remains: are there any workarounds for this?
I'm trying to launch Selenium servers and then execute Protractor automation tests on the virtual machine. So one batch file starts the Selenium server and the second launches Protractor with a defined suite. If these are run in the background (essentially headless), all my tests break.
Any insight would be helpful, or if I need to expand on my question or provide further details please let me know. Thanks.
I'm aware that this doesn't answer your specific question, but have you looked at moving your Selenium tests into VSTS? They're officially supported in the build/release pipelines, and other people are taking this approach with Protractor here and elsewhere. People are also making it work with TFS.

How do you run tests from the command line?

To do this in-editor you open the automation tab, connect to the session and choose which tests to run.
How do you do it from the command line?
(NB: not compiling; UnrealEngine/Engine/Build/BatchFiles/* already comprehensively covers both building the application and compiling it. Specifically, given that you have code that is 100% happy to compile, how do you kick off the test suite?)
--
Here's some more info, from recent testing on 4.10:
Running tests from the editor:
UE4Editor Project.uproject -ExecCmds="Automation RunTests MyTest"
Notice the absence of the -Game flag; this launches the Editor and runs the tests successfully in the editor console.
Running the game engine and using the 'popup log window':
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log
This runs the game in 'play' mode, pops up an editor window; however, the logs stop at:
LogAssetRegistry: FAssetRegistry took 0.0004 seconds to start up
...and the game never closes or executes the tests.
Running the game engine and logging to a file:
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log=Log.txt
This runs the game in 'play' mode, and then stops and never exits.
It does not appear to run any tests or log to any files.
The folder Saved/Logs does not exist after quitting the running game.
Running in the editor, test types, etc...
see: https://answers.unrealengine.com/questions/358821/hot-reload-does-not-re-compile-automation-tests.html
Hot reload is not supported for tests, so this isn't an option.
There's also been some suggestion in various places that the test type (e.g. ATF_Game, ATF_Editor) has some effect on whether tests run or can be run; perhaps this is an issue too, but I've tried all kinds of combinations with no success.
--
I've tried all kinds of combinations of things trying to get this working, with no success, so it's time for a bounty.
I'll accept an answer which reliably:
Executes a specific test from the command line
Logs the output from that test to a file
Right, no one has any idea here or on the issue tracker.
After some serious digging through the UE4 source code, here's the actual deal, which I leave here for the next suffering soul who can't figure this out:
To run tests from the command line, log the output, and exit after the test run, use:
UE4Editor.exe path/to/project/TestProject.uproject
-ExecCmds="Automation RunTests SourceTests"
-unattended
-nopause
-testexit="Automation Test Queue Empty"
-log=output.txt
-game
On OSX use UE4Editor.app/Contents/MacOS/UE4Editor.
Notice that the logs will, regardless of what you supply, ultimately be placed in:
WindowsNoEditor/TestProject/Saved/Logs/output.txt
or
~/Library/Logs/TestProject/output.txt
Notice that for Mac this is outside of your project directory, in, for example, /Users/doug/Library/Logs/TestProject. (Who thought that was a good idea?)
(see https://wiki.unrealengine.com/Locating_Project_Logs#Game_Logs)
You can list automation tests using:
-ExecCmds="Automation List"
...and then parse the response to find tests to run; automation commands may be chained, for example:
-ExecCmds="Automation List, Automation RunAll"
Do you mean the in-editor command line or the Windows command line?
In the editor you can use the Automation command with parameters, e.g. Automation RunAll
In the Windows command line you can specify unreal command parameters with -ExecCmds. To run all tests in your project: UE4Editor.exe YOURPROJECT -Game -ExecCmds="Automation RunAll"
For anyone still wondering: there is a bug in the editor that causes the test list to be flushed before the tests are run when they are started from the command line (be it at startup or after).
This means that the editor actually compiles a list of tests to run, which is then flushed by another part of the program. The editor then thinks that it has finished running all the tests and, since there are no errors, reports that they all succeeded.
I can post a fix for this if anyone is interested, but it introduces another minor bug.

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try and get a better understanding of test cases and test suites as I want to use this at work. So I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's just say there are 5 tablets, which run a similar array of OSs. Tablets 1, 2, 3, and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that, along with not having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish. I then attached that build to the main test suite. I then clicked on run a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
This column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition building your application under test AND assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose "Run with Options" when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests which are used to automate the Test Cases will be deployed automatically to the test environment and the tests will be executed.
That is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests will be stored along with the test results on TFS, you will later know what version of your application you were testing (assuming you deployed the right version, of course).
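(As an aside, if you'd rather trigger such a run from the command line than through "Run with Options", tcm.exe can create a run against a plan/suite. A rough sketch with hypothetical IDs and URLs; check tcm run /create /? for the exact options on your TFS version:)

tcm run /create /title:"Nightly automated run" /planid:77 /suiteid:5 /configid:2 /collection:http://tfs.example.com:8080/tfs/DefaultCollection /teamproject:MyProject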
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use Deploy-Build-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.

Coypu/SpecFlow acceptance tests hang when run via Jenkins job

I'm running acceptance tests on a project via SpecFlow, NUnit and Coypu (for browser automation, using the WatiN driver). The tests are invoked via a PowerShell/psake script.
If I run these tests on my local box, they run fine. However, we have a build server on which a Jenkins job will automatically run these tests, and when run via this Jenkins job they don't execute -- they just hang.
Looking in task manager I can see there's two instances of iexplore.exe that are created when the Jenkins job runs. However after a certain point they just hang - no changes in memory usage or CPU.
nunit-agent-x86.exe and nunit-console-x86.exe are also running but mostly hung, with just nunit-agent-x86.exe going up very slowly in memory.
If I kill one of the iexplore.exe processes things continue, but the SpecFlow specs all subsequently fail.
At the point of killing iexplore.exe, the following exception is in the log:
Unhandled Exception: System.Runtime.InteropServices.COMException: The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
If I invoke the psake script manually when logged in to the server, the specs run OK.
This issue began to occur when I tried to use basic DI for the BrowserSession as in the gist here: https://gist.github.com/2301407
Before that I was sharing the BrowserSession via a static property of an NUnit [SetUpFixture] class. Things were working mostly OK that way, except for a small issue with a test involving a modal dialog not working correctly, but I wasn't sure I was doing it right, so I wanted to use the technique in the gist.
I'm a bit lost as to what's causing the hang. Any ideas what it is or tips to track it down?
UPDATE: After switching to Firefox as the browser, and Selenium as the driver, the problem has gone away...
I got a couple of responses on the Coypu list. I've yet to test them out as everything is fine with Firefox at the moment, but in case they are of assistance to anyone else...
On Thursday, 2 August 2012 16:38:30 UTC+1, Adiel wrote:
[...]i believe that watin needs nunit to run in STA (single-threaded) which could possibly be related to your problem.
In other words, perhaps you made your tests thread safe with the static singleton browser session, but now via specflow's IOC you are
getting multiple instances due to the way nunit is running.
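(If STA does turn out to be the issue: NUnit 2.x can be forced onto a single-threaded apartment either with the [RequiresSTA] attribute on the fixture or via the test assembly's .config file. A sketch of the config approach as I remember it from NUnit 2.5/2.6; verify against your NUnit version:)

<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner" type="System.Configuration.NameValueSectionHandler" />
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- run the tests on an STA thread, as WatiN requires -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>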
On Thursday, 2 August 2012 16:41:11 UTC+1, Matt Ellis wrote:
This sounds like it's Internet Explorer's protected mode getting in the way. IE runs different zones, such as Internet and Intranet
(and about:blank) in different processes, and IIRC WatiN doesn't
handle that very well. If you can disable protected mode on your
server, you should be fine.
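(And if protected mode is the culprit, it can be disabled per zone through Internet Options, or scripted. A hedged PowerShell sketch for the current user; to the best of my knowledge registry value 2500 toggles protected mode, with 3 = off and 0 = on:)

# disable IE protected mode for zones 1-4 (Intranet, Trusted, Internet, Restricted)
1..4 | ForEach-Object {
    Set-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\$_" -Name "2500" -Value 3
}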