I am building an app using ActionScript 3, with Flash Builder 4 as my IDE.
The IDE supports a unit testing framework called "FlexUnit".
I can build and run tests within the IDE, no problem.
After much pain and suffering I figured out how to build the unit tests as a swf from the command line. I can point a browser or flash player at the swf and the tests run.
But for an automated build system this is no good: I would like to build the tests, run them, and collect/analyze the results to tell which tests, if any, are failing.
I can imagine some hackery: hack the FlexUnit base libraries to dump output to stderr instead of just the IDE console, then hack together a script that points a browser at the swf, counts to 60, kills the browser, and checks stderr.
But that's hideous.
I have to believe there's some way to build and run from the command line that works nicely with automated build systems.
Further complication: I am a relative noob with ActionScript (~1 month). My background is C++, makefiles, etc. All the stuff I had to do to get the tests even to build outside the IDE (a build.xml file, Ant) was complete Greek to me; I was just cutting and pasting from examples I could find.
As far as I'm aware your only options for running the swf are in the browser or in the standalone player. Running in the player should not be a problem for your continuous integration environment as long as you can get at the test results and exit the application.
To print test results to stdout you need to add a TextListener to your FlexUnitCore instance.
core.addListener( TextListener.getDefaultTextListener( LogEventLevel.DEBUG ) );
To exit the application after the tests have run...
System.exit(0);
For example, your top level mxml file might look like this...
<?xml version="1.0" encoding="utf-8"?>
<mx:Application
    xmlns:mx="http://www.adobe.com/2006/mxml"
    creationComplete="runMe()"
    xmlns:adobe="http://www.adobe.com/2009/flexUnitUIRunner"
    >
    <mx:Script>
        <![CDATA[
            import org.flexunit.runner.FlexUnitCore;
            //import org.flexunit.listeners.UIListener;
            //import org.flexunit.listeners.CIListener;
            import org.flexunit.internals.TextListener;
            import mx.logging.LogEventLevel;
            import flash.system.System;
            import unit_tests.TestAuthentication.TestAuthentication;

            private var core:FlexUnitCore;

            public function runMe():void {
                core = new FlexUnitCore();
                //core.addListener(new UIListener(uiListener));
                //core.addListener(new CIListener());
                core.addListener( TextListener.getDefaultTextListener( LogEventLevel.DEBUG ) );
                core.run( TestAuthentication );
                System.exit(0);
            }
        ]]>
    </mx:Script>
</mx:Application>
Then all you need to do is parse the output.
It's not as elegant as we might like but it should work.
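For example, if you run the swf in the debug standalone Flash Player with trace logging enabled (TraceOutputFileEnable=1 in mm.cfg), the listener's output ends up in flashlog.txt, and the build script can scan it. A minimal sketch, assuming the Linux default log location and that TextListener prints a JUnit-style summary containing "FAILURES" when tests fail; check what your FlexUnit version actually emits and adjust:
# fail the build if the trace log reports test failures
LOG=~/.macromedia/Flash_Player/Logs/flashlog.txt
if grep -q "FAILURES" "$LOG"; then
    echo "Unit tests failed"
    exit 1
fi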
This post has a solution: http://devnet.jetbrains.com/message/5507979#5507979. It works for me like a champ.
I have written some libraries in Groovy.
My SoapUI scripts, which are currently used for API automation, use these libraries. As there is no debug option in SoapUI Pro, it is very hard to find the failures. Can someone explain how to debug, from Eclipse, a Groovy script that is called internally by a SoapUI script?
Here is the way I get it done:
Instead of writing the logic in a Groovy script using the SoapUI script editor, create a Groovy/Java class (your choice) with methods implementing the same logic. Here I assume the script has a relatively large amount of code rather than just a few lines.
This has a couple of advantages:
IntelliSense (which is not available if you write the same in the SoapUI tool)
Formatting of code
Easy debug
Maintenance of the code would be simple
Create a Groovy/Java project in the IDE of your choice (IntelliJ suits Groovy projects better, in my personal view). Put the logic in classes/methods, compile those classes, and package them into a jar file. Place it under the SOAPUI_HOME/bin/ext directory.
Edit the SoapUI launch script (SOAPUI_HOME/bin/soapui.sh on Unix or soapui.bat on Windows) and add the debug parameters to JAVA_OPTS, say
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6006.
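For example, the amended line in soapui.sh might look like this (the stock JAVA_OPTS line differs between SoapUI versions, so append to yours rather than copying this verbatim):
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6006"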
In the Groovy script, just instantiate the class created above and call the appropriate method, passing in the variables available in the Groovy script context (log, testRunner, context, etc.) as arguments. This keeps the script itself down to a few lines, as in the sketch below.
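A minimal sketch of such a script step; the class and method names are hypothetical stand-ins for whatever you packaged into the jar:
// hypothetical class compiled into the jar under SOAPUI_HOME/bin/ext
import com.example.api.AuthChecker

// pass the SoapUI context objects into the compiled code
def checker = new AuthChecker(log, context, testRunner)
checker.verifyLoginResponse()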
Debugging In Action:
In your IDE, configure remote debugging and add breakpoints where needed. Then start debugging.
Now, just run the Groovy script and switch to the IDE; it should stop at the breakpoint, and you can step through the code normally, just as with Java projects in your IDE.
This works best for me.
EDIT:
Of course, this requires programming knowledge, familiarity with working in an IDE (which, per the question, I assume the user has), configuring the build/class path, etc.
Can't be done. SmartBear has been talking about this since at least 2007 (when SoapUI was still owned by Eviware), but still has not delivered. Here is one source: http://community.smartbear.com/t5/SoapUI-NG/Debugging-Groovy-scripts/td-p/33995
To do this in-editor, you open the Automation tab, connect to the session, and choose which tests to run.
How do you do it from the command line?
(NB: not compiling; the scripts in UnrealEngine/Engine/Build/BatchFiles/* already comprehensively cover building the application and compiling it. Specifically, given that you have code that is 100% happy to compile, how do you kick the test suite off?)
--
Here's some more info, from recent testing on 4.10:
Running tests from the editor:
UE4Editor Project.uproject -ExecCmds="Automation RunTests MyTest"
Notice the absence of the -Game flag; this launches the Editor and runs the tests successfully in the editor console.
Running the game engine and using the 'popup log window':
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log
This runs the game in 'play' mode, pops up an editor window; however, the logs stop at:
LogAssetRegistry: FAssetRegistry took 0.0004 seconds to start up
...and the game never closes or executes the tests.
Running the game engine and logging to a file:
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log=Log.txt
This runs the game in 'play' mode, and then stalls and never exits.
It does not appear to run any tests or log to any files.
The folder Saved/Logs does not exist after quitting the running game.
Running in the editor, test types, etc...
see: https://answers.unrealengine.com/questions/358821/hot-reload-does-not-re-compile-automation-tests.html,
Hot reload is not supported for tests; so this isn't an option.
There has also been some suggestion in various places that the test type (e.g. ATF_Game, ATF_Editor) has some effect on whether tests run or can be run; perhaps this is an issue too, but I've tried all kinds of combinations with no success.
--
I've tried all kinds of combinations of things trying to get this working, with no success so it's time for a bounty.
I'll accept an answer which reliably:
Executes a specific test from the command line
Logs the output from that test to a file
Right, no one has any idea here or on the issue tracker.
After some serious digging through the UE4 source code, here's the actual deal, which I leave here for the next suffering soul who can't figure this out:
To run tests from the command line, and log the output and exit after the test run use:
UE4Editor.exe path/to/project/TestProject.uproject
-ExecCmds="Automation RunTests SourceTests"
-unattended
-nopause
-testexit="Automation Test Queue Empty"
-log=output.txt
-game
On OSX use UE4Editor.app/Contents/MacOS/UE4Editor.
Notice that the logs will, regardless of what you supply, ultimately be placed in:
WindowsNoEditor/TestProject/Saved/Logs/output.txt
or
~/Library/Logs/TestProject/output.txt
Notice that for Mac this is outside of your project directory, for example in /Users/doug/Library/Logs/TestProject. (Who thought that was a good idea?)
(see https://wiki.unrealengine.com/Locating_Project_Logs#Game_Logs)
You can list automation tests using:
-ExecCmds="Automation List"
...and then parse the response to find tests to run; automation commands may be chained, for example:
-ExecCmds="Automation List, Automation RunAll"
Do you mean the in-editor command line or the Windows command line?
In the editor you can use the Automation command with parameters, e.g. Automation RunAll
In the Windows command line you can specify unreal command parameters with -ExecCmds. To run all tests in your project: UE4Editor.exe YOURPROJECT -Game -ExecCmds="Automation RunAll"
For anyone still wondering: there is a bug in the editor that makes the test list get flushed before the tests are run when they are started from the command line (be it at startup or after).
This means that the editor actually compiles a list of tests to run, which is then flushed by another part of the program. The editor then thinks that it has finished running all the tests and, since there are no errors, shows that they all succeeded.
I can post a fix for this if anyone is interested, but it introduces another minor bug.
I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM 2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite against a build which gets attached to this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better picture of what I'm doing at work might lead to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that, along with not having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases to a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish, then attached that build to the main test suite. I then clicked on run a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build number column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this Build Definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose "Run with Options" when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests used to automate the Test Cases will be deployed automatically to the test environment and the tests will be executed.
That is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.
I noticed on "Can't get Zend Studio and PHPUnit to work together" that a comment says:
If you want ZS to run the PHPunit bootstrap, you have to specifically
select the file PHPunit.xml and tell it to run as PHPunit test. If you
just select an individual test and run as PHPunit test, the bootstrap
will not be run
That trick actually helped me to be able to run unit tests at all. However, as my unit tests grow, it's becoming more and more painful to have to run the entire test suite when I just need to run my most recently written test.
So I need help adding the necessary code to the unit test (presumably in setUp) so that both the tests/bootstrap.php and the regular bootstrap for the whole application run. Hopefully this would let me do right click -> run as -> PHPUnit Test on an individual test file.
I'm new to Zend / Zend Studio, so please keep answers at a basic level. The current setUp() function for one of my tests is the following, which I believe runs the whole app's bootstrap:
public function setUp() {
    $this->bootstrap = new Zend_Application(APPLICATION_ENV, APPLICATION_PATH . '/configs/application.ini');
    parent::setUp();
}
How does this need to change to enable running just this test file in isolation? (which I think involves calling both the tests/bootstrap.php and the application bootstrap as above)
You have to set up PHPUnit with Zend Studio / Eclipse first; then you can run each unit test file individually in your IDE console.
Here's a tutorial that might help.
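If the goal is to make a single test file runnable on its own, one pattern that may help is to guard-include the suite bootstrap at the top of the test file. A sketch only: the APPLICATION_ENV guard, the relative path, and the class name are assumptions to adapt to your layout (the setUp body is the one from the question, unchanged):
<?php
// if PHPUnit was launched on this file alone, the suite bootstrap in
// tests/bootstrap.php has not run yet, so pull it in; APPLICATION_ENV is
// assumed to be defined there
if (!defined('APPLICATION_ENV')) {
    require_once dirname(__FILE__) . '/../bootstrap.php';
}

class ExampleTest extends Zend_Test_PHPUnit_ControllerTestCase
{
    public function setUp() {
        // the per-test application bootstrap stays exactly as before
        $this->bootstrap = new Zend_Application(APPLICATION_ENV,
            APPLICATION_PATH . '/configs/application.ini');
        parent::setUp();
    }
}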
NUnit works quite well with CruiseControl.NET, but there is one thing that irritates me a lot.
If there is a test that causes NUnit to crash, I get only a little information about the crash, because the NUnit XML report never gets a chance to be created and merged into the CruiseControl report.
I need a way to report the progress even when Nunit crashes during the execution.
I have tried to force each test to output some information to the console to resolve this problem. I thought about using the SetUp method, but I haven't found any good way to get the name of the currently running test.
I think a better answer would be to create an NUnit add-in that implements the EventListener interface and captures the TestStarted event to output the progress to the console or a file (a minimal sketch appears after the procedure below).
The EventListener interface is documented on the NUnit website: http://nunit.org/index.php?p=eventListeners&r=2.5
In addition, we can make the Dashboard report better even when NUnit crashes during its execution. We can use the following procedure to ensure that the Dashboard always shows something about the tests:
Run tests with the EventListener which outputs the progress to a separate file
After running tests, use another program to check the file
If the file does not contain a specific "end line", generate a special XML report based on the file and merge it into the CruiseControl log
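Here is a minimal sketch of such an add-in. The member signatures follow the NUnit 2.5 docs as I read them, and the "EventListeners" extension point name is an assumption to verify against your NUnit version; unused members are stubbed out:
using System;
using System.IO;
using NUnit.Core;
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Writes test progress to a file that survives a crash")]
public class ProgressListener : IAddin, EventListener
{
    public bool Install(IExtensionHost host)
    {
        // "EventListeners" is the documented extension point name (assumed)
        host.GetExtensionPoint("EventListeners").Install(this);
        return true;
    }

    public void TestStarted(TestName testName)
    {
        // append-and-close on every event so the data survives a crash
        File.AppendAllText("progress.log", "STARTED " + testName.FullName + Environment.NewLine);
    }

    public void TestFinished(TestResult result)
    {
        File.AppendAllText("progress.log", "FINISHED " + result.Name + Environment.NewLine);
    }

    public void RunFinished(TestResult result)
    {
        // the "end line" the post-processing step looks for
        File.AppendAllText("progress.log", "END OF RUN" + Environment.NewLine);
    }

    // the remaining EventListener members are not needed for simple progress logging
    public void RunStarted(string name, int testCount) { }
    public void RunFinished(Exception exception) { }
    public void SuiteStarted(TestName testName) { }
    public void SuiteFinished(TestResult result) { }
    public void UnhandledException(Exception exception) { }
    public void TestOutput(TestOutput testOutput) { }
}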
If getting the name of the currently running test is what you're after, you could grab it with the following:
using System.Diagnostics;
...
[Test]
public void SomeTestThatWillCrash()
{
    StackFrame sf = new StackFrame();
    Console.WriteLine("Now running method: " + sf.GetMethod().Name);
    ...
}
CruiseControl.NET recommends that you run NUnit through your builder (i.e. NAnt/MSBuild). See here: http://confluence.public.thoughtworks.org/display/CCNET/NUnit+Task. As they describe, this allows you to run the tests locally first, which should surface an exception that you can clear up.
That being said, are your developers running these unit tests prior to checking in code? That could ease this issue. If it's an integration issue, I would suggest grabbing the latest code base and running the tests locally to see what is out of sorts.
I don't know if NUnit is able to create the results file even when it crashes. Even if it did, you could run into problems if that file is not well formed due to the crash.
You could use @jpoh's approach but do it in the TestSetup method, which would require you to do it per fixture. If really needed, you could write a base class that all your test fixtures inherit from that implements this method.
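A rough sketch of that base class. Note that the StackFrame trick from the earlier answer won't work from SetUp (it would just report "SetUp"), so this sketch assumes an NUnit version with TestContext.CurrentContext (present in newer releases; older 2.5.x versions expose the name differently, if at all):
using System;
using NUnit.Framework;

public abstract class LoggingFixtureBase
{
    [SetUp]
    public void ReportTestStart()
    {
        // TestContext is only available in newer NUnit releases
        Console.WriteLine("Now running: " + TestContext.CurrentContext.Test.FullName);
    }
}

// fixtures opt in by inheriting the base class
[TestFixture]
public class MyTests : LoggingFixtureBase
{
    [Test]
    public void SomeTest() { /* ... */ }
}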
Another solution is to use MSBuild to run NUnit, via the NUnit task in the MSBuild Community Tasks library. This allows you to continue on error and still get the exit code back from NUnit. You won't find out which method caused the problem, but it might help some. Here is my MSBuild target:
<Target Name="UnitTest" DependsOnTargets="BuildIt">
  <NUnit Assemblies="@(TestAssemblies)"
         ToolPath="$(NUnitx86Path)"
         WorkingDirectory="%(TestAssemblies.RootDir)%(TestAssemblies.Directory)"
         OutputXmlFile="@(TestAssemblies->'%(FullPath).$(NUnitFile)')"
         Condition="'@(TestAssemblies)' != ''"
         ExcludeCategory="$(ExcludeNUnitCategories)"
         ContinueOnError="true">
    <Output TaskParameter="ExitCode" ItemName="NUnitExitCodes"/>
  </NUnit>
  <!-- Copy the test results for the CCNet build before a possible build failure (see next step) -->
  <CallTarget Targets="CopyTestResults" Condition="'@(TestAssemblies)' != ''"/>
  <Error Text="Test error(s) occurred" Code="%(NUnitExitCodes.Identity)" Condition=" '%(NUnitExitCodes.Identity)' != '0' And '@(TestAssemblies)' != ''"/>
</Target>
This probably won't fit your needs as is, but is something to try out and play with.
That said, I would agree with @rifferte that it sounds like you need to debug the problem locally and not rely on CC.NET to handle the reporting.