To do this in-editor, you open the Automation tab, connect to the session, and choose which tests to run.
How do you do it from the command line?
(NB: not compiling; the scripts in UnrealEngine/Engine/Build/BatchFiles/* already comprehensively cover both building and compiling the application. Specifically, given code that is 100% happy to compile, how do you kick off the test suite?)
--
Here's some more info, from recent testing on 4.10:
Running tests from the editor:
UE4Editor Project.uproject -ExecCmds="Automation RunTests MyTest"
Notice the absence of the -Game flag; this launches the Editor and runs the tests successfully in the editor console.
Running the game engine and using the 'popup log window':
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log
This runs the game in 'play' mode, pops up an editor window; however, the logs stop at:
LogAssetRegistry: FAssetRegistry took 0.0004 seconds to start up
...and the game never closes or executes the tests.
Running the game engine and logging to a file:
UE4Editor Project.uproject -Game -ExecCmds="Automation RunTests MyTest" -log=Log.txt
This runs the game in 'play' mode, and then stalls and never exits.
It does not appear to run any tests or log to any files.
The folder Saved/Logs does not exist after quitting the running game.
Running in the editor, test types, etc...
See: https://answers.unrealengine.com/questions/358821/hot-reload-does-not-re-compile-automation-tests.html
Hot reload is not supported for tests, so this isn't an option.
There's also been some suggestion in various places that the test type (e.g. ATF_Game, ATF_Editor) has some effect on whether tests run or can be run; perhaps this is an issue too, but I've tried all kinds of combinations with no success.
--
I've tried all kinds of combinations of things trying to get this working, with no success, so it's time for a bounty.
I'll accept an answer which reliably:
Executes a specific test from the command line
Logs the output from that test to a file
Right, no one has any idea here or on the issue tracker.
After some serious digging through the UE4 source code, here's the actual deal, which I leave here for the next suffering soul who can't figure this out:
To run tests from the command line, log the output, and exit after the test run, use:
UE4Editor.exe path/to/project/TestProject.uproject
-ExecCmds="Automation RunTests SourceTests"
-unattended
-nopause
-testexit="Automation Test Queue Empty"
-log=output.txt
-game
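The same invocation on a single line, ready to paste (the project path here is just an example):
UE4Editor.exe C:\Projects\TestProject\TestProject.uproject -ExecCmds="Automation RunTests SourceTests" -unattended -nopause -testexit="Automation Test Queue Empty" -log=output.txt -game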
On OSX use UE4Editor.app/Contents/MacOS/UE4Editor.
Notice that the logs will, regardless of the path you supply, ultimately be placed in:
WindowsNoEditor/TestProject/Saved/Logs/output.txt
or
~/Library/Logs/TestProject/output.txt
Notice that on Mac this is outside of your project directory, in, for example, /Users/doug/Library/Logs/TestProject. (Who thought that was a good idea?)
(see https://wiki.unrealengine.com/Locating_Project_Logs#Game_Logs)
You can list automation tests using:
-ExecCmds="Automation List"
...and then parse the response to find tests to run; automation commands may be chained, for example:
-ExecCmds="Automation List, Automation RunAll"
Do you mean the in-editor command line or the Windows command line?
In the editor you can use the Automation command with parameters, e.g. Automation RunAll
In the Windows command line, you can specify Unreal command parameters with -ExecCmds. To run all tests in your project: UE4Editor.exe YOURPROJECT -Game -ExecCmds="Automation RunAll"
For anyone still wondering: there is a bug in the editor that causes the test list to be flushed before the tests are run when they are started from the command line (be it at startup or after).
This means that the editor actually compiles a list of tests to run, which is then flushed by another part of the program. The editor then thinks it has finished running all the tests and, since there are no errors, reports that they all succeeded.
I can post a fix for this if anyone is interested, but it introduces another minor bug.
Related
I've just started using the test functionality in the Python extension. I want to debug my test; however, when I hit the debug button in the tests extension, the debugger runs but doesn't stop on the bug and display a red box with the error. It just reports a failed test in the debug console. This means I can't examine the variables that caused the error.
I've tried just running the normal debugger (not from within the tests section) on the test file, but the same thing happens. I've tried the normal debugger on a non-test file and it works fine.
Is the test debugger supposed to work in the same way as the normal debugger, i.e. stop on a bug?
Edit:
It's worth mentioning that the bug occurs in the function I'm testing rather than in the test file itself.
Edit_2:
I've tested breakpoints and they seem to be working OK. I can't get a conditional one to work, though.
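In case it helps anyone landing here: test runners generally catch exceptions in order to report failures, which is why the debugger never sees an unhandled exception inside the function under test. If the tests happen to run under pytest, its post-mortem flag is one way to land in a debugger at the exact point of failure (the test path is a placeholder):
# Drop into pdb at the point of failure instead of just
# reporting a failed test:
pytest --pdb tests/test_example.py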
I'm working with Stainless, a software verifier for Scala programs. I would like to debug the verification process of a sample program in IntelliJ IDEA. In a previous post, I solved this integration problem for an interactive theorem prover. But now I'm facing two problems:
Apparently, the verification software runs at compile time. That is, I enter the sbt console and run the compile command, and then the verification process seems to be done. You may try this with this verified example. This situation is new to me, since I was used to debugging the program while it executes.
All the setup in the sbt files of the example above (see for instance this file) seems to refer to online content, while I want to make sure that I work with my local copy, forked from the original repository of the verifier.
None of the configurations I tried worked. Can you help me out with this problem?
Details
This is the current configuration page of Stainless.
If the verification runs within the sbt process, you can debug it by attaching the debugger to sbt. IntelliJ makes this easy with the embedded sbt shell:
open the sbt shell tool window
click the "attach debugger to sbt shell" button on the left
set breakpoints in your code
run the task
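If you prefer a terminal sbt to the embedded shell, you can get the same effect by starting the JVM with a standard JDWP agent and attaching an IntelliJ "Remote JVM Debug" configuration; the port below is an arbitrary choice:
# Make the sbt JVM listen for a debugger on port 5005;
# suspend=y makes it wait until the debugger attaches, so
# breakpoints are in place before `compile` starts.
export SBT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
sbt compile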
I am using a PowerShell script that internally calls msbuild to build my solutions. This works in principle, so the solution files are OK.
I can repeat the build; it works flawlessly.
But the build hangs:
the first time I start the script (after a reboot)
after some time or certain actions during the work day (no idea what changes)
So my suspicion is that msbuild is using some component that is not loaded when I reboot, or that gets unloaded during the work day.
But I have no clue how to find the problem...
I am using this exe:
C:\Program Files (x86)\MSBuild\14.0\bin\MsBuild.exe
Any ideas?
For anybody running into this issue: Roslyn does a "shared compilation" by default, which means that results of compiles are reused for further compilations to gain speed. You can switch this off by setting UseSharedCompilation to "False" in your VBPROJ files, or by using the corresponding switch for MSBuild. Switching this off makes compiles slower, but the run no longer hangs.
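For reference, the switch can also be passed straight on the command line (the solution name is just an example), and you can check whether the shared compiler server is the process that's hanging:
rem Disable the Roslyn shared compiler server for one build:
msbuild MySolution.sln /p:UseSharedCompilation=false
rem Check whether the compiler server process is currently running:
tasklist /FI "IMAGENAME eq VBCSCompiler.exe"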
When trying to build the unit tests created using the default Xcode Unit Test bundle target, it looks like it's stuck on the "Run custom shell script 'Run Script'" phase.
I also notice high CPU usage on the process "otest", to the point where the fans kick in within seconds.
The only useful message I see when expanding the line is:
/Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
Couldn't open shared capabilities memory GSCapabilities (No such file or directory)
The only option I have at that time is to stop the build.
I have to say I was running unit tests perfectly fine up to this moment, but I can't say for sure what I did to cause this.
That's on Xcode 3.2.4.
After updating to 3.2.5, the run script now fails with an error:
Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.2.sdk/Developer/usr/bin/otest' exited abnormally with code 138 (it may have crashed).
I guess the problems are related?
I did find some answers on SO about how exception handling now works differently when using NSInvocation (which otest seems to use), but no real solution to this.
I had this happen to me. I made it go away by scrapping my old testing target profile, creating a new one, and pointing all my tests to it. I was too frustrated to compare the profiles line by line to figure out what had changed.
This looks like an infinite loop to me. Try adding some NSLog statements and/or debugging your tests with gdb (by adding otest as a custom executable).
This happened to me after updating to Xcode 9 and using a script for updating the localizable strings file; a minor bug caused the script to never finish. After updating BartyCrouch, everything worked normally.
https://github.com/Flinesoft/BartyCrouch/issues/66
I'm trying to integrate Robot Framework (an acceptance testing framework) with TeamCity. In order to do this, it needs to send messages to the console output, which TeamCity will then read and report as real-time test progress/results. I'm doing this by calling the command line to run the tests with a simple exec task. Everything seemed to be working, except that I was only getting the results at the end of the run and not on the fly.
After a bit of struggling with NAnt I swapped to using MSBuild and everything worked first time.
I have what I need now, but for completeness I'd like to find out why I couldn't get it working with NAnt. As far as I can tell, the issue is that NAnt prefixes all console output with [exec]. Is it possible to suppress this?
I don't think the console output is configurable.
NAnt is open source: you could fork your own copy and/or submit a feature patch.