I have a unit test that takes 200 seconds to run, and I am trying to use the NetBeans profiler to speed it up. But the profiler doesn't run the unit test: it just creates an instance of the test class and exits, without running the actual test methods or the @Before / @After methods.
This is a Maven project using Surefire and JUnit 4.
Partial output is below.
Profiler Agent: Waiting for connection on port 5140, timeout 10 seconds (Protocol version: 9)
Profiler Agent: Established local connection with the tool
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.cris.puzzle.solvers.SudokuSolverTest
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec
Results :
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
Profiler Agent: Connection with agent closed
Profiler Agent: Connection with agent closed
Profiler Agent: Initializing...
Profiler Agent: Options: >C:/Program Files/NetBeans 6.8/profiler3/lib,5140,10<
Profiler Agent: Initialized succesfully
------------------------------------------------------------------------
BUILD SUCCESSFUL
------------------------------------------------------------------------
Total time: 14 seconds
Does anyone know how to make it work? Thank you.
There is a workaround: temporarily move your test code into application code, profile and improve it there, and once finished, move the improved code back into your JUnit tests. One way to sketch that is shown below.
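Instead of physically moving the code, you could get the same effect by launching the tests from a plain main() so the profiler sees an ordinary application run. A minimal, untested sketch using JUnit 4's JUnitCore, with SudokuSolverTest taken from the log above:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Throwaway launcher: profile this class like any other application,
// so the test methods execute inside the profiled JVM.
public class ProfiledTestRunner {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(SudokuSolverTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        System.out.println("Tests run: " + result.getRunCount());
    }
}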
I don't know what your OS is, but on Windows 7 (and probably also Vista) there is a known problem where JUnit needs write permission to its own directory, which in a default NetBeans installation sits under Program Files, where it does not have that access. In that case, though, you would probably have had problems with JUnit itself from the beginning.
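Another thing worth ruling out (this is a guess; your log doesn't prove it): Surefire forks a separate JVM for the tests by default, so a profiler agent attached to the Maven process never sees the test methods run. With Surefire 2.x you can keep the tests in the Maven JVM:

<!-- pom.xml sketch: forkMode is a Surefire 2.x option; 'never'
     makes the tests run in the same (profiled) JVM as Maven -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>never</forkMode>
  </configuration>
</plugin>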
Related
We are using Azure DevOps to build our .NET 4.7.2 application. As part of that, we run the unit tests, which use the NUnit framework and test runner.
It had been running fine for about 18 months, but stopped working in the last day :(
It uses the standard template for running the tests and looks like:
- task: VSTest@2
  displayName: "Running tests"
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'
However, it now fails at that step with the following logs:
NUnit Adapter 4.2.0.0: Test execution started
Running all tests in D:\a\1\s\Configuration.Tests\bin\Release\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Configuration.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Api.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\CommunicationTests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Domain.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\packages\NUnit3TestAdapter.4.2.1\build\net35\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
NUnit Adapter 4.2.0.0: Test execution complete
No test is available in D:\a\1\s\Configuration.Tests\bin\Release\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll D:\a\1\s\Configuration.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\Api.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\CommunicationTests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\Domain.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\packages\NUnit3TestAdapter.4.2.1\build\net35\testcentric.engine.metadata.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
##[error]Could not find testhost
Results File: D:\a\_temp\TestResults\VssAdministrator_WIN-FVJ4KUK6IFI_2022-08-18_12_38_44.trx
##[error]Test Run Aborted.
Total tests: Unknown
Passed: 110
Total time: 16.7203 Seconds
Vstest.console.exe exited with code 1.
**************** Completed test execution *********************
Test results files: D:\a\_temp\TestResults\VssAdministrator_WIN-FVJ4KUK6IFI_2022-08-18_12_38_44.trx
Created test run: 1080
Publishing test results: 112
Publishing test results to test run '1080'.
TestResults To Publish 112, Test run id:1080
Test results publishing 112, remaining: 0. Test run id: 1080
Published test results: 112
Publishing Attachments: 1
Execution Result Code 1 is non zero, checking for failed results
Completed TestExecution Model...
##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
##[error]Error: The process 'D:\a\_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c98420f9\2.205.0\Modules\DTAExecutionHost.exe' failed with exit code 1
##[error]Vstest failed with error. Check logs for failures. There might be failed tests.
Finishing: Running tests
Looking through this log, it seems that the NUnit tests ran successfully, but that the runner may then be trying to run MSTest tests? It is frustrating when DevOps gets an update and it breaks working pipelines.
We had a similar situation. Our unit tests run with xUnit.
/TestAdapterPath:"D:\a\1\s" Starting test execution, please wait...
A total of 36 test files matched the specified pattern.
2.4828
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.0273
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.3746
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
1.992
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
4.8409
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.1874
##[error]Could not find testhost
I compared the output of a successful run with that of the failed run and found they used different versions of the test platform. If you don't specify a version, the default is the latest, which may be a preview build. So I added a task to the YAML to pin a known-good version:
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    versionSelector: 'SpecificVersion'
    testPlatformVersion: '17.2.0'
- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
    codeCoverageEnabled: True
    vsTestVersion: 'toolsInstaller'
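Note that vsTestVersion: 'toolsInstaller' on the VSTest@2 task is what makes it use the platform laid down by the installer task, rather than whatever Visual Studio happens to be on the agent.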
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/test/vstest?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/vstest-platform-tool-installer?view=azure-devops
Our Azure DevOps pipeline seems to fail after executing exactly 21 tests, as if some invisible hard limit stops execution after 21 tests. It was working last week, with no changes made between then and now. I can't figure out the issue, so I thought I would try my luck here.
Here's the output of the failure: (Can't seem to post the full output so here's the relevant portion)
Starting the ALIFEMGSelectionTest_17 test...
Passed ALIFEMGSelectionTest_16 [48 s]
Starting the ALIFEMGSelectionTest_18 test...
Passed ALIFEMGSelectionTest_17 [49 s]
Starting the ALIFEMGSelectionTest_19 test...
Passed ALIFEMGSelectionTest_18 [54 s]
Starting the ALIFEMGSelectionTest_20 test...
Passed ALIFEMGSelectionTest_19 [1 m]
Starting the ALIFEMGSelectionTest_21 test...
Passed ALIFEMGSelectionTest_20 [59 s]
Starting the ALIFEMGSelectionTest_22 test...
Passed ALIFEMGSelectionTest_21 [59 s]
##[error]ALIFEMGSelectionTest_22 test: The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
##[error]ALIFEMGSelectionTest_23 test: Unable to activate the "ALIFEMGSelectionTest_23" test due to the following error: The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)
Skipped ALIFEMGSelectionTest_22 [46 s]
The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)
Skipped ALIFEMGSelectionTest_23
Results File: C:\_work\_temp\TestResults\admin_Desktop-09-30_15_13_56.trx
##[error]Test Run Failed.
Total tests: 21
Passed: 21
Total time: 18.7192 Minutes
Vstest.console.exe exited with code 1.
**************** Completed test execution *********************
Test results files: C:\_work\_temp\TestResults\DESKTOP.trx
Created test run: 3872
Publishing test results: 23
Publishing test results to test run '3872'.
TestResults To Publish 23, Test run id:3872
Test results publishing 23, remaining: 0. Test run id: 3872
Published test results: 23
Publishing Attachments: 1
Execution Result Code 1 is non zero, checking for failed results
Completed TestExecution Model...
##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
##[error]Error: The process 'C:\_work\_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c9\2.170.1\Modules\DTAExecutionHost.exe' failed with exit code 1
##[error]Vstest failed with error. Check logs for failures. There might be failed tests.
Finishing: VSTest
I am trying to run the Keycloak testsuite against an external Keycloak server that I have created.
I am using the base tests in integration-arquillian, with the following command:
mvn -f testsuite/integration-arquillian/tests/base/pom.xml clean install --log-file My_testsuite_integration_logs06.txt -Pauth-server-wildfly -Dauth.server.ssl.required=false -Dpageload.timeout=3600000 -Dauth.server.host={my-server-details} -Dauth.server.http.port={port#}
It works when I am using the embedded tests, but when I add the server details as stated in the HOW-TO-RUN.md file, it fails.
https://github.com/keycloak/keycloak/tree/stage/testsuite/integration-arquillian
========
[INFO] Running org.keycloak.testsuite.account.AccountFormServiceTest
11:56:38,313 ERROR [org.keycloak.testsuite.account.AccountFormServiceTest] [AccountFormServiceTest] null() FAILED
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.003 s <<< FAILURE! - in org.keycloak.testsuite.account.AccountFormServiceTest
[ERROR] org.keycloak.testsuite.account.AccountFormServiceTest Time elapsed: 0.003 s <<< ERROR!
java.lang.RuntimeException: Arquillian initialization has already been attempted, but failed. See previous exceptions for cause
at org.jboss.arquillian.junit.AdaptorManagerWithNotifier.handleSuiteLevelFailure(AdaptorManagerWithNotifier.java:36)
at org.jboss.arquillian.junit.AdaptorManager.initializeAdaptor(AdaptorManager.java:16)
at org.jboss.arquillian.junit.AdaptorManagerWithNotifier.initializeAdaptor(AdaptorManagerWithNotifier.java:19)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:109)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.jboss.arquillian.container.spi.client.container.LifecycleException: The java process starting the managed server exited unexpectedly with code [2]
at org.jboss.as.arquillian.container.managed.ManagedDeployableContainer.startInternal(ManagedDeployableContainer.java:152)
at org.jboss.as.arquillian.container.CommonDeployableContainer.start(CommonDeployableContainer.java:123)
...
==
I am getting failures similar to the above for all my tests and am not sure why. Any help would be great.
Your description of the issue is quite vague, but have you tried
mvn -f testsuite/integration-arquillian/tests/base/pom.xml clean install -Pauth-server-remote -Dauth.server.ssl.required=false -Dpageload.timeout=3600000 -Dauth.server.host={my-server-details} -Dauth.server.http.port={port#}
The profile you define could be where the issue lies: your command used -Pauth-server-wildfly, whereas the remote-server instructions use -Pauth-server-remote.
If you take a look at the documentation at https://github.com/keycloak/keycloak/blob/stage/testsuite/integration-arquillian/HOW-TO-RUN.md#remote-server-tests
it shows how to build the Keycloak server from source and run it (remotely or locally); the above command is then used to run the tests against it. Take note of:
"The testsuite currently doesn't work with port 80."
I am executing CTS on a Jacinto 6 Evaluation Module (ti-jacinto6evm) and I'm encountering a number of test case failures that I don't understand.
I started by building both AOSP and CTS; both builds were fine. I flashed my test hardware (ti-jacinto6evm) and then followed the instructions for setting up CTS. I have run CTS more than 10 times on the same device, and every time I got different results. The ti-jacinto6 device randomly hangs during execution of the test cases.
Most of the time the target hangs and shows the following error:
Reason: 'Failed to receive adb shell test output within 600000 ms. Test may have timed out, or adb connection to device became unresponsive'. Check device logcat for details
Device 170090035a700002 shell is unresponsive
05-30 04:52:21 W/TestInvocation: Invocation did not complete due to device 170090035a700002 becoming not available. Reason: Could not find device 170090035a700002
My target hangs on the following test cases:
CtsPreference2TestCases
CtsUiHostTestCases
CtsServicesHostTestCases
CtsTrustedVoiceHostTestCases
CtsTransitionTestCases
CtsAppTestCases
CtsGraphicsTestCases
CtsCameraTestCases
CtsWebkitTestCases
CtsFragmentTestCases
CtsViewTestCases
So I just excluded those test cases from the CTS and again ran CTS with the following command:
run cts --skip-preconditions --exclude-filter CtsPreference2TestCases --exclude-filter CtsServicesHostTestCases --exclude-filter CtsUiHostTestCases --exclude-filter CtsTrustedVoiceHostTestCases --exclude-filter CtsAppTestCases --exclude-filter CtsGraphicsTestCases --exclude-filter CtsTransitionTestCases --exclude-filter CtsCameraTestCases --exclude-filter CtsWebkitTestCases --exclude-filter CtsFragmentTestCases --plan cts
Problem 1
Some test cases run properly the first time, but when I run CTS a second time, some previously passing test cases fail.
1st iteration (166 modules passed):

Testcase name                    Passed   Failed   Total executed
armeabi-v7a CtsWebkitTestCases   201      12       213

2nd iteration (91 modules passed):

Testcase name                    Passed   Failed   Total executed
armeabi-v7a CtsWebkitTestCases   80       1        81
Problem 2
When CTS gets stuck on some test cases, it shows a TimeoutException:
com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:767)
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:736)
at com.android.ddmlib.AdbHelper.readAdbResponse(AdbHelper.java:222)
at com.android.ddmlib.AdbHelper.executeRemoteCommand(AdbHelper.java:456)
at com.android.ddmlib.AdbHelper.executeRemoteCommand(AdbHelper.java:382)
at com.android.ddmlib.Device.executeShellCommand(Device.java:617)
at com.android.tradefed.device.NativeDeviceStateMonitor.waitForDeviceShell(NativeDeviceStateMonitor.java:170)
at com.android.tradefed.device.WaitDeviceRecovery.recoverDevice(WaitDeviceRecovery.java:142)
at com.android.tradefed.device.NativeDevice.recoverDevice(NativeDevice.java:1720)
at com.android.tradefed.device.NativeDevice.performDeviceAction(NativeDevice.java:1661)
at com.android.tradefed.device.NativeDevice.runInstrumentationTests(NativeDevice.java:615)
at com.android.tradefed.device.NativeDevice.runInstrumentationTests(NativeDevice.java:698)
at com.android.tradefed.testtype.InstrumentationTest.runWithRerun(InstrumentationTest.java:797)
at com.android.tradefed.testtype.InstrumentationTest.doTestRun(InstrumentationTest.java:740)
at com.android.tradefed.testtype.InstrumentationTest.run(InstrumentationTest.java:643)
at com.android.tradefed.testtype.AndroidJUnitTest.run(AndroidJUnitTest.java:233)
at com.android.compatibility.common.tradefed.testtype.ModuleDef.run(ModuleDef.java:250)
at com.android.compatibility.common.tradefed.testtype.CompatibilityTest.run(CompatibilityTest.java:506)
at com.android.tradefed.invoker.TestInvocation.runTests(TestInvocation.java:761)
at com.android.tradefed.invoker.TestInvocation.prepareAndRun(TestInvocation.java:446)
at com.android.tradefed.invoker.TestInvocation.performInvocation(TestInvocation.java:300)
at com.android.tradefed.invoker.TestInvocation.invoke(TestInvocation.java:886)
at com.android.tradefed.command.CommandScheduler$InvocationThread.run(CommandScheduler.java:567)
What is the reason behind this failure?
There is no need to rerun the tests that already passed; you can continue and run only the tests that failed or were never run. Use the command
l r
to get the results, then use the session id from the first column to continue the run, like this:
run cts --retry 12
where 12 is the run session id displayed in the first column of l r.
An adb disconnection can indeed affect the test cases. I use the small script below to reconnect; you can modify it to suit your needs.
cat adb_retry.sh:
#!/bin/bash
# Reconnect a networked adb device whenever it drops off the device list.
while :
do
    # "adb devices" prints a header line, one line per device, and a blank
    # line, so fewer than 3 lines of output means our device has dropped out.
    if (( $(adb devices | wc -l) < 3 )); then
        echo "Connection for $1 dropped out"
        echo "retrying"
        adb connect "$1"
    fi
    sleep 5
    echo "Watching..."
done
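Run it in the background with the device's network address as its only argument, for example ./adb_retry.sh 192.168.0.10:5555 (the address here is just a placeholder for your own device).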
We have recently upgraded to Angular 5. Since then my Protractor tests have started failing with the reason "Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.".
All these tests were working fine before.
Protractor version: 5.2.0
Karma version: 1.7.0
I would highly appreciate your suggestions.
Thanks
This is a Jasmine timeout; see the Protractor guidance on timeouts from Jasmine:
Spec Timeout
If a spec (an 'it' block) takes longer than the Jasmine timeout for any reason, it will fail.
Looks like: a failure in your test results - timeout: timed out after 30000 msec waiting for spec to complete
Default timeout: 30 seconds
How to change: To change for all specs, add jasmineNodeOpts: {defaultTimeoutInterval: timeout_in_millis} to your Protractor configuration file. To change for one individual spec, pass a third parameter to it: it(description, testFn, timeout_in_millis).
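For example, a minimal Protractor configuration raising the spec timeout to 60 seconds (the 60000 value and the spec file name are just illustrations; pick whatever fits your slowest spec):

// protractor.conf.js
exports.config = {
  framework: 'jasmine',
  specs: ['spec.js'],  // placeholder spec file
  jasmineNodeOpts: {
    // Jasmine spec timeout in milliseconds (default is 30000)
    defaultTimeoutInterval: 60000
  }
};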
Try debugging your test; the Protractor documentation includes debugging instructions. Following any change, including an upgrade, your test may have broken in a way that leaves it hanging beyond the default Jasmine timeout.
A lazy option would be to increase your Jasmine timeout excessively, to see if your test fails with a different exception.