Protractor exits with retcode 1 if tests are skipped/pended

My build pipeline relies on the Protractor process terminating with a non-successful retcode when there are errors. All my tests pass, but I just added a test that is pended (using Jasmine's pending('reason')). This is causing Protractor to exit with a retcode of 1 and causing pipeline issues.
I've already patched the Jasmine spec reporter to correctly identify pended tests as non-failures. How can I do something similar to keep Protractor from exiting with a failure code? It still thinks there are test failures, so either it determines the run state before it hits my custom reporter, or it's using some other mechanism.
This is what my logs show:
[2021-01-29T06:06:46.352Z] **************************************************
[2021-01-29T06:06:46.352Z] * Pending *
[2021-01-29T06:06:46.352Z] **************************************************
[2021-01-29T06:06:46.352Z]
[2021-01-29T06:06:46.352Z] 1) Sample pended test
[2021-01-29T06:06:46.352Z] Pended as an example
[2021-01-29T06:06:46.352Z]
[2021-01-29T06:06:46.352Z] Executed 19 of 22 specs INCOMPLETE (1 PENDING) (2 SKIPPED) in 3 mins 49 secs.
[2021-01-29T06:06:46.352Z] [06:06:46] I/launcher - 0 instance(s) of WebDriver still running
[2021-01-29T06:06:46.352Z] [06:06:46] I/launcher - chrome #01 failed 1 test(s)
[2021-01-29T06:06:46.352Z] [06:06:46] I/launcher - overall: 1 failed spec(s)
[2021-01-29T06:06:46.352Z] [06:06:46] E/launcher - Process exited with error code 1
It seems as though Protractor itself still considers the test a failure, based on the 'overall: 1 failed spec(s)' message. How can I get Protractor to stop counting pended tests as failed and return exit code 0?

So it looks like when you call pending('reason'), the exit code is automatically set to 1, which is non-successful.
The problem is that if Protractor encounters a real error, its exit code will also be 1, so you can't easily catch it and be sure it's coming from pending tests.
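To illustrate, here is a minimal sketch (assuming Protractor's documented afterLaunch hook, which receives the final exit code and, if the callback returns a number, uses that number as the process exit code). You could remap the code in conf.js, but a blanket remap would also swallow genuine failures, which is exactly the problem described above:

// conf.js - a sketch, not a recommended fix: exit code 1 also covers
// real test failures, so remapping it unconditionally hides them.
exports.config = {
  // ... framework: 'jasmine', specs, capabilities ...
  afterLaunch: function (exitCode) {
    // Returning a number from afterLaunch overrides Protractor's exit code.
    return exitCode === 1 ? 0 : exitCode;
  },
};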
But since you're using Jasmine 3.6.3, you can take advantage of another feature it gives you: instead of pending, use xit. This will disable your it block. More info: https://jasmine.github.io/api/2.7/global.html#xit
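A minimal sketch of the difference (the spec body here is just an example):

// Disabled spec: xit marks the spec as pending without executing its body,
// so nothing inside it runs.
xit('Sample pended test', function () {
  expect(true).toBe(true);
});

// With pending(), the spec starts executing and is then marked pending,
// which is the case that trips Protractor's exit code here.
it('Sample pended test', function () {
  pending('Pended as an example');
});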

Related

Azure DevOps VSTest@2: ##[error]Could not find testhost

We are using Azure DevOps to build our .NET 4.7.2 application. As part of that, we are running the unit tests, which use the NUnit framework and test runner.
It has been running fine for about 18 months, but it just stopped working in the last day :(
It's using the standard template for running the tests and looks like:
- task: VSTest@2
  displayName: "Running tests"
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'
However, the step is now failing with the following logs:
NUnit Adapter 4.2.0.0: Test execution started
Running all tests in D:\a\1\s\Configuration.Tests\bin\Release\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Configuration.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Api.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\CommunicationTests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\Domain.Tests\bin\Release\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
Running all tests in D:\a\1\s\packages\NUnit3TestAdapter.4.2.1\build\net35\testcentric.engine.metadata.dll
NUnit3TestExecutor discovered 0 of 0 NUnit test cases using Current Discovery mode, Explicit run
NUnit Adapter 4.2.0.0: Test execution complete
No test is available in D:\a\1\s\Configuration.Tests\bin\Release\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll D:\a\1\s\Configuration.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\Api.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\CommunicationTests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\Domain.Tests\bin\Release\testcentric.engine.metadata.dll D:\a\1\s\packages\NUnit3TestAdapter.4.2.1\build\net35\testcentric.engine.metadata.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
##[error]Could not find testhost
Results File: D:\a\_temp\TestResults\VssAdministrator_WIN-FVJ4KUK6IFI_2022-08-18_12_38_44.trx
##[error]Test Run Aborted.
Total tests: Unknown
Passed: 110
Total time: 16.7203 Seconds
Vstest.console.exe exited with code 1.
**************** Completed test execution *********************
Test results files: D:\a\_temp\TestResults\VssAdministrator_WIN-FVJ4KUK6IFI_2022-08-18_12_38_44.trx
Created test run: 1080
Publishing test results: 112
Publishing test results to test run '1080'.
TestResults To Publish 112, Test run id:1080
Test results publishing 112, remaining: 0. Test run id: 1080
Published test results: 112
Publishing Attachments: 1
Execution Result Code 1 is non zero, checking for failed results
Completed TestExecution Model...
##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
##[error]Error: The process 'D:\a\_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c98420f9\2.205.0\Modules\DTAExecutionHost.exe' failed with exit code 1
##[error]Vstest failed with error. Check logs for failures. There might be failed tests.
Finishing: Running tests
Looking through this log, it seems that the NUnit tests have run successfully, but it might be trying to run MSTest as well? It is frustrating when DevOps gets an update and breaks working pipelines.
We had a similar situation.
The unit tests are run with xUnit.
/TestAdapterPath:"D:\a\1\s" Starting test execution, please wait...
A total of 36 test files matched the specified pattern.
2.4828
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.0273
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.3746
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
1.992
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
4.8409
##[error]Could not find testhost
Data collector 'Code Coverage' message: No code coverage data available. Profiler was not initialized..
2.1874
##[error]Could not find testhost
I compared the output of a successful run and the failed run and found that the test platform versions differed. If you don't specify a version, the default is the latest one, probably a preview. So I added something to the YAML to pin a workable version.
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    versionSelector: 'SpecificVersion'
    testPlatformVersion: '17.2.0'
- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
    codeCoverageEnabled: True
    vsTestVersion: 'toolsInstaller'
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/test/vstest?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/vstest-platform-tool-installer?view=azure-devops

Where to find error logs in the Yocto Project?

I got some errors while generating linux-imx-image.
WARNING: opencv-4.5.2.imx-r0 do_fetch: Failed to fetch URL git://github.com/opencv/opencv_extra.git;destsuffix=extra;name=extra, attempting MIRRORS if available
NOTE: Tasks Summary: Attempted 2831 tasks of which 0 didn't need to be rerun and 2 failed.
Summary: 2 tasks failed:
  /home/rohan/imx-yocto-bsp/sources/meta-openembedded/meta-oe/recipes-graphics/gphoto2/libgphoto2_2.5.27.bb:do_fetch
  /home/rohan/imx-yocto-bsp/sources/meta-openembedded/meta-oe/recipes-graphics/tesseract/tesseract_4.1.1.bb:do_fetch
Summary: There were 8 WARNING messages shown.
Summary: There were 4 ERROR messages shown, returning a non-zero exit code.
I am not sure where these errors and warnings are logged.

Azure DevOps - Release pipeline shows failure status when re-running failed tests, even if the re-run succeeded

I use SpecFlow with SpecFlow+ Runner, and I am using the Default.srprofile to re-run failed tests 3 times. In Visual Studio it shows 2 passed, 1 failed, but the status of the test is a failure; the same goes for Azure DevOps: if a re-run test passes, the outcome of the run is still a failure. The failures are sometimes caused by locator timeouts or server timeouts - not often, but we saw it happen a few times, which is why we decided to implement the re-run.
Could anyone help on this?
2022-02-09T12:40:13.8607507Z Test Run Failed.
2022-02-09T12:40:13.8608607Z Total tests: 37
2022-02-09T12:40:13.8609271Z Passed: 36
2022-02-09T12:40:13.8609858Z Failed: 1
2022-02-09T12:40:13.8617476Z Total time: 7.4559 Minutes
2022-02-09T12:40:13.9226929Z ##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
2022-02-09T12:40:14.0075402Z ##[error]Error: The process 'D:\Microsoft_Visual_Studio\2019\Common7\IDE\Extensions\TestPlatform\vstest.console.exe' failed with exit code 1
2022-02-09T12:40:14.8164576Z ##[error]VsTest task failed.
But the report states that it was retried 3 times, of which 2 retries were successful, yet the Azure DevOps run still shows a failure status.
The behavior of the report is correct, and sadly this can't be configured to be changed.
What you can do is adjust how the results are reported back to Azure DevOps.
You can configure this via the VSTest element in the .srprofile file.
This example means that at least one retry has to pass:
<VSTest testRetryResults="Unified" passRateAbsolute="1"/>
Docs: https://docs.specflow.org/projects/specflow-runner/en/latest/Profile/VSTest.html
Be aware that we have stopped the development of the SpecFlow+ Runner. More details here: https://specflow.org/using-specflow/the-retirement-of-specflow-runner/

RPC Error when running more than 21 tests via VSTest adapter in Azure DevOps pipeline

Our Azure DevOps pipeline seems to fail after executing exactly 21 tests. It seems like there is some invisible hard limit stopping execution after 21 tests. It was working last week, with no changes made between then and now. I can't seem to figure out the issue, so I thought I would try my luck here.
Here's the output of the failure: (Can't seem to post the full output so here's the relevant portion)
Starting the ALIFEMGSelectionTest_17 test...
Passed ALIFEMGSelectionTest_16 [48 s]
Starting the ALIFEMGSelectionTest_18 test...
Passed ALIFEMGSelectionTest_17 [49 s]
Starting the ALIFEMGSelectionTest_19 test...
Passed ALIFEMGSelectionTest_18 [54 s]
Starting the ALIFEMGSelectionTest_20 test...
Passed ALIFEMGSelectionTest_19 [1 m]
Starting the ALIFEMGSelectionTest_21 test...
Passed ALIFEMGSelectionTest_20 [59 s]
Starting the ALIFEMGSelectionTest_22 test...
Passed ALIFEMGSelectionTest_21 [59 s]
##[error]ALIFEMGSelectionTest_22 test: The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
##[error]ALIFEMGSelectionTest_23 test: Unable to activate the "ALIFEMGSelectionTest_23" test due to the following error: The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)
Skipped ALIFEMGSelectionTest_22 [46 s]
The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)
Skipped ALIFEMGSelectionTest_23
Results File: C:\_work\_temp\TestResults\admin_Desktop-09-30_15_13_56.trx
##[error]Test Run Failed.
Total tests: 21
Passed: 21
Total time: 18.7192 Minutes
Vstest.console.exe exited with code 1.
**************** Completed test execution *********************
Test results files: C:\_work\_temp\TestResults\DESKTOP.trx
Created test run: 3872
Publishing test results: 23
Publishing test results to test run '3872'.
TestResults To Publish 23, Test run id:3872
Test results publishing 23, remaining: 0. Test run id: 3872
Published test results: 23
Publishing Attachments: 1
Execution Result Code 1 is non zero, checking for failed results
Completed TestExecution Model...
##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
##[error]Error: The process 'C:\_work\_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c9\2.170.1\Modules\DTAExecutionHost.exe' failed with exit code 1
##[error]Vstest failed with error. Check logs for failures. There might be failed tests.
Finishing: VSTest

When I run CTS, after a few hours the adb connection to the device becomes unresponsive

I am executing CTS on a Jacinto 6 Evaluation Module (ti-jacinto6evm) and I'm encountering a number of test case failures that I don't understand.
I started by building both AOSP and CTS. Both builds went fine. I flashed my test hardware (ti-jacinto6evm) and then followed the instructions for setting up CTS. I have run CTS more than 10 times on the same device, and every time I got different results. The ti-jacinto6 device randomly hangs during execution of the test cases.
Most of the time the target hangs and shows the following error:
Reason: 'Failed to receive adb shell test output within 600000 ms. Test may have timed out, or adb connection to device became unresponsive'. Check device logcat for details
Device 170090035a700002 shell is unresponsive
05-30 04:52:21 W/TestInvocation: Invocation did not complete due to device 170090035a700002 becoming not available. Reason: Could not find device 170090035a700002
My target hangs on the test cases below:
CtsPreference2TestCases
CtsUiHostTestCases
CtsServicesHostTestCases
CtsTrustedVoiceHostTestCases
CtsTransitionTestCases
CtsAppTestCases
CtsGraphicsTestCases
CtsCameraTestCases
CtsWebkitTestCases
CtsFragmentTestCases
CtsViewTestCases
So I excluded those test cases and ran CTS again with the following command:
run cts --skip-preconditions --exclude-filter CtsPreference2TestCases --exclude-filter CtsServicesHostTestCases --exclude-filter CtsUiHostTestCases --exclude-filter CtsTrustedVoiceHostTestCases --exclude-filter CtsAppTestCases --exclude-filter CtsGraphicsTestCases --exclude-filter CtsTransitionTestCases --exclude-filter CtsCameraTestCases --exclude-filter CtsWebkitTestCases --exclude-filter CtsFragmentTestCases --plan cts
Problem 1
Some test cases run properly the first time, but when I run CTS a second time, previously passing test cases fail.
1st iteration (166 modules passed):

Testcase name                    Passed   Failed   Total executed
armeabi-v7a CtsWebkitTestCases   201      12       213

2nd iteration (91 modules passed):

Testcase name                    Passed   Failed   Total executed
armeabi-v7a CtsWebkitTestCases   80       1        81
Problem 2
When CTS gets stuck on some test cases it shows a TimeoutException:
com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:767)
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:736)
at com.android.ddmlib.AdbHelper.readAdbResponse(AdbHelper.java:222)
at com.android.ddmlib.AdbHelper.executeRemoteCommand(AdbHelper.java:456)
at com.android.ddmlib.AdbHelper.executeRemoteCommand(AdbHelper.java:382)
at com.android.ddmlib.Device.executeShellCommand(Device.java:617)
at com.android.tradefed.device.NativeDeviceStateMonitor.waitForDeviceShell(NativeDeviceStateMonitor.java:170)
at com.android.tradefed.device.WaitDeviceRecovery.recoverDevice(WaitDeviceRecovery.java:142)
at com.android.tradefed.device.NativeDevice.recoverDevice(NativeDevice.java:1720)
at com.android.tradefed.device.NativeDevice.performDeviceAction(NativeDevice.java:1661)
at com.android.tradefed.device.NativeDevice.runInstrumentationTests(NativeDevice.java:615)
at com.android.tradefed.device.NativeDevice.runInstrumentationTests(NativeDevice.java:698)
at com.android.tradefed.testtype.InstrumentationTest.runWithRerun(InstrumentationTest.java:797)
at com.android.tradefed.testtype.InstrumentationTest.doTestRun(InstrumentationTest.java:740)
at com.android.tradefed.testtype.InstrumentationTest.run(InstrumentationTest.java:643)
at com.android.tradefed.testtype.AndroidJUnitTest.run(AndroidJUnitTest.java:233)
at com.android.compatibility.common.tradefed.testtype.ModuleDef.run(ModuleDef.java:250)
at com.android.compatibility.common.tradefed.testtype.CompatibilityTest.run(CompatibilityTest.java:506)
at com.android.tradefed.invoker.TestInvocation.runTests(TestInvocation.java:761)
at com.android.tradefed.invoker.TestInvocation.prepareAndRun(TestInvocation.java:446)
at com.android.tradefed.invoker.TestInvocation.performInvocation(TestInvocation.java:300)
at com.android.tradefed.invoker.TestInvocation.invoke(TestInvocation.java:886)
at com.android.tradefed.command.CommandScheduler$InvocationThread.run(CommandScheduler.java:567)
What is the reason behind this failure?
There is no need to rerun the tests that already passed; you can continue and run only the tests that failed or were not run. Use the command
l r
to list the results, then use the session id from the first column to continue the run, like this:
run cts --retry 12
where 12 is the run session id displayed in the first column of l r.
adb disconnection can indeed affect the test cases. I use this small script to reconnect; you can modify it to suit your needs.
cat adb_retry.sh:
#!/bin/bash
# Watches the adb device list and reconnects to the device passed as $1
# (an ip:port suitable for "adb connect") whenever it drops off.
while :
do
  # "adb devices" prints a header line plus one line per attached device plus
  # a trailing blank line, so fewer than 3 lines means the device dropped out.
  if (( $(adb devices | wc -l) < 3 )); then
    echo "Connection for $1 dropped out"
    echo "retrying"
    adb connect "$1"
  fi
  sleep 5
  echo "Watching..."
done