Could someone help me? I'm having a hard time implementing UI tests and storing their results for a later upload to TestRail (once all tests have completed).
I have a TestRailManager with two methods (sketched below):
createRun() (makes an API call that returns a testRunID, which the second endpoint uses)
sendResults([TestResult])
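For reference, the manager described above might look roughly like this (a minimal sketch; only the two method names come from the description, while the types, endpoints, and bodies are hypothetical placeholders):

import Foundation

struct TestResult: Codable {
    let caseId: Int
    let statusId: Int  // TestRail status code, e.g. 1 = passed, 5 = failed
    let comment: String
}

final class TestRailManager {
    static let shared = TestRailManager()
    private(set) var testRunId: Int?
    private var collected: [TestResult] = []

    // Makes an API call that creates a run and stores the returned testRunID
    func createRun(completion: @escaping (Int) -> Void) {
        // POST to the TestRail add_run endpoint (request details omitted)
    }

    // Uploads the collected results against the stored testRunID
    func sendResults(_ results: [TestResult]) {
        // POST to the TestRail add_results_for_cases endpoint (details omitted)
    }
}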
Everything works if I have only one XCTestCase subclass with multiple test methods (for example, RegistrationUITests: XCTestCase).
Unfortunately, if I add multiple files, each with its own XCTestCase subclass, such as:
TestLoginUITests.swift
TestDashboardUITests.swift
TestEventsUITests.swift
and run the tests in parallel, the manager is not shared between them.
I tried a singleton (TestRailManager.shared), and I tried making the manager a global constant, but nothing worked.
What is the easiest way to share data between multiple XCTestCase classes running in parallel? Thank you.
I'm writing UI tests for an application whose table view contains a huge number of cells (~1000). Trying to access the cell elements shows the error below:
Failed to get matching snapshots: Timed out while evaluating UI query.
Scenarios:
If I try to get the cell count via XCUIApplication().tables.firstMatch.cells.count, it throws the exception.
Printing XCUIApplication().debugDescription for the first time prints the whole hierarchy (though it takes ~10 seconds).
After that, if I try to print the exact same line, XCUIApplication().debugDescription throws an exception.
I cannot check the cell count and cannot access the cell elements: the system tries to evaluate every UI element whenever I access any element of XCUIApplication().
This is the expected behaviour, so I thought of making a local copy of the XCUIApplication() data and running my queries against that locally saved instance. So I tried this:
private lazy var dummyApp: XCUIApplication = {
    return XCUIApplication()
}()
Here, I used a lazy variable (because I want to create the XCUIApplication() instance only once, to stop the system from repeatedly taking snapshots) and tried to print the cell count like this:
dummyApp.tables.firstMatch.cells.count
This also throws the same error.
Question:
Is there a way to save XCUIApplication()'s whole structure in a local variable? Or can I stop the snapshot process, or extend its timeout, before accessing an element?
P.S.: I'm using Xcode 11.3.1 and have been facing this issue for a long time. I'm posting this as a separate question because XCUITest changed how it interacts with the application starting with Xcode 9.
Answer:
You can use
let snapshot = try app.snapshot()
which gives you a snapshot of the app and all of its elements and sub-elements in one pass.
https://developer.apple.com/documentation/xctest/xcuielementsnapshot
https://developer.apple.com/documentation/xctest/xcuielementattributes
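For example, the cell count can be computed by walking a single snapshot instead of re-evaluating a query against the app each time. A minimal sketch (the test name is made up, and note that taking the snapshot itself can still be slow on a very large hierarchy):

import XCTest

final class HugeTableUITests: XCTestCase {
    func testCellCountViaSnapshot() throws {
        let app = XCUIApplication()
        app.launch()

        // One snapshot of the whole app; no further queries hit the app
        let root = try app.snapshot()

        // Recursively count elements of type .cell in the snapshot tree
        func countCells(_ node: XCUIElementSnapshot) -> Int {
            let own = node.elementType == .cell ? 1 : 0
            return own + node.children.map(countCells).reduce(0, +)
        }

        print("Cells in snapshot:", countCells(root))
    }
}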
Regarding performance: your UITableView is too big for black-box testing frameworks like XCTest (or Appium, Katalon Studio, etc.).
If you want to test it, you should consider switching to EarlGrey 2.0 (or other grey-box frameworks). The good thing is you can use EarlGrey 2.0 alongside your existing XCTest tests.
You can read more about testing framework performance in this article: https://devexperts.com/blog/ios-ui-testing-frameworks-performance-comparison/
P.S. Such big tables are also bad for users. Consider redesigning your UI.
I am writing a regression suite for APIs using ScalaTest, and I am stuck on the following scenario.
For instance, I have two tests:
test-1 {
  Call for API-1
  Call for API-2
  Call for API-3
}

test-2 {
  Call for API-5
  Call for API-6
  Call for API-7
}
I have created a generalized function to call the APIs, and I have set up separate JSON files for the URI, method, body, and headers.
Now my question: since all these calls are async and return Futures, one way I know to handle them within a single test is flatMap (or a for-comprehension).
But what about the second test: do I need to block the main thread there, or is there a smarter solution? I can't afford to run multiple test cases in parallel because of interdependencies on the resources they use.
It's better for your tests to be executed sequentially; for that, please refer to the ScalaTest user guide on how to deal with Futures.
Play also provides some utilities for handling a Future; their usage is described in its testing documentation.
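A minimal sketch of that sequential style using ScalaTest's ScalaFutures trait, assuming a hypothetical callApi(name) helper built from your JSON definitions (ScalaTest runs the tests within a suite sequentially unless you mix in ParallelTestExecution):

import org.scalatest.concurrent.ScalaFutures
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.time.{Seconds, Span}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

class RegressionSuite extends AnyFunSuite with ScalaFutures {

  // Hypothetical helper; the real one would read URI/method/body/headers from JSON
  def callApi(name: String): Future[String] = Future(s"$name ok")

  // Allow slow real API calls before futureValue times out
  implicit override val patienceConfig: PatienceConfig =
    PatienceConfig(timeout = Span(10, Seconds))

  test("test-1") {
    // Chain the async calls; block only when the chain is complete
    val result = for {
      _ <- callApi("API-1")
      _ <- callApi("API-2")
      r <- callApi("API-3")
    } yield r
    assert(result.futureValue == "API-3 ok")
  }

  test("test-2") {
    // Starts only after test-1 has finished
    val result = callApi("API-5").flatMap(_ => callApi("API-6")).flatMap(_ => callApi("API-7"))
    assert(result.futureValue == "API-7 ok")
  }
}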
We are developing an ABM under AnyLogic 7 and are at the point where we want to run multiple simulations from a single experiment. Different parameters are to be set for each simulation run, so as to generate results for a small suite of standard scenarios.
We have an experiment that auto-starts without the need to press "Run". Subsequently pressing "Run" does increment the experiment counter and rerun the model.
What we'd like is a way to have the auto-run, or a single press of "Run", launch a loop of simulations, with the programmatic adjustment of the variables linked to the passed parameters happening within that loop.
EDIT: One wrinkle is that some parameters are strings. The Optimization and Parameter Variation experiments don't lend themselves to enumerating a set of strings to be used across a set of simulation runs; you can only set one string per parameter for all the simulation runs within one experiment.
We've used the help sample "Running a Model from Outside Without Presentation Window" to add the auto-run capability to the initial experiment's setup block of code. What's needed is a way to wait for Run 0 to complete, then dispatch Run 1, 2, etc.
Pointers to tutorial models with such features, or to a snippet of code for the experiment's Java blocks, would be much appreciated.
Maybe I don't understand your need, but this certainly sounds like a case for a "Parameter Variation" experiment. You can specify which parameters should be varied in which steps, and running the experiment automatically starts as many simulation runs as needed, all without animation.
Hope that helps.
Like you, I was confronted with this problem. My aim was to use parameter variation with a model where the variation was over non-numeric data, and I knew the number of runs to start.
I succeeded in this task with the help of a Custom experiment.
First, I built an experiment of type "multiple run" and created my GUI (the user was able to select the string values used in each run).
Then I created a new Java class which inherits from the previous "multiple run" experiment.
In this class (called MyMultipleRunClass) there were:
- an override of the getMaximumIterations method from the default experiment, to give AnyLogic's default callback the correct number of iterations; the iteration index was also used to retrieve my parameter value from an array (see the sketch after the code below),
- an implementation of the static method start:
public static void start() {
    // Internal AnyLogic bootstrap for this experiment class
    prepareBeforeExperimentStart_xjal(MyMultipleRunClass.class);
    MyMultipleRunClass ex = new MyMultipleRunClass();
    ex.setCommandLineArguments_xjal(null); // no command-line arguments
    ex.setup(null);                        // builds the experiment and starts the runs
}
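The getMaximumIterations override mentioned above might look roughly like this (a sketch: the scenarios array holding the string values chosen in the GUI is an assumption, as is the int return type in this AnyLogic version):

// Hypothetical array of string-valued scenario parameters selected in the GUI
private static final String[] scenarios = { "baseline", "highDemand", "lowSupply" };

@Override
public int getMaximumIterations() {
    // Tell AnyLogic's default callback how many runs to dispatch
    return scenarios.length;
}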
Then the experiment to run is the 'empty' Custom experiment, which automatically starts the other multiple-run experiment through the presented subclass.
Maybe a shorter path exists, but from my point of view AnyLogic is used correctly here (no tricks with non-exposed interfaces) and it works as expected.
I'm relatively new to unit testing and I'm trying to figure out how to test an XHR request in a meaningful way.
1) The request pulls various scripts and other resources onto the page. I want to make sure the correct number of resources is loaded and that the request succeeds.
2) Should I use an actual request to the service that provides the resources? I looked at fakeServer and fakeXHR on sinonjs.org, but I don't really see how those can provide a meaningful test.
3) I'm testing existing code, which I realize is pretty pointless, but it's what I'm required to do. That being said, there is a lot of code in certain methods which could potentially be broken down into various tests. Should I break the existing code down and create tests for my interpreted expectation, or just write tests for what is actually there? ...if that makes any sense.
Thanks,
-John
I find it useful to use the sinon fakeServer to return various test responses that will exercise my client-side functions. You can set up a series of tests in which a fakeServer response returns data that you can use to subsequently check the behaviour of your code. For example, suppose you expect ten resource objects to be returned, you can create pre-canned xml or json to represent those resources and then check that your code has handled them properly. In another test, what does your code do when you only receive nine objects?
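A minimal sketch of that pattern with sinon's fakeServer, assuming a browser-like test environment (e.g. Karma or jsdom) where sinon can replace the global XMLHttpRequest; the /resources endpoint and the loadResources function under test are made-up stand-ins:

const sinon = require("sinon");
const assert = require("assert");

// Hypothetical function under test: fetches a resource list via XHR
// and hands the parsed array to a callback.
function loadResources(done) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/resources");
  xhr.onload = () => done(JSON.parse(xhr.responseText));
  xhr.send();
}

const server = sinon.fakeServer.create();

// Pre-canned response representing two resource objects
server.respondWith("GET", "/resources", [
  200,
  { "Content-Type": "application/json" },
  JSON.stringify([{ id: 1 }, { id: 2 }]),
]);

loadResources((resources) => {
  // Check that the code handled both objects; in another test,
  // return only one object and assert how the code copes.
  assert.strictEqual(resources.length, 2);
});

server.respond(); // flush the queued fake response, firing onload
server.restore();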
Begin writing your tests to cover your existing code. When those tests pass, begin breaking up your code into easier-to-understand and meaningful units. If the tests still pass, then great, you've just refactored your code and not inadvertently broken anything. Also, now you've got smaller chunks of code that can more readily be tested and understood. From this point on you'll never look back :-)
I have a test application that contains 29 tests inside a single TestFixture. I have defined a single TestFixtureSetUp and TestTearDown. Each test internally creates many objects, which in turn contain many threads. Until now I haven't used IDisposable.
My doubt: will the objects be disposed by NUnit after each test completes? Please correct me if I am wrong.
Sorry if I am wrong.
Thanks,
Vigi
AFAIK the objects will be collected by the garbage collector non-deterministically. NUnit does nothing special to dispose of anything created in a test; I've often had situations where I've crashed the test runner this way. You'll probably want to manage and destroy the threads inside each test yourself.
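One way to make that cleanup deterministic is an explicit [TearDown] that disposes whatever each test created (a sketch; Worker and its thread handling are hypothetical stand-ins for the objects described above):

using System;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class WorkerTests
{
    // Hypothetical object that owns a background thread
    private sealed class Worker : IDisposable
    {
        private readonly Thread _thread;
        private volatile bool _running = true;

        public Worker()
        {
            _thread = new Thread(() => { while (_running) Thread.Sleep(10); });
            _thread.Start();
        }

        public void Dispose()
        {
            _running = false; // signal the thread to exit
            _thread.Join();   // wait for it deterministically
        }
    }

    private Worker _worker;

    [SetUp]
    public void SetUp() => _worker = new Worker();

    [TearDown]
    public void TearDown() => _worker?.Dispose(); // runs after every test

    [Test]
    public void WorkerStartsAndStops() => Assert.That(_worker, Is.Not.Null);
}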