AWS Device Farm - Appium Python - Order of tests - pytest

I'm using Appium-Python with AWS Device Farm, and I noticed that AWS runs my tests in a random order.
Since my tests are partly dependent on each other, I need to find a way to tell AWS to run them in a specific order.
Any ideas about how I can accomplish that?
Thanks

I work for the AWS Device Farm team.
This seems like an old thread, but I will answer so that it is helpful to everyone in the future.
Device Farm parses the tests in a random order. In the case of Appium Python, it will be the order received from pytest --collect-only.
This order may change across executions.
The only way to guarantee an order right now is to wrap all the test calls in a single new test, which will be the only test called. Although not the prettiest solution, this is the only way to achieve it today (a sketch of the idea follows below).
We are working on bringing more parity between your local environment and Device Farm in the coming weeks.
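
As a rough sketch of that wrapper idea (the class, step names, and driver placeholder below are illustrative only, not part of any Device Farm API):

    # test_suite.py -- a minimal sketch of the "single wrapper test" workaround.
    # The individual steps are plain methods that pytest does not collect on
    # their own; the one test_ function calls them in the required order. All
    # names here (step_login, step_add_item, step_checkout) are placeholders
    # for your own Appium steps.

    class OrderedFlow:
        def step_login(self, driver):
            # e.g. driver.find_element(...).click() in a real Appium test
            assert driver is not None

        def step_add_item(self, driver):
            assert driver is not None

        def step_checkout(self, driver):
            assert driver is not None

    def test_full_flow():
        """The only test pytest collects; it runs the steps in a fixed order."""
        driver = object()  # replace with your real Appium webdriver fixture
        flow = OrderedFlow()
        flow.step_login(driver)
        flow.step_add_item(driver)
        flow.step_checkout(driver)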

Related

How to write Integration test: Forced logout by logging in from another terminal? #Flutter

I am using a webview for logging in to my application. If I log in to that account on device A, device B automatically logs out. How can I write an integration test for this forced-logout behavior? Thanks for the help.
There are a couple different paths you could take on this one:
1. Fake the check that decides whether the device should log out.
For this one, take whatever call you make to see if the user should automatically be logged out, and inject a mock or a fake that forces it to say "yes, log out" (a short sketch of this follows below).
I'd suggest combining this with a separate test on the backend that ensures that when someone logs in, the old credentials are flagged as expired as well. Between the two, you have good coverage that the system as a whole works.
2. Manually test this functionality with two devices.
This is pretty self-explanatory, but trying to get two separate automated tests to run in the proper order on two different devices consistently is probably not worth the effort, especially when a quick manual test can verify this functionality whenever a full end-to-end validation is required.
It wouldn't be a bad idea to do both of the options above: #1 for quick validations during development, then #2 as part of your major release process.
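
A minimal sketch of what option #1 looks like, shown in Python with unittest.mock purely to illustrate the pattern (AuthService and the session-checker names are hypothetical; in a Flutter app you would do the same thing with its own test and mocking tools):

    # Sketch of option #1: fake the "should this device log out?" check.
    from unittest.mock import Mock

    class AuthService:
        def __init__(self, session_checker):
            self.session_checker = session_checker
            self.logged_in = True

        def refresh(self):
            # In the real app this would ask the backend whether the session
            # was invalidated by a login on another device.
            if self.session_checker.is_logged_in_elsewhere():
                self.logged_in = False

    def test_forced_logout_when_another_device_logs_in():
        # Force the check to report that another device has logged in.
        checker = Mock()
        checker.is_logged_in_elsewhere.return_value = True

        auth = AuthService(checker)
        auth.refresh()

        assert auth.logged_in is False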

How can I run multiple scenarios in the Citrus simulator apart from the default scenario?

I am working on citrus-simulator. I am able to run only the default scenario with the "citrus.simulator.defaultScenario=default" property, but I want to run multiple scenarios at a time, with multiple REST calls, using a single simulator runner. How can I achieve this? Can anyone suggest an approach?

Create an order using transactions API

I'm trying to create a simple order using the transactions API on Actions on Google. For this, I'm using this sample app, but when I try to place the order, the device times out instead of showing the receipt details. The weird thing is that this code does get executed (I added some logs locally to make sure).
Has anyone been able to run this sample app successfully? I already enabled the Actions on Google API in my project on Google Cloud, so I'm not sure what I'm missing here.
First, you have to make sure you have enabled transaction support for your app.
If you are testing in the simulator, disable Sandbox mode (the checkbox at the top right).
Sandbox ensures that any transactions or orders made during simulation are fake.
I've run into this issue before.
You have to keep in mind that the Order ID gets tracked, so it needs to be different every time you run the app. That is why I decided to use a UUID/timestamp function to make sure the Order ID is different every time.
I'm referring to wherever the code mentions 'UNIQUE_ORDER_ID'. Once you take care of this, you will see the receipt every time you run the demo.
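
The unique-ID idea itself is simple to sketch; shown here in Python for illustration only, since the sample app itself is Node.js and the format below is arbitrary:

    # Illustration only: combine a timestamp with a UUID so the order ID is
    # different on every run, as suggested above.
    import time
    import uuid

    def unique_order_id() -> str:
        return f"order-{int(time.time())}-{uuid.uuid4().hex}"

    print(unique_order_id())  # e.g. order-1700000000-3f2b9c0e...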

Testing a Product that Includes Syncing and other Network Requests

I am nearing the release of an iOS app that syncs and otherwise interacts with a server. I am struggling to come up with a testing procedure that can cover most or all possible situations. I don't have any experience with automated testing, so I have been doing everything manually so far with the iPhone simulator and a physical device.
How would I start designing automated tests that can help me get better coverage of possible situations and also serve me well in the future as I make changes and add new features?
You probably need to be more specific in your question, i.e. outline how you communicate with your server, what technology is being employed, etc.
But as a general approach, the first thing I would do is look for a way to get reproducible results from the server. For example, if I send a message asking for a record with an id of 'x', then the server will always return the same record with the same data. There are several ways to do this: one would be to load a set of test data into your server, another would be to create a local test server and talk to that instead, and another option is to avoid the server altogether in your automated tests and mock out the communication classes in your app. It totally depends on what you are trying to test and how.
Once you have your back end dealt with, you can then look into automating the tests. This very much depends on how you have dealt with the server. For example, if you are performing an integration-style test where you actually talk to a server, then the test might take this form (a rough sketch follows the list):
Reset or clear the server data.
Load it with predictable data.
Run the iOS app using some testing framework and verify any data sent from the server.
Access the server and verify any changes made there.
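
A rough sketch of that flow, assuming a hypothetical test-only reset endpoint and a simple /records API (none of these URLs or payloads come from a real system):

    # Rough sketch of the integration-style flow above. BASE_URL, /test/reset,
    # and /records are hypothetical; step 3 would normally be driven by a UI
    # test framework exercising the iOS app rather than a direct HTTP call.
    import requests

    BASE_URL = "https://test-server.example.com"

    def test_sync_round_trip():
        # 1. Reset or clear the server data.
        requests.post(f"{BASE_URL}/test/reset").raise_for_status()

        # 2. Load it with predictable data.
        seed = {"id": "x", "name": "known record"}
        requests.post(f"{BASE_URL}/records", json=seed).raise_for_status()

        # 3. Exercise the app; here we simply fetch the record the app would sync.
        resp = requests.get(f"{BASE_URL}/records/x")
        resp.raise_for_status()
        assert resp.json()["name"] == "known record"

        # 4. Access the server and verify any changes made there.
        all_records = requests.get(f"{BASE_URL}/records").json()
        assert any(r["id"] == "x" for r in all_records)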

What difference is there between the Windows Azure staging and production areas?

I ask because I had an app working perfectly in staging, but now that it is in production, Fiddler tells me the response is a 502 error when I request the page in a browser. Does anyone have any idea what might cause this? I could simply leave it in staging; it's academic work, so it's not a big deal, but it is annoying. I've waited at least 30 minutes and still get the same result, so I doubt it is going to change.
I believe there's no difference except DNS addressing - and you should be able to swap Staging and Production rather than uploading straight to Production.
One big difference for me is that staging has a random GUID as part of the domain, so I can't easily reference it in my testing. I also can't easily set it as a target for my web deployments.
So, when I'm in dev mode, I tend to just have two separate web role projects, both with non-GUID names, and use one as staging and one as relatively stable production for other team members.
That said, for real production, I do use staging because I can test the VIP swap, which is a much better way to go live than waiting 20 minutes to see if it worked (and what if it did not?).
My 2 cents.
Resolved, I guess I had to wait 32 minutes. GAE FTW