Is it even possible to delay, or explicitly call, the teardown method of a child test case in Katalon Studio?

I am testing this site for concierge medical professionals, written in Zoho.
The flow of the section in question is as follows: once a practice is created, the user is to:
create a practice contract
create a rate card for that contract
create any discounts to apply to that rate card
create any retainer schedules
Right now, I have the practice contract and rate card test suites completed, and parameterized/sanity test cases for the discount and retainer schedule test suites. Neither of the latter is deterministic yet, as both require one "free" rate card, and there's currently no control for that in those test cases like there is in the rate card test cases.
The rate card test case has, as its initial/teardown steps, logic to create a dummy contract if there isn't a "free" contract to assign the rate card to, and to delete the dummy contract if one had to be created:
// Create a dummy contract iff there is less than 1 unassigned contract. If there are
// more, we fail the test right away, because that would require us to choose one,
// which is not, by definition, a sanity test case concern.
PracticePage initPage = new PracticePage(practiceProfile.getPracticeURL());
if (!hasUnassignedContract()) {
    createDummyContract()
    WebUI.verifyEqual(initPage.getNumberOfContracts(), initPage.getNumberOfRateCards() + 1)
}
@TearDown
void close() {
    if (PracticePage.NewContractsCreated > 0) {
        PracticePage page = new PracticePage(((PracticeProfile) practiceProfile).getPracticeURL());
        page.go();
        page.deleteFirstContract();
    }
}
Discount and retainer schedule test cases, since they require a rate card to work off of, should:
hit up the rate card child test case if the number of discounts/retainer schedules is equal to the number of rate cards, and then
delete any dummy contract that had to be created afterwards (as we are either completing the practice setup, or adding new info just to test that part of the app)
But wait a minute: the contract deletion logic happens in the child test case, which means the contract is going to be deleted as soon as the child test case is done!
How can we either:
delay that teardown until after the drive test case is done, or
call that teardown method from the drive test case when it finishes?

I don't like having to do this, as it adds another if-condition, but here goes:
move the teardown logic to PracticePage, as a static method:
public static void DeleteFirstContract(String practiceURL) {
    // NewContractsCreated is static, so we reference it directly (no "this" in a static method)
    if (NewContractsCreated > 0) {
        PracticePage page = new PracticePage(practiceURL);
        page.go();
        page.deleteFirstContract();
    }
}
give that test case another flag called shouldHandleTeardown, with a default value of true, and
teardown on that test case becomes:
if (shouldHandleTeardown) PracticePage.DeleteFirstContract(practiceProfile.getPracticeURL())
or something similar, and finally
call that static teardown method from the drive test cases and the other positive test cases in the rate card suite, as in the sketch below
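A minimal sketch of what the drive test case's teardown could then look like (assuming the drive case ran the rate card child case with shouldHandleTeardown = false; names mirror the snippets above):
@TearDown
void close() {
    // The drive test case now owns cleanup: the child rate card case skipped its
    // own teardown, so any dummy contract it created is still around and gets
    // deleted here, after the drive case is done with it.
    PracticePage.DeleteFirstContract(practiceProfile.getPracticeURL())
}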
Unless someone contradicts me on here: no, it doesn't seem possible to call, let alone delay the execution of, a method of a child test case from a drive test case.

Related

Transaction automation with a certain condition in a smart contract

I am developing a blockchain gaming service and I want to send some tokens to a winner automatically at the end of a game, and at specific times.
/*
 * This function should be called every X days
 */
function sendTokens() public {
    // We send some tokens to an array of players
}
Currently, I am doing this using traditional Backend technologies such as setInterval and WebSocket - however, this is a centralized method.
What is the best way to do that? What is the professional way?
Every state-change that happens on-chain needs to be triggered by a transaction. Therefore, to run a smart contract function on a schedule, after a specific event, or some other if-else trigger, you need someone to spend gas.
With that being said, you have two options:
1. Decentralized Automation
You can use a decentralized network of oracles to call smart contracts. Chainlink Keepers can call the function you specify, on a time-based or event-based trigger. The oracles pay the gas associated with the call, and you pay the Chainlink nodes on a subscription model.
This way, your contract stays decentralized all the way down to the automation level, and you never have to worry about a centralized actor intentionally not sending a transaction.
You'd set up the trigger using checkUpkeep, defining in your contract what condition you want to wait for (time-based, event-based, etc.), and then what you want to do when that trigger is hit in performUpkeep.
An example would look as follows; this contract runs performUpkeep every interval seconds.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// KeeperCompatible.sol imports the functions from both ./KeeperBase.sol and
// ./interfaces/KeeperCompatibleInterface.sol
import "@chainlink/contracts/src/v0.8/KeeperCompatible.sol";

contract Counter is KeeperCompatibleInterface {
    /**
     * Public counter variable
     */
    uint public counter;

    /**
     * Use an interval in seconds and a timestamp to slow execution of Upkeep
     */
    uint public immutable interval;
    uint public lastTimeStamp;

    constructor(uint updateInterval) {
        interval = updateInterval;
        lastTimeStamp = block.timestamp;
        counter = 0;
    }

    function checkUpkeep(bytes calldata /* checkData */) external view override returns (bool upkeepNeeded, bytes memory /* performData */) {
        upkeepNeeded = (block.timestamp - lastTimeStamp) > interval;
        // We don't use the checkData in this example. The checkData is defined when the Upkeep was registered.
    }

    function performUpkeep(bytes calldata /* performData */) external override {
        // We highly recommend revalidating the upkeep in the performUpkeep function
        if ((block.timestamp - lastTimeStamp) > interval) {
            lastTimeStamp = block.timestamp;
            counter = counter + 1;
        }
        // We don't use the performData in this example. The performData is generated by the Keeper's call to your checkUpkeep function
    }
}
Other options include Gelato.
2. Centralized Automation
You can also have "traditional" infrastructure call your functions on a schedule; just keep in mind this means that you are relying on a centralized actor for your contracts.
Forms of centralized automation you can use include:
Your own server / set of scripts (see the sketch after this list)
OpenZeppelin Defender
Tenderly
Incentive mechanisms for people to call your functions
etc.
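For the "own server" option, a minimal sketch in Java of what such a scheduler could look like; callSendTokens() is a hypothetical helper that would wrap whatever web3 client you use to sign and broadcast the transaction:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenScheduler {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fire every X days (here X = 7); each run submits one transaction
        // that calls sendTokens() on the contract.
        scheduler.scheduleAtFixedRate(TokenScheduler::callSendTokens, 0, 7, TimeUnit.DAYS);
    }

    // Hypothetical helper: sign and broadcast a transaction that invokes the
    // contract's sendTokens() function via a web3 client. This server's wallet
    // pays the gas, and the server is the centralized actor mentioned above.
    static void callSendTokens() {
        // web3 client call goes here
    }
}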
Disclaimer: I work at Chainlink Labs

ServerValue.increment doesn't work properly when the Internet goes down

The addition of ServerValue.increment() (Add increment() for atomic field value increments #2437) was great news, as it allows field values to be incremented atomically in Firebase RTDB.
I have an application that keeps inventories, and this function has been key because it allows updating the inventory regardless of whether the user is offline at times. However, I started to notice that sometimes the function is executed twice, which throws the inventory completely off.
To isolate the problem I decided to run the following test, which shows that ServerValue.increment() misbehaves when the connection goes from online to offline:
Make a for loop that runs from 1 to 200:
for (var i = 1; i <= 200; i++) {
  testBloc.incrementTest(i);
  print('Pos: $i');
}
The function incrementTest(i) must increment two variables: position (counting 1 by 1 up to 200) and sum (adding 1 + 2 + 3 + ... + 200, which should total 20,100).
Future<bool> incrementTest(int value) async {
  try {
    db.child('test/position')
        .set(ServerValue.increment(1));
    db.child('test/sum')
        .set(ServerValue.increment(value));
  } catch (e) {
    print(e);
  }
  return true;
}
Note that db refers to the Firebase instance (FirebaseDatabase.instance.reference())
With this in place, here are the tests:
Test 1: 100% Online. PASSED
The function works properly; both variables reach the correct result (in the Firebase console):
position: 200
sum: 20100
Test 2: 100% Offline. PASSED
To do this I used a physical device in airplane mode, then I executed the for loop, and when it finished I deactivated airplane mode and checked the result in the Firebase console, which was satisfactory:
position: 200
sum: 20100
Test 3: Start Online and then go to Offline. FAILED
This is a typical operating scenario when the Internet connection goes down, and even worse when connections are intermittent: you are traveling on the subway, or you are at a low-coverage site, which is exactly why offline persistence is a desired feature. To simulate it, I ran the for loop in online mode and, before it finished, put the physical device in airplane mode. Later I went back online to finish the test and see the results in the Firebase console. The results obtained were incorrect in all cases. Here are some of the results:
As you can see, the increment was erroneously repeated 10, 18, and 9 extra times.
How can I avoid this behavior?
Is there any other way to atomically increment a number in Firebase that works properly online/offline?
firebaser here
That's an interesting edge case in the increment behavior. Neither the client nor the server can be certain whether the increment was executed, so it ends up being retried from the client upon reconnect. As far as I can tell, this problem can only occur with the increment operation, as all the other write operations are idempotent, except for transactions, but those don't work while offline.
It is possible to ensure each increment happens only once, but it'll take some work:
First, add a nonce to the write operation that uniquely identifies it. You can use a push key for this, but any other UUID also works fine. Combine this with your original set() call into a single multi-path update call, writing the nonce to a top-level node with a server-side timestamp as its value.
Now, in your security rules for the top-level location, only allow the write if there is no existing data. This ensures the secondary writes you're seeing get rejected, and since security rules are checked across multi-path updates as a whole, the faulty increment will get rejected too.
You'll probably want to periodically clean up the nonce-keys node, based on the timestamp values in there. It won't matter for performance (since you're never searching there outside of the cleanup), but it may help control the storage cost of the nonces.
I haven't used this approach for this specific use case yet, but have done it for others. If you were to include a client-side retry, the above essentially builds your own multi-path transaction mechanism, which is what I needed it for in the past. But since you don't need that here, it's simpler without it.
Based on @puf's answer, you can proceed as follows:
Future<bool> incrementTest(int value, int dateOfToday) async {
  var id = db.push().key;
  Map<String, dynamic> _updates = {
    'test/position': ServerValue.increment(1),
    'test/sum': ServerValue.increment(value),
    'test/nonce/$id': dateOfToday,
  };
  db.child('previousPath').update(_updates)
      .catchError((error) => print('Increment Duplication Rejected ${error.message}'));
  return true;
}
Then, in your Firebase security rules, you need to add a rule at the test/nonce/$nonce_id location. Something as follows:
{
  "previousPath": {
    "test": {
      ".read": "auth != null",  // It depends on your root rules
      ".write": "auth != null", // It depends on your root rules
      "nonce": {
        "$nonce_id": {
          ".validate": "!data.exists()" // THE MAGIC IS HERE
        }
      }
    }
  }
}
This way, when the device (wrongly) tries to write to the database again, Firebase will reject it, since a write with that same ID has already happened.
I hope it serves someone else!!!

How do I choose both "exact quantity (wait for)" and "quantity (if available)" in the same Pickup block?

So I have a vehicle which is going to pick up an exact quantity (wait for) during the first 50 minutes of the simulation. After those 50 minutes have gone by, I want the same vehicle to pick up quantity (if available). How do I go about this?
An alternative approach (avoiding complex Java coding) is to use two Pickup blocks, each with a different setup. Put a SelectOutput block before them and route agents into the respective block using time() > 50 * minute() as the SelectOutput condition.
Set the Pickup block up with the first configuration by default.
Then create an event that triggers after 50 minutes and have it execute this code:
myPickupObject.set_pickupType(PickupType.QUANTITY);
Here is a way to allow a container entity to wait in a Pickup block for some time t and then leave with whatever entities have been picked up so far. The example model looks like this:
There are two key components:
A ReleaseOnTimeout dynamic event, which has a parameter called '_agent' of type Agent and the following code:
for (Object o : pickup.getEmbeddedObjects()) {
    // find the Delay object inside Pickup
    if (Delay.class == o.getClass().getSuperclass()) {
        // remove the container from the Delay
        Agent a = ((Delay) o).remove(_agent);
        if (a != null) {
            // send the removed container into Enter
            enter.take(a);
        }
    }
}
The pickup block's "on enter" action contains the following code: create_ReleaseOnTimeout(10, container);
How this works:
1. pickup is configured with the "Exact quantity (wait for)" behaviour
2. a container object enters the Pickup block pickup
3. on entry, a ReleaseOnTimeout dynamic event is scheduled 10 time units later to check up on the container
4. if a sufficient number of entities was available, the container picks them up and leaves
5. alternatively to (4), if the container is still stuck in pickup when the 10 units have elapsed, it is removed and sent into enter

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need to achieve the ability to monitor, and be able to cancel, an ALREADY RUNNING job on the queue.
There are a lot of answers about deleting QUEUED jobs, but not about an already running one.
This is the situation: I have a "job" which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against the web service, its response stored, and its status updated.
I had that already working as a Command (launching from / outputting to console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've seen Horizon (which doesn't run on Windows due to missing process control libs). However, in the demos I've seen around, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered the option to generate EACH REQUEST as a new job, instead of treating the whole collection of rows as one "job" (this would overcome the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis can handle this without itching at all). Further, I guess it is not possible to cancel "all jobs belonging to X tag".
I've been thinking about hitting an API route, firing the job, and decoupling it from the app, but I'm seeing that this requires forking processes.
For the ability to cancel, I would implement a database table with a job_id; when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop the job would check its status and, if it finds "halted", kill itself.
If I've missed any aspect, just holler and I'll add it or clarify it.
So I'm asking for advice here, since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();
    # I could use a specific ID, a user/owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }
    # This is a file that... well, it's self-explaining
    touch(base_path(config('files.halt_process_signal')));
    return "Job cancelled - It will stop soon";
}
In the job class (inside the Model::chunk() callback):
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();
    # This FALSE is what makes the chunk() method stop looping
    return false;
}
In the service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess(): bool
{
    return file_exists($this->config['files.halt_process_signal']);
}

/**
 * Stop the batch process
 *
 * @return void
 */
public function stopProcess(): void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();
    return;
}
It doesn't look quite elegant, but it works.
I've surfed the whole web, and many answers go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, feel free to share it.
Laravel queues have three important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
I think you can configure timeout + retry_after to be about 24 hours.
Ability to CANCEL an ALREADY RUNNING job.
Delete the job from the jobs table, and
kill the process by its process ID on your server.
Hope it helps you :)

Play 1.2.3 framework - Right way to commit transaction

We have an HTTP end-point that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
// Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request completes, and I still want to ensure that other callers are notified of the change in status. I tried calling "m.em().flush(); JPA.em().getTransaction().commit();" after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call "JPA.em().getTransaction().begin();" and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following line:
public void doJob() {
    MyModel m = MyModel.findById(id);
    System.out.println(m.status); // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
so the update is done in another transaction that is committed at the end of the job.
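For context, a minimal sketch of such a job in Play 1.x (assuming the PENDING/ACTIVE constants from the question; the point is that the status update itself moves into doJob()):
import play.jobs.Job;

public class MyModelUpdateJob extends Job {

    private final Long id;

    public MyModelUpdateJob(Long id) {
        this.id = id;
    }

    @Override
    public void doJob() {
        // doJob() runs in its own JPA transaction, which Play commits when it
        // returns - so the status change becomes visible to other requests
        // without waiting for the long-running request to finish.
        MyModel m = MyModel.findById(id);
        if (m.status == PENDING) {
            m.status = ACTIVE;
            m.save();
        }
    }
}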
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to play with optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
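If you do experiment with optimistic locking, in Play 1.x JPA it is usually just a @Version field on the entity; a sketch, assuming the MyModel entity from the question (field types illustrative):
import javax.persistence.Entity;
import javax.persistence.Version;
import play.db.jpa.Model;

@Entity
public class MyModel extends Model {

    public String status;

    // JPA checks this column on every UPDATE; if another request changed the
    // row in the meantime, the commit fails with an OptimisticLockException
    // instead of silently overwriting the other write.
    @Version
    public Long version;
}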
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all...try to go after being idempotent.
In your case, if user 1 and user 2 both call that method and it is PENDING, then it goes to ACTIVE (idempotent).
Whether user 1 or user 2 wins, the result is the same as if you had the synchronized block anyway.
I am sure, however, that you have a more complex scenario not shown here, BUT READ that article, Building on Quicksand, as it really changes the traditional way of thinking and is how Google, Amazon, and other very-large-scale systems operate.
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL players use, BUT only as a last resort ;) ;)
later,
Dean