Transaction automation with a certain condition in a smart contract

I am developing a blockchain gaming service and I want to send some tokens to the winner automatically at the end of a game, and at specific times.
/*
 * This function should be called every X days
 */
function sendTokens() public {
    // We send some tokens to an array of players
}
Currently, I am doing this with traditional backend technologies such as setInterval and WebSocket; however, this is a centralized method.
What is the best way to do that? What is the professional way?

Every state-change that happens on-chain needs to be triggered by a transaction. Therefore, to run a smart contract function on a schedule, after a specific event, or some other if-else trigger, you need someone to spend gas.
With that being said, you have two options:
1. Decentralized Automation
You can use a decentralized network of oracles to call your smart contracts. Chainlink Keepers can call the function you specify on a time-based or event-based trigger. The oracle nodes pay the gas associated with the call, and you pay the Chainlink nodes through a subscription model.
This way, your contract stays decentralized all the way down to the automation level, and you never have to worry about a centralized actor intentionally not sending a transaction.
You'd set up the trigger by defining in checkUpkeep what your contract waits for (time-based, event-based, etc.), and then defining in performUpkeep what to do when that trigger is hit.
An example would look like this:
This contract runs performUpkeep every interval seconds.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// KeeperCompatible.sol imports the functions from both ./KeeperBase.sol and
// ./interfaces/KeeperCompatibleInterface.sol
import "@chainlink/contracts/src/v0.8/KeeperCompatible.sol";

contract Counter is KeeperCompatibleInterface {
    /**
     * Public counter variable
     */
    uint public counter;

    /**
     * Use an interval in seconds and a timestamp to slow execution of Upkeep
     */
    uint public immutable interval;
    uint public lastTimeStamp;

    constructor(uint updateInterval) {
        interval = updateInterval;
        lastTimeStamp = block.timestamp;
        counter = 0;
    }

    function checkUpkeep(bytes calldata /* checkData */) external view override returns (bool upkeepNeeded, bytes memory /* performData */) {
        upkeepNeeded = (block.timestamp - lastTimeStamp) > interval;
        // We don't use the checkData in this example. The checkData is defined when the Upkeep was registered.
    }

    function performUpkeep(bytes calldata /* performData */) external override {
        // We highly recommend revalidating the upkeep in the performUpkeep function
        if ((block.timestamp - lastTimeStamp) > interval) {
            lastTimeStamp = block.timestamp;
            counter = counter + 1;
        }
        // We don't use the performData in this example. The performData is generated by the Keeper's call to your checkUpkeep function
    }
}
Other options include Gelato.
2. Centralized Automation
You can also have "traditional" infrastructure call your functions on a schedule, just keep in mind this means that you are relying on a centralized actor in your contracts.
There are forms of centralized automation that you can use, such as:
Your own server / set of scripts (see the sketch after this list)
OpenZeppelin Defender
Tenderly
Incentive mechanisms for people to call your functions
etc.
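As a minimal sketch of the "own server" route, the snippet below uses web3j on a schedule. The RPC URL, contract address, and the Game wrapper class (as generated by web3j from your contract's ABI) are assumptions for illustration, not a hardened setup:
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.web3j.crypto.Credentials;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.http.HttpService;
import org.web3j.tx.gas.DefaultGasProvider;

public class TokenScheduler {
    public static void main(String[] args) {
        // Connect to a node; the RPC URL is a placeholder
        Web3j web3 = Web3j.build(new HttpService("https://rpc.example.org"));
        Credentials creds = Credentials.create(System.getenv("PRIVATE_KEY"));
        // "Game" is an assumed web3j-generated wrapper exposing sendTokens()
        Game game = Game.load("0xYourContractAddress", web3, creds, new DefaultGasProvider());

        // Fire the on-chain call every 7 days; this process is the centralized actor
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
            () -> game.sendTokens().sendAsync()
                      .exceptionally(e -> { e.printStackTrace(); return null; }),
            0, 7, TimeUnit.DAYS);
    }
}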
Disclaimer: I work at Chainlink Labs

Is it even possible to delay or call explicitly a teardown method of a child test case in Katalon Studio?

I am testing this site for concierge medical professionals, written in Zoho.
The flow of the section in question is as follows:
once practice is created, user is to:
create practice contract
create rate card for that contract
create any discounts to apply to that rate card
create any retainer schedules
Right now, I have the practice contract and rate card test suites completed, and parameterized/sanity test cases for the discount and retainer schedule test suites. Neither of the latter is deterministic yet, as both require one "free" rate card, and there's no control for that in those test cases like there is in the rate card test cases.
The rate card test case has, as its initial/teardown steps, logic to create a dummy contract if there isn't a "free" contract to assign the rate card to, and we delete the dummy contract if one had to be created:
// creating dummy contracts iff there is less than 1 unassigned contract. Else if there are
// more, we fail the test right away, because that would require us to choose one, which is,
// by definition, not a sanity-test-case concern.
PracticePage initPage = new PracticePage(practiceProfile.getPracticeURL());
if (!hasUnassignedContract()) {
    createDummyContract()
    WebUI.verifyEqual(initPage.getNumberOfContracts(), initPage.getNumberOfRateCards() + 1)
}

@TearDown
void close() {
    if (PracticePage.NewContractsCreated > 0) {
        PracticePage page = new PracticePage(((PracticeProfile) practiceProfile).getPracticeURL());
        page.go();
        page.deleteFirstContract();
    }
}
Discount and retainer schedule test cases, since they require a rate card to work off of, should:
hit up the rate card child test case if the number of discounts/retainer schedules is equal to the number of rate cards, and then
delete any dummy contract that had to be created afterwards (as we are either completing the practice setup, or adding new info just to test that part of the app)
but wait a minute, the contract deletion logic happens in the child test case, which means that the contract is going to be deleted once the child test case is done!
How can we either:
delay that teardown until after the drive test case is done, or
call that teardown method from the drive test case when it finishes?
I don't like having to do this, as it adds another if-condition, but here goes:
move the teardown logic to PracticePage, as a static method:
public static void DeleteFirstContract(String practiceURL) {
    // static context: reference the static counter directly instead of "this"
    if (NewContractsCreated > 0) {
        PracticePage page = new PracticePage(practiceURL);
        page.go();
        page.deleteFirstContract();
    }
}
give that test case another flag called shouldHandleTeardown, default value = true, and
teardown on that test case becomes:
if (shouldHandleTeardown) PracticePage.DeleteFirstContract(practiceProfile.getPracticeURL())
or something similar, and finally
call that static teardown method from the drive test cases and the other positive test cases in the rate card suite
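For illustration, a rough sketch of what a drive test case could look like under this scheme. The test case path passed to findTestCase is hypothetical; shouldHandleTeardown and DeleteFirstContract are the pieces introduced above:
// invoke the rate card child test case with its own teardown suppressed
WebUI.callTestCase(findTestCase('Rate Cards/Create Rate Card'), ['shouldHandleTeardown': false])

// ... discount / retainer schedule test steps go here ...

@TearDown
void close() {
    // clean up any dummy contract only once the whole flow is done
    PracticePage.DeleteFirstContract(practiceProfile.getPracticeURL())
}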
Unless someone contradicts me on here: no, it doesn't seem possible to call, let alone delay the execution of, a method of a child test case from a drive test case.

Spring Reactor | Batching the input without mutating

I'm trying to batch the records constantly emitted from a streaming source (Kafka) and call my service with batches of 100.
What I get as input is a single record. I'm trying to find the best way to achieve this reactively, using Spring Reactor, without mutation and locking outside the pipeline.
Here is my naive attempt which simply reflects my sequential way of thinking:
Mono.just(input)
    .subscribe(i -> {
        batches.add(i);
        if (batches.size() >= 100) {
            // Invoke another reactive pipeline.
            // Clear the batch (requires locking in order to be thread-safe).
        }
    });
What's the best way to achieve batching on a streaming source using Reactor?
.buffer(100) or .bufferTimeout(100, Duration.ofSeconds(xxx)) comes to the rescue
Using Flux.buffer or Flux.bufferTimeout you will be able to gather a fixed number of elements into a List:
StepVerifier.create(
        Flux.range(0, 1000)
            .buffer(100)
    )
    .expectNextCount(10)
    .expectComplete()
    .verify();
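Note that buffer(100) alone waits for a full batch, while bufferTimeout also flushes a partial batch when the timeout elapses, which matters for a slow or bursty source. A minimal sketch (the downstream service call is a placeholder):
// a slow source (~10 elements/second) still yields a batch at least once per second
Flux.interval(Duration.ofMillis(100))
    .map(i -> "record-" + i)
    .bufferTimeout(100, Duration.ofSeconds(1)) // emit on 100 elements OR after 1 second
    .subscribe(batch -> myService.process(batch)); // "myService" is assumed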
Update for the use case
In the case when the input arrives as single values, say via an invocation of a method with a parameter:
public void invokeMe(String element);
You may adopt the UnicastProcessor technique and transfer all data to that processor, which will then take care of the batching:
class Batcher {
    final UnicastProcessor<String> processor = UnicastProcessor.create();

    public void invokeMe(String element) {
        processor.sink().next(element);
        // or Mono.just(element).subscribe(processor);
    }

    public Flux<List<String>> listen() {
        return processor.bufferTimeout(100, Duration.ofSeconds(5));
    }
}
Batcher batcher = new Batcher();

StepVerifier.create(batcher.listen())
    .then(() -> Flux.range(0, 1000)
                    .subscribe(i -> batcher.invokeMe("" + i)))
    .expectNextCount(10)
    .thenCancel()
    .verify();
From that example, we can see how to provide a single point for receiving events and then listen to the results of the batching process.
Please note that UnicastProcessor allows only one subscriber, so it is useful for the model where there is one party interested in the batching results and many data producers. In a case where you have as many subscribers as producers, you may want to use one of the other processors instead: DirectProcessor, TopicProcessor, or WorkerQueueProcessor (see the sketch below). To learn more, see the Reactor documentation on processors.
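For instance, a rough sketch of the multi-subscriber variant with DirectProcessor; note it applies no backpressure buffering, so subscribers should be attached before events start flowing (the downstream services are placeholders):
DirectProcessor<String> processor = DirectProcessor.create();

// two independent parties batch the same stream
processor.bufferTimeout(100, Duration.ofSeconds(5))
         .subscribe(batch -> primaryService.process(batch));
processor.bufferTimeout(100, Duration.ofSeconds(5))
         .subscribe(batch -> auditLog.record(batch));

processor.sink().next("event-1"); // producers feed the shared processor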

Where to invoke SagaManager in CQRS event handling

I am new to microservices and CQRS event handling, and am trying to understand them with one simple task. In this task I have three external REST services to handle one transaction/request:
Step 1: create the customer.
Step 2: create a business for the customer.
Step 3: create an address for the business.
I want to implement a SAGA for these events with InMemorySagaRepository and a saga manager.
Where exactly do I have to initiate the SagaManager with the repository: in the RestController or in the CommandHandler?
Can you please help me understand the saga flow?
Thanks in advance.
Half a year later, I'm making an edit, as I've now taken a course held by Greg Young called Greg Young's CQRS, Domain Events, Event Sourcing and how to apply DDD.
I really recommend it to anyone thinking about CQRS. It helped A LOT to understand what things actually are.
Original answer
In our product we use Sagas as something that reacts to events.
This means that our sagas are really just Subscribers to a specific Event. The saga then holds some logic as to whether it should do something or not.
If the saga finds that an action should be taken, it creates a Command which it puts on the CommandBus.
This means that Sagas are just 'reactors' and use the same path into the system as a user would (skipping the APIs etc.).
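As a rough sketch of that shape (the interfaces and names here are illustrative, not from any particular framework):
// A saga as a plain event subscriber: it holds the "should I act?" logic,
// and if so, reacts by putting a command on the command bus.
class OrderPlacedSaga implements EventSubscriber<OrderPlaced> {
    private final CommandBus commandBus;

    OrderPlacedSaga(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    @Override
    public void on(OrderPlaced event) {
        if (event.needsStockReservation()) { // the decision logic
            commandBus.send(new ReserveStock(event.orderId()));
        }
    }
}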
But what a Saga really is, and what it should do, differs from one person talking about them to the next. (Disclaimer: this is how I read these posts; they might actually all say the same thing, but in too fluffy a way for me [+ my team] to see that.)
http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-i-of-ii/ for example, raises the point that Sagas should not contain 'business logic' (anything that contains 'if' is business logic according to the post).
https://msdn.microsoft.com/en-us/library/jj591569.aspx talks about Sagas as 'Process managers' which coordinate things between different Aggregates (remember that Aggregate1 can't talk to Aggregate2 directly, so a 'Process manager' is required to orchestrate the communication). To put it simply: Event -> Saga -> Command -> Event -> Saga... until you reach the final destination.
https://lostechies.com/jimmybogard/2013/03/21/saga-implementation-patterns-variations/ Talks about two different patterns of what a Saga is. One is 'Publish-gatherer' which basically coordinates what should happen based on a Command. The other is 'Reporter', which just reports the status of things to where they need to go. It doesn't coordinate things, it just reports whatever it needs to report.
http://kellabyte.com/2012/05/30/clarifying-the-saga-pattern/ Has a write-up of what the Saga-pattern 'is'. The claim is that Sagas should/could compensate for different workflows that break.
http://cqrs.nu/Faq/sagas Has a very short description on what Sagas are and basically says 'They are state machines that lets aggregates react to other aggregates'.
So, given that, what is it you actually want the Saga to do? Should it coordinate everything? Or should it just react and not care what the Aggregates do?
My edited part
So, after taking the course on CQRS and talking with Greg about this, I've come to the conclusion that there is quite a lot of confusion out there on the web.
Let's start with just the concept 'Saga'. A Saga actually has nothing to do with CQRS; it's not a concept of it. A 'Saga' is a form of two-phase commit, only optimised for success rather than failure ( https://en.wikipedia.org/wiki/Compensating_transaction )
Now, what most people mean when they talk CQRS and say "Saga" is "Process Manager". And process managers are quite complicated it seems (Greg has a whole other course for just Process Managers).
Basically, what they do is manage the whole process of something (as the name suggests). The Microsoft link above is pretty much what it's all about.
To answer the question:
Where exactly do I have to initiate the SagaManager with the repository: in the RestController or in the CommandHandler?
Outside of them both. A Process Manager is its own thing; it spans aggregates and repositories. Conceptually, it might be better to look at it as a user doing all the things you want the PM to do, just that you program the user's interactions and tell it what to listen for.
Disclaimer: I do not work for Greg, or anyone that stands to gain on my promotion for taking his courses. It's just that I learned a lot from it, so I recommend it just like I would recommend reading Eric Evans book on DDD.
In my application I've built a Saga process manager using this MSDN documentation. My Saga is implemented in the Application Service layer; it listens to events of the Sales, Warehouse & Billing bounded contexts and, on event occurrence, sends commands via the Service Bus.
A simple example; hope it helps you to analyze how to build your saga (I am registering the saga as a handler in the Composition Root) ;):
SAGA:
public class SalesSaga : Saga<SalesSagaData>,
                         ISagaStartedBy<OrderPlaced>,
                         IMessageHandler<StockReserved>,
                         IMessageHandler<PaymentAccepted>
{
    private readonly ISagaPersister storage;
    private readonly IBus bus;

    public SalesSaga(ISagaPersister storage, IBus bus)
    {
        this.storage = storage;
        this.bus = bus;
    }

    public void Handle(OrderPlaced message)
    {
        // Send ReserveStock command
        // Save SalesSagaData
    }

    public void Handle(StockReserved message)
    {
        // Restore & Update SalesSagaData
        // Send BillCustomer command
        // Save SalesSagaData
    }

    public void Handle(PaymentAccepted message)
    {
        // Restore & Update SalesSagaData
        // Send AcceptOrder command
        // Complete Saga (Dispose SalesSagaData)
    }
}
InMemorySagaPersister (as the SalesSagaData ID I am using the OrderID; it's unique across the whole process):
public sealed class InMemorySagaPersister : ISagaPersister
{
    private static readonly Lazy<InMemorySagaPersister> instance =
        new Lazy<InMemorySagaPersister>(() => new InMemorySagaPersister());

    private InMemorySagaPersister()
    {
    }

    public static InMemorySagaPersister Instance
    {
        get { return instance.Value; }
    }

    ConcurrentDictionary<int, ISagaData> data = new ConcurrentDictionary<int, ISagaData>();

    public T GetByID<T>(int id) where T : ISagaData
    {
        T value;
        var tData = new ConcurrentDictionary<int, T>(data.Where(c => c.Value.GetType() == typeof(T))
                                                         .Select(c => new KeyValuePair<int, T>(c.Key, (T)c.Value))
                                                         .ToArray());
        tData.TryGetValue(id, out value);
        return value;
    }

    public bool Save(ISagaData sagaData)
    {
        bool result;
        ISagaData existingValue;
        data.TryGetValue(sagaData.Id, out existingValue);
        if (existingValue == null)
            result = data.TryAdd(sagaData.Id, sagaData);
        else
            result = data.TryUpdate(sagaData.Id, sagaData, existingValue);
        return result;
    }

    public bool Complete(ISagaData sagaData)
    {
        ISagaData existingValue;
        return data.TryRemove(sagaData.Id, out existingValue);
    }
}
One approach might be to have some sort of starting command that starts the Saga. In this scenario it would be configured in your composition root to listen to a certain command type. Once a command has been received by your message dispatcher (or whatever messaging middleware you have), it would look for any Sagas that have been registered to be started by that command. You would then create the Saga and pass it the command. It could then react to other commands and events as they happen.
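A bare-bones sketch of that dispatch step (the registry, factory, and handle method are assumed names, not a specific framework's API):
// When a command arrives, start every saga type registered for it,
// then hand the fresh saga instance the command.
<C> void dispatch(C command) {
    for (Class<? extends Saga> sagaType : registry.sagasStartedBy(command.getClass())) {
        Saga saga = sagaFactory.create(sagaType); // one new saga instance per process
        saga.handle(command);                     // the saga may emit further commands
    }
}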
In your scenario I would suggest your Saga is a type of command handler, so it would be initiated upon receiving a command.

Using burst_read/write with register model

I've a register space of 16 registers.
These are accessible through a serial bus (single as well as burst accesses).
I've a UVM reg model defined for these registers. However, none of the reg model methods supports burst transactions on the bus.
As a workaround, I can:
1. Declare a memory model for the same space and use it whenever I need burst access. But it seems redundant to declare 2 separate classes for the same thing, and this approach won't mirror register values correctly.
2. Create a function which loops for number-of-bytes iterations and accesses the registers one by one. However, this method doesn't create burst transactions on the bus.
So I would like to know if there is a way to use burst_read and burst_write methods with the register model. It would be nice if burst_read and burst_write supported mirroring (the current implementation doesn't), but if not, I can use .predict and .set, so it's not a big concern.
Or can I easily implement a method for the register model to support burst operations?
I found this to help get you started:
http://forums.accellera.org/topic/716-uvm-register-model-burst-access/
The guy mentions using the optional 'extension' argument that read/write take. You could store the burst length inside a container object (think int vs. Integer in Java) and then pass that as an argument when calling write() on the first register.
A rough sketch (not tested):
// inside your register sequence
uvm_queue #(int) container = new("container");
container.push_front(4);
start_reg.write(status, data, .extension(container));

// inside your adapter
function uvm_sequence_item reg2bus(const ref uvm_reg_bus_op rw);
  int burst_len = 1;
  uvm_reg_item reg_item = get_item();
  uvm_queue #(int) extension;
  if ($cast(extension, reg_item.extension))
    burst_len = extension.pop_front();
  // do the stuff here based on the burst length
  // ...
endfunction
I've used uvm_queue because there isn't any trivial container object in UVM.
After combining the opinions provided by Tudor and the links in the discussion, here is what works for adding burst operations to the reg model.
This implementation doesn't show all the code, only the parts required for adding the burst operation. I've tested it for write and read operations with serial protocols (SPI / I2C); register model values are updated correctly, as are the RTL registers.
Create a class to hold data and burst length:
class burst_class extends uvm_object;
  `uvm_object_utils (....);

  int burst_length;
  byte data [$];

  function new (string name);
    super.new(name);
  endfunction
endclass
Inside the register sequence (for a read, don't initialize data):
burst_class obj;
obj = new ("burstInfo");
obj.burst_length = 4; // replace with actual length
obj.data.push_back (data1);
obj.data.push_back (data2);
obj.data.push_back (data3);
obj.data.push_back (data4);

start_reg.read  (status, ...., .extension(obj));
start_reg.write (status, ...., .extension(obj));
After a successful operation, the data values are driven from (for a write) or collected into (for a read) the obj object.
In the adapter class, reg2bus is updated for writes and bus2reg for reads. All the information about the transaction is available in reg2bus, except the data in the case of a read.
Adapter class members:
uvm_reg_item start_reg;
int burst_length;
burst_class adapter_obj;
reg2bus implementation
start_reg = this.get_item();
// no need to construct adapter_obj first: $cast assigns the handle (or null)
if ($cast(adapter_obj, start_reg.extension) && adapter_obj != null) begin
  burst_length = adapter_obj.burst_length;
end
else begin
  burst_length = 1; // so that the existing single-access adapter path still works
end
Update the size of the transaction here according to burst_length, and assign the data correctly.
For reads, bus2reg needs to be updated as well.
bus2reg implementation: it already has all the control information, since reg2bus is always executed before bus2reg, so use the values captured in reg2bus.
According to burst_length, assign the data to the object passed through the extension, in this case adapter_obj.
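A rough sketch of what that bus2reg could look like; my_bus_txn and its fields are a hypothetical bus item type, while bus2reg and uvm_reg_bus_op are the standard adapter hooks:
virtual function void bus2reg(uvm_sequence_item bus_item, ref uvm_reg_bus_op rw);
  my_bus_txn txn; // assumed bus transaction type with addr / data[$] / is_write fields
  if (!$cast(txn, bus_item))
    `uvm_fatal("ADAPT", "bus2reg received the wrong item type")
  rw.kind   = txn.is_write ? UVM_WRITE : UVM_READ;
  rw.addr   = txn.addr;
  rw.data   = txn.data[0]; // first beat goes through the normal reg-model path
  rw.status = UVM_IS_OK;
  // for a read burst, hand the collected beats back via the extension object
  // captured in reg2bus (adapter_obj above)
  if (rw.kind == UVM_READ && adapter_obj != null)
    foreach (txn.data[i]) adapter_obj.data.push_back(txn.data[i]);
endfunction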

Play 1.2.3 framework - Right way to commit transaction

We have an HTTP endpoint that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
// Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request is complete, and I still want to ensure that other callers are notified of the change in status. I tried calling m.em().flush(); JPA.em().getTransaction().commit(); after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call JPA.em().getTransaction().begin(); afterwards and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following line:
doJob() {
    MyModel m = MyModel.findById(id);
    print m.status; // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
thus the update will be done in another transaction that is committed at the end of the job.
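A minimal sketch of such a job, assuming the model from the question; Play 1.x wraps each job's doJob() in its own transaction, which is committed when the job finishes:
public class MyModelUpdateJob extends play.jobs.Job {
    private final Long id;

    public MyModelUpdateJob(Long id) {
        this.id = id;
    }

    @Override
    public void doJob() {
        MyModel m = MyModel.findById(id);
        m.status = ACTIVE; // the state change we want other requests to see
        m.save();          // committed when this job's transaction ends
    }
}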
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to try optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all... try to go for being idempotent.
In your case:
1. If user 1 and user 2 both call that method and it is pending, then it goes to active (idempotent).
2. If user 1 or user 2 wins, well, that would be like you had the synchronized block anyway.
I am sure, however, that you have a more complex scenario not shown here, BUT READ that article Building on Quicksand, as it really changes the traditional way of thinking and is how Google, Amazon, and other very large scale systems operate.
Another option for distributed coordination across Play servers is ZooKeeper, which the big NoSQL guys use, BUT only as a last resort ;) ;)
later,
Dean