How to get Job context for recurring job? - jobrunr

I am using the code below to create a recurring job. I need to get the job context to update a progress bar. Is there any way to do that?
@Job(name = "xxx")
@Recurring(id = "xxx")
public void execute() {
    // do things here.
}

You can just create your recurring job as follows:
@Job(name = "xxx")
@Recurring(id = "xxx")
public void execute(JobContext jobContext) {
    // do things here.
}
It will automatically be injected by JobRunr.
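From there you can drive the dashboard progress bar via the injected context. A minimal sketch, assuming the progressBar/increaseByOne methods described in the JobRunr docs (verify the names against your JobRunr version; items and process are placeholders for your own work):
@Job(name = "xxx")
@Recurring(id = "xxx")
public void execute(JobContext jobContext) {
    // Size the progress bar to the total amount of work (hypothetical item list).
    final JobDashboardProgressBar progressBar = jobContext.progressBar(items.size());
    for (Item item : items) {
        process(item); // placeholder for the real work
        progressBar.increaseByOne(); // progress shows up in the JobRunr dashboard
    }
}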

Related

Spring WebFlux: WebClient put call

I have an account service and a product service communicating. When a request comes from a user to purchase a product (I did not include the user service, it is working fine and not the issue), the product service checks to see if there are enough funds in the account, and if there is it updates the balances. The following code works fine:
@GetMapping("/account/{userId}/product/{productId}")
public Mono<ResponseEntity<Product>> checkAccount(@PathVariable("userId") int userId, @PathVariable("productId") int productId) {
    Mono<Account> account = webClientBuilder.build().get()
            .uri("http://account-service/user/accounts/{userId}/", userId)
            .retrieve().bodyToMono(Account.class);
    Mono<Product> product = this.ps.findById(productId);
    Mono<Boolean> result = account.zipWith(product, this::isAccountBalanceGreater);
    Mono<ResponseEntity<Product>> p = result.zipWith(product, this::getResponse);
    return p;
}
public boolean isAccountBalanceGreater(Account acc, Product prd) {
    return acc.getBalance() >= prd.getPrice();
}
public ResponseEntity<Product> getResponse(boolean result, Product prod) {
    if (result) {
        return ResponseEntity.accepted().body(prod);
    } else {
        return ResponseEntity.badRequest().body(prod);
    }
}
My put method in the account service also works fine:
@PutMapping("/account/update/{accountId}")
public Mono<ResponseEntity<Account>> updateAccount(@PathVariable("accountId") int accountId, @RequestBody Account account) {
    return as.findById(accountId)
            .flatMap(oldAcc -> {
                oldAcc.setAccountId(account.getAccountId());
                oldAcc.setOwner(account.getOwner());
                oldAcc.setPin(account.getPin());
                oldAcc.setBalance(account.getBalance());
                oldAcc.setUserId(account.getUserId());
                return ar.save(oldAcc);
            })
            .map(a -> ResponseEntity.ok(a))
            .defaultIfEmpty(ResponseEntity.notFound().build());
}
Now I want to be able to update the balances. I have tried this in the isAccountBalanceGreater method:
public boolean isAccountBalanceGreater(Account acc, Product prd) {
    if (acc.getBalance() >= prd.getPrice()) {
        double newBuyerBalance = acc.getBalance() - prd.getPrice();
        Account newOwnerAcc = new Account(acc.getAccountId(), acc.getOwner(), acc.getPin(), newBuyerBalance, acc.getUserId());
        this.ps.removeProduct(prd.getProductId());
        webClientBuilder.build().put()
                .uri("http://account-service/account/update/{accountId}", acc.getAccountId())
                .body(newOwnerAcc, Account.class)
                .exchange();
        return true;
    }
    return false;
}
However, this does not work; there is no error, just nothing updates.
My test case works when I run the same code with a test account. I'm not sure why this is not executing. Any suggestions?
You have to think of reactive code as event chains or callbacks: you declare what should happen after some other thing has completed, and nothing executes until the chain is subscribed to. The exchange() call above returns a Mono that is never subscribed, so the request is never sent. Chain it instead:
return webClientBuilder.build()
        .put().uri("http://account-service/account/update/{accountId}",
                acc.getAccountId())
        .body(newOwnerAcc, Account.class)
        .exchange()
        .thenReturn(true); // if you really need to return a boolean
Returning a boolean is usually not semantically correct in a reactive world, and it is very common to try to avoid if-else statements.
One way is to return a Mono<Void> to mark that something has been completed, and trigger something chained onto it.
public Mono<Void> isAccountBalanceGreater(Account acc, Product prd) {
    return webclient.put()
            .uri( ... )
            .retrieve()
            .bodyToMono(Void.class)
            .doOnError(e -> { /* handle error */ });
}
// How to call it, for example:
isAccountBalanceGreater(foo, bar)
        .doOnSuccess( ... )
        .doOnError( ... )
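Putting it together, one way to keep the update inside the subscribed chain is to branch with flatMap instead of computing a boolean as a side effect. A rough sketch reusing the question's names (ps, webClientBuilder, the account-service URLs); the product-removal step is omitted for brevity and would be chained the same way:
@GetMapping("/account/{userId}/product/{productId}")
public Mono<ResponseEntity<Product>> checkAccount(@PathVariable("userId") int userId,
                                                  @PathVariable("productId") int productId) {
    Mono<Account> account = webClientBuilder.build().get()
            .uri("http://account-service/user/accounts/{userId}/", userId)
            .retrieve().bodyToMono(Account.class);
    Mono<Product> product = this.ps.findById(productId);

    return account.zipWith(product).flatMap(tuple -> {
        Account acc = tuple.getT1();
        Product prd = tuple.getT2();
        if (acc.getBalance() < prd.getPrice()) {
            return Mono.just(ResponseEntity.badRequest().body(prd));
        }
        acc.setBalance(acc.getBalance() - prd.getPrice());
        // The PUT is part of the returned chain, so WebFlux subscribes to it
        // when it subscribes to the controller's Mono.
        return webClientBuilder.build().put()
                .uri("http://account-service/account/update/{accountId}", acc.getAccountId())
                .body(Mono.just(acc), Account.class)
                .retrieve().bodyToMono(Void.class)
                .thenReturn(ResponseEntity.accepted().body(prd));
    });
}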

Cancel Punctuator on Kafka Streams after it is triggered

I create a scheduled punctuator on a transformer and schedule it to run on a periodic basis (using Kafka v2.1.0). Every time I accept a specific key I create a new one like this:
scheduled = context.schedule(Duration.ofMillis(scheduleTime),
        PunctuationType.WALL_CLOCK_TIME, new CustomPunctuator(context, customStateStoreName));
My issue is that all these punctuators I create run constantly and I cannot find a way to cancel them. I found a snippet on the internet:
private Cancellable scheduled;

@Override
public void init(ProcessorContext processorContext) {
    this.context = processorContext;
    scheduled = context.schedule(TimeUnit.SECONDS.toMillis(5), PunctuationType.WALL_CLOCK_TIME,
            this::punctuateCancel);
}

private void punctuateCancel(long timestamp) {
    scheduled.cancel();
}
but this unfortunately seems to cancel only the latest created Punctuator.
I am editing my post to give some further insight into my approach and how it relates to the comments made by wardzinia. My approach is pretty similar but uses a Map, because I need only one active punctuator per event key. So in my Transformer class I initialize:
private Map<String,Cancellable> scheduled = new HashMap<>();
And in my transform method I execute the code below:
{
    final Cancellable cancelSched = scheduled.get(recordKey);
    // Every time I get a new event I cancel my previous Punctuator
    // and schedule a new one (context.schedule a few lines later).
    if (cancelSched != null)
        cancelSched.cancel();
    // This is supposed to work like a closure by capturing the currentCancellable, which in the next
    // statement is moved to the scheduled map. The scheduled map at any point holds the only active
    // Punctuator for a specific String, as it is constantly renewed.
    // Note: previously registered punctuators have already been cancelled, as can be seen in the
    // previous statement (cancelSched.cancel();).
    Cancellable currentCancellable = context.schedule(Duration.ofMillis(scheduleTime), PunctuationType.WALL_CLOCK_TIME,
            new CustomPunctuator(context, recordKey, () -> scheduled));
    // Update active punctuators for a specific key.
    scheduled.put(recordKey, currentCancellable);
}
And I use that registered callback in my Punctuator's punctuate method to cancel the last active Punctuator after it has started. It seems to work (not sure though), but it feels very "hacky" and not the kind of solution that is certainly desirable. So how can I cancel a punctuator after it is triggered? Is there a way to cope with this issue?
I think one thing you could do is the following:
class CustomPunctuator implements Punctuator {
    Cancellable schedule; // assigned after registration, so it cannot be final

    @Override
    public void punctuate(final long timestamp) {
        // business logic
        if (/* do cancel */) {
            schedule.cancel();
        }
    }
}
// registering a punctuation
{
    final CustomPunctuator punctuation = new CustomPunctuator();
    final Cancellable currentCancellable = context.schedule(
            Duration.ofMillis(scheduleTime),
            PunctuationType.WALL_CLOCK_TIME,
            punctuation);
    punctuation.schedule = currentCancellable;
}
This way you don't need to maintain the HashMap, and each CustomPunctuator instance gets a way to cancel itself.
I had the same situation. Just for the people interested in Scala, I handled it as:
val punctuation = new myPunctuation()
val scheduled: Cancellable = context.schedule(Duration.ofSeconds(5), PunctuationType.WALL_CLOCK_TIME, punctuation)
punctuation.schedule = scheduled
The class
class myPunctuation() extends Punctuator {
  var schedule: Cancellable = _

  override def punctuate(timestamp: Long): Unit = {
    println("hello")
    schedule.cancel()
  }
}
Works like a charm

Long-living service with coroutines

I want to create a long-living service that can handle events.
It receives events via postEvent, stores them in a repository (with an underlying database), and sends them to the API in batches once enough events have accumulated.
Also, I'd like to be able to shut it down on demand.
Furthermore, I would like to unit test this service.
This is what I have come up with so far. Currently I'm struggling with unit testing it:
either the database is shut down prematurely after events are sent to the service via fixture.postEvent(), or the test itself ends up in some sort of deadlock (I was experimenting with various context + job configurations).
What am I doing wrong here?
class EventSenderService(
    private val repository: EventRepository,
    private val api: Api,
    private val serializer: GsonSerializer,
    private val requestBodyBuilder: EventRequestBodyBuilder,
) : EventSender, CoroutineScope {

    private val eventBatchSize = 25
    val job = Job()
    private val channel = Channel<Unit>()

    init {
        job.start()
        launch {
            for (event in channel) {
                val trackingEventCount = repository.getTrackingEventCount()
                if (trackingEventCount < eventBatchSize) continue
                readSendDelete()
            }
        }
    }

    override val coroutineContext: CoroutineContext
        get() = Dispatchers.Default + job

    override fun postEvent(event: Event) {
        launch(Dispatchers.IO) {
            writeEventToDatabase(event)
        }
    }

    override fun close() {
        channel.close()
        job.cancel()
    }

    private fun readSendDelete() {
        try {
            val events = repository.getTrackingEvents(eventBatchSize)
            val request = requestBodyBuilder.buildFor(events).blockingGet()
            api.postEvents(request).blockingGet()
            repository.deleteTrackingEvents(events)
        } catch (throwable: Throwable) {
            Log.e(throwable)
        }
    }

    private suspend fun writeEventToDatabase(event: Event) {
        try {
            val trackingEvent = TrackingEvent(eventData = serializer.toJson(event))
            repository.insert(trackingEvent)
            channel.send(Unit)
        } catch (throwable: Throwable) {
            throwable.printStackTrace()
            Log.e(throwable)
        }
    }
}
Test
@RunWith(RobolectricTestRunner::class)
class EventSenderServiceTest : CoroutineScope {

    @Rule
    @JvmField
    val instantExecutorRule = InstantTaskExecutorRule()

    private val api: Api = mock {
        on { postEvents(any()) } doReturn Single.just(BaseResponse())
    }
    private val serializer: GsonSerializer = mock {
        on { toJson<Any>(any()) } doReturn "event_data"
    }
    private val bodyBuilder: EventRequestBodyBuilder = mock {
        on { buildFor(any()) } doReturn Single.just(TypedJsonString.buildRequestBody("[ { event } ]"))
    }

    val event = Event(EventName.OPEN_APP)

    private val database by lazy {
        Room.inMemoryDatabaseBuilder(
            RuntimeEnvironment.systemContext,
            Database::class.java
        ).allowMainThreadQueries().build()
    }
    private val repository by lazy { database.getRepo() }

    val fixture by lazy {
        EventSenderService(
            repository = repository,
            api = api,
            serializer = serializer,
            requestBodyBuilder = bodyBuilder,
        )
    }

    override val coroutineContext: CoroutineContext
        get() = Dispatchers.Default + fixture.job

    @Test
    fun eventBundling_success() = runBlocking {
        (1..40).map { Event(EventName.OPEN_APP) }.forEach { fixture.postEvent(it) }
        fixture.job.children.forEach { it.join() }
        verify(api).postEvents(any())
        assertEquals(15, repository.getTrackingEventCount())
    }
}
After updating the code as @Marko Topolnik suggested (adding fixture.job.children.forEach { it.join() }), the test never finishes.
One thing you're doing wrong is related to this:
override fun postEvent(event: Event) {
    launch(Dispatchers.IO) {
        writeEventToDatabase(event)
    }
}
postEvent launches a fire-and-forget async job that will eventually write the event to the database. Your test creates 40 such jobs in rapid succession and, while they're queued, asserts the expected state. I can't work out, though, why you assert 15 events after posting 40.
To fix this issue you should use the line you already have:
fixture.job.join()
but change it to
fixture.job.children.forEach { it.join() }
and place it lower, after the loop that creates the events.
I failed to take into account the long-running consumer job you launch in the init block. This invalidates the advice I gave above to join all children of the master job.
Instead you'll have to make a bit more changes. Make postEvent return the job it launches and collect all these jobs in the test and join them. This is more selective and avoids joining the long-living job.
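A minimal sketch of that change, reusing the names from the question (note the EventSender interface needs the same signature change):
// Service: return the launched Job so callers can await the write.
override fun postEvent(event: Event): Job =
    launch(Dispatchers.IO) {
        writeEventToDatabase(event)
    }

// Test: join only the posting jobs, not the long-living consumer job.
@Test
fun eventBundling_success() = runBlocking {
    val jobs = (1..40).map { fixture.postEvent(Event(EventName.OPEN_APP)) }
    jobs.forEach { it.join() }
    verify(api).postEvents(any())
}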
As a separate issue, your batching approach isn't ideal because it will always wait for a full batch before doing anything. Whenever there's a lull period with no events, the events will be sitting in the incomplete batch indefinitely.
The best approach is natural batching, where you keep eagerly draining the input queue. When there's a big flood of incoming events, the batch will naturally grow, and when they are trickling in, they'll still be served right away. You can see the basic idea here.
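A minimal sketch of natural batching on a Kotlin Channel (assumes kotlinx.coroutines 1.5+, where tryReceive replaced poll):
import kotlinx.coroutines.channels.Channel

// Take one element (suspending while the queue is empty), then drain whatever
// is already buffered without suspending, so the batch size naturally adapts
// to the incoming rate.
suspend fun <T> Channel<T>.receiveBatch(maxSize: Int): List<T> {
    val batch = mutableListOf(receive())
    while (batch.size < maxSize) {
        val next = tryReceive().getOrNull() ?: break
        batch += next
    }
    return batch
}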

Vert.x: How to wait for a future to complete

Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});

// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
} else {
    error();
}
I know the doSomethingWith(dbFut.result()); can be moved into the handler, yet if it's long, the code will get unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that RxJava simplifies the code, but as I don't know it, learning Vert.x and RxJava at the same time is just too much.
I also wanted to give a try to vertx-sync. I put the dependency in the pom.xml; everything got downloaded fine but when I started my app, I got the following error
maurice#mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know in that context... I tried to google for it but didn't find any clear explanation about which manifest to put where. And as previously, unless mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without perturbing the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});
dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
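If nested handlers get unwieldy, plain Vert.x futures can also be chained flat with compose. A sketch against the Vert.x 3 API used in the question (in Vert.x 4, setHandler becomes onComplete; the "orders" collection and its query are made up for illustration):
Future<JsonObject> userFut = Future.future();
// In Vert.x 3, Future implements Handler<AsyncResult<T>>, so it can be passed as the callback.
mongo.findOne("myusers", myQuery, new JsonObject(), userFut);

userFut.compose(user -> {
    // Runs only if the first lookup succeeded; a failure skips straight to the final handler.
    Future<JsonObject> ordersFut = Future.future();
    mongo.findOne("orders", new JsonObject().put("userId", user.getString("_id")), new JsonObject(), ordersFut);
    return ordersFut;
}).setHandler(ar -> {
    if (ar.succeeded()) {
        doSomethingWith(ar.result());
    } else {
        error();
    }
});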
I agree that you should not block in the Vertx processing pipeline, but I make one exception to that rule: Start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T> T awaitComplete(Future<T> f)
        throws Throwable
{
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock)
    {
        // We *must* be locked before registering the callback.
        // If the result is ready, the callback is called immediately!
        f.onComplete(
            (AsyncResult<T> result) ->
            {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
        // Guarded wait: re-check the condition after each wake-up, which handles
        // both spurious wake-ups and a future that was already complete when the
        // callback was registered.
        // Ref: https://stackoverflow.com/a/249907/257299
        while (null == resultRef.get()) {
            // @Blocking -- wait() releases the lock while waiting
            lock.wait();
        }
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}
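Usage would then look like this (only safe off the event loop, e.g. during start-up):
// Blocks the calling thread until dbFut completes, then yields its result or rethrows its cause.
Result result = awaitComplete(dbFut);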

Quartz Scheduler sending email notification multiple time

I am using Quartz for scheduling a job. The job is to send a reminder email every day at a particular time, say 11:00 AM. I am able to send the reminder mail successfully, but the problem is that it sends more than one mail at the same time. Sometimes it sends 8 mails for one reminder request, sometimes 5. It seems the same job is executed multiple times.
Following is my code:
JobDetail job = JobBuilder.newJob(LmsJob.class)
        .withIdentity("lmsJob", org.quartz.Scheduler.DEFAULT_GROUP)
        .build();
JobDataMap map = job.getJobDataMap();
map.put("creditMonthlyLeaveBalance", creditMonthlyLeaveBalance);
map.put("dailyUpdationTask", dailyUpdation);
map.put("monthlyPayrollGenerationTask", monthlyPayrollGenerationTask);
map.put("yearlyMaintenanceOfLeaveBalance", yearlyMaintenanceOfLeaveBalance);
map.put("emailNotifier", emailNotifier);
try {
    CronTrigger trigger = TriggerBuilder
            .newTrigger()
            .withIdentity("lmsJob", "lmsJobGroup")
            .forJob(job)
            .startAt(new Date(System.currentTimeMillis()))
            .withSchedule(CronScheduleBuilder.cronSchedule("00 00 00 ? * *"))
            .build();
    scheduler.scheduleJob(job, trigger);
    scheduler.start();
    // scheduler.shutdown();
} catch (SchedulerException e) {
    e.printStackTrace();
}
Please help me in this, let me know if anything else is needed from my side.
I do not know your entire code or which annotations you used, so I am guessing that you used an annotation like @QuartzEvery("3h"). As far as I can guess, your job is scheduled wrong. To make it run at a particular time every day, try this:
public class QuartzManager implements Managed {
    .
    .
    public void start() throws Exception {
        .
        .
        QuartzDailyAt dailyAt = jobType.getAnnotation(QuartzDailyAt.class);
        int[] hours = dailyAt.hours();
        String hourString =
                Arrays.stream(hours).mapToObj(String::valueOf).collect(joining(","));
        String cronExpression = String.format("0 0 %s * * ?", hourString);
        Trigger trigger = TriggerBuilder.newTrigger()
                .startNow()
                .withSchedule(CronScheduleBuilder.cronSchedule(cronExpression)
                        .inTimeZone(TimeZone.getTimeZone("IST")))
                .build();
        scheduler.scheduleJob(JobBuilder.newJob(jobType).build(), trigger);
        scheduler.start();
        .
        .
    }
    .
}
And the annotation interface as:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface QuartzDailyAt {
    int[] hours();
}
While running your job, add an annotation at the top of the class like:
@QuartzDailyAt(hours = {7, 8, 9, 15, 16, 17})
public class SomeJob extends QuartzJob { ... }
This lets you run the job at the given hours in a particular time zone (above, it is IST).
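As an aside, for the 11:00 AM daily reminder from the question, a single cron expression would do; Quartz cron fields are seconds, minutes, hours, day-of-month, month, day-of-week:
// Fires every day at 11:00 AM; "?" leaves the day-of-month unspecified.
CronTrigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("reminderTrigger", "lmsJobGroup")
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0 11 ? * *"))
        .build();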