When I use Quartz jobs and triggers directly to schedule message publishing, it works:
val job = JobBuilder.newJob(classOf[ScheduledMessagePublisher])
  .withIdentity("Job", "Group")
  .build()
val trigger: CronTrigger = TriggerBuilder.newTrigger()
  .withIdentity("Trigger", "Group")
  .withSchedule(CronScheduleBuilder.cronSchedule("0 33 10 11 JAN ? 2019"))
  .forJob("Job", "Group")
  .build()
quartz.start()
quartz.scheduleJob(job, trigger)
But when I use actors and QuartzSchedulerExtension, my schedule never fires when the time comes; the logs just keep printing "batch acquisition of 0 triggers".
val test = context.actorOf(Executor.props(client))
QuartzSchedulerExtension(context.system).createSchedule("Test", None, "0 33 10 11 JAN ? 2019")
QuartzSchedulerExtension(context.system).schedule("Test", test, Executor.PublishMessage)
I think the problem is in the cron expression "0 33 10 11 JAN ? 2019", because when I use only seconds and minutes it works: "0 30 * * * ? *"
Your cron expression is correct.
But the default timezone for QuartzSchedulerExtension is UTC; check the documentation.
Hence, you need to specify your current timezone explicitly.
Here's the solution:
import java.util.TimeZone

val test = context.actorOf(Executor.props(client))
QuartzSchedulerExtension(context.system).createSchedule("Test", None, "0 33 10 11 JAN ? 2019", None, TimeZone.getDefault)
QuartzSchedulerExtension(context.system).schedule("Test", test, Executor.PublishMessage)
Related
I use a scheduler in WildFly 9 with these EJB annotations:
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Schedule;
I get loads of these warnings:
2020-01-21 12:35:59,000 WARN [org.jboss.as.ejb3] (EJB default - 6) WFLYEJB0043: A previous execution of timer [id=3e4ec2d2-cea9-43c2-8e80-e4e66593dc31 timedObjectId=FiloJobScheduler.FiloJobScheduler.FiskaldatenScheduler auto-timer?:true persistent?:false timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl@71518cd4 initialExpiration=null intervalDuration(in milli sec)=0 nextExpiration=Tue Jan 21 12:35:59 GMT+02:00 2020 timerState=IN_TIMEOUT info=null] is still in progress, skipping this overlapping scheduled execution at: Tue Jan 21 12:35:59 GMT+02:00 2020.
But when I measure the elapsed times, they are always < 1 minute.
The schedule is:
@Schedule(second = "*", minute = "*/5", hour = "*", persistent = false)
Does anyone have an idea what is going on?
A little logging would help you. This runs every second because that's what you're telling it to do with the second = "*" attribute. If you want it to run only every 5 minutes of every hour, change the schedule to:
@Schedule(minute = "*/5", hour = "*", persistent = false)
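For reference, here is a minimal sketch of a corrected singleton with some entry/exit logging, which makes genuinely overlapping executions easy to spot (the class name is taken from the timer id in your warning; the logger and method names are illustrative):
import java.util.logging.Logger;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class FiskaldatenScheduler {

    private static final Logger LOG = Logger.getLogger(FiskaldatenScheduler.class.getName());

    // Fires at second 0 of every 5th minute instead of once per second.
    @Schedule(minute = "*/5", hour = "*", persistent = false)
    public void runJob() {
        long start = System.currentTimeMillis();
        LOG.info("Scheduled job started");
        // ... actual work ...
        LOG.info("Scheduled job finished after " + (System.currentTimeMillis() - start) + " ms");
    }
}
With second omitted it defaults to "0", so the timer fires once every 5 minutes rather than on every second of those minutes.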
Currently I am using WorkManager 1.0.0-alpha05. I set up a periodic work request using the code below.
On an Oppo Realme (Android version 8.1.0, ColorOS version V5.0), when the interval is below 1 hour the job executes at 1 hour; when the interval is greater than 1 hour, the job executes at the exact time.
Please let me know if any log or other information is required.
Code for scheduling the periodic job:
PeriodicWorkRequest uploadWork = new PeriodicWorkRequest.Builder(
        LocationUpdatesJobService.class, interval, TimeUnit.MILLISECONDS)
        .addTag(Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC)
        .setConstraints(constraints)
        .build();
WorkManager.getInstance().enqueueUniquePeriodicWork(
        Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC,
        ExistingPeriodicWorkPolicy.REPLACE, uploadWork);
On all other devices, the periodic work request interval is correct. On the Oppo Realme 1, the work executes at 1 hour.
Oppo Realme 1: Interval 15 Min
I debugged the job scheduler using the command below:
adb shell dumpsys jobscheduler
JOB #u0a249/18: cc2fc59 com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
u0a249 tag=job/com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
Source: uid=u0a249 user=0 pkg=com.cygneto.field_sales
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h0m0s0ms flex=+21m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Backoff: policy=1 initial=+30s0ms
Has early constraint
Has late constraint
Required constraints: TIMING_DELAY DEADLINE
Satisfied constraints: APP_NOT_IDLE DEVICE_NOT_DOZING
Unsatisfied constraints: TIMING_DELAY DEADLINE
Doze whitelisted: true
Tracking: TIME
Enqueue time: -9m4s617ms
Run time: earliest=+29m55s383ms, latest=+50m55s383ms
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Oppo Realme 1: Interval 1hr 10 Min
Log:
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h10m0s0ms flex=+1h10m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Doze whitelisted: true
Tracking: TIME
Enqueue time: -4m19s846ms
Run time: earliest=+1h5m39s833ms, latest=+2h15m39s833ms
Last successful run: 2018-07-25 17:01:23
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Other Device :
Log :
JobInfo:
Service:com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+15m0s0ms flex=+15m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Tracking: TIME
Enqueue time: -29s237ms
Run time: earliest=+14m30s690ms, latest=+29m30s690ms
Last successful run: 2018-07-25 17:29:19
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
I also tried different libraries. I found the same behavior with JobScheduler and Android-Job:
the job period length is 15 min, but it executes at 1 hr. When I tried Firebase JobDispatcher,
the job executed at the correct 15-minute interval.
I debugged JobScheduler and Android-Job using the command below:
adb shell dumpsys jobscheduler
Job Scheduler:
Interval : 15 Min
Output : 1 hr
Log:
JOB #u0a266/1: a0dd846 com.jobscheduler_periodic/com.periodic.JobSchedulerService
u0a266 tag=*job*/com.jobscheduler_periodic/com.periodic.JobSchedulerService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.periodic.JobSchedulerService
PERIODIC: interval=+1h0m0s0ms flex=+15m0s0ms
Android-Job:
Interval : 15 Min
Output : 1 hr:
Log:
JOB #u0a266/3: 10c0d65 com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
u0a266 tag=*job*/com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
PERIODIC: interval=+1h0m0s0ms flex=+5m0s0ms
Firebase Job Dispatcher:
I debugged Firebase JobDispatcher using the command below:
adb shell "dumpsys activity service GcmService | grep com.jobscheduler_periodic"
Interval : 15 Min
Output : 15 min
Log:
u0|com.jobscheduler_periodic: 1
(scheduled) com.jobscheduler_periodic/com.firebase.jobdispatcher.GooglePlayReceiver{u=0 tag="MyJobService" trigger=window{s
tart=720s,end=900s,earliest=46s,latest=226s} requirements=[NET_CONNECTED,CHARGING] attributes=[RECURRING] scheduled=-673s last_
run=N/A jid=N/A status=PENDING retries=0 client_lib=FIREBASE_JOB_DISPATCHER-1}
This happens to be an OEM bug. Unfortunately, it is very hard to work around these kinds of bugs in a battery-efficient way. If you want a period of 15 minutes, I suggest the following workaround:
Use a OneTimeWorkRequest instead of a periodic work request, and upon execution of the first work request, schedule a second one from inside the worker with an initialDelay of 15 minutes. That will essentially give you what you want.
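A minimal sketch of that idea, assuming the stable WorkManager 1.0 API (the alpha in the question had slightly different Worker signatures) and a hypothetical LocationUpdatesWorker class; the tag constant is reused from the question's code:
import java.util.concurrent.TimeUnit;

import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.OneTimeWorkRequest;
import androidx.work.WorkManager;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

public class LocationUpdatesWorker extends Worker {

    public LocationUpdatesWorker(@NonNull Context context, @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        // ... do the actual location upload here ...

        // Re-schedule ourselves to run again in roughly 15 minutes.
        OneTimeWorkRequest next = new OneTimeWorkRequest.Builder(LocationUpdatesWorker.class)
                .setInitialDelay(15, TimeUnit.MINUTES)
                .addTag(Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC)
                .build();
        WorkManager.getInstance().enqueue(next);

        return Result.success();
    }
}
To start the chain, enqueue the first OneTimeWorkRequest (without the delay) from wherever you currently enqueue the periodic work; cancelAllWorkByTag on the tag stops the loop when you no longer need it.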
I have some trouble with a very long-running subroutine under hypnotoad.
I need to run a subroutine that takes about 1 minute (a hardware-related requirement).
I first discovered this behavior with:
my $var = 0;
Mojo::IOLoop->recurring(60 => sub {
$log->debug("starting loop: var: $var");
if ($var == 0) {
#...
#make some long (30 to 50 seconds) communication with external hardware
sleep 50; #fake reproduction of this long communication
$var = 1;
}
$log->debug("ending loop: var: $var");
})
Log:
14:13:45 2018 [debug] starting loop: var: 0
14:14:26 2018 [debug] ending loop: var: 1 #duration: 41 seconds
14:14:26 2018 [debug] starting loop: var: 0
14:15:08 2018 [debug] ending loop: var: 1 #duration: 42 seconds
14:15:08 2018 [debug] starting loop: var: 0
14:15:49 2018 [debug] ending loop: var: 1 #duration: 42 seconds
14:15:50 2018 [debug] starting loop: var: 0
14:16:31 2018 [debug] ending loop: var: 1 #duration: 41 seconds
...
Three problems:
1) Where do these 42 seconds come from? (Yes, I know, 42 is the answer to the universe...)
2) Why does the recurring IOLoop timer lose its pace?
3) Why is my variable set to 1, and just one second later, the if sees a variable equal to 0?
When the looped job needs 20 or 25 seconds, there is no problem.
When the looped job needs 60 seconds and runs under morbo, there is no problem.
When the looped job needs more than 40 seconds and runs under hypnotoad (1 worker), I get the behavior shown above.
If I increase the spare time (e.g. a 120-second recurring interval for 60-second jobs), the behavior is always the same.
It's not a problem with IOLoop; I can reproduce the same behavior outside the loop.
I suspect a problem with the worker being killed by the heartbeat mechanism, but I have no log confirming that.
I have TORQUE installed on Ubuntu 16.04, and I am having trouble because my jobs hang. I have a test script test.pbs:
#PBS -N test
#PBS -l nodes=1:ppn=1
#PBS -l walltime=0:01:00
cd $PBS_O_WORKDIR
touch done.txt
echo "done"
And I run it with
qsub test.pbs
The job writes done.txt and echoes "done" just fine, but the job hangs in the C state.
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
46.localhost test wlandau 00:00:00 C batch
Edit: some diagnostic info on another job from qstat -f 55
qstat -f 55
Job Id: 55.localhost
Job_Name = test
Job_Owner = wlandau@localhost
resources_used.cput = 00:00:00
resources_used.mem = 0kb
resources_used.vmem = 0kb
resources_used.walltime = 00:00:00
job_state = C
queue = batch
server = haggunenon
Checkpoint = u
ctime = Mon Oct 30 07:35:00 2017
Error_Path = localhost:/home/wlandau/Desktop/test.e55
exec_host = localhost/2
Hold_Types = n
Join_Path = n
Keep_Files = n
Mail_Points = a
mtime = Mon Oct 30 07:35:00 2017
Output_Path = localhost:/home/wlandau/Desktop/test.o55
Priority = 0
qtime = Mon Oct 30 07:35:00 2017
Rerunable = True
Resource_List.ncpus = 1
Resource_List.nodect = 1
Resource_List.nodes = 1:ppn=1
Resource_List.walltime = 00:01:00
session_id = 5115
Variable_List = PBS_O_QUEUE=batch,PBS_O_HOST=localhost,
PBS_O_HOME=/home/wlandau,PBS_O_LANG=en_US.UTF-8,PBS_O_LOGNAME=wlandau,
PBS_O_PATH=/home/wlandau/bin:/home/wlandau/.local/bin:/usr/local/sbin
:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/ga
mes:/snap/bin,PBS_O_SHELL=/bin/bash,PBS_SERVER=localhost,
PBS_O_WORKDIR=/home/wlandau/Desktop
comment = Job started on Mon Oct 30 at 07:35
etime = Mon Oct 30 07:35:00 2017
exit_status = 0
submit_args = test.pbs
start_time = Mon Oct 30 07:35:00 2017
Walltime.Remaining = 60
start_count = 1
fault_tolerant = False
comp_time = Mon Oct 30 07:35:00 2017
And a similar tracejob -n2 62:
/var/spool/torque/server_priv/accounting/20171029: No matching job records located
/var/spool/torque/server_logs/20171029: No matching job records located
/var/spool/torque/mom_logs/20171029: No matching job records located
/var/spool/torque/sched_logs/20171029: No matching job records located
Job: 62.localhost
10/30/2017 17:20:25 S enqueuing into batch, state 1 hop 1
10/30/2017 17:20:25 S Job Queued at request of wlandau@localhost, owner =
wlandau@localhost, job name = jobe945093c2e029c5de5619d6bf7922071,
queue = batch
10/30/2017 17:20:25 S Job Modified at request of Scheduler@Haggunenon
10/30/2017 17:20:25 S Exit_status=0 resources_used.cput=00:00:00 resources_used.mem=0kb
resources_used.vmem=0kb resources_used.walltime=00:00:00
10/30/2017 17:20:25 L Job Run
10/30/2017 17:20:25 S Job Run at request of Scheduler@Haggunenon
10/30/2017 17:20:25 S Not sending email: User does not want mail of this type.
10/30/2017 17:20:25 S Not sending email: User does not want mail of this type.
10/30/2017 17:20:25 M job was terminated
10/30/2017 17:20:25 M obit sent to server
10/30/2017 17:20:25 A queue=batch
10/30/2017 17:20:25 M scan_for_terminated: job 62.localhost task 1 terminated, sid=17917
10/30/2017 17:20:25 A user=wlandau group=wlandau
jobname=jobe945093c2e029c5de5619d6bf7922071 queue=batch
ctime=1509398425 qtime=1509398425 etime=1509398425 start=1509398425
owner=wlandau@localhost exec_host=localhost/0 Resource_List.ncpus=1
Resource_List.neednodes=1 Resource_List.nodect=1
Resource_List.nodes=1 Resource_List.walltime=01:00:00
10/30/2017 17:20:25 A user=wlandau group=wlandau
jobname=jobe945093c2e029c5de5619d6bf7922071 queue=batch
ctime=1509398425 qtime=1509398425 etime=1509398425 start=1509398425
owner=wlandau@localhost exec_host=localhost/0 Resource_List.ncpus=1
Resource_List.neednodes=1 Resource_List.nodect=1
Resource_List.nodes=1 Resource_List.walltime=01:00:00 session=17917
end=1509398425 Exit_status=0 resources_used.cput=00:00:00
resources_used.mem=0kb resources_used.vmem=0kb
resources_used.walltime=00:00:00
EDIT: jobs now hang in the E state
After some tinkering, I am now using these settings. I have moved on to this tiny pipeline workflow, where some TORQUE jobs wait for other TORQUE jobs to finish. Unfortunately, all the jobs hang in the E state, and any jobs beyond the first 4 just stay queued. To keep things from hanging indefinitely, I have to sudo qdel -p each one, which I think is causing real problems with the project's file system, in addition to being an inconvenience.
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
113.localhost ...b73ec2cda6dca wlandau 00:00:00 E batch
114.localhost ...b6c8e6da05983 wlandau 00:00:00 E batch
115.localhost ...9123b8e20850b wlandau 00:00:00 E batch
116.localhost ...e6d49a3d7d822 wlandau 00:00:00 E batch
117.localhost ...8c3f6cb68927b wlandau 0 Q batch
118.localhost ...40b1d0cab6400 wlandau 0 Q batch
qmgr -c "list server" shows
Server haggunenon
server_state = Active
scheduling = True
max_running = 300
total_jobs = 5
state_count = Transit:0 Queued:1 Held:0 Waiting:0 Running:1 Exiting:3
acl_hosts = localhost
managers = root#localhost
operators = root#localhost
default_queue = batch
log_events = 511
mail_from = adm
query_other_jobs = True
resources_assigned.ncpus = 4
resources_assigned.nodect = 4
scheduler_iteration = 600
node_check_rate = 150
tcp_timeout = 6
mom_job_sync = True
pbs_version = 2.4.16
keep_completed = 0
submit_hosts = SERVER
allow_node_submit = True
next_job_number = 119
net_counter = 118 94 93
And qmgr -c "list queue batch" shows
Queue batch
queue_type = Execution
total_jobs = 5
state_count = Transit:0 Queued:1 Held:0 Waiting:0 Running:0 Exiting:4
max_running = 300
resources_max.ncpus = 4
resources_max.nodes = 2
resources_min.ncpus = 1
resources_default.ncpus = 1
resources_default.nodect = 1
resources_default.nodes = 1
resources_default.walltime = 01:00:00
mtime = Wed Nov 1 07:40:45 2017
resources_assigned.ncpus = 4
resources_assigned.nodect = 4
keep_completed = 0
enabled = True
started = True
The C state means the job has completed and its status is kept in the system. Usually the status is kept for a period of time after job completion, specified by the keep_completed parameter. However, certain types of failure may result in the job being kept in this state to provide the information needed to examine the cause of the failure.
Check the output of qstat -f 46 to see if there is anything indicating an error.
To tune the keep_completed parameter, first check its current value on your system:
qmgr -c "print queue batch keep_completed"
If you have administrative privileges on the TORQUE server, you can also change this value with
qmgr -c "set queue batch keep_completed=120"
to keep jobs in the completed state for 2 minutes after completion.
In general having keep_completed set is a useful feature. Advanced schedulers use the information on completed jobs to schedule around failures.
I thought that using Futures would easily allow me to fire off one-shot code blocks; however, it seems I can only have 4 Futures running at a time.
Where does this restriction come from, or am I abusing Futures by using them like this?
import scala.concurrent._
import ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import java.util.Calendar

object Main extends App {
  val rand = scala.util.Random
  for (x <- 1 to 100) {
    val f = Future {
      //val sleepTime = rand.nextInt(1000)
      val sleepTime = 2000
      Thread.sleep(sleepTime)
      val today = Calendar.getInstance().getTime()
      println("Future: " + x + " - sleep was: " + sleepTime + " - " + today)
      1
    }
  }
  Thread.sleep(10000)
}
Output:
Future: 3 - sleep was: 2000 - Mon Aug 31 10:02:44 CEST 2015
Future: 2 - sleep was: 2000 - Mon Aug 31 10:02:44 CEST 2015
Future: 4 - sleep was: 2000 - Mon Aug 31 10:02:44 CEST 2015
Future: 1 - sleep was: 2000 - Mon Aug 31 10:02:44 CEST 2015
Future: 7 - sleep was: 2000 - Mon Aug 31 10:02:46 CEST 2015
Future: 5 - sleep was: 2000 - Mon Aug 31 10:02:46 CEST 2015
Future: 6 - sleep was: 2000 - Mon Aug 31 10:02:46 CEST 2015
Future: 8 - sleep was: 2000 - Mon Aug 31 10:02:46 CEST 2015
Future: 9 - sleep was: 2000 - Mon Aug 31 10:02:48 CEST 2015
Future: 11 - sleep was: 2000 - Mon Aug 31 10:02:48 CEST 2015
Future: 10 - sleep was: 2000 - Mon Aug 31 10:02:48 CEST 2015
Future: 12 - sleep was: 2000 - Mon Aug 31 10:02:48 CEST 2015
Future: 16 - sleep was: 2000 - Mon Aug 31 10:02:50 CEST 2015
Future: 13 - sleep was: 2000 - Mon Aug 31 10:02:50 CEST 2015
Future: 15 - sleep was: 2000 - Mon Aug 31 10:02:50 CEST 2015
Future: 14 - sleep was: 2000 - Mon Aug 31 10:02:50 CEST 2015
I expected them to all show the same time.
To give some context, I thought I could use this construct and extend it by having a main loop in which it sleeps on every iteration according to a value drawn from an exponential distribution, to emulate user arrival/execution of a query. After each sleep I'd like to execute the query by sending it to the program's driver (in this case Spark, and the driver allows multiple threads to use it). Is there a more obvious way than using Futures?
When you use import ExecutionContext.Implicits.global,
it creates a thread pool with the same size as the number of CPUs.
From the documentation in ExecutionContext.scala:
The default ExecutionContext implementation is backed by a work-stealing thread pool. By default,
the thread pool uses a target number of worker threads equal to the number of [[https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#availableProcessors-- available processors]].
And there's a good Stack Overflow question: What is the behavior of scala.concurrent.ExecutionContext.Implicits.global?
Since the default size of the thread pool depends on the number of CPUs, if you want to use a larger thread pool you have to write something like
import scala.concurrent.ExecutionContext
import java.util.concurrent.Executors
implicit val ec = ExecutionContext.fromExecutorService(Executors.newWorkStealingPool(8))
before executing the Future.
(In your code, you have to place it before the for loop.)
Note that the work-stealing pool was added in Java 8; Scala has its own ForkJoinPool which does work stealing: scala.concurrent.forkjoin.ForkJoinPool vs java.util.concurrent.ForkJoinPool.
Also, if you want one thread per Future, you can write something like
implicit val ec = ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor)
Therefore, the following code executes 100 threads in parallel
import scala.concurrent._
import java.util.Calendar
import java.util.concurrent.Executors

object Main extends App {
  for (x <- 1 to 100) {
    implicit val ec = ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor)
    val f = Future {
      val sleepTime = 2000
      Thread.sleep(sleepTime)
      val today = Calendar.getInstance().getTime()
      println("Future: " + x + " - sleep was: " + sleepTime + " - " + today)
      1
    }
  }
  Thread.sleep(10000)
}
In addition to the work-stealing thread pool and single-thread executors, there are other executors: http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html
Read the docs for details:
http://docs.scala-lang.org/overviews/core/futures.html
The default pool when using import scala.concurrent.ExecutionContext.Implicits.global indeed has as many threads as you have cores on your machine. This is ideal for non-blocking code (no synchronous I/O, sleep, ...) but can be problematic and even cause deadlocks when you use it for blocking code.
However, this pool can actually grow if you mark blocking code in a scala.concurrent.blocking block. The same marker is used, for example, by the Await.result and Await.ready functions, which block while waiting for a Future.
See the API docs for blocking.
So all you have to do is update your example:
import scala.concurrent.blocking
...
val sleepTime = 2000
blocking{
Thread.sleep(sleepTime)
}
...
Now all futures will finish after about 2000 ms.
You can also use
implicit val ec = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(NUMBEROFTHREADSYOUWANT))
where NUMBEROFTHREADSYOUWANT is the number of threads you want in the pool. Declare this before creating the Futures.