Oppo Realme 1 JobScheduler min interval for periodic work is 1 hr - android-jobscheduler

Currently I am using WorkManager 1.0.0-alpha05. I set up a periodic work request using the code below.
On the Oppo Realme 1 (Android version 8.1.0, ColorOS version V5.0), when the interval is greater than 1 hr, the job executes at the exact time; when the interval is smaller than 1 hr, the job executes at 1 hr.
Please let me know if any log or other information is required.
Code to schedule the periodic job:
PeriodicWorkRequest uploadWork = new PeriodicWorkRequest.Builder(
        LocationUpdatesJobService.class, interval, TimeUnit.MILLISECONDS)
        .addTag(Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC)
        .setConstraints(constraints)
        .build();
WorkManager.getInstance().enqueueUniquePeriodicWork(
        Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC,
        ExistingPeriodicWorkPolicy.REPLACE,
        uploadWork);
On all other devices the periodic work request interval is honored properly; on the Oppo Realme 1 the work executes at 1 hr.
Oppo Realme 1: Interval 15 min
I debugged JobScheduler using the command below:
adb shell dumpsys jobscheduler
JOB #u0a249/18: cc2fc59 com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
u0a249 tag=job/com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
Source: uid=u0a249 user=0 pkg=com.cygneto.field_sales
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h0m0s0ms flex=+21m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Backoff: policy=1 initial=+30s0ms
Has early constraint
Has late constraint
Required constraints: TIMING_DELAY DEADLINE
Satisfied constraints: APP_NOT_IDLE DEVICE_NOT_DOZING
Unsatisfied constraints: TIMING_DELAY DEADLINE
Doze whitelisted: true
Tracking: TIME
Enqueue time: -9m4s617ms
Run time: earliest=+29m55s383ms, latest=+50m55s383ms
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Oppo Realme 1: Interval 1 hr 10 min
Log:
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h10m0s0ms flex=+1h10m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Doze whitelisted: true
Tracking: TIME
Enqueue time: -4m19s846ms
Run time: earliest=+1h5m39s833ms, latest=+2h15m39s833ms
Last successful run: 2018-07-25 17:01:23
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Other device:
Log:
JobInfo:
Service:com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+15m0s0ms flex=+15m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Tracking: TIME
Enqueue time: -29s237ms
Run time: earliest=+14m30s690ms, latest=+29m30s690ms
Last successful run: 2018-07-25 17:29:19
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
I also tried different libraries. I found the same behavior with JobScheduler and Android-Job: the job period length is 15 min but it executes at 1 hr. However, with Firebase JobDispatcher the job executes at the correct 15 min interval.
I debugged JobScheduler and Android-Job using the command below:
adb shell dumpsys jobscheduler
Job Scheduler:
Interval: 15 min
Output: 1 hr
Log:
JOB #u0a266/1: a0dd846 com.jobscheduler_periodic/com.periodic.JobSchedulerService
u0a266 tag=*job*/com.jobscheduler_periodic/com.periodic.JobSchedulerService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.periodic.JobSchedulerService
PERIODIC: interval=+1h0m0s0ms flex=+15m0s0ms
Android-Job:
Interval: 15 min
Output: 1 hr
Log:
JOB #u0a266/3: 10c0d65 com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
u0a266 tag=*job*/com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
PERIODIC: interval=+1h0m0s0ms flex=+5m0s0ms
Firebase JobDispatcher:
I debugged Firebase JobDispatcher using the command below:
adb shell "dumpsys activity service GcmService | grep com.jobscheduler_periodic"
Interval: 15 min
Output: 15 min
Log:
u0|com.jobscheduler_periodic: 1
(scheduled) com.jobscheduler_periodic/com.firebase.jobdispatcher.GooglePlayReceiver{u=0 tag="MyJobService" trigger=window{start=720s,end=900s,earliest=46s,latest=226s} requirements=[NET_CONNECTED,CHARGING] attributes=[RECURRING] scheduled=-673s last_run=N/A jid=N/A status=PENDING retries=0 client_lib=FIREBASE_JOB_DISPATCHER-1}

This happens to be an OEM bug. Unfortunately, it is very hard to work around these kinds of bugs in a battery-efficient way. If you want a period of 15 min, I suggest the following workaround:
Use a OneTimeWorkRequest instead of a periodic work request, and upon execution of the first request, schedule a second one from inside the worker with an initialDelay of 15 min. That will essentially give you what you want.
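A minimal sketch of that self-rechaining worker, written against the stable WorkManager API (the 1.0.0-alpha05 signatures differ slightly); LocationUpdatesWorker is a hypothetical stand-in for your worker class:

import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.OneTimeWorkRequest;
import androidx.work.WorkManager;
import androidx.work.Worker;
import androidx.work.WorkerParameters;
import java.util.concurrent.TimeUnit;

public class LocationUpdatesWorker extends Worker {

    public LocationUpdatesWorker(@NonNull Context context, @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        try {
            // ... perform the actual background work (e.g. the location upload) ...
            return Result.success();
        } finally {
            // Re-enqueue this worker with a 15 min delay, emulating a periodic
            // job without running into the OEM's 1 hr periodic minimum.
            OneTimeWorkRequest next =
                    new OneTimeWorkRequest.Builder(LocationUpdatesWorker.class)
                            .setInitialDelay(15, TimeUnit.MINUTES)
                            .build();
            WorkManager.getInstance().enqueue(next);
        }
    }
}

Note that the 15 min delay is measured from when the next request is enqueued, so the effective period drifts by the work's own execution time.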

Related

Cron function on AWS Lambda by Serverless runs on alpha stage instead of prod?

I already tried this in the serverless.yml file by writing a custom section so that it is active for prod only, but it is running on prod as well as alpha.
The custom field is as below:
custom:
  defaultStage: dev
  enabled:
    alpha: false
    dev: false
    prod: true
The cron function is as below:
sendData:
  handler: sendData.handler
  enabled: ${self:custom.enabled.${self:provider.stage}}
  events:
    - schedule:
        rate: cron(30 1 ? * MON *)
        description: 'Runs every Monday at 7:00 AM'
These two stages are in different accounts. When I deploy to prod it runs properly, but after an alpha deployment it remains active on alpha, which I already set to false.
You have set the enabled flag on the function; however, it should be on the schedule event:
sendData:
  handler: sendData.handler
  events:
    - schedule:
        rate: cron(30 1 ? * MON *)
        enabled: ${self:custom.enabled.${self:provider.stage}}
        description: 'Runs every Monday at 7:00 AM'

WildFly Schedule overlapping

I use a scheduler in WildFly 9, with these EJB imports:
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Schedule;
I get loads of these warnings:
2020-01-21 12:35:59,000 WARN [org.jboss.as.ejb3] (EJB default - 6) WFLYEJB0043: A previous execution of timer [id=3e4ec2d2-cea9-43c2-8e80-e4e66593dc31 timedObjectId=FiloJobScheduler.FiloJobScheduler.FiskaldatenScheduler auto-timer?:true persistent?:false timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl#71518cd4 initialExpiration=null intervalDuration(in milli sec)=0 nextExpiration=Tue Jan 21 12:35:59 GMT+02:00 2020 timerState=IN_TIMEOUT info=null] is still in progress, skipping this overlapping scheduled execution at: Tue Jan 21 12:35:59 GMT+02:00 2020.
But when I measure the elapsed times, they are always < 1 minute.
The schedule is:
@Schedule(second = "*", minute = "*/5", hour = "*", persistent = false)
Does anyone have an idea what is going on?
A little logging would help you. This runs every second because that's what you're telling it to do with the second = "*" attribute. If you want to run only every 5 minutes of every hour, change the schedule to:
@Schedule(minute = "*/5", hour = "*", persistent = false)
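For context, a minimal sketch of the corrected timer bean (the method name is hypothetical; the class name is taken from the warning above). The second attribute defaults to "0", so this fires once every 5 minutes instead of every second:

import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class FiskaldatenScheduler {

    // Fires at minutes 0, 5, 10, ... of every hour; persistent = false
    // means missed executions are not replayed after a restart.
    @Schedule(minute = "*/5", hour = "*", persistent = false)
    public void runJob() {
        // ... do the work ...
    }
}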

Airflow tasks time out in one hour even though the setting is larger than 1 hour

Currently I'm using Airflow with the Celery executor and Redis to run DAGs, and I have set execution_timeout to 12 hours in an S3 key sensor, but it fails within one hour on each retry.
I have tried updating visibility_timeout = 64800 in airflow.cfg, but the issue still exists.
file_sensor = CorrectedS3KeySensor(
    task_id = 'listen_for_file_drop', dag = dag,
    aws_conn_id = 'aws_default',
    poke_interval = 15,
    timeout = 64800,  # 18 hours
    bucket_name = EnvironmentConfigs.S3_SFTP_BUCKET_NAME,
    bucket_key = dag_config[ConfigurationConstants.FILE_S3_PATTERN],
    wildcard_match = True,
    execution_timeout = timedelta(hours=12)
)
From my understanding, execution_timeout should let the task run for 12 hours on each of the four total attempts (retries = 3). But the issue is that each retry fails within an hour, so it lasts only about 4 hours in total.
[2019-08-06 13:00:08,597] {{base_task_runner.py:101}} INFO - Job 9: Subtask listen_for_file_drop [2019-08-06 13:00:08,595] {{timeout.py:41}} ERROR - Process timed out
[2019-08-06 13:00:08,612] {{models.py:1788}} ERROR - Timeout
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1652, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/sensors/base_sensor_operator.py", line 97, in execute
while not self.poke(context):
File "/usr/local/airflow/dags/ProcessingStage/sensors/sensors.py", line 91, in poke
time.sleep(30)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/timeout.py", line 42, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: Timeout
I figured it out a few days later.
Since I'm using AWS to deploy Airflow with the Celery executor, a few improperly configured CloudWatch alarms kept scaling the workers and webserver/scheduler up and down :(
After those alarms were updated, it works well now!!

Not running RabbitMQ on Linux, cannot find the file asn1.app

I have installed this successfully on CentOS before. However, on another CentOS machine I used, it failed to start RabbitMQ.
My Erlang is from here:
[rabbitmq-erlang]
name=rabbitmq-erlang
baseurl=https://dl.bintray.com/rabbitmq/rpm/erlang/20/el/7
gpgcheck=1
gpgkey=https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc
repo_gpgcheck=0
enabled=1
This is my erl_crash.dump:
erl_crash_dump:0.5
Sat Jun 23 09:17:30 2018
Slogan: init terminating in do_boot ({error,{no such file or directory,asn1.app}})
System version: Erlang/OTP 20 [erts-9.3.3] [source] [64-bit] [smp:24:24] [ds:24:24:10] [async-threads:384] [hipe] [kernel-poll:true]
Compiled: Tue Jun 19 22:25:03 2018
Taints: erl_tracer,zlib
Atoms: 14794
Calling Thread: scheduler:2
=scheduler:1
Scheduler Sleep Info Flags: SLEEPING | TSE_SLEEPING | WAITING
Scheduler Sleep Info Aux Work:
Current Port:
Run Queue Max Length: 0
Run Queue High Length: 0
Run Queue Normal Length: 0
Run Queue Low Length: 0
Run Queue Port Length: 0
Run Queue Flags: OUT_OF_WORK | HALFTIME_OUT_OF_WORK
Current Process:
=scheduler:2
Scheduler Sleep Info Flags:
Scheduler Sleep Info Aux Work: THR_PRGR_LATER_OP
Current Port:
Run Queue Max Length: 0
Run Queue High Length: 0
Run Queue Normal Length: 0
Run Queue Low Length: 0
Run Queue Port Length: 0
Run Queue Flags: OUT_OF_WORK | HALFTIME_OUT_OF_WORK | NONEMPTY | EXEC
Current Process: <0.0.0>
Current Process State: Running
Current Process Internal State: ACT_PRIO_NORMAL | USR_PRIO_NORMAL | PRQ_PRIO_NORMAL | ACTIVE | RUNNING | TRAP_EXIT | ON_HEAP_MSGQ
Current Process Program counter: 0x00007fbd81fa59c0 (init:boot_loop/2 + 64)
Current Process CP: 0x0000000000000000 (invalid)
How can I identify this problem? Thank you.
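The slogan line points at the cause: the OTP asn1 application is missing from the Erlang installation. As a quick check (a sketch, assuming erl is on your PATH), you can ask the Erlang code server where asn1 lives; a working install prints the library path, while a broken one prints {error,bad_name}:

erl -noshell -eval 'io:format("~p~n", [code:lib_dir(asn1)]), init:stop().'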

How to suppress verbose error messages in PowerShell?

Please observe the following transcript:
PS C:\Dayforce\854\db\SQL\ClientDB> $job = Start-Job { C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal }
PS C:\Dayforce\854\db\SQL\ClientDB> C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal
Using timeout of 30 minutes per step.
Analyzing target DB state ...
Parsing upgrade steps ...
Found 1 new upgrade step(s). Upgrading the database ...
Dropping 2 runtime code items ...
Failed to apply DFVersion 20000
20000.sql: Divide by zero error encountered. (8134, 1)
Total Step SQL duration = 00:00:00.000
Total Step Round Trip duration = 00:00:00.035, average = 00:00:00.035
Total duration = 00:00:00.357
PS C:\Dayforce\854\db\SQL\ClientDB> Receive-Job $job.Id
Using timeout of 30 minutes per step.
Analyzing target DB state ...
Parsing upgrade steps ...
Found 1 new upgrade step(s). Upgrading the database ...
Dropping 2 runtime code items ...
Failed to apply DFVersion 20000
+ CategoryInfo : NotSpecified: (Failed to apply DFVersion 20000:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
+ PSComputerName : localhost
20000.sql: Divide by zero error encountered. (8134, 1)
Total Step SQL duration = 00:00:00.000
Total Step Round Trip duration = 00:00:00.031, average = 00:00:00.031
Total duration = 00:00:00.339
PS C:\Dayforce\854\db\SQL\ClientDB>
All I am doing is running the same executable, once directly and once from within a background job. When receiving the output of the background job, extra error information is printed. How can I get rid of it?
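One workaround sketch, assuming the extra lines come from the executable's stderr being wrapped as NativeCommandError records when the job output is marshalled back to the caller: merge stderr into the success stream inside the script block and flatten everything to strings, so Receive-Job only ever sees plain text:

# Paths are copied from the question; adjust to your environment.
$job = Start-Job {
    # 2>&1 merges stderr into stdout inside the job, so Receive-Job
    # gets plain strings instead of ErrorRecord objects.
    & C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe `
        -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal 2>&1 |
        ForEach-Object { $_.ToString() }
}
Receive-Job -Job $job -Wait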