I’m trying to programmatically run some SSRS subscriptions one after the other. The reports are all long running and consistently fail if triggered at the same time. At the moment we have about four different subscriptions spread out through the day ensuring that they don’t clash. Unfortunately this can waste quite a lot of time.
The solution I have for this is to create a non-scheduled subscription on each of the reports in question, and then have a single job trigger the subscriptions one after the other, once each has finished running:
One job triggers the first subscription.
Using a WAITFOR command, give the subscription a few seconds to start running.
Using WAITFOR, check periodically whether the subscription is still running (status 'Pending').
When the check finds that the report has been sent, the job triggers the next subscription.
And so on...
I know the code to trigger the subscription:
exec [ReportServerWSS].dbo.AddEvent @EventType='SharedSchedule', @EventData='011e83ff-344a-416a-83cb-1a9281e4205b'
I just need to know how to use the WAITFOR whilst doing a check and then respond to the results of the check.
Based on the information in your question I'd definitely say you have an XY-problem: you should really work on the performance and/or locking strategy of your queries and reports.
Having said that, if you insist on "solving" the issue by running reports serially, I'd probably not use the built-in subscriptions but go for a more custom solution, so you have the control you desire. Create your own app, script, or task that utilizes the SOAP API, call the Render method on your reports one at a time, and wait for each report to finish before starting the next. If you haven't already, set the execution timeout to something high enough for your reports to finish nicely.
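Independently of SSRS, the serial approach boils down to: start one report, poll until it finishes, then start the next. Below is a minimal Python sketch of that pattern; `start` and `is_finished` are hypothetical callables standing in for whatever your rendering API (e.g. the SOAP Render call and a status check) actually provides, so treat this as an outline rather than working SSRS code.

```python
import time

def run_serially(reports, start, is_finished, poll_interval=1.0, timeout=3600.0):
    """Run each report to completion before starting the next.

    `start(report)` kicks off a report; `is_finished(report)` returns
    True once it is done. Both are placeholders for your real
    rendering/monitoring calls.
    """
    for report in reports:
        start(report)
        waited = 0.0
        while not is_finished(report):
            if waited >= timeout:
                raise TimeoutError(f"{report!r} did not finish within {timeout}s")
            time.sleep(poll_interval)
            waited += poll_interval
```

The timeout guard matters in practice: without it, one hung report blocks every report behind it forever.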
OK - I've figured this out:
DECLARE @SubscriptionStatus AS nvarchar(260)

/* First long-running report */
EXEC msdb.dbo.sp_start_job '4B7FA89E-0B56-4ED1-9A0F-37E5D03318CB'
WAITFOR DELAY '00:00:30' /* give the subscription time to start */
SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = 'E156FD91-E7F9-43EC-8B73-28622834EACB')
WHILE @SubscriptionStatus = 'Pending'
BEGIN
    PRINT 'The first long-running subscription is still running'
    WAITFOR DELAY '00:01:00'
    SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = 'E156FD91-E7F9-43EC-8B73-28622834EACB')
END

/* Second long-running report */
EXEC msdb.dbo.sp_start_job '6D3300BC-ACA9-4EEE-A5F9-546635B585E0'
WAITFOR DELAY '00:00:30'
SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = 'AD54215A-5B7F-48B2-81B2-52C299875AD6')
WHILE @SubscriptionStatus = 'Pending'
BEGIN
    PRINT 'The second long-running subscription is still running'
    WAITFOR DELAY '00:01:00'
    SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = 'AD54215A-5B7F-48B2-81B2-52C299875AD6')
END

/* Third long-running report */
EXEC msdb.dbo.sp_start_job '84FD876A-1945-405E-A344-6279E27DCD68'
WAITFOR DELAY '00:00:30'
SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = '8F446935-5450-458F-9076-7AD9FC78D456')
WHILE @SubscriptionStatus = 'Pending'
BEGIN
    PRINT 'The third long-running subscription is still running'
    WAITFOR DELAY '00:01:00'
    SET @SubscriptionStatus = (SELECT LastStatus FROM Subscriptions WHERE SubscriptionID = '8F446935-5450-458F-9076-7AD9FC78D456')
END

PRINT 'All long-running reports have run'
Thanks,
UT
I have a scenario with a "pre-cook" procedure keyed by date: if the data for that date is already cooked, the procedure just returns it; otherwise it cooks it.
The problem is that if the cooking process takes too long, there is a chance the data will be duplicated.
I would expect this workflow:
User A opens a session from the web-app and requests data for 2018-June, a procedure called proc_A will check the data for that month and cook it if it does not yet exist.
User B opens another session from the desktop-app and requests the same data for 2018-June, then they should get a message saying that the data is cooking, please wait.
Is it possible to achieve that by only doing changes in the PostgreSQL DB rather than making changes to the web-app and the desktop-app?
I would add a state column to the data table:
ready boolean DEFAULT FALSE
The workflow would be as follows:
INSERT INTO data (month, value, ready)
VALUES (date_trunc('month', current_timestamp)::date, NULL, FALSE)
ON CONFLICT (month) DO NOTHING;
If a row gets inserted, proceed to cook the value, then run
UPDATE data SET
value = 42, ready = TRUE
WHERE month = date_trunc('month', current_date)::date;
If no row gets inserted by the first statement, run
SELECT value, ready
FROM data
WHERE month = date_trunc('month', current_date)::date;
If ready is true, return the data; if not, tell the client to please wait.
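For illustration, here is a small in-memory Python simulation of that ready-flag workflow: the dict stands in for the data table, try_claim mirrors the INSERT ... ON CONFLICT (month) DO NOTHING, and publish mirrors the UPDATE that sets ready = TRUE. It is a sketch of the pattern only, not database code.

```python
import threading

class MonthCache:
    """In-memory sketch of the ready-flag pattern from the answer."""

    def __init__(self):
        self._lock = threading.Lock()
        self._rows = {}  # month -> {"value": ..., "ready": bool}

    def try_claim(self, month):
        # Returns True if this caller inserted the placeholder row
        # and should therefore do the cooking (INSERT ... DO NOTHING).
        with self._lock:
            if month in self._rows:
                return False
            self._rows[month] = {"value": None, "ready": False}
            return True

    def publish(self, month, value):
        # Mirrors the UPDATE that stores the cooked value.
        with self._lock:
            self._rows[month].update(value=value, ready=True)

    def read(self, month):
        # Returns (value, ready); ready=False means "please wait".
        with self._lock:
            row = self._rows[month]
            return row["value"], row["ready"]
```

The key property, in both the SQL and the sketch, is that exactly one caller wins the insert and cooks, while every other caller sees the not-yet-ready row and waits.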
My case is: if the sum of withdrawals from my bank account is greater than $1000 within any continuous 10 minutes (a sliding time window, e.g. 0m-10m, then 0m1s-10m1s, then 0m2s-10m2s), the bank system should send me a warning message.
So, can anyone help me write this rule in Drools?
My initial idea is below:
when
Number( $total : intValue, intValue >= 1000)
from accumulate (Withdraw ($money : money)
over window:time( 10m )
from entry-point ATMEntry,
sum($money))
then
System.out.println("Warning! Too many withdrawals: " + $total);
end
However, it only performs the check once for the first 10 minutes. After the first 10 minutes, no matter how many Withdraw objects I insert into ATMEntry, I never receive the warning message.
And if I fire the rule at intervals in separate sessions, for example every minute, I am not sure how to insert the Withdraw objects into ATMEntry for each session.
So, is it possible to use Drools for my case?
Thanks,
You have to trigger the evaluation by another Withdraw event:
when
Withdraw()
Number( $total : intValue >= 1000)
from accumulate (Withdraw ($money : money)
over window:time( 10m )
from entry-point ATMEntry,
sum($money))
then
System.out.println( "Warning! " + $total );
end
If you need the individual events, it might be better to collect them into a list. This can also help to close one interval with excess withdrawals and open another one. It depends on the details of the spec: when to raise an alarm, and when to raise, or not raise, the next one.
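Outside of Drools, the event-triggered sliding-window check that the rule performs is easy to state in a few lines of Python: on each new withdrawal, drop events that have fallen out of the window, then sum what remains. The class below is a sketch of the logic only, not a Drools equivalent; timestamps are plain seconds for simplicity.

```python
from collections import deque

class WithdrawMonitor:
    """Sliding-window sum over withdrawal events.

    `window` is the window length in seconds (600 = 10 minutes) and
    `threshold` is the alarm level, mirroring window:time(10m) and
    the >= 1000 constraint in the rule.
    """

    def __init__(self, window=600, threshold=1000):
        self.window = window
        self.threshold = threshold
        self.events = deque()  # (timestamp, amount), oldest first

    def on_withdraw(self, ts, amount):
        """Record a withdrawal; return the window total if it trips
        the threshold, otherwise None."""
        self.events.append((ts, amount))
        # Expire events older than the window, like Drools does
        # automatically for window:time().
        while self.events and self.events[0][0] <= ts - self.window:
            self.events.popleft()
        total = sum(a for _, a in self.events)
        return total if total >= self.threshold else None
```

Note that, exactly as in the corrected rule, the check only runs when a new event arrives; if no withdrawals occur, no alarm is ever evaluated.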
I am using Delayed Job as my queuing backend for Active Job. I have not set up any custom jobs and plan on using Action Mailer to send out scheduled emails asynchronously. How can I prevent a scheduled email from being sent out?
For example, suppose the user can set up email reminders on my application. If the user sets up a reminder for three days in the future, a job will be created. If the user removes that email reminder, the job should be deleted.
Any help would be greatly appreciated!
I decided to approach the problem differently. Instead of scheduling a job and later destroying it, I'm simply scheduling a job.
When the user sets up a reminder for three days in the future, a job is created, just as before. If the user removes the email reminder, the job will still run in three days but won't do anything. Here's an example setup:
# controllers/users_controller.rb
def schedule_reminder
# Get the user that scheduled the reminder
user = User.find_by_id(params[:id])
# Create a reminder for the user
reminder = user.reminders.create(active: true)
# Schedule the reminder to be sent out
SendReminderJob.set(wait_until: 3.days.from_now).perform_later(reminder.id)
end
def unschedule_reminder
reminder = Reminder.find_by_id(params[:reminder_id])
reminder.destroy
end
When the user schedules a reminder, schedule_reminder is executed. This method creates a reminder and schedules a job that will run in three days, passing the reminder's ID as an argument so the job can retrieve the reminder when it runs.
When the user deletes the reminder, unschedule_reminder is executed. This method finds the reminder and deletes it.
Here's what my SendReminderJob job looks like:
# jobs/send_reminder_job.rb
class SendReminderJob < ActiveJob::Base
queue_as :default
def perform(*args)
# Get the reminder
# args.first is the reminder ID
reminder = Reminder.find_by_id(args.first)
# Check if the reminder exists
if !reminder.nil?
# Send the email to the user
end
end
end
When this job runs in three days, it checks whether the reminder still exists. If it does, it sends the email to the user; otherwise, it does nothing. Either way, the job is removed from the queue once it has run.
I created app/models/delayed_backend_mongoid_job.rb.
class DelayedBackendMongoidJob
include Mongoid::Document
field :priority, type: Integer
field :attempts, type: Integer
field :queue, type: String
field :handler, type: String
field :run_at, type: DateTime
field :created_at, type: DateTime
field :updated_at, type: DateTime
end
If you are using ActiveRecord you need to adjust the model file. Then I installed rails_admin and I can view/edit any of the records in that table.
If you have a job scheduled to run in 2 days all you have to do is delete the record and DJ will never pick it up.
I have a BPM application that polls rows from a DB2 database every 5 minutes with a scheduler R1, using the query below:
select * from Table where STATUS = 'New'
Based on the rows returned, I do some processing and then change the status of those rows to 'Read'.
But this processing takes more than 5 minutes, so in the meantime scheduler R1 runs again and picks up some of the rows already picked up in the last run.
How can I ensure that each scheduler run picks up only the rows that were not selected in the last run? What changes do I need to make to my select statement? Please help.
How can I ensure that every scheduler picks up the rows which were not selected in last run
You will need to make every scheduler aware of what was selected by other schedulers. You can do this, for example, by locking the selected rows (SELECT ... FOR UPDATE). Of course, you will then need to handle lock timeouts.
Another option, allowing for better concurrency, would be to update the record status before processing the records. You might introduce an intermediary status, something like 'In progress', and include the status in the query condition.
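The claim-before-processing idea can be illustrated with a small in-memory Python sketch: a run first flips matching rows from 'New' to an intermediary 'In progress' status (claiming them atomically), then processes only the rows it claimed, so a concurrent run cannot pick up the same rows. The dict stands in for the DB2 table; this is a sketch of the pattern, not database code.

```python
import threading

class RowClaimer:
    """Simulates: UPDATE Table SET STATUS = 'In progress'
                  WHERE STATUS = 'New'
    followed by processing only the rows the run claimed."""

    def __init__(self, rows):
        self._lock = threading.Lock()
        self._rows = rows  # row id -> status string

    def claim_new(self):
        # Atomically mark all 'New' rows as claimed and return
        # their ids; a concurrent run claiming afterwards gets [].
        with self._lock:
            claimed = [rid for rid, s in self._rows.items() if s == "New"]
            for rid in claimed:
                self._rows[rid] = "In progress"
            return claimed

    def finish(self, rid):
        # After processing, move the row to its final status.
        with self._lock:
            self._rows[rid] = "Read"
```

In the real database the same guarantee comes from the atomicity of the UPDATE statement (or from SELECT ... FOR UPDATE row locks), rather than from an in-process lock.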
When I execute a query and right click in the results area, I get a pop-up menu with the following options:
Save Grid as Report ...
Single Record View ...
Count Rows ...
Find/Highlight ...
Export ...
If I select "Count Rows", is there a way to interrupt the operation if it starts taking too long?
No, you don't seem to be able to.
When you select Count Rows from the context menu, it runs the count on the main UI thread, hanging the whole UI, potentially for minutes or hours.
It's best not to use that feature. Instead, run select count(*) from (<your query here>), which SQL Developer executes properly on a separate thread that can be cancelled.
You can open a new instance of SQL Developer and kill the session that is counting the rows.
I do suggest using the select count(*) query though, as it is less painful in the long run.