I would like an API call to be made immediately, then after 1 sec, then 2 sec, etc., in increments of 1 sec up to 10.
I tried something like this:
Meteor.startup ->
  counter = 1000
  Meteor.setInterval (->
    Meteor.call "call_url", url, (err, result) ->
      ...
    if counter < 10000
      counter += 1000
    console.log counter
  ), counter
While my counter is incremented, the log appears exactly every second, which means that setInterval doesn't pick up the new value.
The only way I see to handle this would be having 9 setTimeouts to call the API at different times, then a Meteor.setInterval starting after all the timeouts... Sounds very ugly.
Any suggestions on how to do this in a clean way? It's important for the user to see frequent updates when they first connect to the page, but if they leave it open for a while, there is no need to perform that API query so often.
A more general solution (and scalable in the future for CoinsManager) is to use a queuing package that supports scheduling events in the future. I've looked at a number of background task management packages for Meteor, and queue supports scheduling.
It's not quite clear what you are asking, and there are probably a lot of ways to generate the intervals you want.
In JavaScript it might look like this:
var doStuff = function(){...};
var intervals = [ 1000, 3000, 6000, 10000, 15000, 21000, 28000, 36000, 45000, 55000 ];
doStuff(); //run immediately
intervals.forEach( function( interval ){
  Meteor.setTimeout( doStuff, interval );
});
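Another option, sketched here with the question's call_url method and url, is a setTimeout that reschedules itself with a growing delay; once the delay reaches 10 seconds it simply keeps polling at that rate:
// Sketch only: each run schedules the next, growing the delay by 1s up to 10s,
// then it keeps polling every 10s for as long as the page stays open.
var delay = 0;
var poll = function () {
  Meteor.call("call_url", url, function (err, result) {
    // ... handle the result
  });
  if (delay < 10000) delay += 1000;
  Meteor.setTimeout(poll, delay);
};
poll(); // the first call runs immediately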
Related
I'm new to Kafka Streams.
I use the suppress method of KTable in order to handle only the final result of a window like this:
myStream
    .windowedBy(TimeWindows.of(Duration.ofSeconds(10)).grace(Duration.ofMillis(500)))
    .aggregate(new Aggregation(),
        (k, v, a) -> a, // Disabled the actual aggregation in order to eliminate possibilities of latency
        materialized.withLoggingDisabled())
    .suppress(untilWindowCloses(Suppressed.BufferConfig.unbounded()))
    .toStream()
    .peek((k, v) -> log.info("delay " + (System.currentTimeMillis() - k.window().endTime().toEpochMilli())));
This way, every 10 seconds I get a log with the delay: the difference between the window end and the actual time peek was called.
I would expect a very small number here, since this code practically does nothing...
Nevertheless, I get a delay of 4-20 seconds for each key/window.
I use a thread per task (5 threads for this topic).
Can someone please point out if I'm doing anything wrong?
Thanks!
Edit:
Using VisualVM shows that ~99% of the time is spent in sun.nio.ch.SelectorImpl.select(). As far as I understand, this means the process is "idle" most of the time.
Edit:
It seems that changing "commit.interval.ms" (which defaults to 30000) reduced the delay drastically.
Still, the delay has peaks of even 15 seconds, so the problem isn't solved yet...
I'm trying to use delay and amb to execute a sequence of the same task separated by time.
All I want is for a download attempt to execute some time in the future only if the same task failed before in the past. Here's how I have things set up, but, contrary to what I'd expect, all three downloads seem to execute without delay.
Observable.amb([
  Observable.catch(redditPageStream, Observable.empty()).delay(0 * 1000),
  Observable.catch(redditPageStream, Observable.empty()).delay(30 * 1000),
  Observable.catch(redditPageStream, Observable.empty()).delay(90 * 1000),
  # Observable.throw(new Error('Failed to retrieve reddit page content')).delay(10000)
  # Observable.create(
  #   (observer) ->
  #     throw new Error('Failed to retrieve reddit page content')
  # )
]).defaultIfEmpty(Observable.throw(new Error('Failed to retrieve reddit page content')))
The full code can be found here: src
I was hoping that the first successful observable would cancel out the ones still in delay.
Thanks for any help.
delay doesn't actually stop the execution of whatever you are doing; it just delays when the events are propagated. If you want to delay execution, you would need to do something like:
redditPageStream.delaySubscription(1000)
Since your source starts producing immediately, the above delays the actual subscription to the underlying stream, effectively delaying when it begins producing.
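For example, here is a sketch of the original amb chain with delaySubscription in place of delay (RxJS 4 style, reusing the question's redditPageStream and catch-to-empty wrapping):
// Sketch only: later attempts do not subscribe (and so do not start downloading)
// until their delaySubscription elapses; whichever stream reacts first wins and
// the others are disposed.
Observable.amb([
  Observable.catch(redditPageStream, Observable.empty()),
  Observable.catch(redditPageStream, Observable.empty()).delaySubscription(30 * 1000),
  Observable.catch(redditPageStream, Observable.empty()).delaySubscription(90 * 1000)
]);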
I would suggest, though, that you use one of the retry operators to handle your retry logic rather than rolling your own through the amb operator.
redditPageStream.delaySubscription(1000).retry(3);
This will give you a constant retry delay; however, if you want to implement the linear backoff approach, you can use the retryWhen() operator instead, which lets you apply whatever logic you want to the backoff.
redditPageStream.retryWhen(errors => {
  return errors
    //Only take 3 errors
    .take(3)
    //Use timer to implement a linear back off and flatten it
    .flatMap((e, i) => Rx.Observable.timer(i * 30 * 1000));
});
Essentially, retryWhen creates an Observable of errors; each event that makes it through is treated as a retry attempt. If you error or complete that stream, it will stop retrying.
Before I reload a table, I want to know that all of the tasks that were saving in the background are done saving. I have 5 different saveInBackground functions that must complete before I make a new PFQuery. My thought right now is to say:
object.saveInBackgroundWithBlock({ (success: Bool, error: NSError?) -> Void in
    count += 1
})
Then, once the count gets to a certain number, perform the query. I don't want to use if in case the network takes too long to process the save request. Would a while statement work? Such as:
while count < 5 {
    continue
}
if count == 5 {
    self.fetchSecondaryObjects()
}
Ideally I would be using a real-time data engine to power my application but I'm having trouble locating one that I can use as I only write in Swift. If this exists out there and you know of it, I'd love to know about it.
Just do the check each time you increment. It's really very low overhead, only five comparisons.
object.saveInBackgroundWithBlock({ (success: Bool, error: NSError?) -> Void in
    count += 1
    if count == 5 {
        self.fetchSecondaryObjects()
    }
})
Unless I'm missing some reason you can't do that, it seems like the simplest solution. Better than running an infinite loop.
Generally I try not to use while in such cases, because one can easily get stuck in it if you're somehow off in your logic. Why don't you just use something like this:
if count >= xyz
The first handler listens to a channel of messages, and if there is an incoming message, it sets an interval:
toggleFlagInterval = setInterval (-> toggleFlag), 500
There can be arbitrarily many messages, but I need to set only one interval.
The second handler reads the message, and in it I want to clear the interval:
clearInterval toggleFlagInterval
I want to ensure that there is always either zero or one interval.
To do this, I need to find all set intervals.
How to find all set intervals using CoffeeScript?
I would be very grateful for your help.
Thanks to all.
That doesn't make sense. You cannot find all functions registered with setInterval, with or without CoffeeScript (that would be a JavaScript question, it has nothing to do with CoffeeScript). You just need to keep track of them yourself.
It seems like in this specific case, you simply need to choose to conditionally not set an interval, if one is already set.
To do so, your setting code would use ?=:
toggleFlagInterval ?= setInterval (-> toggleFlag), 500
And your clearing code would reset toggleFlagInterval to null:
clearInterval toggleFlagInterval
toggleFlagInterval = null
Alternatively, you need to cancel any already set interval at the point when you set a new one:
clearInterval(toggleFlagInterval) if toggleFlagInterval?
toggleFlagInterval = setInterval (-> toggleFlag), 500
I'm new to node.js. I need node.js to query a MongoDB every five minutes, get specific data, then, using socket.io, allow subscribed web clients to access this data. I already have the socket.io part set up, and of course Mongo; I just need to know how to have node.js run a task every five minutes and then post to socket.io.
What's the best solution for this?
Thanks
var minutes = 5, the_interval = minutes * 60 * 1000;
setInterval(function() {
  console.log("I am doing my 5 minutes check");
  // do your stuff here
}, the_interval);
Save that code as node_regular_job.js and run it :)
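A minimal sketch tying that interval to the question's Mongo query and socket.io broadcast could look like this; the connection string, database/collection names, port, and event name are assumptions, not details from the question:
const { MongoClient } = require('mongodb');
const { Server } = require('socket.io');

const io = new Server(3000);                                 // assumed port
const client = new MongoClient('mongodb://localhost:27017'); // assumed connection string
const the_interval = 5 * 60 * 1000;

async function poll() {
  // 'mydb', 'stats' and 'stats:update' are placeholders for your own names
  const docs = await client.db('mydb').collection('stats').find({}).toArray();
  io.emit('stats:update', docs); // every connected client receives the new data
}

client.connect().then(() => {
  poll().catch(console.error);   // run once immediately
  setInterval(() => poll().catch(console.error), the_interval);
});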
You can use the node-cron package:
var cron = require('node-cron');
cron.schedule('*/5 * * * *', () => {
  console.log('running a task every 5 minutes');
});
This is how you could do it if you have async tasks to manage:
(function schedule() {
  background.asyncStuff().then(function() {
    console.log('Process finished, waiting 5 minutes');
    setTimeout(function() {
      console.log('Going to restart');
      schedule();
    }, 1000 * 60 * 5);
  }).catch(err => console.error('error in scheduler', err));
})();
You cannot guarantee, however, when it will start, but at least you will not run the job multiple times at once if it takes more than 5 minutes to execute.
You may still use setInterval for scheduling an async job, but if you do so, you should at least flag the tasks as "being processed", so that if the job is scheduled a second time before the previous run finishes, your logic can decide not to process the tasks that are still being worked on.
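For instance, a minimal sketch of that guard flag with setInterval, reusing the background.asyncStuff() call from the snippet above:
let running = false;

setInterval(async () => {
  if (running) return;           // previous run still in progress, skip this tick
  running = true;
  try {
    await background.asyncStuff();
  } catch (err) {
    console.error('error in scheduler', err);
  } finally {
    running = false;
  }
}, 1000 * 60 * 5);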
#alessioalex has the right answer when controlling a job from the code, but others might stumble over here looking for a CLI solution. You can't beat sloth-cli.
Just run, for example, sloth 5 "npm start" to run npm start every 5 minutes.
This project has an example package.json usage.
There are lots of scheduling packages that can help you do this in node.js; just choose one of them based on your needs (see the sketch after the list for an example).
The following is a list of packages:
Agenda,
Node-schedule,
Node-cron,
Bree,
Cron,
Bull
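For example, a minimal sketch with node-schedule from the list above (the job body is a placeholder):
const schedule = require('node-schedule');

// '*/5 * * * *' is a cron expression meaning "every 5 minutes"
schedule.scheduleJob('*/5 * * * *', () => {
  console.log('running the 5 minute job');
  // do your stuff here
});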