I'm trying to set up a user idle timeout here. Everything seems to work except the clearTimeout call. The events fire and setTimeout works, but no matter what I do, once the timer is set the first time I can't stop it. The method is called from the onBeforeRendering function of my main controller, and the debugger shows no error. Any help?
setTimeOut: function () {
    var self = this;
    var timeOut = function userTimeout() {
        jQuery.sap.log.error("TIMEOUT");
        try {
            if (self.getModel("Global").getProperty("/RecordUnlocked") === true) {
                self._unlockRecord();
            }
        } catch (e) {
            jQuery.sap.log.error("TIMEOUT");
        }
        try {
            var navHistory = self.getView().getModel("Global").getProperty("/NavHistory");
            history.go(navHistory);
        } catch (e) {
            jQuery.sap.log.error("TIMEOUT");
        }
        /* MessageBox.show(self.getModel("i18n").getResourceBundle().getText("timeOut"), {
            onClose: function(oAction) {*/
    };
    function reset() {
        clearTimeout(timeOut);
        setTimeout(timeOut, 20000);
    }
    document.onmousemove = reset;
    document.onkeypress = reset;
}
window.clearTimeout takes the value returned by window.setTimeout, not the function you execute in the timeout itself.
Your timeOut variable is not the result of a setTimeout call but a function you defined yourself.
Usually it looks something like
var myTimeoutFunction = _ => console.log('hi');
var myTimeout = window.setTimeout(myTimeoutFunction, 20000);
window.clearTimeout(myTimeout);
As a word of warning, your own method is named setTimeOut, which is almost identical to window.setTimeout. The bare setTimeout call inside reset resolves to the global function, so you happen to be fine here, but the near-identical names could get confusing if you ever bind the method or call it through this.
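Applied to the code in the question, a minimal sketch of the fix is to store the id returned by setTimeout in its own variable and clear that (the body of userTimeout stays exactly as in the question; the initial reset() call is an addition so the timer starts without waiting for the first event):
setTimeOut: function () {
    var self = this;
    var timerId = null; // will hold the id returned by window.setTimeout

    var userTimeout = function () {
        // ... same timeout logic as in the question ...
    };

    function reset() {
        clearTimeout(timerId);                    // clear by id, not by function reference
        timerId = setTimeout(userTimeout, 20000);
    }

    document.onmousemove = reset;
    document.onkeypress = reset;
    reset();                                      // start the initial timer
}
Each reset() call cancels the pending timer via its id and schedules a fresh one.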
The following Trigger is firing twice:
trigger AccountTrigger on Account (before insert, after insert, before update, after update, before delete, after delete) {
    AccountTriggerHandler handle = new AccountTriggerHandler(Trigger.new, Trigger.oldMap);
    System.debug('AccountTrigger created a handler instance: ' + handle);
    // Currently the Trigger is firing twice with no obvious reason.
    if (Trigger.isBefore) {
        if (Trigger.isInsert) {
            handle.beforeInsert();
        }
        if (Trigger.isUpdate) {
            // Call handler here!
        }
        if (Trigger.isDelete) {
            // Call handler here!
        }
    }
    if (Trigger.isAfter) {
        if (Trigger.isInsert) {
            // Call handler here!
        }
        if (Trigger.isUpdate) {
            // Call handler here!
        }
        if (Trigger.isDelete) {
            // Call handler here!
        }
    }
}
The debug result is showing two handler instances. The weird thing is: The first one seems to be empty? How can that be?
EDIT 1:
The test code:
@isTest
public class AccountTestTest {
    @isTest
    public static void testAccountInsert() {
        // Insert an Account
        Account a = new Account(Name = 'TestCustomer');
        insert a;
        Account queryAccount = [SELECT Account.Id, Account.Name FROM Account WHERE Id = :a.Id];
        System.debug('TEST RESULT: ' + queryAccount);
        System.debug('AccountTestTest completed.');
        // Actually test something...
    }
}
I know it's missing asserts, but for the sake of simplicity, I just tried this one.
It's because of "before insert". An insert fires the trigger once for the before insert event and once more for after insert, and the handler is instantiated at the top of the trigger body on each invocation, so you see two instances. In the before phase the record Ids haven't been generated yet, which is why the first instance looks empty. If you don't have any logic that belongs in before insert (complex validations? Field prepopulation?), remove that event.
Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
}
else {
    error();
}
I know the doSomethingWith(dbFut.result()); can be moved into the handler, but if it's long, the code gets unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that RxJava simplifies the code, but as I don't know it, learning Vert.x and RxJava at the same time is just too much.
I also wanted to give vertx-sync a try. I put the dependency in the pom.xml and everything got downloaded fine, but when I started my app I got the following error:
maurice#mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know what it means in this context. I tried to google it but didn't find any clear explanation about which manifest to put where. And as before, unless it's mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without blocking the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
I agree that you should not block in the Vert.x processing pipeline, but I make one exception to that rule: start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * Needs io.vertx.core.Future, io.vertx.core.AsyncResult and
 * java.util.concurrent.atomic.AtomicReference.
 *
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T>
T awaitComplete(Future<T> f)
throws Throwable
{
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock)
    {
        // Register the callback while holding the lock.
        // If the result is already ready, the callback is called immediately!
        f.onComplete(
            (AsyncResult<T> result) ->
            {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
    }
    // Check resultRef before every wait: the callback may already have fired,
    // and wait() can also return spuriously.
    // Ref: https://stackoverflow.com/a/249907/257299
    while (null == resultRef.get())
    {
        synchronized (lock)
        {
            if (null == resultRef.get()) {
                // @Blocking
                lock.wait();
            }
        }
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}
I'm wrapping an API that emits events in Observables and currently my datasource code looks something like this, with db.getEventEmitter() returning an EventEmitter.
const Datasource = {
    getSomeData() {
        return Observable.fromEvent(db.getEventEmitter(), 'value');
    }
};
However, to actually use this, I need to both memoize the function and have it return a ReplaySubject; otherwise each subsequent call to getSomeData() would reinitialize the entire sequence and either recreate the event emitters or have no data until the next update, which is undesirable. So my code ends up looking a lot more like this for every function:
let someDataCache = null;
const Datasource = {
    getSomeData() {
        if (someDataCache) { return someDataCache; }
        const subject = new ReplaySubject(1);
        Observable.fromEvent(db.getEventEmitter(), 'value').subscribe(subject);
        someDataCache = subject;
        return subject;
    }
};
which ends up being quite a lot of boilerplate for just one single function, and becomes more of an issue when there are more parameters.
Is there a better or more elegant design pattern to accomplish this? Basically, I'd like the following:
Only one event emitter is created.
Callers who call the datasource later get the most recent result.
The event emitters are created only when they're needed.
Right now I feel like this approach is fighting the Observable pattern, resulting in a bunch of boilerplate.
As a followup to this question, I ended up factoring the logic out into a common helper that leverages Observables this way. publishReplay, as cartant mentioned, gets me most of the way to what I needed. I've documented what I've learned in this post; the tl;dr code is:
let first = true
Rx.Observable.create(
    observer => {
        const callback = data => {
            first = false
            observer.next(data)
        }
        const event = first ? 'value' : 'child_changed'
        db.ref(path).on(event, callback, error => observer.error(error))
        return {event, callback}
    },
    (handler, {event, callback}) => {
        db.ref(path).off(event, callback)
    },
)
    .map(snapshot => snapshot.val())
    .publishReplay(1)
    .refCount()
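For the simpler getSomeData() case in the question, the same two operators already replace the manual cache. A minimal sketch, reusing db.getEventEmitter() and the 'value' event from the question (Observable.defer is an addition here, so the emitter is only created on first subscription):
const Datasource = {
    someData$: Observable.defer(() => Observable.fromEvent(db.getEventEmitter(), 'value'))
        .publishReplay(1) // late subscribers immediately get the most recent value
        .refCount(),      // wire up on first subscribe, tear down after the last unsubscribe
};
This covers all three requirements, one emitter, replay of the latest result for late callers, and lazy creation, without any someDataCache bookkeeping.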
As described in the Protractor ControlFlow documentation, WebDriver async calls are automatically queued in the control flow and executed in the order they are defined. In practice this seems to be syntactic sugar for avoiding explicitly written "then" chains. But when do I need to put my own async function into the control flow queue explicitly? Imagine I have a piece of code:
myAsync(xxx).then(function() {
    doSomething();
    return;
});
and this code is in the middle of a Protractor/Jasmine test, so there are asserts above and below it. Should I do something like:
flow.execute(myAsync);
and if so, where should I put my "then" section in that case?
it('blah', function() {
    browser.get('something');
    expect(element('foo').getText()).toBe('bar');

    var myAsync = function() {
        // if your async function doesn't return a promise, make it one
        var deferred = protractor.promise.defer();
        // do some async stuff in here and then reject or fulfill with...
        if (error) {
            deferred.reject(error);
        } else {
            deferred.fulfill(value);
        }
        return deferred.promise;
    };

    // hook into the controlFlow and execute the async thing so you can check after
    browser.controlFlow().execute(myAsync);
    expect(element('foo').getText()).toBe('baz');

    // or check the return value of the promise
    browser.controlFlow().execute(myAsync).then(function(result) {
        expect(result).toBe('something');
    });
});
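If the async work wraps a Node-style callback API, the same pattern might look like the sketch below. fs.readFile and the fixture path are made-up stand-ins for whatever your async call really is:
var fs = require('fs');

function myAsync() {
    var deferred = protractor.promise.defer();
    fs.readFile('/tmp/fixture.json', 'utf8', function(err, data) {
        if (err) {
            deferred.reject(err); // the queued control flow task fails
        } else {
            deferred.fulfill(JSON.parse(data));
        }
    });
    return deferred.promise;
}

browser.controlFlow().execute(myAsync).then(function(fixture) {
    expect(fixture.name).toBe('something');
});
The "then" section goes on the promise returned by execute(), which resolves once the task has run in its queued slot.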
I have a node-apn Node.js script running as a daemon on Amazon AWS. The daemon runs fine and the script stays up and comes back when it goes down, but I believe I am having a synchronous-execution and exit issue with Node.js. When I end the process with process.exit(), all the console.log output says my messages have been sent, but they are never received on the phone. When I removed the exit and let the process "hang" after execution, all messages were sent successfully. This led me to the following implementation using an async helper function, but the same thing seems to be happening. Can anyone provide insight into this? There are no errors being thrown from APN or anywhere else.
function closeDB()
{
    connection.end(function(err) {
        if (err) {
            console.log("ERROR: " + util.inspect(err, false, 5));
            process.exit(1);
        }
        console.log("APNS-PUSH: COMPLETED.");
    });

    setTimeout(function() { process.exit(); }, 50);
} // End of closeDB()

function apnsError(err, notification)
{
    console.log(err);
    console.log(notification);
    closeDB();
}

function async(arg, callback)
{
    apnsConnection.sendNotification(arg);
    console.log(arg);
    setTimeout(function() { callback(1); }, 100);
}

/**
 * Our MySQL query callback.
 */
function queryCB(err, results)
{
    // error in our call, report and exit
    if (err) {
        console.log("ERROR: " + util.inspect(err, false, 5));
        closeDB();
    }

    if (results.length == 0)
    {
        closeDB();
    }

    var notes = [];
    var count = 0;
    try {
        for (var i = 0; i < results.length; i++) {
            var myDevice = new apns.Device(results[i]['udid']);

            var note = new apns.Notification();
            note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.
            note.badge = results[i]["notification_count"];
            note.sound = "ping.aiff";
            note.alert = results[i]["message"];
            note.device = myDevice;

            connection.query('UPDATE `tbl_notifications` SET `sent`=1 WHERE `id`=' + results[i]["id"], function(err, results) {
                if (err)
                {
                    console.log("ERROR: " + util.inspect(err, false, 5));
                }
            });

            notes.push(note);
        }
    } catch (err) {
        console.log('error: ' + err)
    }

    console.log(notes.length);

    notes.forEach(function(nNode) {
        async(nNode, function(result) {
            count++;
            if (count == notes.length) {
                closeDB();
            }
        })
    });
} // End of queryCB()
I had the same problem where killing the process also killed the open socket connections and didn't allow the notifications to be sent. The solution I came up with isn't ideal, but it will work in your situation as well. I looked into the node-apn code and found that the Connection object inherits from EventEmitter, so you can monitor events on the object like so:
var apnsConnection = new apn.Connection(options);

apnsConnection.sendNotification(notification);

apnsConnection.on('transmitted', function() {
    console.log("Transmitted");
    callback();
});

apnsConnection.on('error', function() {
    console.log("Error");
    callback();
});
This is monitoring the socket that the notification is sent through so I don't know how accurate it is at determining when a notification has successfully been passed off to Apple's APNS servers but it has worked pretty well for me.
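Applied to the queryCB code in the question, you could count 'transmitted' events instead of using the setTimeout-based async helper, and only close up once every notification has gone out. A sketch, assuming notes and closeDB are as defined in the question:
var transmitted = 0;

apnsConnection.on('transmitted', function() {
    transmitted++;
    if (transmitted === notes.length) {
        closeDB(); // every notification has been written to the socket
    }
});

notes.forEach(function(note) {
    apnsConnection.sendNotification(note);
});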
The reason you are seeing this problem is that when you use #pushNotification, the module buffers the notification internally and sends it asynchronously.
Listening for "transmitted" is valid, and it is emitted when the notification has been written to the socket. However, if your objective is to close the socket after all notifications have been sent, then the easiest way to accomplish this is the connectionTimeout property when creating your connection.
Simply set connectionTimeout to something around 1000 (milliseconds) and, assuming you have no other connections open, the process will exit automatically. Alternatively, you can set an event listener on the timeout event and call process.exit() from there.
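A minimal sketch of that approach. The connectionTimeout option and the timeout event are as described above; exact names and defaults can vary between node-apn versions, so treat this as illustrative:
var apn = require('apn');

// Close the connection after 1 second of inactivity so the process
// can exit once all buffered notifications have been transmitted.
var apnsConnection = new apn.Connection({
    connectionTimeout: 1000
});

apnsConnection.on('timeout', function() {
    console.log('APNS connection timed out; exiting.');
    process.exit();
});

apnsConnection.sendNotification(note);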