How to prevent GTK g_timeout_add from triggering in multiple instances

I register a timeout with:
timeout_tag = g_timeout_add(250, update_time, NULL);
and destroy it with:
g_source_remove(timeout_tag);
But when I open multiple instances of the same app, the timeout triggers update_time in all instances instead of just one. How would I isolate them?
I'm creating a new application with
app = gtk_application_new("com.lunacd.reminder", G_APPLICATION_FLAGS_NONE);
Should I generate a UUID and append it to com.lunacd.reminder so that the identifier remains distinct?

As @jcoppens pointed out, each process runs its own GLib main loop, so timeouts fire independently per instance. This question is invalid.

Related

Multiple services using Firebase Job Dispatcher in Android

For an Android app, I want to start three services using Firebase Job Dispatcher on different days. Can I use the same FirebaseJobDispatcher object for all three jobs? Also, how do I maintain the same FirebaseJobDispatcher object even when the app is closed? If I use a static object, it will be cleared if the app crashes. So how do I maintain my FirebaseJobDispatcher object for scheduling multiple services with the same dispatcher object? Or can I create a different FirebaseJobDispatcher object for each service? Is that good practice?
You can use the same FirebaseJobDispatcher for all Jobs.
Even when the application is closed, your Jobs will still be executed. How? That is the Android OS's concern.
You simply describe when and how your Jobs will be launched. That's all. Read the documentation again and see the comments in the source code of the library.
You can also create a new FirebaseJobDispatcher for different Jobs.
Misusing static objects is bad practice.
So:
FirebaseJobDispatcher dispatcher1 =
        new FirebaseJobDispatcher(new GooglePlayDriver(context));

Job job1 = dispatcher1.newJobBuilder()
        .setService(YourService1.class)
        .setTag(Const.JOB_TAG_1)
        // more options to run
        .build();

Job job2 = dispatcher1.newJobBuilder()
        .setService(YourService2.class)
        .setTag(Const.JOB_TAG_2)
        // ...
        .build();

dispatcher1.mustSchedule(job1);

FirebaseJobDispatcher dispatcher2 =
        new FirebaseJobDispatcher(new GooglePlayDriver(context));
dispatcher2.mustSchedule(job2);

dispatcher1.cancel(Const.JOB_TAG_1);
dispatcher2.cancel(Const.JOB_TAG_2);

javax.jcr.InvalidItemStateException: Item cannot be saved

I am getting the following exception in a single-box CQ5 author environment.
javax.jcr.InvalidItemStateException: Item cannot be saved
because node property has been modified externally
More exception details:
Caused by: javax.jcr.InvalidItemStateException: Unable to update a stale item: item.save()
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:262)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
at org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:65)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:361)
at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:812)
at com.day.crx.core.CRXSessionImpl.save(CRXSessionImpl.java:142)
at org.apache.sling.jcr.resource.internal.helper.jcr.JcrResourceProvider.commit(JcrResourceProvider.java:511)
... 215 more
Caused by: org.apache.jackrabbit.core.state.StaleItemStateException: 3bec1cb7-9276-4bed-a24e-0f41bb3cf5b7/{}ssn has been modified externally
at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:679)
at org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:1507)
at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1537)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:400)
at org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:375)
at org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:275)
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
Here is the code sample:
adminResourceResolver = resourceResolverFactory
        .getAdministrativeResourceResolver(null);
Resource fundPageResource = adminResourceResolver
        .getResource(page.getPath() + "/jcr:content");
ModifiableValueMap homePageResourceProperties = fundPageResource
        .adaptTo(ModifiableValueMap.class);
homePageResourceProperties.put("ssn", person.getSsn());
adminResourceResolver.commit();
Any ideas? It's possible that multiple threads access this code, as multiple authors on multiple pages call it from an authored component.
Thank you,
Sri
This is an error you see often in CQ5.5 (and it lessens with each newer version). The root cause of this issue is that multiple processes/services are modifying the same resource in roughly the same timespan (usually using different sessions, sometimes even with different users).
A small example to demonstrate, perhaps. Sessions A and B both have a reference to Resource X. Session A modifies some properties on X, saves and commits, and is destroyed. This all goes smoothly. Session B still has a snapshot of the situation before A made its modifications; session B makes modifications and all seems well UNTIL it tries to save. At this point, session B detects that it can't commit its changes because it doesn't have the latest node state: it has detected that some other session made changes to the same node. In essence the current node state conflicts with the modifications that session A has made, so it throws an InvalidItemStateException. The reason for this exception is that the API doesn't know whether you want to keep the changes made by A, keep the changes made by the current session and discard the changes made by A, or merge them.
This error happens often with long-running sessions and with workflow/listener combinations. Therefore the recommendation is to keep sessions as short as possible, to prevent this kind of conflict as much as possible.
One way to deal with this is to call session.refresh(keepChangesBoolean) before calling .save(). This instructs the current session to check for updates made by other sessions and deal with them according to the boolean flag you submit. This, however, is not a guarantee, as it's still possible that between your refresh and your save call yet another session has made changes. It only lowers the odds of this exception occurring.
Another way to deal with this is to retry from scratch, as sketched below.
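For illustration, a minimal sketch of the refresh-and-retry approach against the Sling API used in the question (resolver.refresh() is the ResourceResolver analogue of session.refresh(); the retry limit and the writeSsn helper are made up for this example, and Page and Person are the question's own types):
import org.apache.sling.api.resource.ModifiableValueMap;
import org.apache.sling.api.resource.PersistenceException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

// Hypothetical retry wrapper around the commit from the question.
void writeSsn(ResourceResolver resolver, Page page, Person person) throws PersistenceException {
    final int maxRetries = 3; // illustrative limit
    for (int attempt = 1; ; attempt++) {
        try {
            resolver.refresh(); // see changes committed by other sessions
            Resource content = resolver.getResource(page.getPath() + "/jcr:content");
            ModifiableValueMap props = content.adaptTo(ModifiableValueMap.class);
            props.put("ssn", person.getSsn());
            resolver.commit();
            return;
        } catch (PersistenceException e) {
            if (attempt >= maxRetries) {
                throw e; // still conflicting after several attempts: give up
            }
            resolver.revert(); // drop our stale transient changes before retrying
        }
    }
}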

Moving from file-based tracing session to real time session

I need to log trace events during boot, so I configure an AutoLogger with all the required providers. But when my service/process starts, I want to switch to real-time mode so that the file doesn't explode in size.
I'm using TraceEvent and I can't figure out how to do this move correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;
using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    Thread.Sleep(timeToWait);
    tes.SetFileName(null);
    Thread.Sleep(timeToWait);
    Console.WriteLine("Done");
}
Here I wanted to make sure that I could transfer the session to real-time mode. But instead, the file I got contained events from a 15s period instead of just 10s.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part about orphans. Obviously some events might occur in the time between closing, opening and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code alright (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting to process the events)?
I could close the session and create a new different one but then I think I'd miss some events. Or I could open a new session and then close the file-based one but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Re the question of the 'auto-closing and restarting' feature, these are really questions about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal about orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work', if you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation of course is the touchstone of 'truth', but frankly, expecting unusual combinations to just work is generally NOT true.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all, they have IDENTICAL timestamps).
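A rough sketch of that overlap-and-deduplicate recommendation (the real-time session name is made up, the provider string names the same provider the question enables via ProviderExtensions, and using a wall-clock cutoff is a simplification of matching identical timestamps):
using System;
using Microsoft.Diagnostics.Tracing.Session;

class OverlapSwitch
{
    static void Main()
    {
        // Start the real-time session while the file-based session is still
        // running, so no events are lost in the gap.
        using (var rt = new TraceEventSession("REALTIMESESSIONNAME"))
        {
            rt.EnableProvider("Microsoft-Windows-Kernel-Process");

            // Now stop the file-based session; the overlap window is covered twice.
            var fileSession = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach);
            fileSession.Stop();
            DateTime cutoff = DateTime.Now; // events at or before this are already in the .etl

            rt.Source.Dynamic.All += data =>
            {
                if (data.TimeStamp <= cutoff)
                    return; // duplicate: the file-based session already recorded it
                Console.WriteLine("{0:O} {1}", data.TimeStamp, data.EventName);
            };
            rt.Source.Process(); // blocks, pumping real-time events
        }
    }
}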
The other possibility is to use SetFileName in its intended way (from one file to another). This certainly solves your problem of file size growth, and is often a good way to deal with other scenarios (after all, you can start up your processing and start deleting files even as new files are being generated).
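And a sketch of SetFileName used in that intended way, rotating between output files (the paths, interval, and file count are made up):
using System;
using System.Threading;
using Microsoft.Diagnostics.Tracing.Session;

class FileRotation
{
    static void Main()
    {
        using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\trace.0.etl"))
        {
            tes.EnableProvider("Microsoft-Windows-Kernel-Process");
            for (int i = 1; i <= 5; i++)
            {
                Thread.Sleep(60000);                             // let the current file fill up
                tes.SetFileName(@"c:\temp\trace." + i + ".etl"); // switch the output file
                // Earlier files are now complete and can be processed or deleted,
                // which keeps total disk usage bounded.
            }
        }
    }
}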

WF4 InstancePersistenceCommand interrupted

I have a Windows service running workflows. The workflows are XAMLs loaded from a database (users can define their own workflows using a rehosted designer). It is configured with one instance of the SqlWorkflowInstanceStore, to persist workflows when they become idle. (It's basically derived from the example code in \ControllingWorkflowApplications from Microsoft's WCF/WF samples.)
But sometimes I get an error like below:
System.Runtime.DurableInstancing.InstanceOwnerException: The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID 'a426269a-be53-44e1-8580-4d0c396842e8' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
I've been trying to find the cause, but it is hard to reproduce in development; on production servers, however, I get it once in a while. One hint I found: when I look at the LockOwnersTable, I find the LockExpiration is set to 01/01/2000 0:0:0 and it's not getting updated anymore, while under normal circumstances it should be updated every x seconds according to the Host Lock Renewal period...
So, why would SqlWorkflowInstanceStore stop renewing this LockExpiration, and how can I detect the cause of it?
This happens because there are procedures running in the background that try to extend the lock of the instance store every 30 seconds, and it seems that once the connection to the SQL service fails, the instance store is marked as invalid.
You can see the same behaviour if you delete the instance store record from the [LockOwnersTable] table.
The proposed solution is that when this exception fires, you free the old instance store and initialize a new one:
public class WorkflowInstanceStore : IWorkflowInstanceStore, IDisposable
{
    public WorkflowInstanceStore(string connectionString)
    {
        _instanceStore = new SqlWorkflowInstanceStore(connectionString);
        InstanceHandle handle = _instanceStore.CreateInstanceHandle();
        InstanceView view = _instanceStore.Execute(handle,
            new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
        handle.Free();
        _instanceStore.DefaultInstanceOwner = view.InstanceOwner;
    }

    public InstanceStore Store
    {
        get { return _instanceStore; }
    }

    public void Dispose()
    {
        if (null != _instanceStore)
        {
            var deleteOwner = new DeleteWorkflowOwnerCommand();
            InstanceHandle handle = _instanceStore.CreateInstanceHandle();
            _instanceStore.Execute(handle, deleteOwner, TimeSpan.FromSeconds(10));
            handle.Free();
        }
    }

    private InstanceStore _instanceStore;
}
You can find the best practices for creating an instance store handle at this link:
Workflow Instance Store Best practices
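For illustration, a hedged sketch of how a host might apply this free-and-reinitialize advice; CreateWorkflowApplication stands in for however your service builds its WorkflowApplication and is not part of the answer above:
using System;
using System.Activities;
using System.Runtime.DurableInstancing;

static class Hosting
{
    // Illustrative recovery: free the stale store and register a fresh owner.
    static void RunWithStoreRecovery(string connectionString)
    {
        var store = new WorkflowInstanceStore(connectionString);
        try
        {
            WorkflowApplication wfApp = CreateWorkflowApplication(); // hypothetical factory
            wfApp.InstanceStore = store.Store;
            wfApp.Run();
        }
        catch (InstanceOwnerException)
        {
            store.Dispose();                                     // delete the stale owner
            store = new WorkflowInstanceStore(connectionString); // register a new one
            WorkflowApplication wfApp = CreateWorkflowApplication();
            wfApp.InstanceStore = store.Store;
            wfApp.Run();
        }
    }
}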
This is an old thread, but I just stumbled on the same issue.
Damir's Corner suggests checking whether the instance handle is still valid before calling the instance store. I hereby quote the whole post:
Certain aspects of Workflow Foundation are still poorly documented; the persistence framework is one of them. The following snippet is typically used for setting up the instance store:
var instanceStore = new SqlWorkflowInstanceStore(connectionString);
instanceStore.HostLockRenewalPeriod = TimeSpan.FromSeconds(30);
var instanceHandle = instanceStore.CreateInstanceHandle();
var view = instanceStore.Execute(instanceHandle,
    new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
It's difficult to find a detailed explanation of what all of this does; and to be honest, usually it's not necessary. At least not until you start encountering problems, such as InstanceOwnerException:
The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9938cd6d-a9cb-49ad-a492-7c087dcc93af' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
The error is closely related to the HostLockRenewalPeriod property, which defines how long an obtained instance handle is valid without being renewed. If you monitor the database while an instance store with a valid instance handle is instantiated, you will notice [System.Activities.DurableInstancing].[ExtendLock] being called periodically. This stored procedure is responsible for renewing the handle. If for some reason it fails to be called within the specified HostLockRenewalPeriod, the above-mentioned exception will be thrown when attempting to persist a workflow. A typical reason for this would be a temporarily inaccessible database due to maintenance or networking problems. It's not something that happens often, but it's bound to happen if you have a long-living instance store, e.g. in a constantly running workflow host such as a Windows service.
Fortunately it's not all that difficult to fix the problem once you know the cause of it. Before using the instance store you should always check if the handle is still valid, and renew it if it's not:
if (!instanceHandle.IsValid)
{
    instanceHandle = instanceStore.CreateInstanceHandle();
    var view = instanceStore.Execute(instanceHandle,
        new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
    instanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
It's definitely less invasive than the restart of the host suggested by the error message.
You have to be sure about the expiration of the owner user. Here is how I usually handle this issue:
public SqlWorkflowInstanceStore SetupSqlpersistenceStore()
{
    SqlWorkflowInstanceStore sqlWFInstanceStore = new SqlWorkflowInstanceStore(
        ConfigurationManager.ConnectionStrings["DB_WWFConnectionString"].ConnectionString);
    sqlWFInstanceStore.InstanceCompletionAction = InstanceCompletionAction.DeleteAll;
    InstanceHandle handle = sqlWFInstanceStore.CreateInstanceHandle();
    InstanceView view = sqlWFInstanceStore.Execute(handle,
        new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
    handle.Free();
    sqlWFInstanceStore.DefaultInstanceOwner = view.InstanceOwner;
    return sqlWFInstanceStore;
}
And here is how you can use this method:
wfApp.InstanceStore = SetupSqlpersistenceStore();
Hope this helps.

MessageReadPropertyFilter getting reset when using MSMQ

Strange one. We have a multi-threaded app which pulls messages off an MSMQ queue and then performs actions based on the messages. All of this is done using DTC.
Sometimes, for some reason I can't describe, we get message read errors when pulling Messages off the queue.
The code that is being used in the app:
Message[] allMessagesOnQueue = this.messageQueue.GetAllMessages();
foreach (Message currentMessage in allMessagesOnQueue)
{
    if (currentMessage.Body is IAMessageIDealWith)
    {
        // do something;
    }
}
When the currentMessage.Body is accessed, at times it throws an exception:
System.InvalidOperationException: Property Body was not retrieved when receiving the message. Ensure that the PropertyFilter is set correctly.
Now - this only happens some of the time - and it appears as though the MessageReadPropertyFilter on the queue has the Body property set to false.
How it gets like this is a bit of a mystery. The Body property is one of the defaults, and we absolutely never explicitly set it to false.
Has anyone else seen this kind of behaviour, or does someone have an idea why this value is getting set to false?
As alluded to earlier, you could explicitly set the boolean values on the System.Messaging.MessagePropertyFilter object that is accessible on your messageQueue object via the MessageReadPropertyFilter property.
If you want all data to be extracted from a message when received or peeked, use:
this.messageQueue.MessageReadPropertyFilter.SetAll(); // add this line
Message[] allMessagesOnQueue = this.messageQueue.GetAllMessages();
// ...
That may hurt the performance of reading many messages, so if you want just a few additional properties, create a new MessagePropertyFilter with custom flags:
// Specify to retrieve selected properties.
MessagePropertyFilter filter = new MessagePropertyFilter();
filter.ClearAll();
filter.Body = true;
filter.Priority = true;
this.messageQueue.MessageReadPropertyFilter = filter;
Message[] allMessagesOnQueue = this.messageQueue.GetAllMessages();
// ...
You can also set it back to default using:
this.messageQueue.MessageReadPropertyFilter.SetDefaults();
More info here: http://msdn.microsoft.com/en-us/library/system.messaging.messagequeue.messagereadpropertyfilter.aspx
I have seen it as well, and have tried initializing it with the properties I'm accessing explicitly set, and not setting them anywhere else. I periodically get the same error you are getting; my app is multi-threaded as well. What I ended up doing is trapping that error and reconnecting to MSMQ when I get it.
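A rough sketch of that trap-and-reconnect approach (the queue path, class, and recovery flow are illustrative, not the poster's exact code):
using System;
using System.Messaging;

class QueueReader
{
    private MessageQueue messageQueue = new MessageQueue(@".\private$\myQueue"); // hypothetical path

    public void ProcessAll()
    {
        try
        {
            foreach (Message m in this.messageQueue.GetAllMessages())
            {
                object body = m.Body; // throws InvalidOperationException when Body wasn't retrieved
                // ... do something with body
            }
        }
        catch (InvalidOperationException)
        {
            // Reconnect: replace the broken instance and re-assert the filter.
            this.messageQueue.Dispose();
            this.messageQueue = new MessageQueue(@".\private$\myQueue");
            this.messageQueue.MessageReadPropertyFilter.SetAll();
        }
    }
}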
Sometimes, for some reason I can't describe, we get message read errors when pulling Messages off the queue.
Are you using the same MessageQueue instance from more than one thread, without locking? In that case, you will encounter spurious changes in MessageReadPropertyFilter - at least I did, when I tried.
Why? Because
Only the GetAllMessages method is thread safe.
What can you do? Either
wrap a lock (_messageQueue) around all access to your messageQueue (see the sketch below), OR
create multiple MessageQueue instances, one per thread.
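A minimal sketch of the locking option (the queue path and class name are illustrative):
using System.Messaging;

class SafeQueueReader
{
    private readonly MessageQueue _messageQueue = new MessageQueue(@".\private$\myQueue"); // hypothetical path

    public Message[] ReadAll()
    {
        // Serialize every use of the shared MessageQueue; per MSDN, only
        // GetAllMessages itself is thread safe, so guard filter access too.
        lock (_messageQueue)
        {
            _messageQueue.MessageReadPropertyFilter.SetAll();
            return _messageQueue.GetAllMessages();
        }
    }
}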