How can I suspend a task from the REST API?
I'm using the following code:
RuntimeEngine engine = sessionBean.getEngine(implementationId);
TaskService taskService = engine.getTaskService();
taskService.start(taskId, actorId);
taskService.complete(taskId, actorId, data);
It works fine. Now I want to save the task's state between start and complete, at different moments, but I don't know how to pass the data Map in order to hold the current state.
taskService.suspend(taskId, actorId);
You can take a look at the implementation of the Save operation in the jBPM console.
That's done by saving the output values of the task, as far as I remember. By the way, suspend is not the right method to call to save the state; it means a completely different thing.
You can start looking at here: https://github.com/droolsjbpm/jbpm-console-ng/blob/master/jbpm-console-ng-human-tasks-forms/jbpm-console-ng-human-tasks-forms-client/src/main/java/org/jbpm/console/ng/ht/forms/client/editors/taskform/FormDisplayPresenter.java
and go down to the actual implementation.
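If all you need is to persist partial output data between start and complete, the sketch below shows the general idea. It assumes your jBPM version lets you cast the TaskService to org.kie.internal.task.api.InternalTaskService and that this interface exposes addContent(long, Map) and getTaskContent(long); these internal methods have varied between releases, so verify them against the version you are running.

// Sketch only: persist partial output data while the task stays InProgress.
// The InternalTaskService cast and the addContent/getTaskContent calls are assumptions; check your jBPM version.
InternalTaskService internalTaskService = (InternalTaskService) engine.getTaskService();

taskService.start(taskId, actorId);
internalTaskService.addContent(taskId, data);                                 // save intermediate state
Map<String, Object> savedState = internalTaskService.getTaskContent(taskId);  // read it back later
// ... later, in a separate call:
taskService.complete(taskId, actorId, data);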
Regards
How can I add a pre-draw hook that runs when the current context's workspace.tools changes?
I attempted to get there using bpy.types.SpaceView3D.draw_handler_add(...), which runs on every draw and checks whether workspace.tools has changed; if it has, it runs my callback. But my callback wants to add its own SpaceView3D.draw_handler_add, and doing it this way adds it a frame too late, leaving the viewport undrawn until a user event repaints the screen.
I found this post online:
https://devtalk.blender.org/t/update-property-when-active-tool-changes/11467/12
Summary: there may be a new mainline callback for this:
https://developer.blender.org/D10635
AFWS (Jan '20), replying to kaio:
This seems like a better solution. It's kind of mystery code, because I couldn't figure out where you got that code info, but then I started looking at the space_toolsystem_common.py file.

kaio (Jan '20), replying to AFWS:
Just realized there might be a cleaner way of getting callbacks for
active tools using the msgbus. Since workspace tools aren’t rna
properties themselves, figured it’s possible to monitor the
bpy_prop_collection which changes with the tool.
The handle is the workspace itself, so you shouldn't have to worry about keeping a reference. The subscription lasts until a new file is loaded, so add a load_post callback which reapplies it (a sketch of that follows the script below).
Note this doesn't proactively subscribe to workspaces added afterwards; you might need a separate callback for that.
import bpy

def rna_callback(workspace):
    idname = workspace.tools[-1].idname
    print(idname)

def subscribe(workspace):
    bpy.msgbus.subscribe_rna(
        key=workspace.path_resolve("tools", False),
        owner=workspace,
        args=(workspace,),
        notify=rna_callback)

if __name__ == "__main__":
    subscribe(bpy.context.workspace)

    # Subscribe to all workspaces:
    if 0:
        for ws in bpy.data.workspaces:
            subscribe(ws)

    # Clear all workspace subscriptions:
    if 0:
        for ws in bpy.data.workspaces:
            bpy.msgbus.clear_by_owner(ws)
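As noted above, the subscription does not survive loading a new file. Here is a minimal sketch of the load_post handler that re-applies it, assuming the subscribe() function from the script above is in scope; the handler name is illustrative.

import bpy
from bpy.app.handlers import persistent

@persistent
def resubscribe_on_load(*_args):
    # msgbus subscriptions are cleared when a new file is loaded,
    # so register them again for every workspace in the loaded file.
    for ws in bpy.data.workspaces:
        subscribe(ws)

bpy.app.handlers.load_post.append(resubscribe_on_load)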
I have events (ProductOrderRequested, ProductColorChanged, ProductDelivered...) and I want to build a golden record of my product.
But my goal is to build the golden record step by step: each session of activity will give me an updated state of my product, and I need to store each version of the state for traceability purposes.
I have a fairly simple pipeline (code is better than words):
events
    .apply("SessionWindow", Window.<KV<String, Event>>into(
            Sessions.withGapDuration(gapSession))
        .triggering(<early and late data trigger>))
    .apply("GroupByKey", GroupByKey.create())
    .apply("ComputeState", ParDo.of(new StatefulFn()))
My problem is that for a given window I have to compute the new state based on:
The previous state (i.e. the computed state of the previous window)
The events received
I would like to avoid calling an external service to get the previous state and instead get the state of the previous window. Is that possible?
In Apache Beam, state is always scoped per window (also see this answer). So I can only think of re-windowing into the global window and handling the state there. In this global StatefulFn you can store and handle the prior state(s).
It would then look like this:
events
    .apply("SessionWindow", Window.<KV<String, Event>>into(
            Sessions.withGapDuration(gapSession))
        .triggering(<early and late data trigger>))
    .apply("GroupByKey", GroupByKey.create())
    .apply("Re-window into Global Window", Window.<KV<String, Event>>into(
            new GlobalWindows())
        .triggering(<early and late data trigger>))
    .apply("ComputeState", ParDo.of(new StatefulFn()))
Please also note that as of now, Apache Beam doesn't support stateful processing for merging windows (see this issue). Therefore, your StatefulFn on a session window basis will not work properly when your triggers emit early or late results of session windows since the state is not merged. This is another reason to work with a non-merging window like the global window.
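For illustration, here is a hedged sketch of what such a StatefulFn could look like after the re-window, using the Beam Java state API (a per-key ValueState holding the last computed state). ProductState and applyEvents(...) are placeholders for your own type and golden-record logic, and Event stands for your event class; none of them are Beam classes.

import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

public class StatefulFn extends DoFn<KV<String, Iterable<Event>>, KV<String, ProductState>> {

    // Holds the last computed state per key; assumes ProductState is Serializable.
    @StateId("lastState")
    private final StateSpec<ValueState<ProductState>> lastStateSpec =
            StateSpecs.value(SerializableCoder.of(ProductState.class));

    @ProcessElement
    public void processElement(ProcessContext c,
                               @StateId("lastState") ValueState<ProductState> lastState) {
        ProductState previous = lastState.read();               // null for the first pane of this key
        List<Event> sessionEvents = new ArrayList<>();
        c.element().getValue().forEach(sessionEvents::add);

        ProductState updated = applyEvents(previous, sessionEvents);
        lastState.write(updated);                               // becomes "previous" for the next pane
        c.output(KV.of(c.element().getKey(), updated));         // emit this version for traceability
    }

    private ProductState applyEvents(ProductState previous, List<Event> events) {
        // Placeholder for the real golden-record update logic.
        return previous;
    }
}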
I need to log trace events during boot so I configure an AutoLogger with all the required providers. But when my service/process starts I want to switch to real-time mode so that the file doesn't explode.
I'm using TraceEvent and I can't figure out how to do this move correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;

using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}

using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    Thread.Sleep(timeToWait);
    tes.SetFileName(null);
    Thread.Sleep(timeToWait);
    Console.WriteLine("Done");
}
Here I wanted to make sure that I could transfer the session to real-time mode. But instead, the file I got contained events from a 15-second period instead of just 10 seconds.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part about orphans. Obviously some events might occur in the time between closing, opening, and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code alright (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting processing the events)?
I could close the session and create a new different one but then I think I'd miss some events. Or I could open a new session and then close the file-based one but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Regarding the 'auto-closing and restarting' feature: these are really questions about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal with orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work', if you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation, of course, is the touchstone of 'truth', but frankly, expecting unusual combinations to just work is generally NOT realistic.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all, they have IDENTICAL timestamps).
The other possibility is to use SetFileName in its intended way (from one file to another). This certainly solves your problem of file-size growth, and it is often a good way to deal with other scenarios (after all, you can start up your processing and start deleting files even as new files are being generated).
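To make the first recommendation concrete, here is a rough, untested sketch of that hand-off: start the real-time session first, then stop the file-based one, and filter the overlapping duplicates by timestamp while processing. The session and provider names, the file path, and the timestamp hand-off are illustrative assumptions, not a drop-in solution.

using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

class SessionHandoff
{
    static void Main()
    {
        DateTime lastFileTimestamp = DateTime.MinValue;

        // 1. Start the new real-time session while the file-based session is still running,
        //    so no events fall into a gap between the two.
        using (var realTime = new TraceEventSession("REALTIMESESSIONNAME"))
        {
            realTime.EnableProvider("Microsoft-Windows-Kernel-Process");

            // 2. Now stop the original file-based (AutoLogger) session.
            using (var fileSession = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
            {
                fileSession.Stop();
            }

            // 3. Process the file first, remembering the last timestamp seen...
            using (var fileSource = new ETWTraceEventSource(@"c:\temp\TEMPSESSIONNAME.etl"))
            {
                fileSource.Dynamic.All += e => lastFileTimestamp = e.TimeStamp;
                fileSource.Process();
            }

            // 4. ...then process the real-time stream, skipping the overlap already covered by the file.
            realTime.Source.Dynamic.All += e =>
            {
                if (e.TimeStamp > lastFileTimestamp)
                    Console.WriteLine("{0:o} {1}", e.TimeStamp, e.EventName);
            };
            realTime.Source.Process();   // blocks until the session is stopped
        }
    }
}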
I am struggling with something and I would like to ask you to point me in the right direction.
I have four tasks I want to complete, one after the other:
Fetch HTML code from the web
Parse this code and save it to Core Data storage
Use this data and batch-save to the calendar
Upload the parsed data to my own web server.
I have written all the code for this and it executes fine. However, at times it misbehaves because some of the code executes before the previous step has finished.
Example:
func startProcess() {
    fetchHTMLFromWeb()
    parseHTML()
    saveToCalendar()
    // Sometimes uploadToWeb() starts before saveToCalendar() is finished
    uploadToWeb()
}
I have tried reading up on GCD, but it is a rather complex subject and I am finding it hard to grasp.
Can you recommend any good reading on this subject?
Thank you very much!
You can use GCD to execute all your work on a background queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    self.startProcess()
}
With that, startProcess will start on a background queue/thread. To keep the steps in order, have each step kick off the next one when it finishes: in the fetchHTMLFromWeb method, call parseHTML() once the fetch has ended, and chain the remaining steps the same way (see the sketch below).
hope it helps.
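Here is a minimal sketch of that completion-handler chaining in modern Swift (DispatchQueue API); the function bodies are placeholders for your real fetch/parse/save/upload work:

import Foundation

func fetchHTMLFromWeb(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        let html = "<html></html>"          // placeholder for the real download
        completion(html)
    }
}

func parseHTML(_ html: String, completion: @escaping () -> Void) {
    // parse and save to Core Data storage, then:
    completion()
}

func saveToCalendar(completion: @escaping () -> Void) {
    // batch-save to the calendar, then:
    completion()
}

func uploadToWeb(completion: @escaping () -> Void) {
    // upload the parsed data, then:
    completion()
}

func startProcess() {
    fetchHTMLFromWeb { html in
        parseHTML(html) {
            saveToCalendar {
                uploadToWeb {
                    print("All four steps finished, in order")
                }
            }
        }
    }
}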
OK, I have been at this for a while ...
I am trying to track when the user changes input languages from the Language Bar.
I have a Text Service DLL, modeled on the MSDN and Windows SDK samples, that registers fine, and I can use the ITfActiveLanguageProfileNotifySink and ITfLanguageProfileNotifySink interfaces and see those events just fine.
I have also finally realized that when I change languages, these events occur in the application/process that currently has focus.
What I need is simply to have these events call back to my own application when it has the focus. I know I am missing something.
Any help here is appreciated.
Thanks.
I did some double-checking, and you should be able to create a thread manager object without implementing ITextStoreACP so long as you don't call ITfThreadMgr::Activate.
So, the code should look like:
HRESULT hr = CoInitialize(NULL);
if (SUCCEEDED(hr))
{
    ITfThreadMgr* pThreadMgr(NULL);
    hr = CoCreateInstance(CLSID_TF_ThreadMgr, NULL, CLSCTX_INPROC_SERVER,
                          IID_ITfThreadMgr, (LPVOID*)&pThreadMgr);
    if (SUCCEEDED(hr))
    {
        ITfSource* pSource;
        hr = pThreadMgr->QueryInterface(IID_ITfSource, (LPVOID*)&pSource);
        if (SUCCEEDED(hr))
        {
            hr = pSource->AdviseSink(IID_ITfActiveLanguageProfileNotifySink,
                                     (ITfActiveLanguageProfileNotifySink*)this,
                                     &m_dwCookie);
            pSource->Release();
        }
    }
}
Alternatively, you can use ITfLanguageProfileNotifySink; this interface is driven from the ITfInputProcessorProfiles object instead of ITfThreadMgr. There's a sample of how to set it up on the MSDN page for ITfLanguageProfileNotifySink.
For both objects, you need to keep the source object (ITfThreadMgr or ITfInputProcessorProfiles) as well as the sink object (what you implement) alive until your application exits.
Before your application exits, you need to remove the sink from the source object using ITfSource::UnadviseSink, and then release the source object (using Release). (You don't need to keep the ITfSource interface alive for the life of your application, though.)
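For reference, here is a minimal sketch of that cleanup, assuming the thread manager pointer and the cookie from the snippet above were stored in member variables; MyApp, m_pThreadMgr and m_dwCookie are illustrative names.

void MyApp::RemoveLanguageSink()
{
    // Mirror of the AdviseSink call above: unadvise the sink, then release the source object.
    if (m_pThreadMgr != NULL)
    {
        if (m_dwCookie != TF_INVALID_COOKIE)
        {
            ITfSource* pSource = NULL;
            if (SUCCEEDED(m_pThreadMgr->QueryInterface(IID_ITfSource, (LPVOID*)&pSource)))
            {
                pSource->UnadviseSink(m_dwCookie);  // remove the notification sink
                pSource->Release();                 // ITfSource is only needed transiently
            }
            m_dwCookie = TF_INVALID_COOKIE;
        }
        m_pThreadMgr->Release();                    // release the source object itself
        m_pThreadMgr = NULL;
    }
}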