How can I schedule a child activity to execute *before* the parent is completed? - .net-4.5

I have a WF (4.5) workflow activity that creates a child activity (a VisualBasicValue expression to evaluate). I need its result before the parent workflow completes.
I add the expression to the metadata like this:
private VisualBasicValue<string> _expression;

protected override void CacheMetadata(NativeActivityMetadata metadata)
{
    base.CacheMetadata(metadata);
    var visualBasicValue = (VisualBasicValue<string>)(_childActivity.Text.Expression);
    var expressionText = visualBasicValue.ExpressionText;
    _expression = new VisualBasicValue<string>(expressionText);
    metadata.AddChild(_expression);
}
I tried scheduling the activity in the Execute method like this:
protected override void Execute(NativeActivityContext context)
{
    context.ScheduleActivity<string>(_expression, OnCompleted);
    Result.Set(context, _value);
}
With a callback of:
private void OnCompleted(NativeActivityContext context, ActivityInstance completedInstance, string result)
{
    _value = result;
}
Unfortunately, the _expression activity only executes after the parent's Execute method returns. Adding it as an implementation child doesn't work either (it can't be an implementation child, because the expression references variables external to the parent).
Any ideas how to overcome this and execute within the execution context?

In code, as in real life, you can't schedule something in the past (yet :).
ScheduleActivity() places the activity in an execution queue and runs it as soon as it can. Since the parent activity is still running, _expression will only execute after it. Bottom line, it's an asynchronous call.
If you want to control when _expression is called, just use WorkflowInvoker to execute it, synchronously, whenever you want.
public class MyNativeActivity : NativeActivity
{
    private readonly VisualBasicValue<string> _expression;

    public MyNativeActivity()
    {
        // 'expression' construction logic goes here
        _expression = new VisualBasicValue<string>("\"Hi!\"");
    }

    protected override void Execute(NativeActivityContext context)
    {
        var _value = WorkflowInvoker.Invoke(_expression);
        Console.WriteLine("Value returned by '_expression': " + _value);
        // use '_value' for something else...
    }
}

Took me a few days, but I managed to resolve my own issue (without breaking the normal way WF works).
What I ended up doing: in CacheMetadata, using reflection, I iterated over the child's properties and built a LinkedList of evaluation expressions (using VisualBasicValue) for each of its arguments. Then, in the execution phase, I scheduled the first evaluation. In its callback I iterate over the remaining evaluations, scheduling the next one and adding each result to a dictionary, until they are all done.
Finally, when there are no more evaluations to schedule, I schedule a final activity that takes the dictionary as its argument and can do whatever it wants with it. That activity optionally returns the final result to the container's OutArgument.
What I previously failed to understand is that even though the scheduling occurs after the instantiating activity's execution, the callbacks run before control is returned to the host workflow application, and in that window I could do my work.

Related

Kafka Streams - Transformers with State in Fields and Task / Threading Model

I have a Transformer with a state store that uses punctuate to operate on said state store.
After a few iterations of punctuate, the operation may have finished, so I'd like to cancel the punctuate -- but only for the Task that has actually finished the operation on the partition's respective state store. The punctuate operations for the Tasks that are not done yet should keep running. To that purpose my transformer keeps a reference to the Cancellable returned by schedule().
As far as I can tell, every Task always gets its own isolated Transformer instance and every Task gets its own isolated scheduled punctuate() within that instance (?)
However, since this is effectively state, but not inside a stateStore, I'm not sure how safe this is. For instance, are there certain scenarios in which one transformer instance might be shared across tasks (and therefore absolutely no state must be kept outside of StateStores)?
public class CoolTransformer implements Transformer {

    private KeyValueStore stateStore;
    private Cancellable taskPunctuate; // <----- Will this lead to conflicts between tasks?

    public void init(ProcessorContext context) {
        this.stateStore = context.getStateStore(...);
        this.taskPunctuate = context.schedule(Duration.ofMillis(...), PunctuationType.WALL_CLOCK_TIME, this::scheduledOperation);
    }

    private void scheduledOperation(long l) {
        stateStore.get(...);
        // do stuff...

        if (done) {
            this.taskPunctuate.cancel(); // <----- Will this lead to conflicts between tasks?
        }
    }

    public KeyValue transform(Object key, Object value) {
        // do stuff
        stateStore.put(key, value);
    }

    public void close() {
        taskPunctuate.cancel();
    }
}
You might be able to look into TransformerSupplier, specifically TransformerSupplier#get(). It should return a new Transformer instance every time it is called, which ensures each task gets its own independent transformer. Transformers should also not share objects, so be careful of this with your Cancellable taskPunctuate. If either of these rules is violated you may see errors like org.apache.kafka.streams.errors.StreamsException: Current node is unknown, ConcurrentModificationException or InstanceAlreadyExistsException.
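As a rough sketch (the supplier class name is made up here, and the raw types simply match the question's code), the important point is that get() hands out a brand-new transformer, so no Cancellable is ever shared between tasks:
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.kstream.TransformerSupplier;

public class CoolTransformerSupplier implements TransformerSupplier {
    @Override
    public Transformer get() {
        // A fresh transformer (with its own state store reference and its own
        // Cancellable) is created for every task that asks for one.
        return new CoolTransformer();
    }
}
You would then pass this supplier to KStream#transform(...) so Streams can create one transformer per task, rather than reusing a single pre-built instance.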

opencensus - explicit context management

I am implementing OpenCensus tracing in my (asynchronous) JVM app.
However, I don't understand how the context is passed.
Sometimes it seems to work fine, sometimes traces from different requests appear nested for no reason.
I also have this warning appearing in the logs along with a stacktrace:
SEVERE: Context was not attached when detaching
How do I explicitly create a root span, and how can I explicitly pass a parent/context to the child spans?
In OpenCensus, the context is a concept independent of the Span or the Tags. It represents a map that is propagated with the request (it is implemented as a thread-local, so in synchronous calls it gets propagated automatically). For callbacks/async calls, use the wrap functions defined on io.grpc.Context, which OpenCensus uses as its context implementation: https://github.com/grpc/grpc-java/blob/master/context/src/main/java/io/grpc/Context.java#L589. This ensures that the entries in the context map are propagated between the different threads.
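For illustration only (io.grpc.Context is the class the answer refers to; submitWithContext is just a made-up helper name), wrapping the Runnable snapshots the caller's current context and re-attaches it when the task actually runs:
// assumes: import io.grpc.Context; import java.util.concurrent.Executor;
void submitWithContext(Executor executor, Runnable work) {
    // Context.current().wrap(...) captures the caller's context and attaches it
    // around work.run() on whichever thread the executor ends up using.
    executor.execute(Context.current().wrap(work));
}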
If you want to start a Span in one thread and end it in a different thread, use the withSpan methods from the tracer https://www.javadoc.io/doc/io.opencensus/opencensus-api/0.17.0 :
class MyClass {
    private static Tracer tracer = Tracing.getTracer();

    void handleRequest(Executor executor) {
        final Span span = tracer.spanBuilder("MyRunnableSpan").startSpan();
        // do some work before scheduling the async
        executor.execute(Context.current().wrap(tracer.withSpan(span, new Runnable() {
            @Override
            public void run() {
                try {
                    sendResult();
                } finally {
                    span.end();
                }
            }
        })));
    }
}
A bit more information about this here https://github.com/census-instrumentation/opencensus-specs/blob/master/trace/Span.md#span-creation

How to communicate user defined objects and exceptions between Service and UI in JavaFX2?

How to communicate user defined objects and user defined (checked) exceptions between Service and UI in JavaFX2?
The examples only show a String being sent into the Service as a property and an array of observable Strings being sent back to the UI.
Properties seem to be defined only for simple types. StringProperty, IntegerProperty, DoubleProperty etc.
Currently I have a user defined object (not a simple type), that I want Task to operate upon and update with the output data it produced. I am sending it through the constructor of Service which passes it on through the constructor of Task. I wondered about the stricture that parameters must be passed in via properties.
Also if an exception is thrown during Task's operation, How would it be passed from Service to the UI? I see only a getException() method, no traditional throw/catch.
Properties http://docs.oracle.com/javafx/2/binding/jfxpub-binding.htm
Service and Task http://docs.oracle.com/javafx/2/threads/jfxpub-threads.htm
Service javadocs http://docs.oracle.com/javafx/2/api/javafx/concurrent/Service.html#getException()
"Because the Task is designed for use with JavaFX GUI applications, it
ensures that every change to its public properties, as well as change
notifications for state, errors, and for event handlers, all occur on
the main JavaFX application thread. Accessing these properties from a
background thread (including the call() method) will result in runtime
exceptions being raised.
It is strongly encouraged that all Tasks be initialized with immutable
state upon which the Task will operate. This should be done by
providing a Task constructor which takes the parameters necessary for
execution of the Task. Immutable state makes it easy and safe to use
from any thread and ensures correctness in the presence of multiple
threads."
But if my UI only touches the object after Task is done, then it should be ok, right?
Service has the signature Service<V>, where <V> is a generic type parameter used to specify the type of object returned by the service's supplied task.
Let's say you want to define a service which returns a user defined object of type Foo, then you can do it like this:
class FooGenerator extends Service<Foo> {
    @Override protected Task<Foo> createTask() {
        return new Task<Foo>() {
            @Override protected Foo call() throws Exception {
                return new Foo();
            }
        };
    }
}
To use the service:
final FooGenerator fooGenerator = new FooGenerator();
fooGenerator.setOnSucceeded(new EventHandler<WorkerStateEvent>() {
    @Override public void handle(WorkerStateEvent t) {
        Foo myNewFoo = fooGenerator.getValue();
        System.out.println(myNewFoo);
    }
});
fooGenerator.start();
If you want to pass an input value into the service each time before you start or restart it, you have to be a little bit more careful. You can add the input values as settable members on the service. These setters can be called from the JavaFX application thread before the service's start method is invoked. Then, when the service's task is created, pass the parameters through to the Task's constructor.
When doing this, it is best to make all information passed back and forth between threads immutable. In the example below, a Foo object is passed as an input parameter to the service, and a Foo object based on that input is received as the output of the service. The state of Foo itself is only initialized in its constructor: instances of Foo are immutable, cannot be changed once created, and all of their member variables are final. This makes it much easier to reason about the program, as you never need to worry that another thread might overwrite the state concurrently. It seems a little bit complicated, but it does make everything very safe.
class FooModifier extends Service<Foo> {
    private Foo foo;

    void setFoo(Foo foo) { this.foo = foo; }

    @Override protected Task<Foo> createTask() {
        return new FooModifierTask(foo);
    }

    private class FooModifierTask extends Task<Foo> {
        private final Foo fooInput;

        FooModifierTask(Foo fooInput) { this.fooInput = fooInput; }

        @Override protected Foo call() throws Exception {
            Thread.sleep(1000);
            return new Foo(fooInput);
        }
    }
}

class Foo {
    private static final Random random = new Random();
    private final int answer;

    Foo() { answer = random.nextInt(100); }
    Foo(Foo input) { answer = input.getAnswer() + 42; }

    public int getAnswer() { return answer; }
}
There is a further example of providing input to a Service in the Service javadoc.
To return a custom user exception from the service, just throw the user exception during the service's task call handler. For example:
class BadFooGenerator extends Service<Foo> {
    @Override protected Task<Foo> createTask() {
        return new Task<Foo>() {
            @Override protected Foo call() throws Exception {
                Thread.sleep(1000);
                throw new BadFooException();
            }
        };
    }
}
And the exception can be retrieved like this:
final BadFooGenerator badFooGenerator = new BadFooGenerator();
badFooGenerator.setOnFailed(new EventHandler<WorkerStateEvent>() {
    @Override public void handle(WorkerStateEvent t) {
        Throwable ouch = badFooGenerator.getException();
        System.out.println(ouch.getClass().getName() + " -> " + ouch.getMessage());
    }
});
badFooGenerator.start();
I created a couple of executable samples you can use to try this out.
Properties seem to be defined only for simple types. StringProperty, IntegerProperty, DoubleProperty etc. Currently I have a user defined object (not a simple type), that I want Task to operate upon and update with the output data it produced
If you want a property that can hold one of your own classes, try SimpleObjectProperty<T>, where T could be Exception, Foo, or whatever you need.
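For example, a small sketch (reusing the Foo class from the example above) of an object property you can observe from the UI:
// assumes: import javafx.beans.property.ObjectProperty;
//          import javafx.beans.property.SimpleObjectProperty;
//          import javafx.beans.value.ChangeListener;
//          import javafx.beans.value.ObservableValue;
final ObjectProperty<Foo> fooProperty = new SimpleObjectProperty<Foo>();
fooProperty.addListener(new ChangeListener<Foo>() {
    @Override public void changed(ObservableValue<? extends Foo> observable, Foo oldFoo, Foo newFoo) {
        // runs whenever the property value changes
        System.out.println("New Foo with answer: " + newFoo.getAnswer());
    }
});
fooProperty.set(new Foo());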
Also if an exception is thrown during Task's operation, How would it be passed from Service to the UI?
You could set an EventHandler on the Task#onFailedProperty from the UI, containing the logic for what to do on failure.
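A short sketch of that, wiring the handler directly on a Task (it reuses the user defined BadFooException from the answer above, and starts the task on a plain Thread only for brevity; normally you would run it via a Service or an Executor):
final Task<Foo> task = new Task<Foo>() {
    @Override protected Foo call() throws Exception {
        throw new BadFooException(); // user defined exception
    }
};
task.setOnFailed(new EventHandler<WorkerStateEvent>() {     // setter for onFailedProperty
    @Override public void handle(WorkerStateEvent t) {
        Throwable ex = task.getException(); // the exception thrown inside call()
        // react in the UI, e.g. show an error label or a dialog
    }
});
new Thread(task).start();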
But if my UI only touches the object after Task is done, then it should be ok, right?
If you call it from your UI you are sure to be on the JavaFX application thread, so you will be OK. You can assert that you're on the JavaFX application thread by calling Platform.isFxApplicationThread().

CanExecute question using DelegateCommand in Prism

This seems like a dumb question, but I have looked through the docs for Prism and searched the internet and can't find an example... Here is the deal.
I am using a DelegateCommand in Prism. It works fine except when I assign a delegate to the CanExecute method. In another view model I have an event that takes a bool, which I am publishing to, and I can see that the event is firing and that the bool is getting passed to the view model that holds the command. But this is what I don't understand: how does CanExecute know that the state has changed? Here is some code for the example.
From the view model's ctor:
eventAggregator.GetEvent<NavigationEnabledEvent>().Subscribe(OnNavigationEnabledChange, ThreadOption.UIThread);
NavigateCommand = new DelegateCommand(OnNavigate, () => nextButtonEnabled);
Now - here is the OnNavigationEnabledChange handler:
private void OnNavigationEnabledChange(bool navigationState)
{
    nextButtonEnabled = navigationState;
}
Like - I am totally missing something here - how does the command know that nextButtonEnabled is now true?
If someone could point me to a working example that would be awesome.
OK - thanks!
This is why I don't use the implementation of DelegateCommand in Prism. I've always hated the callback-based approach for enabling/disabling commands. It's entirely unnecessary, and as far as I can tell, its only (and rather doubtful) 'benefit' is that it's consistent with how execution itself is handled. But that has always seemed pointless to me because execution and enabling/disabling are clearly very different: a button knows when it wants to execute a command but doesn't know when the command's status might have changed.
So I always end up writing something like this:
public class RelayCommand : ICommand
{
    private bool _isEnabled;
    private Action _onExecute;

    public RelayCommand(Action executeHandler)
    {
        _isEnabled = true;
        _onExecute = executeHandler;
    }

    public bool IsEnabled
    {
        get { return _isEnabled; }
        set
        {
            _isEnabled = value;
            if (CanExecuteChanged != null)
            {
                CanExecuteChanged(this, EventArgs.Empty);
            }
        }
    }

    public bool CanExecute(object parameter)
    {
        return _isEnabled;
    }

    public event EventHandler CanExecuteChanged;

    public void Execute(object parameter)
    {
        _onExecute();
    }
}
(If necessary you could modify this to use weak references to execute change event handlers, like Prism does.)
But to answer your question: how is the callback approach even meant to work? Prism's DelegateCommand offers a RaiseCanExecuteChanged method you can invoke to ask it to raise the event that'll cause command invokers to query your command's CanExecute. Given that you have to tell the DelegateCommand any time your enabled status changes, I don't see any meaningful benefit of a callback-based approach. (Sometimes you see a broadcast model though - arranging so that any change in status anywhere notifies all command invokers! In that case, a callback is useful because it means it doesn't matter if you don't know what actually changed. But requerying every single command seems unpleasant to me.)
To answer your question of how the command knows that it is now enabled:
NavigateCommand = new DelegateCommand(OnNavigate, () => nextButtonEnabled);
This overload of the DelegateCommand constructor takes two parameters: the first is the command action, and the second is the CanExecute delegate, which returns a bool.
In your example, your CanExecute delegate always returns the current value of nextButtonEnabled.
eventAggregator.GetEvent<NavigationEnabledEvent>().Subscribe(OnNavigationEnabledChange, ThreadOption.UIThread);
triggers OnNavigationEnabledChange, which changes nextButtonEnabled.
This is how it works...

What's the best way to get a return value out of an asyncExec in Eclipse?

I am writing Eclipse plugins, and frequently have a situation where a running Job needs to pause for a short while, run something asynchronously on the UI thread, and resume.
So my code usually looks something like:
Display display = Display.getDefault();
display.syncExec(new Runnable() {
    public void run() {
        // Do some calculation
        // How do I return a value from here?
    }
});
// I want to be able to use the calculation result here!
One way to do it is to give the Job class a field for the result. Another is to use a named class (rather than an anonymous one) and read its resulting data field, etc.
What's the best and most elegant approach?
I think the Container approach (shown in another answer below) is the "right" choice. It could also be genericized for type safety. The quick choice in this kind of situation is the final array idiom. The trick is that any local variables referenced from the Runnable must be final, and thus can't be modified. So instead, you use a single-element array, where the array reference is final, but the element of the array can be modified:
final Object[] result = new Object[1];
Display display = Display.getDefault();
display.syncExec(new Runnable()
{
    public void run()
    {
        result[0] = "foo";
    }
});
System.out.println(result[0]);
Again, this is the "quick" solution for those cases where you have an anonymous class and you want to give it a place to stick a result without defining a specific Container class.
UPDATE
After I thought about this a bit, I realized this works fine for listener and visitor type usage where the callback is in the same thread. In this case, however, the Runnable executes in a different thread so you're not guaranteed to actually see the result after syncExec returns. The correct solution is to use an AtomicReference:
final AtomicReference<Object> result = new AtomicReference<Object>();
Display display = Display.getDefault();
display.syncExec(new Runnable()
{
    public void run()
    {
        result.set("foo");
    }
});
System.out.println(result.get());
Changes to the value of AtomicReference are guaranteed to be visible by all threads, just as if it were declared volatile. This is described in detail here.
You probably shouldn't assume that the async Runnable has finished by the time the asyncExec call returns.
In that case, you're looking at pushing the result out into listeners/callbacks (possibly the Command pattern), or, if you do want the result available later in the same method, using something like a java.util.concurrent.Future.
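A rough sketch of the Future idea (computeOnUiThread() is a made-up placeholder for whatever the UI-side calculation is): FutureTask implements both Runnable and Future, so it can be handed straight to asyncExec and then queried from the background Job.
// assumes: import java.util.concurrent.Callable;
//          import java.util.concurrent.ExecutionException;
//          import java.util.concurrent.FutureTask;
//          import org.eclipse.swt.widgets.Display;
FutureTask<String> uiWork = new FutureTask<String>(new Callable<String>() {
    public String call() {
        // runs on the UI thread
        return computeOnUiThread(); // hypothetical UI-side calculation
    }
});
Display.getDefault().asyncExec(uiWork);
// ...the Job can keep doing other background work here...
try {
    String result = uiWork.get(); // blocks the Job thread until the UI thread has run the task
    System.out.println(result);
} catch (InterruptedException | ExecutionException e) {
    // handle cancellation or an exception thrown on the UI thread
}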
Well, if it's sync you can just have a value holder of some kind external to the run() method.
The classic is:
final Container container = new Container();
Display display = Display.getDefault();
display.syncExec(new Runnable()
{
    public void run()
    {
        container.setValue("foo");
    }
});
System.out.println(container.getValue());
Where container is just:
public class Container {
    private Object value;

    public Object getValue() {
        return value;
    }

    public void setValue(Object o) {
        value = o;
    }
}
This is of course hilarious and dodgy (even more dodgy is creating a new List and then setting and getting the 1st element) but the syncExec method blocks so nothing bad comes of it.
Except when someone comes back later and changes it to asyncExec()...