How is the callback function triggered in the keras.callbacks.Callback class?

Here is a code sample. The LoggingReporter class overrides the callback methods of Keras; as the model is trained, the callback functions are run. But I want to know where the function 'on_train_begin' is triggered.
class LoggingReporter(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        ...
    def on_epoch_begin(self, epoch, logs={}):
        ...

reporter = LoggingReporter()
r = model.fit(..., callbacks=[reporter])

It's triggered in Keras's training code; the exact file depends on whether you use .fit or .fit_generator. For the .fit method, the training loop is where on_epoch_begin (and the other hooks such as on_train_begin) are invoked on each registered callback.

Using a Simulink SimulationInput Object as a property of a class

I want to set up an object of a class which contains all variables for a Simulink model as its properties.
To later be able to run parallel simulations, I want a method that takes all the properties and feeds them to a Simulink.SimulationInput object (via the 'setVariable' method), but the setVariable call does not fill up the variables of the object 'in' when I run it.
The code looks somewhat like this:
classdef SimSetup
    properties
        mdl = 'SimulinkFile'
        SimulationTime = 2
        Plant
        in
    end
    methods
        function this = SimSetup()
            open_system(this.mdl);
            this.Plant = load('PlantData.mat','xPlant');
        end
        function createSimIn(this)
            this.in = Simulink.SimulationInput(this.mdl);
            this.in = this.in.setVariable('SimulationTime', this.SimulationTime);
        end
    end
end
Thanks in advance
The mistake was the following:
I either had to use a handle class or had to return the output of the function to the object itself (in short: function this = createSimIn(this)).

Scala remote method injection

I have a requirement: I have a method which takes another function as a parameter, and I want to get the result of that parameter function inside the method that receives it.
I have created a code snippet for this scenario:
class ProcessHandler {
  def executeInstructions(x: String => Array[String]): Unit = {
    // print the resultant list here
  }
}
object ProcessHandlerMain {
  def main(args: Array[String]): Unit = {
    val handler = new ProcessHandler
    handler.executeInstructions(instruction)
  }

  def instruction(x: String): Array[String] =
    List("words", "from", "book").toArray
}
This call, handler.executeInstructions(instruction), will be made by another process; here I am using the main method to test it out.
Here is what I do not know:
1. How can I print the result of the parameter function?
2. If I have to expose this method executeInstructions() to the client side, what is the best approach? Here we will not be passing text instructions, but rather a function similar to instruction().
Thanks
Updated: based on the response received, my code is updated to:
class ProcessHandler {
  def executeInstructions(x: String => Array[String]): Unit = {
    // print the resultant list here
    val result = x("some string here")
    result.foreach(println)
    // this array will be sent to another service
    // dispatcher.dispatch(result)
  }
}
My requirements are updated:
1. How can I print the result of the parameter function? Done.
2. If I have to expose this method executeInstructions() to the client side, what is the best approach? Here we will not be passing text instructions, but rather a function similar to instruction(). Pending (see the sketch below).
The client interface will work as below:
1. The client will call executeInstructions() remotely and pass a function as a parameter.
2. An InstructionId will be passed to the injected function, and the resultant array will be dispatched to another service.
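The pending item is the tricky part: arbitrary closures cannot simply be serialised and shipped between processes. One common workaround is to register the candidate functions on the server side and let the remote client send only an identifier. The following is a minimal sketch of that idea; InstructionRegistry, RemoteProcessHandler, and executeById are hypothetical names introduced for illustration, not part of the original code.
object InstructionRegistry {
  // id -> function mapping, filled at start-up on the server side
  private var registry = Map.empty[String, String => Array[String]]

  def register(id: String, f: String => Array[String]): Unit =
    registry += (id -> f)

  def lookup(id: String): Option[String => Array[String]] =
    registry.get(id)
}

class RemoteProcessHandler(handler: ProcessHandler) {
  // Entry point that a remoting layer (an Akka receive block, an HTTP
  // endpoint, ...) would call with the instruction id sent by the client.
  def executeById(instructionId: String): Unit =
    InstructionRegistry.lookup(instructionId) match {
      case Some(f) => handler.executeInstructions(f)
      case None    => println(s"Unknown instruction: $instructionId")
    }
}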

Activiti Java Service Task: Passivate w/out the need for receive task

This has already been answered before, but the solutions have not been working out for me.
Activiti's asynchronous behaviour is fairly simple: it only lets the user enable a flag which tells the Activiti engine to insert such a task into an execution queue (managed by a pool of threads).
What I want is not to insert my Java service task into a pool, but to passivate its behaviour and only complete the task when an external signal is received and/or a callback is called.
My attempt:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.Success

class customAsyncTask extends TaskActivityBehavior {
  override def execute(execution: ActivityExecution): Unit = {
    val future = Future {
      println(s"Executing customAsyncTask -> ${execution.getCurrentActivityName}")
    }
    future.onComplete {
      case Success(result) => leave(execution)
      case _ => // whatever
    }
  }

  def signal(processInstanceId: String, transition: String) = {
    val commandExecutor = main.processEngine.getProcessEngineConfiguration
      .asInstanceOf[ProcessEngineConfigurationImpl].getCommandExecutor
    val command = new customSignal(processInstanceId, transition)
    commandExecutor.execute(command)
  }
}
In the previous code sample I have registered a Scala future callback which, when called, will terminate the current activity and move on to the next one.
I also have a signal method which builds a custom signal that, based on the process id and a transition name, will call execution.take with the appropriate transition.
In both cases I am getting the following error (the bottom of the stack changes a little):
java.lang.NullPointerException
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.performOperationSync(ExecutionEntity.java:636)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.performOperation(ExecutionEntity.java:629)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.take(ExecutionEntity.java:453)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.take(ExecutionEntity.java:431)
at org.activiti.engine.impl.bpmn.behavior.BpmnActivityBehavior.performOutgoingBehavior(BpmnActivityBehavior.java:140)
at org.activiti.engine.impl.bpmn.behavior.BpmnActivityBehavior.performDefaultOutgoingBehavior(BpmnActivityBehavior.java:66)
at org.activiti.engine.impl.bpmn.behavior.FlowNodeActivityBehavior.leave(FlowNodeActivityBehavior.java:44)
at org.activiti.engine.impl.bpmn.behavior.AbstractBpmnActivityBehavior.leave(AbstractBpmnActivityBehavior.java:47)
Unfortunately, it is highly likely that the engine is erasing the information concerning the execution when the execute method returns, even though no complete/leave/take has been called. Even though my callback has the execution object in context, when I query for information using its process id all I receive is null.
So, what am I doing wrong here? How can I achieve the behaviour that I want?
I don't see anything specific. I would have said you need to extend a class that implements SignalableActivityBehavior, but I think TaskActivityBehavior actually does this.
While the stack indicates the NPE is coming from leave(), I am confused why leave is calling "take", since take is a transition event and really should only happen on a task labeled as synchronous.
All I can offer is that Camunda has an example implementation that is similar to your scenario; you may be able to use it as a guide:
https://github.com/camunda/camunda-bpm-examples/tree/master/servicetask/service-invocation-asynchronous
It seems that Activiti uses thread-local variables, which means that calling engine methods from the Scala threads (the Scala execution context) is pointless, since those threads do not share the context.
To solve this, all I have to do from my callback is make a signal call, much as if I were calling from a remote system. The only difference is that I do not need to save my process instance identifier.
The code looks like this:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

class AsynchronousServiceTask extends AbstractBpmnActivityBehavior {
  val exec_id: String = "executionId"

  override def execute(execution: ActivityExecution) = {
    val future = Future { println("Something") }
    future onComplete {
      // the future's thread never touches the engine directly;
      // it re-enters the engine by signalling the execution through myobject
      case _ => myobject.callSignalForMe(execution.getId)
    }
  }

  override def signal(execution: ActivityExecution, signalName: String, signalData: AnyRef) = {
    println("Signal called, leaving current activity..")
    leave(execution)
  }
}
Basically, myobject holds the runTimeEngine and will inject the signal in a ThreadLocal context. All clean and working as intended.
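The post does not show myobject itself. Below is a minimal sketch of what callSignalForMe could look like, assuming an Activiti 5-style setup where RuntimeService.signal(executionId) triggers the waiting behaviour; the object name and the way the engine reference is obtained are assumptions for illustration.
import org.activiti.engine.{ProcessEngine, RuntimeService}

// Hypothetical stand-in for "myobject" from the snippet above.
object myobject {
  // Assumed to be initialised once at application start-up.
  var processEngine: ProcessEngine = _

  def callSignalForMe(executionId: String): Unit = {
    val runtimeService: RuntimeService = processEngine.getRuntimeService
    // Delivers the signal to the waiting execution through the public API,
    // so it runs inside a proper command context; the engine then invokes
    // AsynchronousServiceTask.signal(...), which calls leave().
    runtimeService.signal(executionId)
  }
}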

How to write new widgets implementing MVC with lablgtk2?

I am writing a family of new widgets for lablgtk2, the OCaml bindings for Gtk+. Some of these widgets can edit or present fairly complex information, so I am interested in using model-view-controller or subject-observer, similar to what can be found in the GTree module.
This module defines a GTree.model and a GTree.view class, each having signals which can be connected to, and a GTree.model can be attached to one or more GTree.views.
Imitating this organisation in pure OCaml is not that trivial, because the code available in the library is a binding of the C library. I need to go through the following steps:
1. Defining new widgets
2. Defining new signals
3. Triggering these new signals
4. Defining new models
I managed to get through steps 1 and 2, but I am not sure how to do 3 and 4. How do I do these right?
Defining new widgets
The definition of new widgets itself is not problematic. The new widget is typically a specialised version of the Gnome canvas or a composite. In the former case, our new widget can inherit from the Gnome canvas as a GObj.widget and in the latter case, we can use the GObj.widget provided by the container used to hold the composite. This typically looks like
class view () =
  let vbox = GPack.vbox () in
  …
  object (self)
    inherit GObj.widget vbox#as_widget
    …
  end
Defining new signals
The bindings give plenty of examples for code defining new signals, so that we can define new signals for our widgets, as illustrated by the following snippet, considering the simple case of signals without parameters:
open GtkSignal

module Event =
struct
  let plop : ([>`widget], unit -> unit) t = {
    name = "plop_event";
    classe = `widget;
    marshaller = marshal_unit;
  }

  let fizz : ([>`widget], unit -> unit) t = {
    name = "fizz_event";
    classe = `widget;
    marshaller = marshal_unit;
  }
end

class pill_signals obj =
object (self)
  inherit ['a] GObj.gobject_signals (obj :> Gtk.widget Gobject.obj)
  method plop = self#connect Event.plop
  method fizz = self#connect Event.fizz
end
With these definitions, our view widget can expose these signals by defining an appropriate connect method:
method connect =
new pill_signals obj
Triggering the new signals
It seems that the function GtkSignal.emit serves the purpose of emitting a signal to an object, triggering the registered callbacks. This function has the following signature:
val emit :
'a Gobject.obj ->
sgn:('a, 'b) GtkSignal.t ->
emitter:(cont:('c Gobject.data_set array -> 'd) -> 'b) ->
conv:(Gobject.g_value -> 'd) -> 'b
The first two parameters are self-explanatory, but it is not that clear what the two remaining ones are. Unfortunately, there is no usage example in the lablgtk source code, as signals are emitted from the C side of the code. These two arguments seem to be related to the preparation of the signal's arguments, materialised as a 'c Gobject.data_set array, and to the retrieval of the yielded value with the argument labelled ~conv. Nevertheless, the role of the ~cont argument of the emitter still has to be clarified.
Defining the new model
The tricky part in the definition of the model is that it should inherit from GObj.object in order to be able to send and receive signals. Unfortunately, there is no function allowing one to directly define a minimal Gtk+ object. The farthest I got in this direction was:
module Model =
struct
  let create () =
    GtkObject.make ~classe:"GObject" []
end

let model () =
  new model (Model.create ())
Calling the function model to instantiate the corresponding object yields the message:
Gtk-CRITICAL **: IA__gtk_object_sink: assertion 'GTK_IS_OBJECT (object)' failed
Clearly, there is something fishy here, most probably the parameter list (the empty list in the snippet above) was too small.
LablGTK provides a nice interface to Gtk signaling mechanisms, which allows us to use it without tinkering with GtkSignal and marshalling functions. This interface is provided by GUtil and is neatly documented.
How to use GUtil, as described in the module documentation
To add ML signals to a LablGTK object:
{[
class mywidget_signals obj ~mysignal1 ~mysignal2 = object
  inherit somewidget_signals obj
  inherit add_ml_signals obj [mysignal1#disconnect; mysignal2#disconnect]
  method mysignal1 = mysignal1#connect ~after
  method mysignal2 = mysignal2#connect ~after
end

class mywidget obj = object (self)
  inherit somewidget obj
  val mysignal1 = new signal obj
  val mysignal2 = new signal obj
  method connect = new mywidget_signals obj ~mysignal1 ~mysignal2
  method call1 = mysignal1#call
  method call2 = mysignal2#call
end
]}
You can also add ML signals to an arbitrary object; just inherit from ml_signals in place of widget_signals and add_ml_signals.
{[
class mysignals ~mysignal1 ~mysignal2 = object
  inherit ml_signals [mysignal1#disconnect; mysignal2#disconnect]
  method mysignal1 = mysignal1#connect ~after
  method mysignal2 = mysignal2#connect ~after
end
]}
It is now easy to address the points 1, 2, 3, and 4 above:
1. This is fine.
2. Use GUtil to define new signals instead of GtkSignal.
3. Triggering the new signals is accomplished by the call method of ['a] GUtil.signal.
4. Since we do not use GtkSignal anymore, there is actually no problem.

Querying a continously running operation for its current state/value in Scala

I have a procedure that continuously updates a value, and I want to be able to periodically query the operation for the current value. In my particular example, every update can be considered an improvement and the procedure will eventually converge on a final, best answer, but I want/need access to the intermediate results. The speed with which the loop executes and the time it takes to converge both matter.
As an example, consider this loop:
var current = 0
while (current < 100) {
  current = current + 1
}
I want to be able to get the value of current on any loop iteration.
A solution with an Actor would be:
import akka.actor.Actor

case object Update
case object Query

class UpdatingActor extends Actor {
  var current: Int = 0

  def receive = {
    case Update =>
      current = current + 1
      if (current < 100) self ! Update
    case Query => sender ! current
  }
}
You could get rid of the var using become or FSM, but this example is clearer IMO.
Alternatively, one actor could run the operation and send updated results on every loop iteration to another actor, whose sole responsibility is updating the value and responding to queries about it. I don't know much about "agents" in Akka, but this seems like a potential use case for one.
What are better/alternative ways of doing this using Scala? I don't need to use actors; that was just one solution that came to mind.
Your actor-based solution is ok.
Sending the intermediate result after each change to a "result provider" actor would be a good idea as well if the calculation blocks the actor for a long time and you want to make sure that you can always get the intermediate result. Another alternative would be to make the actual calculator actor a child of the actor that collects the best result. That way the thing acts as a single actor from the outside, and you have the actor that has state (the current best result) separated from the actor that does the computation, which might fail.
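A minimal sketch of that parent/child arrangement with Akka; the message and class names here are illustrative, not from the original post:
import akka.actor.{Actor, ActorRef, Props}

// Illustrative messages
case object StartWork
case class Progress(value: Int)
case object QueryBest

// Child: does the computation and reports every improvement to its parent.
class Calculator extends Actor {
  def receive = {
    case StartWork =>
      var current = 0
      while (current < 100) {
        current = current + 1
        context.parent ! Progress(current) // report each intermediate result
      }
  }
}

// Parent: keeps the best result so far and answers queries, so from the
// outside the pair behaves like a single actor; if the child fails, the
// best result seen so far survives in the parent.
class ResultKeeper extends Actor {
  private var best = 0
  private val calculator: ActorRef = context.actorOf(Props[Calculator], "calculator")

  def receive = {
    case StartWork   => calculator ! StartWork
    case Progress(v) => best = best max v
    case QueryBest   => sender ! best
  }
}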
An agent would be a solution somewhere between the very low-level @volatile/AtomicInteger approach and an actor. An agent is something that can only be modified by running a transform on it (and there is a queue of transforms), but it has a current state that can always be accessed. It is not location-transparent though, so stay with the actor approach if you need that.
Here is how you would solve this with an agent. You have one thread which does a long-running calculation (simulated by Thread.sleep) and another thread that just prints out the best current result at regular intervals (also simulated by Thread.sleep).
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent._
import akka.agent.Agent

object Main extends App {
  val agent = Agent(0)

  def computation(): Unit = {
    for (i <- 0 until 100) {
      agent.send { current =>
        Thread.sleep(1000) // to simulate a long-running computation
        current + 1
      }
    }
  }

  def watch(): Unit = {
    while (true) {
      println("Current value is " + agent.get)
      Thread.sleep(1000)
    }
  }

  global.execute(new Runnable {
    def run() = computation()
  })

  watch()
}
But all in all I think an actor-based solution would be superior. For example you could do the calculation on a different machine than the result tracking.
The scope of the question is a little wide, but I'll try :)
First, your example is perfectly fine; I don't see the point of getting rid of the var. This is what actors are for: protecting mutable state.
Second, based on what you describe you don't need an actor at all.
class UpdatingActor {
  // @volatile so the polling thread reliably sees the writer's updates
  @volatile private var current = 0

  def startCrazyJob(): Unit = {
    while (current < 100) {
      current = current + 1
    }
  }

  def soWhatsGoingOn: Int = current
}
You just need one thread to call startCrazyJob and a second one that will periodically call soWhatsGoingOn.
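For completeness, a minimal sketch of driving this class from two plain threads; the demo object, the poll count, and the sleep interval are illustrative choices, not part of the original answer:
object PollingDemo extends App {
  val worker = new UpdatingActor

  // One thread runs the long computation...
  val crunchThread = new Thread(new Runnable {
    def run(): Unit = worker.startCrazyJob()
  })

  // ...and a second thread polls the current value once in a while.
  val pollThread = new Thread(new Runnable {
    def run(): Unit =
      for (_ <- 1 to 5) {
        println("Current value is " + worker.soWhatsGoingOn)
        Thread.sleep(100)
      }
  })

  crunchThread.start()
  pollThread.start()
  pollThread.join()
}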
IMHO, the actor approach is better, but it's up to you to decide if it's worth importing the akka library just for this use case.