How to handle flow removal event - pox

I want to know when a switch removes a flow rule after its hard_timeout has passed. I know ofp_flow_mod has a flags attribute where I can set OFPFF_SEND_FLOW_REM. I made the following in my controller:
def handle_flowRemoval(self, event):
    msg = event.parsed
    if msg.flags == of.OFPFF_SEND_FLOW_REM:
        print("The switch %s has raised the removal event!" % event.dpid)
In fact, this method is not triggered after the hard_timeout expires, and I don't know why. Can anyone suggest how to fix it?
Thank you

If you are sure there are flows installed on the switch, the best approach would be to add the topology module and listen to the FlowRemoved event mixin.
In your main class, add something like:
core.openflow.addListenerByName("FlowRemoved", self._handle_flow_removal)
and then, somewhere, a handler to read the event:
def _handle_flow_removal(self, event):
    """
    Handle the flow removed event here.
    """
    print(event.__dict__)  # to see the available info
The FlowRemoved event mixin is in the POX openflow/topology.py module, at line 172:
https://github.com/noxrepo/pox/blob/carp/pox/openflow/topology.py
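For completeness, here is a minimal, untested sketch of both halves (my own illustration, not from the original answer). The key point is that the switch only sends a flow-removed message if OFPFF_SEND_FLOW_REM was set on the ofp_flow_mod when the rule was installed, so the flag belongs in the install code, not in the removal handler. The handler names below are just placeholders; the match and actions are left trivial for the demo.

import pox.openflow.libopenflow_01 as of
from pox.core import core

def _handle_ConnectionUp(event):
    # Install a short-lived rule and ask the switch to report its removal.
    msg = of.ofp_flow_mod()
    msg.hard_timeout = 10                  # seconds
    msg.flags |= of.OFPFF_SEND_FLOW_REM    # without this, no flow-removed message is sent
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)             # empty match: matches all traffic (demo only)

def _handle_FlowRemoved(event):
    # event.ofp is the ofp_flow_removed message sent by the switch
    print("Switch %s reported a removed flow" % event.dpid)

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
    core.openflow.addListenerByName("FlowRemoved", _handle_FlowRemoved)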

how to run bpy callback on workspace tools change

How to add a pre-draw hook to current context workspace.tools change?
I attempted to get there using bpy.types.SpaceView3D.draw_handler_add(...), which runs on every draw and checks whether workspace.tools changed; if it changed, it runs my callback. But my callback wants to add its own SpaceView3D.draw_handler_add, and doing it this way adds it a frame too late, leaving the viewport undrawn until a user event repaints the screen.
I found this post online:
https://devtalk.blender.org/t/update-property-when-active-tool-changes/11467/12
Summary: there may be a new mainline callback in the works:
https://developer.blender.org/D10635
AFWS (Jan '20, replying to kaio):
This seems like a better solution. It's kind of mystery code, because I
couldn't figure out where you got that info, but then I started
looking at the space_toolsystem_common.py file.
kaio (Jan '20, replying to AFWS):
Just realized there might be a cleaner way of getting callbacks for
active tools using the msgbus. Since workspace tools aren’t rna
properties themselves, figured it’s possible to monitor the
bpy_prop_collection which changes with the tool.
The handle is the workspace itself, so shouldn’t have to worry about
keeping a reference. The subscription lasts until a new file is
loaded, so add a load_post callback which reapplies it.
Note this doesn’t proactively subscribe to workspaces added
afterwards. Might need a separate callback for that :joy:
import bpy

def rna_callback(workspace):
    idname = workspace.tools[-1].idname
    print(idname)

def subscribe(workspace):
    bpy.msgbus.subscribe_rna(
        key=workspace.path_resolve("tools", False),
        owner=workspace,
        args=(workspace,),
        notify=rna_callback)

if __name__ == "__main__":
    ws = bpy.context.workspace
    subscribe(bpy.context.workspace)

    # Subscribe to all workspaces:
    if 0:
        for ws in bpy.data.workspaces:
            subscribe(ws)

    # Clear all workspace subscriptions:
    if 0:
        for ws in bpy.data.workspaces:
            bpy.msgbus.clear_by_owner(ws)
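The quoted post notes that the subscription only lasts until a new file is loaded and suggests a load_post callback to reapply it. Here is a minimal sketch of that callback (my addition, not part of the original thread), assuming the subscribe() function from the snippet above is in scope:

import bpy
from bpy.app.handlers import persistent

@persistent  # keep the handler registered across .blend file loads
def _resubscribe_on_load(*_args):
    # msgbus subscriptions do not survive loading a new file,
    # so re-apply them for every workspace in the freshly loaded file.
    for ws in bpy.data.workspaces:
        subscribe(ws)

bpy.app.handlers.load_post.append(_resubscribe_on_load)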

How to write correct transitions?

I have bot code that needs to be connected to a state machine. How do I implement this correctly? Please help me; I understand in theory how to do it, but in practice it does not work.
states.py
from transitions import Machine, State
from main import *
states = ['START',
          'WAITING_YES',
          'CHOOSE SIZE',
          'GO PAY',
          'CHOOSE PAY METHOD',
          'REPEATING ORDER',
          'FINISH']
machine = Machine(states=states, initial='START')
machine.add_transition('melt', source='START', dest='WAITING_YES')
if start:
    print('-------------------',machine.state,'-------------------')
    machine.add_ordered_transitions()
if waiting_yes:
    machine.to_WAITING_YES()
    print('-------------------',machine.state,'-------------------')
    machine.next_state()
elif choose_size:
    print('-------------------',machine.state,'-------------------')
    machine.next_state()
elif choose_pay_method:
    print('-------------------',machine.state,'-------------------')
    machine.next_state()
elif repeating_order:
    print('-------------------',machine.state,'-------------------')
    machine.next_state()
elif finish:
    print('-------------------',machine.state,'-------------------')
(In theory, the code should display the state while a person uses the bot, but I want to remove this later.)
full code
A common use case for state machines is to get rid of huge 'if-then-else' constructs and to process events 'context-sensitively', meaning that what happens when an event is received depends on the current state of the machine/model.
While this is probably not of interest for maria_hoffman any longer, Google might lead someone here with the same intention:
Let's assume we want to build a simple bot that is capable of adding two numbers. We start with defining the necessary states.
states = ["INIT", "WAITING", "ADD_1", "ADD_2", "QUIT"]
We start from INIT and have a WAITING state where operation instructions are received. We could skip this one, but our bot might be extended in the future to also support multiplication. In ADD_1 we expect the first number and in ADD_2 the second number for our sum. When in state QUIT we want the system to shut down.
Next, we need to define the actual transitions that should happen:
transitions = [
    dict(trigger='next', source='WAITING', dest='ADD_1', conditions=lambda x: x == "add"),
    dict(trigger='next', source='WAITING', dest='QUIT', conditions=lambda x: x == "quit"),
    dict(trigger='next', source='WAITING', dest='WAITING', before="show_error"),
    dict(trigger='next', source='ADD_1', dest='ADD_2', before="store_value"),
    dict(trigger='next', source='ADD_2', dest='WAITING', before="get_result"),
    dict(trigger='reset', source='*', dest='WAITING'),
]
First, we see that we have just two events: next and reset. What happens when next is triggered depends on the current state. In WAITING we process three possibilities: first, when the parameter passed with the event next is equal to add, we transition to ADD_1 and wait for the first number. If the parameter is equal to quit, we transition to QUIT and shut down the system. If both condition checks fail, we use the third transition, which exits and re-enters WAITING and calls a method named show_error before doing so. When transitioning from ADD_1 to ADD_2 we call a function to store the passed value. We need to remember it for get_result, which is called when next is received in state ADD_2. Lastly, we have a reset event to roll things back if something did not work out.
Now we are almost done; we just need to define some prompts and the aforementioned methods show_error, store_value and get_result. We create a simple model for this. The idea is to show prompts depending on the state that has been entered. on_enter_<state> is the right tool for this job. We also initialize self.first in __init__ as a field to store the value of the first number that is passed in ADD_1:
class Model:
    def __init__(self):
        self.first = 0

    def on_enter_WAITING(self, *args):
        print("Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program:", end=' ')

    def on_enter_ADD_1(self, *args):
        print("Please enter the first value:", end=' ')

    def on_enter_QUIT(self, *args):
        print("Goodbye!")

    def store_value(self, value):
        self.first = int(value)
        print("Please enter the second value:", end=' ')

    def get_result(self, value):
        val = int(value)
        print(f"{self.first} + {val} = {self.first + val}")

    def show_error(self, *args):
        print("Sorry, I cannot do that.")
Note that when we want to pass arguments to callbacks, all callbacks need to be able to deal with them. The documentation of transitions states:
There is one important limitation to this approach: every callback function triggered by the state transition must be able to handle all of the arguments. This may cause problems if the callbacks each expect somewhat different data.
So, when we don't need the actual input value, we just put *args in the signature to communicate this.
That's it. Now we tie everything together and implement some rudimentary error checks and we are good to go. We create a model instance and pass it to the machine. When we receive input we pass it to the model via next and let the model do the heavy lifting. While the model is not in state QUIT we will wait for the next input:
from transitions import Machine

model = Model()
machine = Machine(model, states=states, transitions=transitions, initial='INIT')
model.to_WAITING()

while not model.is_QUIT():
    inp = input()
    try:
        model.next(inp)
    except ValueError:
        print("Oh no! Something went wrong. Let's try again!")
        model.reset()
This could be a conversation with the bot:
Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program: add
Please enter the first value: 123
Please enter the second value: 4
123 + 4 = 127
Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program: call
Sorry, I cannot do that.
Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program: add
Please enter the first value: foo
Oh no! Something went wrong. Let's try again!
Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program: add
Please enter the first value: 123
Please enter the second value: baz
Oh no! Something went wrong. Let's try again!
Hello, if you want to add two numbers enter 'add'. Enter 'quit' to close the program: quit
Goodbye!

uvm register write is stuck and never returns

I have a register block along with a corresponding register adapter set up to translate into a bus protocol.
When I call the write method on one of my registers, I can see the transaction going on and the driver completing its job, but write() is stuck somewhere.
Please see the excerpts of the driver and sequence below:
// ... uvm driver
forever begin
    seq_item_port.get_next_item(req);
    $display("DEBUG A");
    // ... do transaction
    seq_item_port.item_done();
    $display("DEBUG B");
end

// ... sequence
$display("START WRITE");
my_reg_block.my_reg1.write(...);
$display("DONE WRITE");
The result:
START WRITE
DEBUG A
DEBUG B
and then the simulation is stuck there - I never see DONE WRITE.
I am quite sure all the connect and set_sequencer calls have been made properly - otherwise my driver wouldn't see the transaction in the first place. And this is a pretty simple test - it only does that one write.
Any idea why it is stuck in the register write method even though the driver seems to have completed the transaction? I probably missed something.
In uvm_reg_map::do_bus_write(...) there's the following code snippet that handles the bus request for a register access:
bus_req.set_sequencer(sequencer);
rw.parent.start_item(bus_req,rw.prior);
if (rw.parent != null && i == 0)
    rw.parent.mid_do(rw);
rw.parent.finish_item(bus_req);
bus_req.end_event.wait_on();
Notice the end_event.wait_on(). This event is normally triggered on a sequence item by the sequencer once item_done() has been called and finish_item() returns:
`ifndef UVM_DISABLE_AUTO_ITEM_RECORDING
    sequencer.end_tr(item);
`endif
It's possible to turn this off by compiling with that define, which is what I guess is happening in your case.

Can I use Akka's TestEventListener without it polluting my test output via STDOUT?

As part of my testing, I am using EventFilter and TestEventListener to listen to log messages. However, doing so causes a massive flood in my command prompt, which makes it very hard to see my tests happening.
Sample Code:
it("should send a welcome message to the user", SystemFortressTest) {
val stub = new SubFortressBuildingPermitRefTraitImplStub
EventFilter.debug(message = "SystemFortressExchange: Received Message: SystemOutput(List(JITMP Booted))", occurrences = 1) intercept {
stub.buildASubFortress(SystemFortressBlueprintRef)
}
}
This code works, but it floods me with debug-level data because the TestEventListener prints to STDOUT by default (it subclasses the default logger, which only logs straight to STDOUT).
I can roll my own logging abstraction that sits on top of Akka's and filter messages there before they ever hit Akka's stuff, so it wouldn't pollute my command prompt, but that's an awful lot of fuss if a similar solution is already available.
The problem is, if I use the SLF4J logger, it doesn't work with EventFilter.
What I do is:
akka.loglevel = DEBUG
akka.loggers = ["akka.event.slf4j.Slf4jLogger", "akka.testkit.TestEventListener"]
.. and then in my tests:
system.eventStream.publish(Mute(EventFilter.info()))
system.eventStream.publish(Mute(EventFilter.debug()))
This way:
errors and warnings are reported twice (but those should be fixed anyway :) )
debug and info messages are only reported through slf4j
you can still use eventfilters to check for specific messages (perhaps after unmuting them first)
Note that generally I consider testing for log messages a code smell though - usually better to check for more 'observable' behaviour. Perhaps publish an event to the eventstream.
You can configure log levels or completely turn off the logger, just like with other loggers. So when you configure the TestEventListener you can specify the logging as below:
akka.loggers=["akka.testkit.TestEventListener"]
akka.loglevel = OFF
Hope this helps
A solution is to create a new listener class that extends the TestEventListener and overrides the print method to either do nothing or log through your preferred logging solution instead of printing directly to stdout. You can then specify your custom event listener as a logger in your application.conf under the akka.loggers option (see details at https://doc.akka.io/docs/akka/2.5/scala/testing.html).
So your event listener would be:
package mypackage
import akka.testkit.TestEventListener
class SilentTestEventListener extends TestEventListener {
  override def print(event: Any): Unit = ()
}
And you would add the following to application.conf:
akka {
  loggers = [mypackage.SilentTestEventListener]
}
If you turned off logging by altering the log level, or filtered noisy logs by using an event filter, then you also wouldn't be able to listen to those logs in your tests, because that is also done with event filters: the logger only runs the filters on a log message until it finds the first one that filters the message out. If it finds such a filter before the filter used in the test, then your test will never get notified about the log entry.
Another nicer solution would be to implement an event listener that defines its own way of handling logs instead of inheriting and modifying the behavior of the StdOutLogger (since you would expect a subclass of the StdOutLogger to log to stdout...), but this would require more effort than the hacky solution above, since you would have to duplicate the functionality of the TestEventListener.
When using SLF4J, here is how we can set it up. You will have to turn off SLF4J logging to use the TestEventListener for testing DEBUG logs.
class MyClassSpec extends TestKit(MyClassSpec.system) {
  "MyClass" should "test something" in {
    val actor = MyClassSpec.system.actorOf(Props(new MyClass()))
    EventFilter.debug(message = s"Testing something", occurrences = 1) intercept {
      actor ! "test"
    }
  }
}

object MyClassSpec {
  val testConfiguation = ConfigFactory.parseString("""
    akka.loglevel = DEBUG
    akka.loggers = ["akka.testkit.TestEventListener"]
    akka.use-slf4j=off
  """)

  implicit val system: ActorSystem = {
    ActorSystem("MyClassSystem", testConfiguation)
  }
}

Is there any way to intercommunicate between modules in boilerplatejs?

Regarding the BoilerplateJS example, how should we adjust those modules to intercommunicate in such a way that once the user makes a change in one module, the other related modules are updated with that change?
For example, if there is a module that retrieves inputs from the user, such as name and sales, and another module that shows the retrieved data in a table or a graph, can you explain with an example how that interconnection works in terms of event handling?
Thanks!!
In BoilerplateJS, each of your modules will have its own moduleContext object. This module context object contains two methods, 'listen' and 'notify'. Have a look at the context class at '/src/core/context.js' for more details.
The component that needs to 'listen' to the event should register for it by specifying the name of the event and a callback handler. The component that raises the event should use the 'notify' method to let others know something interesting happened (optionally passing a parameter).
Get the latest BoilerplateJS code from GitHub. I just committed changes making clickCounter a composite component, where the 'clickme' component raises an event and the 'lottery' component listens to the event and responds.
Code for notifying the Event:
moduleContext.notify('LOTTERY_ACTIVITY', this.numberOfClicks());
Code for listening to the Event:
moduleContext.listen("LOTTERY_ACTIVITY", function(activityNumber) {
    var randomNum = Math.floor(Math.random() * 3) + 1;
    self.hasWon(randomNum === activityNumber);
});
I would look at using a Publish-Subscribe library, such as Amplify. Using this technique it is easy for one module to act as a publisher of events and others to register as subscribers, listening and responding to these events in a highly decoupled manner.
As you are already using Knockout, you might be interested in first trying Ryan Niemeyer's knockout-postbox plugin. More background on this library is available online, including a demo fiddle. You can always switch to Amplify later if you need to.