I'm learning to run some of the scripts from POX SDN. Reference: https://github.com/noxrepo/pox
I'd like to ask how py.py is started. There is nothing like a C program, which has a main() where the code starts to run. In the py.py script it is all def and class; the last def is def launch(), and I never see any lines calling launch()...
It depends on which script you want to run and what kind of controller. Note that pox.py itself is the entry point: it imports each component module named on the command line and then calls that module's launch() function, which is why you never see launch() called anywhere inside the component.
For example, if you want to run the of_tutorial.py component of the POX controller:
./pox.py log.level --DEBUG misc.of_tutorial
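As a sketch of that convention, a minimal component could look like this (my_component is a made-up name for illustration; a common place for such a file is POX's ext/ directory):
# ext/my_component.py -- hypothetical minimal POX component
from pox.core import core

log = core.getLogger()

def launch():
    # pox.py imports this module and calls launch() itself when you run:
    #   ./pox.py log.level --DEBUG my_component
    log.info('my_component is up')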
Alright, so I'm learning CODESYS in school and I'm using function blocks. However, they didn't seem to update when I updated local variables, so I made the test you can see below.
As you can see, in the FB below GVL.sw1 becomes TRUE, but a doesn't. Why does it not become TRUE? I tested a friend's code and his worked just fine, but mine doesn't...
https://i.stack.imgur.com/IpPPZ.png
A comment from Reddit:
You are showing the source code for a program called "main". You have a task running called "Main_Task". The program and task are not directly related.
Is "main" being called anywhere?
So I added main to the "Main_Task" and it worked. I have no idea why it didn't work in the real assignment, but maybe I'll solve it now that I have gotten this far.
In your example you have two programs (PRG): main and PLC_PRG.
Creating a program doesn't mean that it will be executed/run. For that you need to add the program to a Task in the Task Configuration. By default, each Task is executed on every cycle according to the priority it is configured with (you could also have it executed on an event, etc.). When a Task is executed, each program added to that Task is executed in the order in which the programs are placed (you can reorder them at any time).
With that said, if you look at your Task Configuration, the MainTask only has the program PLC_PRG added, so only that program will run. The main program that you are inspecting is never run at all.
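Schematically, with the names from the screenshot, the fix is just to attach the program to the task:
Task Configuration
└─ MainTask (cyclic)
   ├─ PLC_PRG   (already attached, so it was running)
   └─ main      (attach it here; it then runs every cycle, after PLC_PRG)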
I have a Django application using a C++ library (imported via SWIG).
The C++ library launches its own thread, which calls callbacks in Python code.
I cannot set a breakpoint in the Python code, neither in PyDev nor in PyCharm.
I also tried the 'gevent compatibility' option, with no luck.
I verified that the callbacks are properly called, as logging.info dumps what is expected. Breakpoints set in other threads work fine. So it seems that Python debuggers cannot manage breakpoints in Python code called by threads created in non-Python code.
Does anyone know a workaround? Maybe there is some 'magic' thread initialization sequence I could use?
You have to set up the debugger machinery for it to work on non-Python threads. This is done automatically when a Python thread is created, but when you create a thread for which Python doesn't have any creation hook, you have to do it yourself. Note that for some frameworks, such as QThread and gevent, things are monkey-patched so that the debugger knows about the thread initialization and starts itself, but for other frameworks you have to do it yourself.
To do that, the code running on the new thread has to call:
import pydevd
pydevd.settrace(suspend=False, trace_only_current_thread=True)
Note that if you had put suspend=True, it'd simulate a manual breakpoint and would stop at that point of the code.
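For example, if the C++ library invokes a Python callback on a thread it created itself, the callback can register that thread first. A minimal sketch (on_library_event is a made-up name for illustration):
import pydevd

def on_library_event(data):
    # This runs on a thread created by the C++ library, which Python has
    # no creation hook for. Register it with the debugger first;
    # breakpoints in this function and in code it calls then work.
    pydevd.settrace(suspend=False, trace_only_current_thread=True)
    print('event received:', data)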
This is a follow-up to @fabio-zadrozny's answer.
Here is a mixin I've created that my class (which gets callbacks from a C thread) inherits from.
import pydevd

class TracingMixin(object):
    """The callbacks in the FUSE filesystem are C threads and breakpoints don't work normally.

    This mixin adds a settrace call to every dispatched callback so that we can breakpoint them."""

    def __call__(self, op, path, *args):
        pydevd.settrace(suspend=False, trace_only_current_thread=True,
                        patch_multiprocessing=True)
        return getattr(self, op)(path, *args)
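For context, here is roughly how this mixin would plug into a fusepy filesystem (the fuse import and the DebuggableFS class are illustrative assumptions, not part of the original answer):
from fuse import Operations  # fusepy, assumed for illustration

class DebuggableFS(TracingMixin, Operations):
    # fusepy routes every filesystem operation through __call__, so
    # TracingMixin.__call__ runs first, registers the C thread with
    # pydevd, and then dispatches to handlers like this one.
    def getattr(self, path, fh=None):
        # A breakpoint placed here is now hit.
        return super(DebuggableFS, self).getattr(path, fh)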
I'm trying to learn what Celery is and does, and I've gone through some basic tutorials, like First Steps with Celery and this one, and I have a few questions about Celery:
In the First Steps tutorial, you start a Celery worker and then you can basically just open an interpreter and call the task defined there:
>>> from tasks import add
>>> add.delay(4, 4)
So my questions are:
1. What's happening here? add() is a method that we wrote in the tasks file and we're calling add.delay(4,4). So we're calling a method over a method!?
2. How does this work? How does the delay method get added to add?
3. Does the Celery worker do the work of adding 4 + 4, as opposed to the work being done by the caller of that method - like it would have been if I had just defined a method called add in the interpreter and executed add(4,4)?
4. If the answer to 3 is yes, then how does Celery know it has to do some work? All we're doing is importing a method from the module we wrote and calling it. How does control get passed to the Celery worker?
Also, while answering #4, it'd be great if you could tell me how you know this. I'd be very curious to know whether these things are documented somewhere that I'm missing or failing to understand, and how I could have known the answer. Thanks much in advance!
What's happening here? add() is a method that we wrote in the tasks file and we're calling add.delay(4,4). So we're calling a method over a method!?
Everything is an object in Python. Everything has attributes. Functions/methods also have attributes. For example:
def foo(): pass
print(foo.__name__)
This is nothing special syntax-wise.
How does this work? How does the delay method get added to add?
The @app.task decorator does that.
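As a sketch, the tasks.py from the First Steps tutorial looks roughly like this (the broker URL is an assumption):
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y

# After decoration, add is a Task object rather than a plain function, so:
#   add.delay(4, 4)          - queue the call; returns an AsyncResult at once
#   add.apply_async((4, 4))  - the same, with more options
#   add(4, 4)                - still runs locally, in the calling process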
Does the Celery worker do the work of adding 4+4? As opposed to the work being done by the caller of that method?
Yes, the worker does that. Otherwise this would be pretty nonsensical. You're passing two arguments (4 and 4) to the Celery system which passes them on to the worker, which does the actual work, in this case addition.
If the answer to 3 is yes, then how does Celery know it has to do some work? All we're doing is importing a method from the module we wrote and calling it. How does control get passed to the Celery worker?
Again, the @app.task decorator abstracts a lot of magic here. This decorator registers the function as a task with the Celery app, so the worker pool knows about it. It also adds magic attributes to the same function that allow you to send the call to the worker pool, namely delay. Imagine this instead:
def foo(): pass
celery.register_worker('foo', foo)
celery.call('foo')
The decorator is essentially just doing that, without you having to repeatedly write foo in various ways. It uses the function itself as the identifier for you, purely as syntactic sugar, so you don't have to distinguish much between foo() and 'foo' in your code.
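To make the round trip concrete, a hypothetical session might look like this (it assumes a worker was started with celery -A tasks worker and that a result backend is configured, which the minimal example above does not set up):
from tasks import add

result = add.delay(4, 4)       # returns immediately with an AsyncResult
print(result.get(timeout=10))  # blocks until the worker has computed 8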
My workflow is: start ipcontroller/ipengines, then run python test_script.py several times with different parameters. This script includes a map_async call. The ipengines don't pick up changes to the code between runs of the script, and static class variables are not reset to their defaults. It seems like a magic %reset call should do the trick, but attempting to execute that command on the ipengines does not seem to do anything.
My solution was to have the ipengine start a new subprocess which completes the desired operations. This subprocess has its own memory. Not ideal, but it provides the desired functionality.
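A rough sketch of that workaround (run_isolated and worker_script.py are made-up names; the point is only that each call gets a fresh interpreter):
import subprocess
import sys

def run_isolated(param):
    # Launch a brand-new Python process, so the work starts from freshly
    # imported modules and default class variables every time.
    return subprocess.check_output(
        [sys.executable, 'worker_script.py', str(param)])

# Mapping run_isolated instead of the real function means stale module
# state on the engines no longer matters:
#   view.map_async(run_isolated, params)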
I've been doing unit testing and I ran into a weird problem.
I'm doing user authentication tests with some of my services/mappers.
I run about 307 tests altogether right now. This only really happens when I run them all in one batch.
I try to instantiate only one Zend_Application object and use that for all my tests. I only instantiate it to take care of the db connection, the session, and the autoloading of my classes.
Here is the problem.
Somewhere along the line of tests, the __destruct method of Zend_Session_SaveHandler_DbTable gets called. I have NO IDEA why, but it does.
The __destruct method renders any writing to my session objects useless, because they are marked as read-only.
I have no clue why the destruct method is being called.
It gets called many tests before my authentication tests. If I run each folder of tests individually, there is no problem; it's only when I try to run all 307 tests. I do have some tests that do database work, but my code is not closing the db connections or destructing the save handler.
Does anyone have any ideas on why this would be happening and why my Zend_Session_SaveHandler_DbTable is being destructed? Does this have anything to do with the lifetime that it has by default?
I think what was happening is that garbage collection was running. Whenever I ran the 307 tests, the garbage collector had to run, and it probably destroyed the Zend_Session_SaveHandler_DbTable for some reason. This would explain why it didn't get destroyed when fewer tests were being run. (At first I thought PHPUnit was doing the garbage collection, but PHP itself doing it makes more sense.)
Either way, my current solution is to create a new Zend_Application object for each test class, so that all the tests within that class have a fresh Zend_Application object to work with.
Here is some interesting information.
I put an echo statement in the __destruct method of the save handler.
The method was being called (X + 1) times, where X was the number of tests that I ran. If I ran 50 tests I got 51 echoes; 307 tests, 308 echoes; etc.
Here is the interesting part. If I ran only a few tests, the echoes would all come at the END of the test run. If I tried to run all 307 tests, 90 echoes would show up after what I assumed were 90 tests, and the rest would come at the end of the remaining tests. The number of echoes was X + 1 again, in this case 308.
So this is where I'm assuming this has something to do with either the tearDown method that PHPUnit calls or the PHP garbage collector. Maybe PHPUnit invokes the garbage collector at teardown. Who knows, but I'm glad I got it working, as my tests were all passing beforehand.
If any of you have a better solution, let me know. Maybe I uncovered a flaw in my code, PHPUnit, or Zend that hadn't been known before, and there is some way to fix it.
It's an old question, but I have just had the same problem and found the solution here. I think it's the right way to solve it:
Zend_Session::$_unitTestEnabled = true;