Python breakpoints not working in a C thread in PyCharm or Eclipse+PyDev

I have a Django application using a C++ library (imported via SWIG).
The C++ library launches its own thread, which calls callbacks in Python code.
I cannot set a breakpoint in the Python code, neither in PyDev nor in PyCharm.
I also tried the 'gevent compatibility' option, with no luck.
I verified that the callbacks are properly called, as logging.info logs what's expected. Breakpoints set in other threads work fine. So it seems that Python debuggers cannot manage breakpoints in Python code called from threads created in non-Python code.
Does anyone know a workaround? Maybe there is some 'magic' thread initialization sequence I could use?

You have to set up the debugger machinery for it to work on non-Python threads. This is done automatically when a Python thread is created, but when a thread is created through something Python has no creation hook into, you have to do it yourself. Note that some frameworks, such as QThread and gevent, are monkey-patched so that the debugger knows about the thread's initialization and starts tracing it automatically; for other frameworks you have to do it yourself.
To do that, from within the newly created thread, you have to call:
import pydevd
pydevd.settrace(suspend=False, trace_only_current_thread=True)
Note that if you pass suspend=True instead, it simulates a manual breakpoint and stops at that point in the code.

This is a follow-up to Fabio Zadrozny's answer.
Here is a mixin I've created that my class (which gets callbacks from a C thread) inherits from:

import pydevd

class TracingMixin(object):
    """The callbacks in the FUSE filesystem arrive on C threads, where
    breakpoints don't work normally. This mixin calls settrace on every
    callback so that we can set breakpoints in them."""

    def __call__(self, op, path, *args):
        pydevd.settrace(suspend=False, trace_only_current_thread=True,
                        patch_multiprocessing=True)
        return getattr(self, op)(path, *args)

Related

Function block not updating variable

Alright, so I'm learning CODESYS in school and I'm using function blocks. However, they didn't seem to update when updating local variables, so I made the test you can see below.
As you can see, in the FB below, "GVL.sw1" becomes True, but "a" doesn't. Why does it not become True? I tested a friend's code and his worked just fine, but mine doesn't...
https://i.stack.imgur.com/IpPPZ.png
Comment from reddit
You are showing the source code for a program called "main". You have a task running called "Main_Task". The program and task are not directly related.
Is "main" being called anywhere?
So I added main to the "main task" and it worked. I have no idea why it didn't work in the real assignment, but maybe I'll solve it now that I have gotten this far.
In your example you have 2 programs (PRG): main and PLC_PRG.
Creating a program doesn't mean that it will be executed/run. For that you need to add the program to a Task in the Task Configuration. Each Task will be, by default, executed on every cycle according to the priority they are configured with (you could also have them be executed on an event and etc. instead). When a Task is executed, each program added to that Task will be executed in the order they are placed (you can reorder them any time).
With that said, if you look at your Task Configuration, the MainTask only has the program PLC_PRG added, so only that program will run. The main program that you are inspecting is never even run.

Questions about Celery

I'm trying to learn what Celery is & does and I've gone through some basic tutorials like first steps with Celery and this one, and I have a few questions about Celery:
In the (first steps) tutorial, you start a Celery worker and then you can basically just open an interpreter & call the task defined as:
>>> from tasks import add
>>> add.delay(4, 4)
So my questions are:
What's happening here? add() is a method that we wrote in the tasks file and we're calling add.delay(4,4). So we're calling a method over a method!?
How does this work? How does the 'delay' method get added to 'add'?
Does the Celery worker do the work of adding 4+4? As opposed to the work being done by the caller of that method? - like it would have been if I had just defined a method called add in the interpreter and just executed
add(4,4)
If the answer to 3 is yes, then how does Celery know it has to do some work? All we're doing is importing a method from the module we wrote and calling that method. How does control get passed to the Celery worker?
Also, while answering #4, it'd also be great if you could tell me how you know this. I'd be very curious to know if these things are documented somewhere that I'm missing/failing to understand it, and how I could have known the answer. Thanks much in advance!
What's happening here? add() is a method that we wrote in the tasks file and we're calling add.delay(4,4). So we're calling a method over a method!?
Everything is an object in Python, and objects carry attributes. Functions/methods have attributes too. For example:
def foo(): pass
print(foo.__name__)
This is nothing special syntax-wise.
How does this work? How does the delay method get added to add?
The @app.task decorator does that.
Does the Celery worker do the work of adding 4+4? As opposed to the work being done by the caller of that method?
Yes, the worker does that. Otherwise this would be pretty nonsensical. You're passing two arguments (4 and 4) to the Celery system which passes them on to the worker, which does the actual work, in this case addition.
If the answer to 3 is yes, then how does Celery know it has to do some work? All we're doing is - importing a method from the module we wrote and call that method. How does control get passed to the Celery worker?
Again, the @app.task decorator abstracts a lot of magic here. This decorator registers the function with the Celery worker pool. It also adds magic attributes to the same function that allow you to call it in the Celery worker pool, namely delay. Imagine this instead:
def foo(): pass
celery.register_worker('foo', foo)
celery.call('foo')
The decorator is essentially just doing that, without you having to repeatedly write foo in various ways. It uses the function itself as the identifier, purely as syntactic sugar so you don't have to distinguish between foo() and 'foo' in your code.
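To make the mechanism concrete, here is a simplified, hypothetical stand-in for such a decorator in plain Python. This is not Celery's actual implementation: real Celery serializes the arguments and sends them to a broker, and a separate worker process executes the function; here delay just calls the function locally to show how the attribute gets attached.

```python
def task(func):
    """Simplified stand-in for Celery's @app.task decorator (not the real thing)."""
    def delay(*args, **kwargs):
        # Real Celery would serialize args and hand them to a broker/worker;
        # this sketch runs the function in-process to show the plumbing.
        return func(*args, **kwargs)
    func.delay = delay  # attach an extra attribute to the function object
    return func

@task
def add(x, y):
    return x + y
```

With this sketch, add(4, 4) and add.delay(4, 4) both return 8 in the current process; with the real @app.task decorator, add.delay(4, 4) returns an AsyncResult immediately while a worker performs the addition.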

Can the instructions in a thread change during execution? (OS)

I'm currently researching threads in the context of the operating system and I'm unsure if a thread is a set sequence of instructions that can be repeatedly executed or if it is filled and replaced with new instructions by the user or the operating system.
Thanks a bundle!
-Tom
I'm not quite sure what you mean - the compiled instructions for a program are stored in memory and are not changed at runtime (at least for languages which are not JIT-compiled).
A thread is an entirely separate concept from the code itself. A thread gives you the ability to be running at "two places at once" in the code. At a conceptual level, a thread is simply a container for the context that you need at any point in the execution of some code. This means that each thread has a call stack and a set of registers (which are either actually stored in the registers of a processor if the thread is running, or elsewhere if the thread is paused).
Almost all thread libraries work such that a new thread will execute some user-defined function and will then exit. This function can be long-running, just like main() (which is the function executed by the first thread in your process).
If the threads are supported by the OS (i.e. they are not "green threads"/"fibers"), they will exit by calling an OS API which tells the OS it can deallocate any data it has associated with that thread.
Sometimes, abstractions are built on top of this mechanism such that a thread or pool of threads will execute a function which simply loops over a queue of tasks to run, but the fundamental mechanism is the same. However, these abstractions are provided by user libraries built on top of the OS threading mechanisms, not by the OS itself.
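In Python, for example, the "a thread executes one user-defined function and then exits" pattern from above looks like this (standard threading module):

```python
import threading

results = []

def worker(n):
    # This function is the thread's entire life: when it returns, the thread exits.
    results.append(n * 2)

t = threading.Thread(target=worker, args=(21,))
t.start()
t.join()  # block the calling thread until worker() has returned
```

When worker returns, the thread exits and the OS can reclaim the resources associated with it; join() is how the creating thread waits for that to happen.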

Pause/Resume embedded python interpreter

Is there any possibility to pause/resume the work of embedded python interpreter in place, where I need? For example:
C++ pseudo-code part:
main()
{
    script = "python_script.py";
    ...
    RunScript(script);   // -- python script runs till the command 'stop'
    while (true)
    {
        // ... read values from some variables in the python script
        // ... do some work ...
        // ... write new values to some other variables in the python script
        ResumeScript(script);   // -- python script resumes its work where it
                                //    was stopped, not from the beginning!
    }
    ...
}
Python script pseudo-code part:
#... do some init-work
while true:
#... do some work
stop # - here script stops and C++-function RunScript()
# returns control to C++-part
#... After calling C++-function ResumeScript
# the work continues from this line
Is this possible to do with Python/C API?
Thanks
I too have recently been searching for a way to manually "drive" an embedded language and I came across this question and figured I'd share a potential workaround.
I would implement the "blocking" behavior through either a socket or some kind of messaging system. Instead of actually stopping the whole Python interpreter, just have it block while it waits for C++ to do its evaluations.
C++ will start the embedded runtime, then enter a loop of some sort that waits for Python to "throw the signal" that it's ready. For instance: C++ listens on port 5000 and starts Python; Python does its work, then connects to port 5000 on localhost; C++ sees the connection, grabs the data from Python, performs work on it, then shuffles the data back over the socket to Python, where Python receives the data and leaves the blocking loop.
I still need a way to fully pause the virtual runtime, but in your case you could achieve the same thing with a socket and some blocking behavior that uses the socket to coordinate the two pieces of code.
Good luck :)
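As a self-contained sketch of that coordination idea, the following uses a Python thread to stand in for the C++ side, and lets the OS pick a free port instead of the hard-coded 5000 mentioned above:

```python
import socket
import threading

def cpp_side(server):
    # Stand-in for the embedded C++ host: accept one connection,
    # read a value, "do some work" on it, and send the result back.
    conn, _ = server.accept()
    value = int(conn.recv(1024).decode())
    conn.sendall(str(value + 1).encode())
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port for this demo
server.listen(1)
t = threading.Thread(target=cpp_side, args=(server,))
t.start()

# Python-script side: hand a value over, then block until the "C++" side replies.
client = socket.create_connection(server.getsockname())
client.sendall(b"41")
reply = int(client.recv(1024).decode())  # recv() blocks here -- this is the "pause"
client.close()
t.join()
server.close()
```

In the real setup, the accept/recv/sendall half would live in the embedded C++ host; the Python script blocks on recv(), which is the "pause", and resumes as soon as the host sends its reply.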
EDIT: You may be able to hook the "injection" functionality used in this answer to completely stop Python; just modify it to inject a wait-loop, perhaps:
Stopping embedded Python

Cython callback causing memory corruption/segfaults

I am interfacing Python with a C++ library using Cython. I need a callback function that the C++ code can call. I also need to pass a reference to a specific Python object to this function. This is all quite clear from the callback demo.
However, I get various errors when the callback is called from a C++ thread (pthread):
Pass function pointer and class/object (as void*) to c++
Store the pointers within c++
Start new thread (pthread) running a loop
Call function using the stored function pointer and pass back the class pointer (void*)
In python: cast void* back to class/object
Call a method of above class/object (Error)
Steps 2 and 3 are in C++.
The errors are mostly segmentation faults, but sometimes I get complaints about some low-level Python calls.
I have the exact same code where I create a thread in Python and call the callback directly. This works OK. So the callback itself is working.
I do need a separate thread running in C++, since this thread is communicating with hardware and occasionally calling my callback.
I have also triple-checked all the pointers that are being passed around. They point to valid locations.
I suspect there are some problems when using Cython classes from a C++ thread..?
I am using Python 2.6.6 on Ubuntu.
So my questions are:
Can I manipulate Python objects from a non-Python thread?
If not, is there a way I can make the thread Python-compatible? (pthread)
This is the minimal callback that already causes problems when called from a C++ thread:
cdef int CheckCollision(float* Angles, void* user_data):
self = <CollisionDetector>user_data
return self.__sizeof__() # <====== Error
No, you must not manipulate Python objects without acquiring the GIL first. You must use PyGILState_Ensure() + PyGILState_Release() (see the PyGILState_Ensure documentation).
You can ensure that your object will not be deleted by Python by explicitly taking a reference with Py_INCREF(), and releasing it with Py_DECREF() when you're no longer using it. If you pass the callback to your C++ code without holding a reference anymore, your Python object may be freed, and the crash happens (see the Py_INCREF documentation).
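The reference-keeping point can be illustrated in pure Python, relying on CPython's reference counting. Here weakref is used only to observe when the object dies, and CollisionDetector is just a hypothetical stand-in for the object handed to C++ as a void*:

```python
import weakref

class CollisionDetector:
    # Hypothetical stand-in for the object whose pointer is stored on the C++ side.
    pass

det = CollisionDetector()
ref = weakref.ref(det)  # weak reference: does not keep the object alive

# While a strong reference exists (the Py_INCREF analogue), the object lives:
assert ref() is not None

del det  # drop the last strong reference -- CPython frees the object immediately
assert ref() is None  # the C++ side would now hold a dangling pointer
```

This is exactly the failure mode described above: if the only thing keeping the object alive disappears while C++ still holds its raw pointer, the next callback touches freed memory.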
Disclaimer: I'm not saying this is your issue, just giving you tips to track down the bug :)