I have a C++ library with many functions exported to Python using PyBind11. I am sure that these functions are thread-safe, and I would like to maximize multi-threading performance in Python. Normally we have to release the GIL like this:
PYBIND11_MODULE(MyModule, m) {
    m.def("foo", &foo, py::call_guard<py::gil_scoped_release>());
    m.def("bar", &bar, py::call_guard<py::gil_scoped_release>());
    ...
}
for all functions, including my class methods.
Is there a way to set GIL release as the library/application default, so that I don't have to write py::call_guard<...> every time, and possibly gain some performance, since I am sure there is no hidden function on the code path that needs the GIL?
You can write macros to replace def or try to patch pybind11 yourself, but there's no official way I'm aware of.
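For example, here is a minimal, untested sketch of the "replace def" idea, using a small variadic helper instead of a macro. The def_nogil name and the foo/bar stand-ins are made up for illustration; the helper simply forwards to m.def() and appends the call guard so you don't repeat it per binding:

#include <pybind11/pybind11.h>
#include <utility>

namespace py = pybind11;

// Stand-ins for the real thread-safe library functions.
int foo(int x) { return x + 1; }
int bar(int x) { return x * 2; }

// Hypothetical helper: forwards everything to m.def() and appends the
// GIL-release call guard, so each binding doesn't have to spell it out.
template <typename... Args>
void def_nogil(py::module &m, const char *name, Args &&...args) {
    m.def(name, std::forward<Args>(args)...,
          py::call_guard<py::gil_scoped_release>());
}

PYBIND11_MODULE(MyModule, m) {
    def_nogil(m, "foo", &foo);
    def_nogil(m, "bar", &bar);
}

For class methods you would still need an equivalent wrapper around py::class_::def, so this is a convenience, not a true library-wide default.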
I know what the $crate variable is, but as far as I can tell, it can't be used inside procedural macros. Is there another way to achieve a similar effect?
I have an example that roughly requires me to write something like this using quote and nightly Rust
quote!(
    struct Foo {
        bar: [SomeTrait; #len]
    }
)
I need to make sure SomeTrait is in scope (#len refers to an integer defined outside the snippet).
I am using procedural macros 2.0 on nightly using quote and syn because proc-macro-hack didn't work for me. This is the example I'm trying to generalize.
Since Rust 1.34, you can use extern crate self as my_crate, and use my_crate::Foo instead of $crate::Foo.
https://github.com/rust-lang/rust/issues/54647
https://github.com/rust-lang/rust/pull/57407
(Credit: Neptunepink on ##rust, irc.freenode.net)
Based on the replies at https://github.com/rust-lang/rust/issues/38356#issuecomment-412920528, it looks like there is no way to do this (as of 2018-08): you can neither refer to the proc-macro crate itself nor refer to any other crate unambiguously.
In Edition 2015 (classic Rust), you can do this (but it's hacky):
- use ::defining_crate::SomeTrait in the macro
- within third-party crates depending on defining_crate, the above works fine
- within defining_crate itself, add a module in the root:
  mod defining_crate { pub use super::*; }
In Edition 2018, even more hacky solutions are required (see this issue), though #55275 may give us a simple workaround.
You can wrap your proc-macro inside a declarative macro, and pass the $crate identifier to your proc-macro for reuse (see this commit for example). It will create a proc_macro::Ident value with the special $crate identifier.
Note that you cannot manually create such an identifier, since $ is normally invalid inside identifiers.
I implement MyClass, which contains the method method(), and I store an instance in $_ENV['key'] in test.php. In test.php, code completion works when I type $_ENV['key']->.
In test2.php I include test.php, and code completion no longer works for $_ENV['key']->.
Does anyone know how to enable this in PhpStorm?
AFAIK type tracking for arrays works within the same file only.
You can work around it with an intermediate variable (yes, it's not the nicest solution) and a small PHPDoc comment, like this:
/** @var MyClass $myVar */
$myVar = $_ENV['key'];
$myVar->
P.S.
In general, I'd suggest not using global arrays this way (or even not using global variables at all -- only for very basic stuff during bootstrap, if possible). Instead (based on your code) I'd suggest using a static class (as one of the alternatives) with a dedicated field, where you can easily give a type hint (via PHPDoc) to the class field -- this way the IDE will always know what type it is. Current PHP versions (5.5 and especially 5.6) work with objects nearly as fast as with arrays, and objects can even use less memory.
Obviously, such suggestion does not really apply if this code is not yours.
I've done some searching on this, but I can't find any information. I'm building an application in Sinatra and using the CoffeeScript templating engine. By default, the compiled code is wrapped like so:
(function() {
  // code
}).call(this);
I'd like to remove that using the --bare flag so that different files can access the classes and so forth that I'm defining. I realize that the wrapper helps protect against variable conflicts, but I'm working on two main pieces here. One is the business logic and the arrangement of data in class structures. The other is the view functionality using raphaeljs. I would prefer to keep these two pieces in separate files, but since files wrapped like this cannot access each other's data, that obviously won't work. However, if you can think of a better solution than using the --bare option, I'm all ears.
Bare compilation is simply a bad practice. Each file should export to the global scope only the public objects that matter to the rest of your app.
# foo.coffee
class Foo
  constructor: (@abc) ->
  privateVar = 123

window.Foo = Foo # export
Foo is now globally available. If that pattern isn't practical, maybe you should rethink your structure a bit. If you have to export too many things, you can nest and namespace things better, so that more data can be exposed through fewer global variables.
I support Alex's answer, but if you absolutely must do this, I believe my answer to the same question for Rails 3.1 is applicable here as well: put the line
Tilt::CoffeeScriptTemplate.default_bare = true
somewhere in your application.
I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app.
Can you clarify why you do not want to use the dictionaries?
Another thing you could do is process the string in C++ and do string substitution before you eval the code....
Possibly. I've never tried this but in theory you might be able to implement a small extension class in C++ that overrides the __getattr__ method (probably via the tp_as_mapping or tp_getattro function pointers of PyTypeObject). Pass an instance of this as locals and/or globals to PyEval_EvalCode and your C++ method should be asked to resolve your lions, tigers, & bears for you.
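For what it's worth, here is a rough, untested sketch of that idea against the CPython C API. The Resolver type, its hard-coded bear/lion/bunny values and the run_expression helper are all invented for illustration; globals stays a plain dict (as far as I know it has to be a real dict), while locals is the custom mapping whose subscript hook gets asked for names:

#include <Python.h>
#include <string>

// mp_subscript: called when the evaluated code looks up a name in `locals`.
static PyObject *Resolver_subscript(PyObject *self, PyObject *key) {
    const char *name = PyUnicode_AsUTF8(key);
    if (!name) return nullptr;
    std::string n(name);
    if (n == "bear")  return PyLong_FromLong(1);   // call back into the app here
    if (n == "lion")  return PyLong_FromLong(2);
    if (n == "bunny") return PyLong_FromLong(3);
    PyErr_SetObject(PyExc_KeyError, key);          // unknown name -> KeyError
    return nullptr;
}

static PyMappingMethods Resolver_as_mapping = {
    nullptr,             // mp_length
    Resolver_subscript,  // mp_subscript
    nullptr              // mp_ass_subscript
};

static PyTypeObject ResolverType = {
    PyVarObject_HEAD_INIT(nullptr, 0)
    "embedded.Resolver",  // tp_name
    sizeof(PyObject),     // tp_basicsize
};

// Assumes Py_Initialize() has already been called.
static int run_expression() {
    ResolverType.tp_as_mapping = &Resolver_as_mapping;
    ResolverType.tp_flags = Py_TPFLAGS_DEFAULT;
    if (PyType_Ready(&ResolverType) < 0) return -1;

    PyObject *code = Py_CompileString("bear + lion * bunny", "<embedded>", Py_eval_input);
    if (!code) return -1;

    PyObject *globals = PyDict_New();
    PyDict_SetItemString(globals, "__builtins__", PyEval_GetBuiltins());
    PyObject *locals = PyObject_New(PyObject, &ResolverType);

    PyObject *result = PyEval_EvalCode(code, globals, locals);
    int ok = (result != nullptr) ? 0 : -1;
    // ... use result, handle errors ...
    Py_XDECREF(result);
    Py_DECREF(locals);
    Py_DECREF(globals);
    Py_DECREF(code);
    return ok;
}

As far as I can tell this only stands a chance for code compiled at module/expression level, which looks names up in locals through the generic mapping protocol; code inside function bodies uses different lookup opcodes that bypass a non-dict locals object. So treat it as a starting point rather than a drop-in solution.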
Do you use a table of contents to list all the functions (and maybe variables) of a class at the beginning of a big source code file? I know that the alternative to that kind of listing would be to split up big files into smaller classes/files, so that their class declarations would be self-explanatory enough, but some complex tasks require a lot of code. I'm not sure whether it's really worth spending your time subdividing an implementation into multiple files. Or is it OK to create an index listing in addition to the class/interface declaration?
EDIT:
To better illustrate how I use a table of contents, here is an example from my hobby project. It's actually not listing functions but code blocks inside a function, but you can probably get the idea anyway.
/*
CONTENTS
Order_mouse_from_to_points
Lines_intersecting_with_upper_point
Lines_intersecting_with_both_points
Lines_not_intersecting
Lines_intersecting_bottom_points
Update_intersection_range_indices
Rough_method
Normal_method
First_selected_item
Last_selected_item
Other_selected_item
*/
void SelectionManager::FindSelection()
{
    // Order_mouse_from_to_points
    ...
    // Lines_intersecting_with_upper_point
    ...
    // Lines_intersecting_with_both_points
    ...
    // Lines_not_intersecting
    ...
    // Lines_intersecting_bottom_points
    ...
    // Update_intersection_range_indices
    for(...)
    {
        // Rough_method
        ...
        // Normal_method
        if(...)
        {
            // First_selected_item
            ...
            // Last_selected_item
            ...
            // Other_selected_item
            ...
        }
    }
}
Notice that the index items don't have spaces. Because of this I can click on one of them and press F4 to jump to the item's usage, and F2 to jump back (simple Visual Studio find-next/find-previous shortcuts).
EDIT:
Another alternative to this indexing is to use collapsed C# regions. You can configure Visual Studio to show only region names and hide all the code. Of course, keyboard support for that kind of source code navigation is pretty cumbersome...
"I know that alternative to that kind of listing would be to split up big files into smaller classes/files, so that their class declaration would be self-explanatory enough."
Correct.
"but some complex tasks require a lot of code"
Incorrect. While a "lot" of code may be required, long runs of code (over 25 lines) are a really bad idea.
"actually not listing functions, but code blocks inside a function"
Worse. A function that needs a table of contents must be decomposed into smaller functions.
"I'm not sure is it really worth it spending your time subdividing implementation into multiple of files?"
It is absolutely mandatory that you split things into smaller files. The folks who maintain, adapt and reuse your code need all the help they can get.
"is it ok to create an index-listing additionally to the class/interface declaration?"
No.
If you have to resort to this kind of trick, it's too big.
Also, many languages have tools to generate API docs from the code. Java, Python, C, C++ have documentation tools. Even with Javadoc, epydoc or Doxygen you still have to design things so that they are broken into intellectually manageable pieces.
Make things simpler.
Use a tool to create an index.
If you create a big index, you'll have to maintain it as you change your code. Most modern IDEs create a list of class members anyway, so it seems like a waste of time to create such an index.
I would never ever do this sort of busy-work in my code. The most I would do manually is insert a few lines at the top of the file/class explaining what this module did and how it is intended to be used.
If a list of methods and their interfaces would be useful, I generate them automatically, through a tool such as Doxygen.
I've done things like this. Not whole tables of contents, but a similar principle -- just ad-hoc links between comments and the exact piece of code in question. Also to link pieces of code that make the same simplifying assumptions that I suspect may need fixing up later.
You can use Visual Studio's task list to get a listing of certain types of comment. The format of the comments can be configured in Tools|Options, Environment\Task List. This isn't something I ended up using myself but it looks like it might help with navigating the code if you use this system a lot.
If you can split your method like that, you should probably write more methods. After this is done, you can use an IDE to give you the static call stack from the initial method.
EDIT: You can use Eclipse's 'Show Call Hierarchy' feature while programming.
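To illustrate the "write more methods" point with the question's own example, here is a rough sketch of what the decomposition might look like (all helper names are invented from the question's comment labels). The class declaration then acts as the table of contents, and the IDE's call hierarchy replaces the manual index:

class SelectionManager
{
    // ... existing members ...

    // Each block from the CONTENTS comment becomes a named private method.
    void OrderMouseFromToPoints();
    void FindLinesIntersectingUpperPoint();
    void FindLinesIntersectingBothPoints();
    void FindLinesNotIntersecting();
    void FindLinesIntersectingBottomPoints();
    void UpdateIntersectionRangeIndices();  // contains the rough/normal loop

public:
    void FindSelection();
};

void SelectionManager::FindSelection()
{
    OrderMouseFromToPoints();
    FindLinesIntersectingUpperPoint();
    FindLinesIntersectingBothPoints();
    FindLinesNotIntersecting();
    FindLinesIntersectingBottomPoints();
    UpdateIntersectionRangeIndices();
}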