Boost python, calling function objects with a namespace - boost-python

I am embedding Python in my C++ application using Boost.Python.
I would like to be able to call a Boost.Python function object and associate a global namespace with that function call. Specifically, the simplified relevant code is:
bp::object main = bp::import("__main__");
bp::object main_namespace = main.attr("__dict__");

// Put the function runPyProg in main_namespace.
bp::object PyProg = exec(
    "import cStringIO\n"
    "import sys\n"
    "sys.stderr = cStringIO.StringIO()\n"
    "def runPyProg(exp):\n"
    " print exp\n"
    " exec(exp)\n"
    " return\n"
    "\n", main_namespace);

// Now call the python function runPyProg with an argument.
bp::object py_fn = main.attr("runPyProg");
py_fn(expStr);
I know that when I use the Boost.Python exec() function, I can pass in the global namespace, as shown above. My question is: how do I associate main_namespace with the Python function when I call py_fn? My final goal is for the local variables from runPyProg to end up in main_namespace.
Thank you.

If I understand the question correctly, then it should be as simple as specifying the context in which exec will execute. A function or method can access the namespace in which it is defined via globals(). Thus, calling globals() from within runPyProg() will return the Python equivalent of main_namespace. Additionally, exec takes two optional arguments:
The first argument specifies the dictionary that will be used for globals(). If the second argument is omitted, then it is also used for locals().
The second argument specifies the dictionary that will be used for locals(). Variable changes occurring within exec are applied to locals().
Therefore, change:
exec exp
to
exec exp in globals()
and it should provide the desired behavior, where exp can interact with global variables in main_namespace.
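For instance, at the pure Python level the difference looks like this (a small Python 2 sketch, separate from the Boost.Python code; the function names are only for illustration):
def run_local(exp):
    exec exp               # assignments go to run_local's locals and are discarded

def run_global(exp):
    exec exp in globals()  # assignments go to the module's global namespace

run_local("x = 1")
print "x" in globals()     # prints False
run_global("x = 2")
print x                    # prints 2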
Here is a basic example:
#include <boost/python.hpp>
#include <iostream>

int main()
{
    Py_Initialize();
    namespace python = boost::python;

    python::object main = python::import("__main__");
    python::object main_namespace = main.attr("__dict__");

    // Put the function runPyProg in main_namespace.
    python::exec(
        "def runPyProg(exp):\n"
        " print exp\n"
        " exec exp in globals()\n"
        " return\n"
        "\n", main_namespace);

    // Now call the python function runPyProg with an argument.
    python::object runPyProg = main.attr("runPyProg");

    // Set x in python and access it from C++.
    runPyProg("x = 42");
    std::cout << python::extract<int>(main.attr("x")) << std::endl;

    // Set y from C++ and access it within python.
    main.attr("y") = 100;
    runPyProg("print y");

    // Access and modify x in python, then access it from C++.
    runPyProg("x += y");
    std::cout << python::extract<int>(main.attr("x")) << std::endl;
}
Commented output:
x = 42    // set from python
42        // print from C++
          // y set to 100 from C++
print y   // print y from python
100       // value of y printed from python
x += y    // access and modify from python
142       // print x from C++

Related

Can the equivalent of Tcl "chan push" be implemented in C code?

I have an embedded Tcl interpreter and want to redirect its stderr and stdout to a console widget in the application.
Using a chan push command for stderr seems to work (not much testing yet), as explained here:
TCL: Redirect output of proc to a file
I could have a file with the required Tcl namespace definition, etc., and do a Tcl_Eval to source that script after creating an interp with Tcl_CreateInterp.
Can I do the same thing using Tcl C library calls instead of running the Tcl commands via a Tcl_Eval?
To implement a channel transformation in C, you first have to define a Tcl_ChannelType structure. Such a structure specifies a name for the transformation and pointers to functions for the different operations that may be done on a channel. Next, you implement the functions that perform those operations. The most important ones are inputProc and outputProc. You also have to implement a watchProc. The pointers for other operations can be set to NULL, if you don't need them.
For your example it may look something like:
static const Tcl_ChannelType colorChannelType = {
    "color",
    TCL_CHANNEL_VERSION_5,
    NULL,                   /* closeProc */
    ColorTransformInput,
    ColorTransformOutput,
    NULL,                   /* seekProc */
    NULL,                   /* setOptionProc */
    NULL,                   /* getOptionProc */
    ColorTransformWatch,
    NULL,                   /* getHandleProc */
    NULL,                   /* close2Proc */
    NULL,                   /* blockModeProc */
    NULL,                   /* flushProc */
    NULL,                   /* handlerProc */
    NULL,                   /* wideSeekProc */
    NULL,                   /* threadActionProc */
    NULL                    /* truncateProc */
};
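The driver callbacks named in the structure have to match the Tcl driver signatures. A minimal sketch of the output and watch callbacks for a console redirect might look like the following; appendToConsole() is a hypothetical placeholder for whatever hands the text to your widget, not a Tcl API, and the input callback would be analogous for reads:
/* Sketch only: swallow everything written to the channel and hand it to
 * the application's console. appendToConsole() is a hypothetical helper. */
static int
ColorTransformOutput(ClientData instanceData, const char *buf,
                     int toWrite, int *errorCodePtr)
{
    (void) errorCodePtr;                          /* no error path in this sketch */
    appendToConsole(instanceData, buf, toWrite);  /* e.g. forward to a Qt widget */
    return toWrite;                               /* report every byte as written */
}

static void
ColorTransformWatch(ClientData instanceData, int mask)
{
    /* Nothing to schedule for a write-only sink. A transformation that also
     * reads would pass the interest mask on to the channel below it
     * (see Tcl_GetStackedChannel and the tclZlib.c example). */
    (void) instanceData;
    (void) mask;
}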
Then, when you want to push the transformation onto a channel:
chan = Tcl_StackChannel(interp, &colorChannelType, clientData,
                        Tcl_GetChannelMode(channel), channel);
For a complete example from the Tcl sources, see tclZlib.c
Not really an answer to my question, but maybe it will help someone to see what works: here is the Tcl code, run via Tcl_Eval, that does the redirection.
proc redir_stdout {whichChan args} {
    switch -- [lindex $args 0] {
        initialize {
            return {initialize write finalize}
        }
        write {
            ::HT_puts $whichChan [lindex $args 2]
        }
        finalize {
        }
    }
}

chan push stderr [list redir_stdout 1]
chan push stdout [list redir_stdout 2]
Both chan push commands use the same proc, but pass a different identifier (1 or 2) to indicate whether stdout or stderr originated the output.
HT_puts is an extension provided by the C code:
Tcl_CreateObjCommand(interp,"HT_puts",putsCmd,(ClientData) NULL,NULL);
int TclInterp::putsCmd(ClientData, Tcl_Interp *, int objcnt, Tcl_Obj *CONST *objv)
{
    // Expect exactly: HT_puts <channel id> <text>
    if (objcnt != 3)
        return TCL_ERROR;

    int length;
    int whichChan;
    Tcl_GetIntFromObj(interp, objv[1], &whichChan);
    //qDebug() << "Channel is $whichChan";

    QString out = Tcl_GetStringFromObj(objv[2], &length);

    // Color by origin: 1 = stderr (red), 2 = stdout (white).
    QColor textColor;
    if (whichChan == 1)
        textColor = QColor(Qt::red);
    else
        textColor = QColor(Qt::white);

    console->putData(out.toUtf8(), textColor);
    //qDebug() << out;
    return TCL_OK;
}
Text forwarded from stderr gets colored red and text from stdout gets colored white.
And, as I mentioned above, each subsequent command that gets executed via Tcl_Eval needs to have the Tcl_Eval return value processed something like this:
if (rtn != TCL_OK)
{
    QString output = Tcl_GetVar(interp, "errorInfo", TCL_GLOBAL_ONLY);
    console->putData(output.toUtf8(), QColor(Qt::red));
    //qDebug("Failed Tcl_Eval: %d \n%s\n", rtn,
}
This gets what tclsh normally prints to stderr on a TCL_ERROR into the console instead of the app's stderr.
I was planning to do the equivalent in C to eliminate the need to run Tcl code in the interpreter for the redirect, but really there's no need for that.
The Tcl_Eval that does the redirection is done right after doing the Tcl_CreateInterp. Any subsequent Tcl_Evals using that interp will have stdout and stderr redirected to my application's console.
Besides, I'm having trouble understanding how to use Tcl_StackChannel and can't find an example I can follow.
Honestly, I can't say that I completely understand the Tcl implementation. I made some assumptions about what gets passed to the proc used in the chan push command, based on the referenced thread.
It looks like the proc is called with the list specified in the chan push command AND an args list. The first element of the args list is a name like "write" or "initialize". The third element looks like the string to be printed.
I'm still trying to find a definition of what gets passed without having to dig into something like namespace ensemble.
So it's likely that this Tcl code isn't the best implementation, but it's working so far (with limited testing).

Lua inheritance on existing object

I am writing a new constructor and I have something like this:
function Map:new(path, world, debug)
    local map = sti(path, { "box2d" })
    return map
end

function Map:update(dt)
    print('call this')
end
sti is a third-party library that constructs a class object.
What I am trying to do is make it so when I call:
map:update(dt)
it calls the functions I have declared. If not found, it calls the actual function that sti sets up on the object.
I've tried stuffing around with metatables but can't seem to get my functions to take priority over the functions supplied by the third-party library.
Reading the source code for what I believe is the library you're using (Simple-Tiled-Implementation), I figured out it actually overrides your metatable with another one:
local function new(map, plugins, ox, oy)
    local dir = ""

    if type(map) == "table" then
        map = setmetatable(map, Map) -- Here
    else
        -- Check for valid map type
        local ext = map:sub(-4, -1)
        assert(ext == ".lua", string.format(
            "Invalid file type: %s. File must be of type: lua.",
            ext
        ))

        -- Get directory of map
        dir = map:reverse():find("[/\\]") or ""
        if dir ~= "" then
            dir = map:sub(1, 1 + (#map - dir))
        end

        -- Load map
        map = setmetatable(assert(love.filesystem.load(map))(), Map) -- Or here
    end

    map:init(dir, plugins, ox, oy)

    return map
end
The function above is defined here
You'll need to pass a table argument as map instead of the path; there you can define update(), which will take precedence over the metatable provided by STI.
I believe you can copy the procedure STI uses to load your map, and provide it with a table containing the function you wish to define:
-- Check for valid map type
local ext = map:sub(-4, -1)
assert(ext == ".lua", string.format(
    "Invalid file type: %s. File must be of type: lua.",
    ext
))

-- Get directory of map
local dir = map:reverse():find("[/\\]") or ""
if dir ~= "" then
    dir = map:sub(1, 1 + (#map - dir))
end

-- Load map
local map = assert(love.filesystem.load(map))()

function map:update()
    -- Do things
end

sti(map, { "box2d" })
Unfortunately, sti declares 'local dir' at the top of the function, so copying the code did not work.
I found a solution, however: I have made myself a way to easily set a class up as a proxy in Lua:
-- Forward a function call from oldSelf:fn(...) to newSelf:fn(...)
function utils.forwardFunc(fn, newSelf)
    return function(oldSelf, ...)
        local function __NULL__() end
        return (fn or __NULL__)(newSelf, ...)
    end
end

-- Make a function fn(...) call newSelf:fn(...)
function utils.func(fn, newSelf)
    return function(...)
        local function __NULL__() end
        return (fn or __NULL__)(newSelf, ...)
    end
end

-- Forward any undefined functions called on 'from' to 'to'.
-- If 'to' is a function, it acts as a dynamic proxy, in case you are changing
-- what class you are proxying to on the fly. For example, a state machine
-- proxies to the current state.
function utils.proxyClass(from, to)
    local mt = getmetatable(from)
    setmetatable(from, {__index = function(_, func)
        if mt and mt[func] then
            return mt[func]
        end

        local forwardTo = to
        if type(to) == 'function' then
            forwardTo = to(from)
        end

        if type(forwardTo[func]) == 'function' then
            return utils.forwardFunc(forwardTo[func], forwardTo)
        else
            return forwardTo[func]
        end
    end})
end
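With that helper, usage for the original problem could look something like this (a sketch only; it assumes the utils table above and an STI map, and the path "map.lua" is illustrative):
local map = sti("map.lua", { "box2d" })
local wrapper = {}

function wrapper:update(dt)
    print('call this')  -- my override runs first...
    map:update(dt)      -- ...and can still delegate to the STI map
end

utils.proxyClass(wrapper, map)

wrapper:update(0.016)   -- uses the function defined on wrapper
wrapper:draw()          -- not defined on wrapper, so forwarded to the STI map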

Pre-compile textual replacement macro with arguments

I am trying to create some kind of a dut_error wrapper: something that will take some arguments and assemble them in a specific way into a dut_error.
I can't use a method to replace the calls to dut_error because, to my understanding, only a dut_error (or dut_errorf) can come after check that ... else. And indeed, if I try to do something like:
my_dut_error(arg1: string, arg2: string) is {
    dut_error("first argument is ", arg1, " and second argument is ", arg2);
};
check that FALSE else my_dut_error("check1", "check2");
I get an error:
*** Error: Unrecognized exp
[Unrecognized expression 'FALSE else my_dut_error("check1", "check2")']
at line x in main.e
check that FALSE else my_dut_error("check1", "check2");
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So I thought about defining a macro to simply do a textual replace from my wrapper to an actual dut_error:
define <my_dut_error'exp> "my_dut_error(<arg1'name>, <arg2'name>)" as {
    dut_error("first argument is ", <arg1'name>, " and second argument is ", <arg2'name>)
};
But I got the same error.
Then I read about the preprocessor directive #define, so I tried:
#define my_dut_error(arg1, arg2) dut_error("first argument is ", arg, " and second argument is ", arg2)
But that just gave a syntax error.
How can I define a pre-compiled textual replacement macro that takes arguments, similar to C?
The reason I want to do this is to achieve some sort of an "interface" to dut_error, so that all errors have a consistent structure. This way, different people writing different errors will only pass the arguments required by that interface, and internally an appropriate message will be created.
Not sure I understood what you want to do in the wrapper, but perhaps you can achieve what you want by using the dut_error_struct.
It has a set of APIs which you can use as hooks (to do something when the error is caught) and to query the specific error.
For example:
extend dut_error_struct {
    pre_error() is also {
        if source_method_name() == "post_generate" and
           source_struct() is a BLUE packet {
            out("\nProblem in generation? ", source_location());
            // do something for error during generation
        };
    };
    write() is first {
        if get_message() ~ "AHB Error..." {
            ahb_monitor::increase_errors();
        };
    };
};
dut_error accepts one parameter, one string, but you can decide on a "separator" that will define two parts to the message.
E.g., instruct people to write "XXX" in the message, between the "first arg" and the "second arg":
check that legal else dut_error("ONE thing", "XXX", "another thing");
check that x < 7 else dut_error("failure ", "XXX", "of x not 7 but is ", x);
extend dut_error_struct {
    write() is first {
        var message_parts := str_split(get_message(), "XXX");
        if message_parts.size() == 2 {
            out("First part of message is ", message_parts[0],
                "\nand second part of message is ", message_parts[1]
            );
        };
    };
};
I could get pretty close to what I want using the dut_errorf method combined with a preprocessor directive defining the format string:
#define DUT_FORMAT "first argument is %s and second argument is %s"
check that FALSE else dut_errorf(DUT_FORMAT, "check1", "check2");
but I would still prefer a way that doesn't require this DUT_FORMAT directive and instead uses dut_error_struct or something similar.

Creating a numpy python string array with pybind11

I am trying to modify a numpy string array from C++ with pybind11. The code I am using has the following structure:
py::array_t<py::str> process_array(py::array_t<py::str> input);

PYBIND11_EMBEDDED_MODULE(fast_calc, m) {
    m.def("process_array", process_array);
}

py::array_t<py::str> process_array(py::array_t<py::str> input) {
    auto buf = input.request();
    cout << &buf;
    return input;
}
The problem I face is this error message:
pybind11/numpy.h:1114:19: error: static assertion failed: Attempt to use a non-POD or unimplemented POD type as a numpy dtype
static_assert(is_pod_struct::value, "Attempt to use a non-POD or unimplemented POD type as a numpy dtype");
Not sure what the catch is. In Python you can create numpy string arrays, so what am I doing wrong?
Thanks.
Fixed-length strings are supported in pybind11 (tested on v2.2.3, CentOS 7, Python 3.6.5) by using the pybind11::array_t<std::array<char, N>> or char[N] type. Likely you'll want to pad out the string with null values just in case, as the standard pitfalls of C-style strings apply (e.g. N-1 usable characters). I prefer working with std::array, as it doesn't decay to a char* without calling .data(), making your intentions clearer to other readers.
So some pseudocode would look like this for a vector of 16-byte strings:
using np_str_t = std::array<char, 16>;

pybind11::array_t<np_str_t> cstring_array(vector.size());
np_str_t* array_of_cstr_ptr = reinterpret_cast<np_str_t*>(cstring_array.request().ptr);

for (const auto & s : vector)
{
    std::strncpy(array_of_cstr_ptr->data(), s.data(), array_of_cstr_ptr->size());
    array_of_cstr_ptr++;
}

return cstring_array; // numpy array back to python code
And then in python:
array([b'ABC', b'XYZ'], dtype='|S16')
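For completeness, here is a self-contained sketch along the lines of the pseudocode above (the module name string_demo, the function make_names() and the sample data are only illustrative, and it is written as a regular extension module rather than the embedded module from the question):
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <array>
#include <cstring>
#include <string>
#include <vector>

namespace py = pybind11;
using np_str_t = std::array<char, 16>;   // maps to numpy dtype '|S16'

// Build a numpy '|S16' array from a vector of C++ strings.
py::array_t<np_str_t> make_names() {
    std::vector<std::string> names{"ABC", "XYZ"};
    py::array_t<np_str_t> out(names.size());
    auto* dst = static_cast<np_str_t*>(out.request().ptr);
    for (const auto& s : names) {
        std::memset(dst->data(), 0, dst->size());               // NUL-pad the slot
        std::strncpy(dst->data(), s.c_str(), dst->size() - 1);  // keep a terminator
        ++dst;
    }
    return out;
}

PYBIND11_MODULE(string_demo, m) {
    m.def("make_names", &make_names);
}
From Python, string_demo.make_names() should then give array([b'ABC', b'XYZ'], dtype='|S16'), matching the output shown above.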

How can I get the name of procedure in Nim?

I am trying to write a macro for debug printing in the Nim language.
Currently this macro adds the filename and line to the output using instantiationInfo().
import macros

macro debugPrint(msg: untyped): typed =
  result = quote do:
    let pos = instantiationInfo()
    echo pos.filename, ":", pos.line, ": ", `msg`

proc hello() =
  debugPrint "foo bar"

hello()
currently output:
debug_print.nim:9: foo bar
I would like to add the name of the procedure (or iterator) of the place where the macro was called.
desired output:
debug_print.nim:9(proc hello): foo bar
How can I get the name of procedure (or iterator) in Nim, like __func__ in C?
At runtime you can do getFrame().procname, but it only works with stacktrace enabled (not in release builds).
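For example, a minimal runtime sketch (only meaningful while stack traces are enabled):
proc hello() =
  # getFrame() walks the stack-trace frames, so this prints "hello" only in
  # builds where stack traces are on (e.g. not with -d:release).
  echo getFrame().procname

hello()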
At compile time, surprisingly, I can't find a way to do it. There is callsite() in the macros module, but it doesn't go far enough. It sounds like something that might fit into the macros.LineInfo object.
A hacky solution would be to also use __func__ and parse that back into the Nim proc name:
import strutils

template procName: string =
  var name: cstring
  {.emit: "`name` = __func__;".}
  ($name).rsplit('_', 1)[0]
Building on the answer from @def-, but making it more robust to handle edge cases of functions containing underscores, and hashes with or without a trailing _N.
It also uses more unique names, as otherwise the template would fail if the proc defines a variable called name.
import strutils

proc procNameAux*(name: cstring): string =
  let temp = ($name).rsplit('_', 2)
  # CHECKME: IMPROVE; the magic '4' chosen to be enough for most cases
  # EG: bar_baz_9c8JPzPvtM9azO6OB23bjc3Q_3
  if temp.len >= 3 and temp[2].len < 4:
    ($name).rsplit('_', 2)[0]
  else:
    # EG: foo_9c8JPzPvtM9azO6OB23bjc3Q
    ($name).rsplit('_', 1)[0]

template procName*: string =
  var name2: cstring
  {.emit: "`name2` = __func__;".}
  procNameAux(name2)

proc foo_bar() =
  echo procName # prints foo_bar

foo_bar()
NOTE: this still has some issues that trigger in complex edge cases, see https://github.com/nim-lang/Nim/issues/8212