How do I access List template of C++ program from Perl using SWIG? - perl

I want to access a template List of C++ program from a Perl script and use those values.
Example code:
struct Struct1
{
    int j;
};

typedef std::list<Struct1 *> struct1_list;

struct Struct2
{
    int i;
    struct1_list List1;
};
I used one SWIG-generated API and did the following:
$myList = Struct2_struct1List_get;
print "Reference type: " . ref($myList);
now this prints as:
Reference type: _p_std__listTutils__Struct1_p_t
how to get the values from the structure using this?
Update from duplicate question:
In the interface file I put
%template(ListStruct1) std::list< Struct1 * >;
After that I generated the ".pm" file and checked the APIs available for this list.
I found
ListStruct1_size
ListStruct1_empty
ListStruct1_clear
ListStruct1_push
I was able to use those. But I don't know how to access individual elements of the list using these APIs - or am I missing something in the interface file?
UPDATED:
Is it possible to use a typemap here to return the list as a Perl array?

First of all, general info
This tutorial shows how to do the wrapper for templates.
The same tutorial shows how to use the module from Perl, but the Perl example doesn't touch templates.
This SO article shows how to do that with a Vector
Here's a general SWIG STL documentation that seems to mention std_list.i interface.
Second, regarding lists
You cannot access a C++ list like a Perl array, by subscript. If you want that, you must use a vector as the underlying type.
As an alternative, create a class extending list, give it a method that returns an element by index, and expose that method in the interface.
If you wish to access the list by finding an element, like in C++, you need to write a list interface that exposes the find() method - from reading the source code, the default one does not.
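To sketch that alternative: a %extend block in the interface file can bolt an index-based accessor onto the wrapped list. The get() helper below is my own invention, not part of std_list.i, and access is O(n) because it walks the list from the front:

```cpp
// Interface file sketch - assumes Struct1 is declared in a header that is
// both %include'd and pulled into the wrapper via a %{ ... %} block.
%include "std_list.i"
%template(ListStruct1) std::list<Struct1 *>;

%extend std::list<Struct1 *> {
    // Hypothetical helper: return the element at "index", or NULL if
    // the index is out of range.
    Struct1 *get(size_t index) {
        if (index >= self->size())
            return NULL;
        std::list<Struct1 *>::iterator it = self->begin();
        std::advance(it, index);   // linear walk - lists have no subscript
        return *it;
    }
}
```

From Perl you could then loop from 0 to ListStruct1_size - 1, calling get on each index, at the cost of quadratic total time for a full traversal.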

In your interface, try:
%include "std_list.i"
%template(ListStruct1) std::list< Struct1 * >;
The std library is a bit odd here: there's no actual binary object called list that SWIG can just wrap - it's all templates - so SWIG needs some extra help to figure out what's going on.
That should add insert, remove, and a bunch of other list specific functions to the wrapper.
If the above doesn't work, try adding:
%define SWIG_EXPORT_ITERATOR_METHODS
UPDATE: Of course, I neglected to mention (or even realize) that this works great for Python, Java, and a few others, but is totally broken in Perl...

Related

Extract Types/Classnames from flat Modelica code

I was wondering whether there is already a way to extract, from flat Modelica code, all variables AND their corresponding types (class names, respectively).
For example:
Given an extract from a flattened Modelica model:
constant Integer nSurfaces = 8;
constant Integer construction1.nLayers(min = 1.0) = 2 "Number of layers of the construction";
parameter Modelica.SIunits.Length construction1.thickness[construction1.nLayers]= {0.2, 0.1} "Thickness of each construction layer";
Here, the wanted output would be something like:
nSurfaces, Integer, constant;
construction1.nLayers, Integer, constant;
construction1.thickness[construction1.nLayers], Modelica.SIunits.Length, parameter
Ideally, for construction1.thickness there would be two lines (=number of construction1.nLayers).
I know that it is possible to get a list of the used variables from dsin.txt, which is produced while translating a model. But so far I have not found an existing way to get the corresponding types. And I would really like to avoid writing my own parser :-).
You could try to generate the file modelDescription.xml as defined by the FMI standard. It contains a ton of information, and XML should be easier to parse; e.g. Python has a couple of XML parsing packages.
If you are using Dymola you just set the flag Advanced.FMI.GenerateModelDescriptionInterface2 = true to generate the model description file.
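As a rough sketch of that first idea, assuming an FMI 2.0-style modelDescription.xml: each ScalarVariable element carries the name and variability, and its first child names the base type (with an optional declaredType). The sample XML below is made up for illustration; real files have many more attributes:

```python
import xml.etree.ElementTree as ET

# Made-up fragment in the shape of an FMI 2.0 modelDescription.xml.
sample = """<fmiModelDescription>
  <ModelVariables>
    <ScalarVariable name="nSurfaces" variability="constant">
      <Integer/>
    </ScalarVariable>
    <ScalarVariable name="construction1.thickness[1]" variability="parameter">
      <Real declaredType="Modelica.SIunits.Length"/>
    </ScalarVariable>
  </ModelVariables>
</fmiModelDescription>"""

def extract_variables(xml_text):
    """Return (name, type, variability) triples for each ScalarVariable."""
    root = ET.fromstring(xml_text)
    rows = []
    for sv in root.iter("ScalarVariable"):
        type_elem = sv[0]  # first child is the base type (Real, Integer, ...)
        type_name = type_elem.get("declaredType", type_elem.tag)
        rows.append((sv.get("name"), type_name, sv.get("variability")))
    return rows

for name, type_name, variability in extract_variables(sample):
    print(f"{name}, {type_name}, {variability}")
```

That prints lines in roughly the format asked for in the question.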
The second idea could be to let the compiler/tool parse the Modelica file for you, as it needs to do that anyway; try searching for AST (abstract syntax tree). In Dymola, this is available through the ModelManagement library, and also through the Python interface.
A third idea could be to use one of the Modelica parsers available, e.g. have a look at:
https://github.com/lbl-srg/modelica-json
https://hackage.haskell.org/package/modelicaparser
https://github.com/xie-dongping/modparc
https://github.com/pymoca/pymoca
https://github.com/pymola/pymola/tree/master/src/pymola
Fourth, if none of that works, you still do not have to write a full parser: you could use ANTLR with an existing grammar file (look for e.g. modelica.g4).

In systemverilog how to provide commandline overrides for complex fields like associative array fields

Let's say I have added an associative array (string, string) field to the factory through the `ovm_field_aa_string_string macro. Is there a way to configure it from the command line, like we do with simple int fields as follows:
./simv ... +ovm_set_config_int=scope,name,value
Is there something like:
./simv ... +ovm_set_config_aa_string_string=scope,name,key=val,key2=val2
No, you can only set ints and strings from the command line. I strongly discourage the use of any `uvm_field macros because of their inability to deal with complex types, and the poor simulation performance they impose.
This seems to have already been answered by someone else; here is the link to the post:
how to get array of values as plusargs in systemverilog?

Why some variable of struct take preprocessor to function?

In this header file, some struct members are declared directly with a language data type, while others are declared by passing a type to a preprocessor macro. When should a data type be sent to a preprocessor macro to declare a variable? Why are the data type and variable name sent to the preprocessor here?
#define DECLARE_REFERENCE(type, name) \
    union { type name; int64_t name##_; }

typedef struct _STRING
{
    int32_t flags;
    int32_t length;
    DECLARE_REFERENCE(char*, identifier);
    DECLARE_REFERENCE(uint8_t*, string);
    DECLARE_REFERENCE(uint8_t*, mask);
    DECLARE_REFERENCE(MATCH*, matches_list_head);
    DECLARE_REFERENCE(MATCH*, matches_list_tail);
    REGEXP re;
} STRING;
Why is this code doing this for declarations? Because, as the body of DECLARE_REFERENCE shows, when a type and name are passed to this macro it does more than just the declaration - it builds something else out of the name as well, for some other purpose. If you only wanted to declare a variable, you wouldn't do this - it does something distinct from simply declaring one variable.
What does it actually do? The unions that the macro declares provide a second name for accessing the same space as a different type. In this case you can get at the references themselves, or at an unconverted integer representation of their bit pattern - assuming that int64_t is the same size as a pointer on the target, anyway.
Using a macro for this potentially serves several purposes I can think of off the bat:
Saves keystrokes
Makes the code more readable - but only to people who already know what the macros mean
If the secondary way of getting at reference data is only used for debugging purposes, it can be disabled easily for a release build, generating compiler errors on any surviving debug code
It enforces the secondary status of the access path, hiding it from people who just want to see what's contained in the struct and its formal interface
Should you do this? No. This does more than just declare variables, it also does something else, and that other thing is clearly specific to the gory internals of the rest of the containing program. Without seeing the rest of the program we may never fully understand the rest of what it does.
When you need to do something specific to the internals of your program, you'll (hopefully) know when it's time to invent your own thing-like-this (most likely never); but don't copy others.
So the overall lesson here is to identify places where people aren't writing in straightforward C, but are coding to their particular application, and to separate those two, and not take quirks from a specific program as guidelines for the language as a whole.
Sometimes it is necessary to have a number of declarations which are guaranteed to have some relationship to each other. Some simple kinds of relationships such as constants that need to be numbered consecutively can be handled using enum declarations, but some applications require more complex relationships that the compiler can't handle directly. For example, one might wish to have a set of enum values and a set of string literals and ensure that they remain in sync with each other. If one declares something like:
#define GENERATE_STATE_ENUM_LIST \
ENUM_LIST_ITEM(STATE_DEFAULT, "Default") \
ENUM_LIST_ITEM(STATE_INIT, "Initializing") \
ENUM_LIST_ITEM(STATE_READY, "Ready") \
ENUM_LIST_ITEM(STATE_SLEEPING, "Sleeping") \
ENUM_LIST_ITEM(STATE_REQ_SYNC, "Starting synchronization") \
// This line should be left blank except for this comment
Then code can use the GENERATE_STATE_ENUM_LIST macro both to declare an enum type and a string array, and ensure that even if items are added or removed from the list each string will match up with its proper enum value. By contrast, if the array and enum declarations were separate, adding a new state to one but not the other could cause the values to get "out of sync".
I'm not sure what the purpose of the macros is in your particular case, but the pattern can sometimes be a reasonable one. The biggest question is whether it's better to (ab)use the C preprocessor so that such relationships can be expressed in valid-but-ugly C code, or whether it would be better to use some other tool that takes a list of states and generates the appropriate C code from it.

Dynamic arg types for a python function when embedding

I am adding an embedded Python interpreter to Exim. I have copied the embedded Perl interface and expect Python to work the same as the long-since-coded embedded Perl interpreter. The goal is to allow the sysadmin to do complex functions in a powerful scripting language (i.e. Python) instead of trying to use Exim's standard ACL commands, because it can get quite complex to do relatively simple things in the Exim ACL language.
My current code as of the time of this writing is located at http://git.exim.org/users/tlyons/exim.git/blob/9b2c5e1427d3861a2154bba04ac9b1f2420908f7:/src/src/python.c . It is working properly in that it can import the sysadmin's custom python code, call functions in it, and handle the returned values (simple return types only: int, float, or string). However, it does not yet handle values that are passed to a python function, which is where my question begins.
Python seems to require that any args I pass to the embedded Python function be explicitly cast to one of int, long, double, float, or string using the C API. The problem is that the sysadmin can put anything in that embedded Python code, and on the C side of things in Exim I won't know what those variable types are. I know that Python is dynamically typed, so I was hoping to maintain that behavior when passing values to the embedded code. But it's not working that way in my testing.
Using the following basic super-simple python code:
def dumb_add(a,b):
return a+b
...and the calling code from my exim ACL language is:
${python {dumb_add}{800}{100}}
In my C code below, reference counting is omitted for brevity; count is the number of args I'm passing:
pArgs = PyTuple_New(count);
for (i = 0; i < count; ++i)
{
    pValue = PyString_FromString((const char *)arg[i]);
    PyTuple_SetItem(pArgs, i, pValue);   /* steals the reference to pValue */
}
pReturn = PyObject_CallObject(pFunc, pArgs);
Yes, **arg is a pointer to an array of strings (two strings in this simple case). The problem is that the two values are treated as strings in the Python code, so the result of that C code executing the embedded Python is:
${python {dumb_add}{800}{100}}
800100
If I change the python to be:
def dumb_add(a,b):
return int(a)+int(b)
Then the result of that c code executing the python code is as expected:
${python {dumb_add}{800}{100}}
900
My goal is that I don't want to force a python user to manually cast all of the numeric parameters they pass to an embedded python function. Instead of PyString_FromString(), if there was a PyDynamicType_FromString(), I would be ecstatic. Exim's embedded perl parses the args and does the casting automatically, I was hoping for the same from the embedded python. Can anybody suggest if python can do this arg parsing to provide the dynamic typing I was expecting?
Or if I want to maintain that dynamic typing, is my only option going to be for me to parse each arg and guess at the type to cast it to? I was really really REALLY hoping to avoid that approach. If it comes to that, I may just document "All parameters passed are strings, so if you are actually trying to pass numbers, you must cast all parameters with int(), float(), double(), or long()". However, and there is always a comma after however, I feel that approach will sour strong python coders on my implementation. I want to avoid that too.
Any and all suggestions are appreciated, aside from "make your app into a python module".
The way I ended up solving this was to find out how many args the function expected, and exit with an error if the number of args passed to the function didn't match. Rather than try to synthesize missing args or simply omit extra args, for my use case I felt it was best to enforce matching arg counts.
The args are passed to this function as an unsigned char **arg:
int count = 0;

/* Identify and call appropriate function */
pFunc = PyObject_GetAttrString(pModule, (const char *) name);
if (pFunc && PyCallable_Check(pFunc))
{
    PyCodeObject *pFuncCode = (PyCodeObject *)PyFunction_GET_CODE(pFunc);
    /* Should not fail if pFunc succeeded, but check to be thorough */
    if (!pFuncCode)
    {
        *errstrp = string_sprintf("Can't check function arg count for %s",
                                  name);
        return NULL;
    }
    while (arg[count])
        count++;
    /* Sanity check: calling a Python object requires stating the number
       of vars being passed; bail if it doesn't match the declaration. */
    if (count != pFuncCode->co_argcount)
    {
        *errstrp = string_sprintf("Expected %d args to %s, was passed %d",
                                  pFuncCode->co_argcount, name, count);
        return NULL;
    }
The string_sprintf is a function within the Exim source code which also handles memory allocation, making life easy for me.

what's handle and isa

I need an explanation, with simple examples, of the following concepts:
the 'handle' function
@
+
isa
These are part of API that mathworks has devised.
A folder whose name begins with an @ sign contains the definition of MATLAB objects.
The folders with the + sign are similar, but they are used to define a "package" or namespace. These are hard to give an example of, but I would recommend looking at the help, as gnovice says, and looking through your MATLAB install folders for examples.
The isa() command takes an object and a string and tells you if the object is of the class described by the string.
A function handle is just like a function pointer. You can pass the function handle around if you assign the function to a variable, like:
myFuncRef = @isempty;
Now you have a reference to the isempty() function, which can be used like so:
myFuncRef(somevar)