ThreadX module allocating module data section at run time - globals

I see that when loading a ThreadX module, the data section is allocated at run time from a byte pool (in _txm_module_manager_internal_load).
Is there a way to define in advance (in the module preamble or similar) the place where the module data will be located?
How does the module code know where its globals are located? The compiler can't know their addresses, because the location of the data is determined at run time. (I suppose somewhere in the assembly porting layer there is a relative address to the data section, changing for each module, through which the globals are accessed, but I am not familiar enough with ARM assembly to find it by myself.)

The location of data for each module is determined at run-time. This allows loading the same module more than once, with each instance having its own data in a different location.
Globals are located in the data area. Modules are compiled as position-independent code and position-independent data. While loading the module, the module startup code is provided with the load address of the data. The specifics of how this is achieved are different for each architecture and compiler.

@Andrés has answered correctly, but I wanted to add some info. You must build a Module with position-independent code and data (e.g. the ROPI and RWPI options if using the ARM compilers). The GCC options look like below (I bolded the relevant compiler options needed for position independence):
arm-none-eabi-gcc -c -g -mcpu=cortex-m4 **-fpie -fno-plt -mpic-data-is-text-relative -msingle-pic-base** txm_module_preamble.s
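To make the "how does the code find its globals" part of the question concrete, here is a minimal sketch (not ThreadX source; the names are invented) of what position-independent data access means for a module built with the options above:

/* Illustrative sketch only - not ThreadX code. */
int module_counter;                 /* ends up in the module's .data/.bss */

int bump(void)
{
    /* Built position independent, the compiler never bakes an absolute
     * address for module_counter into the instructions; it reaches the
     * global via an offset from a base register (r9 in the Cortex-M
     * module ports) that this code never sets up itself. The module
     * manager points that register at the data area it carved out of the
     * byte pool before any module code runs, so each loaded instance of
     * the module transparently finds its own copy of the global.        */
    return ++module_counter;
}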
Let us know if you have further questions.


Disable the instance of DUT from Test-bench

Please help me resolve an issue I am facing with disabling DUT instances.
My DUT top module has many instances in it, but my test does not need them.
Is there any way to disable these instances from the test-bench?
For example this is my DUT module prototype:
module top (…….);
// instances that need to be disabled
module1 #(16) inst1 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(global_reset_n));
module2 #(16) inst2 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(pcie_reset_n));
pcie_module #(…) inst_pci (…..);
// main test target instances
target_testmodule #(…) test_inst(…);
child1_of_target_testmodule #(…) test_inst_child1(…);
child2_of_target_testmodule #(…) test_inst_child2(…);
endmodule
So my test-bench will only test target_testmodule and its child modules.
I am using bind to connect the interface to target_testmodule and then start driving the pins of target_testmodule, and target_testmodule drives its child modules' pins.
So for this test I don't need the pcie_module instance or the other instances, because they are big instances that take a lot of simulation time, produce lots of warnings, and also drive some of the target_testmodule ports which I don't need.
My question is: is there some mechanism to disable the PCIe module from the test-bench? I don't have write permission on the top module to comment out the instances or put them inside `ifdefs.
Your first mechanism is to ask the person who locked the file to change it so you can get your job done more efficiently. They can put in generate or ifdef statements for you.
If you have separate clock or enable signals, you could force them to an inactive state (see the sketch after this list).
Copy and modify a local copy of the top-level file and have that file used instead. There are a number of ways to substitute the local module.
Beyond getting write permission, the next easiest way would be to make your own top.
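For the force-signals route, a minimal sketch (the hierarchical paths are assumptions - prefix them with wherever top is instantiated in your bench):

// From the test-bench, without touching top:
initial begin
  force top.inst_pci.CLK = 1'b0;  // stop the PCIe core's clock so it stays idle
  force top.inst1.RSTN   = 1'b0;  // hold the other unwanted blocks in reset
  force top.inst2.RSTN   = 1'b0;
end

Note the quiesced instances may still drive reset/X values onto target_testmodule ports, so you may need to force those ports as well.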
Verilog (since IEEE 1364-2001) and SystemVerilog do have a way to compile different modules of the same name into different libraries and then use a configuration to decide which one will be used during elaboration. You could use this technique to swap the module instances you don't want with simplified or dummy versions. Depending on how your testing environment is configured, implementing these configurations can be tricky. If you are up for the challenge, read IEEE Std 1800-2012 § 33, "Configuring the contents of a design".
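Here is a hedged sketch of that configuration approach (the library names dut_lib/stub_lib and the stub's parameter/port list are assumptions - mirror the real header and adapt to your compile scripts):

// Stub with the same name as the real module, compiled into a separate
// library (stub_lib). It must mirror the real parameter and port list,
// but has no body, so it elaborates instantly and emits no warnings.
module pcie_module #(parameter int DUMMY = 0)              // placeholder
                    (input logic CLK, PAD_RSTN, RSTN /* ...remaining ports... */);
  // intentionally empty
endmodule

// Configuration: elaborate top from the real library, but use the stub
// for the one instance this test does not need.
config top_stub_cfg;
  design dut_lib.top;
  default liblist dut_lib;
  instance top.inst_pci use stub_lib.pcie_module;
endconfig

You then tell the simulator to elaborate top_stub_cfg instead of top; how libraries and configurations are mapped is tool-specific, which is part of why this route can be tricky.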

More than one V4L-DVB driver on the same host machine

I have a question related to V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are 3 ways to compile. I am curious about the last approach (the More "Manually Intensive" Approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. "CONFIG_MEDIA_ATTACH") are used in pre-processor directives that define a function one way if the symbol is defined and another way if it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that will be loaded by most of the DVB drivers.
What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
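For readers unfamiliar with the mechanism being described, the conditional definition follows roughly this pattern (paraphrased from memory for illustration, not copied from dvb_frontend.h - check your own tree):

/* Rough paraphrase of the pattern, for illustration only. */
#ifdef CONFIG_MEDIA_ATTACH
#define dvb_attach(FUNCTION, ARGS...) ({ \
        void *__r = NULL; \
        typeof(&FUNCTION) __a = symbol_request(FUNCTION); \
        if (__a) \
                __r = (void *) __a(ARGS); \
        __r; \
})
#else
#define dvb_attach(FUNCTION, ARGS...)  FUNCTION(ARGS)  /* plain direct call */
#endif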
What is also not clear to me is this: since the V4L compilation environment seems very customizable (via the .config file), if I develop a driver using V4L-DVB structures, there is a big chance it will conflict with other drivers, since each driver has its own custom settings. Is my understanding correct?
Thanks!
Dave

Is there something like a "pre-build" callback function?

I have a Simulink model, the purpose of which is automated code generation.
My model uses S-functions (developed by another party) which have hard-coded assumptions about the path. For instance, several external data files are needed, which are referenced in the S-functions via a relative path like ..\Bin\data\datafile.bin. This makes it necessary to set MATLAB's current working directory to a specific path before the model can be run.
I can automatically check and set the correct path via model callback functions. However, all model callback functions only seem to be related to the simulation process, not the build process. That means that I can run the model irrespective of what directory I'm in, but when I try to build the model, it always fails unless I manually navigate MATLAB back to the correct directory.
Needless to say, that's quite annoying. So I was wondering if there is something like a "preBuildFcn" callback function, a function that is run before starting the build process? Any other solution (that does not involve modifying the S-functions) is also very welcome.
There are plenty of hooks into the build process of Simulink / Embedded Coder ('entry', 'before_tlc', 'after_tlc', 'before_make', 'after_make', 'exit', and 'error'). I assume you want an 'entry' hook.
All you need to do is write an M-function named <your_system_target_file>_make_rtw_hook (e.g. ert_make_rtw_hook.m for ert.tlc), as explained in the documentation Customize Build Process with STF_make_rtw_hook File.
In case you can't open the online documentation (login required), here is the path to the HTML in your MATLAB installation: MATLAB root\help\rtw\ug\customizing-the-target-build-process-with-the-stf-make-rtw-hook-file.html
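As a concrete example, here is a minimal sketch of an 'entry' hook, assuming the ERT target (ert.tlc) so the file is named ert_make_rtw_hook.m and sits on the MATLAB path; the directory is a hypothetical placeholder for wherever your S-functions expect to run:

function ert_make_rtw_hook(hookMethod, modelName, ~, ~, ~, ~)
% Called by the build process at several stages; only 'entry' is handled here.
switch hookMethod
    case 'entry'
        requiredDir = 'C:\work\MyProject\Bin';  % hypothetical path the S-functions assume
        if ~strcmp(pwd, requiredDir)
            fprintf('Building %s: changing directory to %s\n', modelName, requiredDir);
            cd(requiredDir);
        end
    % other stages ('before_tlc', 'before_make', 'exit', ...) need nothing here
end
end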
I am not sure whether building Simulink models is sufficiently similar to building regular MATLAB programs, but here is what I used in the past:
Set up the project manually
Build the project programmatically
The program that is used to build the project should be able to set the path or do other custom things.

Compiling Simulink Code into .ELF object form

I have a simple Simulink model and I would like to generate code for it using the Simulink code generator and then compile the result with GCC into an ELF object file. How can I proceed?
Thanks
You need the product called Simulink Coder (MATLAB R2011a and newer) or Real-Time Workshop (for older MATLAB versions). Typing ver at the MATLAB command window will show which products and licences you have installed.
If Simulink Coder or RTW is installed, use the menu Simulation -> Configuration Parameters to set the model up for code generation.
If you have Embedded Coder you can set System Target File to ert.tlc, and this will produce a very concise main() routine to call your model code. Otherwise, use grt.tlc, which produces a lot more bloat than ert but is otherwise the only useful option available on Windows.
There are a lot of options to go through and check - it really needs someone with a bit of experience to be present!
As you are requesting an ELF file, is this for an embedded system? If so, there is a lot more work to be done. If the target is not one of the already supported targets, then you need a target package, which will take either a lot of time and experience, or money to buy one.
Custom target development - a world of its own:
http://www.mathworks.co.uk/help/toolbox/rtw/ug/bse3b2z.html
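If the goal is simply "generated C compiled by GCC into an ELF", one hedged way to wire it together looks like this (the model name, paths, and compiler flags are illustrative; swap grt.tlc for ert.tlc if Embedded Coder is available):

% In MATLAB: generate code only, skipping the host makefile step.
load_system('mymodel');
set_param('mymodel', 'SystemTargetFile', 'grt.tlc');
set_param('mymodel', 'GenCodeOnly', 'on');
rtwbuild('mymodel');   % generated sources appear under mymodel_grt_rtw/

% Then cross-compile the generated sources from a shell, along the lines of:
%   arm-none-eabi-gcc -c mymodel_grt_rtw/*.c -Imymodel_grt_rtw -I<matlabroot>/rtw/c/src ...
%   arm-none-eabi-gcc *.o -o mymodel.elf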

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been updated as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the program with ALL the possible $module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Print %INC at every single possible exit point in the code, as slu's answer said (see the sketch after this list). This should probably be done in an END {} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the generic case if your __DIE__ handler gets overwritten somewhere within the program.
Devel::Modlist (also mentioned by slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command-line arguments (e.g. it seems at first glance to only allow you to execute the program with no arguments), and if that's true, it is inferior to all 3 methods above for any code that may possibly load modules dynamically.
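A minimal sketch of option 2 above (the output file name is an assumption; adapt as needed):

# Dump every loaded module, with its version where one is defined, at exit.
END {
    open my $fh, '>>', 'modules-used.txt' or die "open: $!";
    for my $file (sort keys %INC) {
        (my $mod = $file) =~ s{/}{::}g;   # Foo/Bar.pm -> Foo::Bar.pm
        $mod =~ s{\.pm\z}{};              # -> Foo::Bar
        my $ver = eval { $mod->VERSION } // 'unknown';
        print {$fh} "$mod $ver\n";
    }
    close $fh;
}
# END blocks still run after an uncaught die(), but not after exec() or a hard
# kill, which is why pairing this with a __DIE__ handler (and accepting that it
# is still not 100% generic) is suggested above.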
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. It reports just modules; I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC; see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
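Typical usage needs no changes to the script itself; just run it under the module:

perl -d:TraceUse yourscript.pl   # prints which module loaded which, as a tree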