Is there any way to support USDT probes (User-level Statically Defined Tracing) without recompiling? - trace

I want to trace MySQL query events using eBPF tools such as bcc, bpftrace, and perf. I found that we need to compile the application ourselves with the --with-dtrace flag to support USDT events. How does USDT work, and is there another way to use it without recompiling the application?

You can use dynamic tracepoints with perf probe. These tracepoints can be defined in any executable or shared library as well as the kernel.
For example:
./perf probe -x /path/to/executable function_name
They are then available as tracepoints for perf and friends. You can also capture function parameters or local variables if the optimization level permits.
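For the MySQL case specifically, a minimal sketch (the binary path and symbol name here are assumptions; mysqld is C++, so list the symbols first and probe whatever your build actually exports, ideally with debug info installed):
# list candidate query-dispatch symbols in the binary (path is an assumption)
./perf probe -x /usr/sbin/mysqld --funcs | grep -i dispatch_command
# place a dynamic probe on the chosen symbol
./perf probe -x /usr/sbin/mysqld dispatch_command
# record hits system-wide for 10 seconds, then inspect them
./perf record -e probe_mysqld:dispatch_command -aR sleep 10
./perf script
This is a dynamic uprobe rather than a USDT probe, so it depends on symbol names that can change between MySQL versions, but it needs no recompilation.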

Related

Can we get the Coverity report specific to only one issue like URL Manipulation Error?

I am using cov-capture and cov-analyze to get the reports in my VM. Can anyone help with the command to run cov-analyze so that it reports only specific errors? For example, various XML files are created and the analysis takes time to run, so to save time I would like a single report for a single issue such as URL Manipulation or Encryption Error.
Note: the Synopsys tool is used with REST API code in Python and Flask.
To run the analysis with only a single checker enabled, use the --disable-default and --enable options like this:
$ cov-analyze --disable-default --enable CHECKER_NAME ...
CHECKER_NAME is the all-caps, identifier-like name of the checker that reports issues of a certain type. For URL Manipulation, the checker is called PATH_MANIPULATION. The Checker Reference lists all of the checker names.
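For instance, a hedged example (the intermediate directory name cov-int is an assumption based on a typical cov-build/cov-capture setup; substitute your own):
$ cov-analyze --dir cov-int --disable-default --enable PATH_MANIPULATION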
However, be aware that doing this repeatedly for each checker will take significantly longer than running all desired checkers at once, because there is substantial overhead just in reading the program into memory for analysis.
If your goal is faster analysis turnaround for changes you are making during development before check-in or push, you may want to look into using the cov-run-desktop command, which is meant for that use case.

Disable instances of the DUT from the test-bench

Please help me resolve an issue I am facing with disabling DUT instances.
My DUT top module has many instances in it, but my test does not need them.
Is there any way to disable these instances from the test-bench?
For example this is my DUT module prototype:
module top (…….);
// instances that need to be disabled
module1 #(16) inst1 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(global_reset_n));
module2 #(16) inst2 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(pcie_reset_n));
pcie_module #(…) inst_pci (…..);
// main test target instances
target_testmodule #(…) test_inst(…);
child1_of_target_testmodule #(…) test_inst_child1(…);
child2_of_target_testmodule #(…) test_inst_child2(…);
endmodule
So my test-bench will only test the target_testmodule and its child modules.
I am using bind to connect the interface to target_testmodule and then start driving the pins of target_testmodule. The target_testmodule drives its child module pins.
For this test I don't need the pcie_module instance or the other instances, because they are big, take a lot of simulation time, produce lots of warnings, and also drive some of the target_testmodule ports which I don't need.
My question: is there some mechanism to disable the pcie module from the test-bench? I don't have write permission to the top module to comment out the instances or put them inside `ifdefs.
Your first option is to ask the person who locked the file to change it so you can get your job done more efficiently. They can put in generate or `ifdef statements for you.
If there were separate clock or enable signals, you could force them to an inactive state.
Another option is to copy and modify a local copy of the top-level file and have that file used instead. There are a number of ways to substitute the local module.
Beyond getting write permission, the next easiest way would be to make your own top.
Verilog (since IEEE 1364-2001) and SystemVerilog also have a way to compile different modules of the same name into different libraries, and then use a configuration to decide which one is used during elaboration. You can use this technique to swap the module instances you don't want with simplified or dummy versions. Depending on how your testing environment is set up, implementing these configurations can be tricky. If you are up for the challenge, read IEEE Std 1800-2012 § 33, Configuring the contents of a design.
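A minimal sketch of that configuration approach, assuming the unmodified top is compiled into a library named work_lib and a same-named dummy pcie_module is compiled into stub_lib (the library names, and how you map them, are assumptions that depend on your simulator's library setup):
// dummy pcie_module: declare the same parameters and port names as the real
// module so the existing named connections in top still elaborate, but leave
// the body empty so it consumes no simulation time and drives nothing
module pcie_module #( /* same parameters as the real module */ )
                    ( /* same ports as the real module */ );
endmodule

// configuration: take everything from work_lib except top.inst_pci,
// which is resolved from stub_lib instead
config top_stub_cfg;
  design work_lib.top;
  default liblist work_lib;
  instance top.inst_pci liblist stub_lib;
endconfig
You would then elaborate top_stub_cfg instead of top, leaving the original source untouched.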

More than one V4L-DVB driver on the same host machine

I have a question related to V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are three ways to compile. I am curious about the last one (the more "manually intensive" approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. CONFIG_MEDIA_ATTACH) are used in pre-processor directives that define a function one way if the option is set and another way if it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that are loaded by most of the DVB drivers. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko built with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
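To illustrate the pattern I mean, this is roughly (simplified, not verbatim kernel source) how such a macro changes shape depending on the option:
/* Simplified sketch of the CONFIG_MEDIA_ATTACH pattern described above. */
#ifdef CONFIG_MEDIA_ATTACH
/* refcounting flavour: resolve the attach function at run time */
#define dvb_attach(FUNCTION, ARGS...) ({ \
        void *__r = NULL; \
        typeof(&FUNCTION) __a = symbol_request(FUNCTION); \
        if (__a) { \
                __r = (void *) __a(ARGS); \
                if (__r == NULL) \
                        symbol_put(FUNCTION); \
        } \
        __r; \
})
#else
/* plain flavour: just call the function directly */
#define dvb_attach(FUNCTION, ARGS...) ({ \
        FUNCTION(ARGS); \
})
#endif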
What is also not clear to me: since the V4L compilation environment seems very customizable (by setting the .config file), if I develop a driver using V4L-DVB structures, there is a big chance that it conflicts with other drivers, since each driver has its own custom settings. Is my understanding correct?
Thanks!
Dave

Possibility of an LLVM LTO Pass plugin?

I was wondering if it's currently possible to have an 'external' (.so/.dylib) LLVM plugin (module) pass scheduled at LTO time? The reason for wanting this is an inter-modular optimization I want to add.
I also found this topic: How to write a custom intermodular pass in LLVM?
But a separate tool is not an option for me.
Thanks
I think the most helpful thing here might be to understand how passes are run and what the state of the code is during LTO.
First of all, when optimization passes are run by the compiler, they are run as a set that has been added to a PassManager. This means that when LLVM/Clang is passed something like -O3, it will create a PassManager and populate it with the set of passes expected to provide the O3 level of optimization. This is very different from an external library, which must be loaded manually and does not fit into the pass pipeline in the normal way.
Then we have the state of things when doing LTO. During Link Time Optimization, all of the individual translation units have been consolidated and are now a single Module. This means that an optimization which runs on each function will run on every function in the code base. Similarly, a per-module optimization will run on the full Module and therefore offers Inter-Procedural Analysis/Optimization.
If you're looking to write an intra-modular pass, then there is no reason to do this at LTO time; instead you can simply make a ModulePass and run it on each translation unit.
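For reference, a minimal legacy-pass-manager ModulePass skeleton of the kind described above; the pass and class names are hypothetical, and how you actually schedule it (opt -load, the gold/lld LTO plugin options, or the newer pass-plugin mechanism) varies with your LLVM version and toolchain:
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
// Hypothetical inter-procedural pass: it sees the whole Module it is run on,
// which at LTO time is the entire linked program.
struct MyIPOPass : public ModulePass {
  static char ID;
  MyIPOPass() : ModulePass(ID) {}

  bool runOnModule(Module &M) override {
    for (Function &F : M)
      errs() << "MyIPOPass visiting: " << F.getName() << "\n";
    return false; // no IR was modified
  }
};
} // namespace

char MyIPOPass::ID = 0;
static RegisterPass<MyIPOPass> X("my-ipo-pass", "Example inter-modular pass",
                                 false, false);
Built as a shared object, this could be loaded with something like opt -load ./MyIPOPass.so -my-ipo-pass in.bc -o out.bc; wiring it into the linker's LTO pipeline is toolchain-specific, as noted above.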

COBOL -> COBOL/DB2 -> COBOL -> COBOL/DB2 pgm call

Let's say PGM1 (COBOL) calls PGM2 (COBOL-DB2), which calls PGM3 (COBOL), which calls PGM4 (COBOL-DB2).
1Q. PGM3, which is a purely COBOL program, is modified. Do we compile only PGM3 and promote it to production, or should we do a BIND again since it is called by, and calls, COBOL-DB2 programs?
2Q. If PGM4 is modified, what has to be done? (I'm using the PACKAGE -> PLAN concept.)
Also, can anyone please explain the bind-with-package concept when we have a COBOL to COBOL/DB2 call?
Ashok,
It's definitely a question of how you are making the calls. A call can be static or dynamic.
With a dynamic call you do not need to recompile the main program if the sub program changes.
But with a static call you need to recompile (and relink) the main program too.
Ans 1: Static calls throughout - yes, you must compile all the programs.
Dynamic calls used - just compile the sub program.
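For reference, a hedged illustration of the two call forms (WS-PARM and WS-PGM-NAME are hypothetical data items; with the NODYNAM compiler option a literal CALL is static, while a CALL through a data item, or any CALL under DYNAM, is resolved at run time):
      *> Static form: literal name, sub program link-edited into the caller's
      *> load module (assuming NODYNAM).
           CALL 'PGM3' USING WS-PARM.

      *> Dynamic form: name held in a data item, resolved at run time.
           MOVE 'PGM3' TO WS-PGM-NAME
           CALL WS-PGM-NAME USING WS-PARM.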
Ans 2: See the full details below for the package and plan concept.
If you bound the old versions of the DBRMs directly into your plan:
· Identify all the DBRMs that are bound directly into that plan, for both the changed programs and any unchanged programs, and bind them all into the plan again.
· While you are binding the DBRMs into the plan, applications cannot use the plan to access DB2.
If you bound the old versions of the DBRMs for the changed application programs into packages:
· You do not need to bind any other packages or directly-bound DBRMs into the plan again.
· You simply bind the new versions of the DBRMs for the changed application programs into packages with the same names as the old versions.
· You do not need to bind the plan again; it locates the new versions of the packages.
· While you are changing the packages, application programs can still use the other packages and directly-bound DBRMs in the plan.
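In practice, for question 2 (PGM4 changed and packages in use), after recompiling PGM4 the rebind of just its package would look something like this hedged sketch in a DSN session (the subsystem id DSN1 and collection COLLID1 are assumptions; use your site's values):
DSN SYSTEM(DSN1)
BIND PACKAGE(COLLID1) MEMBER(PGM4) ACTION(REPLACE) VALIDATE(BIND) ISOLATION(CS)
END
The plan that lists collection COLLID1 in its PKLIST does not need to be bound again.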
Hope this helps!!.
As a rule of thumb, if the "consistency token" changes, you should rebind. That is to say, if a new DBRM is produced. Draw a picture; it will help. Linking is really a red herring here. If you don't know what a consistency token is, you will after your first -805. Ask a peer for help (in the first instance).
Also ask your peers about impact analysis (what else am I not recompiling that I should be?).
If the subroutine contains static SQL statements, then it will produce a DBRM when compiled. This changes the consistency token and thus requires the module to be rebound to the database to avoid an -818 consistency token error. If the subroutine contains no SQL, then it never needs to be bound to the database because no DBRM is ever created for it.
Even a program that contains only dynamic SQL will still create a DBRM that must be bound to the database. The DBRM itself will be pretty much empty apart from the consistency token.
This holds true regardless of whether it is mainframe COBOL or distributed COBOL using DB2 LUW.
It's been a while since I had to write any COBOL, but we always had two relevant rules of thumb.
Only use static calls - your code should be performance tested, and unless there is a need for a dynamic call for a very specific purpose, avoid dynamic calls at all costs.
Rebind everything when something is changed, and check the access paths created PRIOR to putting it live.
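One hedged way to check those access paths is to rebind with EXPLAIN(YES) and then look at the PLAN_TABLE rows for the program; the collection name COLLID1 and the qualifier that owns PLAN_TABLE are assumptions here:
REBIND PACKAGE(COLLID1.PGM4) EXPLAIN(YES)

SELECT PROGNAME, QUERYNO, ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
  FROM PLAN_TABLE
 WHERE PROGNAME = 'PGM4'
 ORDER BY QUERYNO;
Compare the result against the previous access paths before promoting the change.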
If you need to wait for an outage window to complete the task and flip the updated code into production, be patient and plan one in and complete the bind then... or get a DBA to do it and have them confirm it was successful within your outage window, or roll back immediately.
If your development environment is sufficiently sophisticated, complete the bind in a lower pre-production environment using the statistics for the DB2 tables from production (copy the data in if you can, or get a DBA to do it), and check that none of the access paths for any of the DB2 calls have changed.
Hope this helps
First, use this DB2 SQL to get the CONTOKENs:
SELECT SUBSTR(COLLID,1,12)  AS COLLID,
       SUBSTR(NAME,1,8)     AS NAME,
       HEX(CONTOKEN)        AS CONTOKEN,
       SUBSTR(OWNER,1,8)    AS OWNER,
       SUBSTR(CREATOR,1,8)  AS CREATOR,
       PDSNAME, BINDTIME
  FROM SYSIBM.SYSPACKAGE
 WHERE NAME = 'program name';
Take the DB2 CONTOKEN from that query, for example:
1ADB70E30768F694 (CONTOKEN as stored in DB2)
0768F6941ADB70E3 (the same token with its two 4-byte halves swapped)
Check #1: search the load library using the reversed token.
Use token 0768F6941ADB70E3 (reversed) against CONTROL.???????.CICSLIB - it should be found.
Check #2: search the DBRMLIB using the non-reversed token.
Use token 1ADB70E30768F694 against CONTROL.????????.CIC.DBRMLIB - it should be found.
If both are found, then your bind is good.