Can we get the Coverity report specific to only one issue like URL Manipulation Error?

I am using cov-capture and cov-analyze to generate the reports in my VM. Can anyone help with the command to run cov-analyze for specific errors only? For example: several XML files are created and the analysis takes time to run, so to save time it would be good to get a single report for a single issue such as URL Manipulation or an encryption error.
Note: The tool used is from Synopsys; the code under analysis is a REST API written in Python with Flask.

To run the analysis with only a single checker enabled, use the --disable-default and --enable options like this:
$ cov-analyze --disable-default --enable CHECKER_NAME ...
CHECKER_NAME is the all-caps, identifier-like name of the checker that reports issues of a certain type. For URL Manipulation, the checker is called PATH_MANIPULATION. The Checker Reference lists all of the checker names.
However, be aware that doing this repeatedly for each checker will take significantly longer than running all desired checkers at once, because there is substantial overhead just in reading the program into memory for analysis.
If your goal is faster analysis turnaround for changes you are making during development before check-in or push, you may want to look into using the cov-run-desktop command, which is meant for that use case.
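For example, a single run covering both issue types mentioned in the question might look roughly like this (the intermediate directory name is a placeholder, and RISKY_CRYPTO is my assumption for the encryption-related checker; confirm both names against the Checker Reference):
$ cov-analyze --dir idir --disable-default \
      --enable PATH_MANIPULATION --enable RISKY_CRYPTO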

Related

What is the practical difference between a sub-workflow and the include directive? [Snakemake]

In the Snakemake documentation, the include directive can incorporate all of the rules of another workflow into the main workflow, and they apparently show up in snakemake --dag -n | dot -Tsvg > dag.svg. Sub-workflows, on the other hand, can be executed prior to the main workflow if you write rules that depend on their output.
My question is: how are these two really different? Right now I am working on a workflow, and it seems like I can get by just using include and putting the name of the output in the rule all of the main workflow. I could probably even place that output in the input of a main-workflow rule, making the included workflow execute prior to that rule. Additionally, I can't visualize a DAG which includes the sub-workflow, for whatever reason. What do sub-workflows offer that the include directive can't do?
The include directive doesn't "incorporate another workflow". It just adds the rules from another file, as if you had added them with copy/paste (with the minor difference that include doesn't affect your target rule). A subworkflow, in contrast, has an isolated set of rules that work together to produce the final target file of that subworkflow, so it is well structured and isolated from both the main workflow and other subworkflows.
Anyway, my personal experience shows that there are some bugs in Snakemake that make using subworkflows quite difficult. Including the file is pretty straightforward and easy.
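For comparison, a minimal sketch of the include approach (the file and rule names are made up):
include: "other_rules.smk"    # rules are pasted in as if they were written here

rule all:
    input:
        "final_output.txt",   # an output produced by a rule from other_rules.smk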
I've never used subworkflows, but here's a case where it may be more convenient to use them rather than the include directives. (In theory, I think you don't need include and subworkflow as you could write everything in a massive Snakefile, the point is more about convenience.)
Imagine you are writing a workflow that depends on result files from a published work (or from a previous project of yours). The authors did not make public the files you need, but they do provide a Snakemake workflow to produce them. Their workflow may be quite complex and the files you need may be just intermediate steps, so instead of making sense of the whole workflow and porting it into your own include directives, you use subworkflow to generate the required file(s). E.g.:
subworkflow jones_etal:
    workdir:
        "./jones_etal"
    snakefile:
        "./jones_etal/Snakefile"

rule all:
    input:
        'my_results.txt',

rule one:
    input:
        jones_etal('from_jones.txt'),
    output:
        'my_results.txt',
    ...

Possibility of an LLVM LTO Pass plugin?

I was wondering if it's currently possible to have an 'external' (.so/.dylib) LLVM plugin (module) pass scheduled at LTO time? The reason for wanting this is an inter-modular optimization I want to add.
I also found this topic: How to write a custom intermodular pass in LLVM?
But a separate tool is not an option for me.
Thanks
I think the most helpful thing here might be to understand how passes are run and what the state of the code is during LTO.
First of all, when optimization passes are run by the compiler, they are run as a set that has been added to a PassManager. This means that when LLVM/Clang is passed something like -O3, it creates a PassManager and populates it with the set of passes expected to provide the O3 level of optimization. This is very different from an external library, which must be provided manually and cannot normally be fitted into the pass pipeline.
Then we have the state of things when doing LTO. During Link Time Optimization, all of the individual translation units have been consolidated and are now a single Module. This means that an optimization which runs on each function will run on every function in the code base. Similarly, a per-module optimization will run on the full Module and therefore offer Inter-Procedural Analysis/Optimization.
If you're looking to use an intra-modular pass, then there is no reason to do this at LTO time; instead you can simply make a ModulePass and run that on each unit.
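For what it's worth, here is a minimal sketch of a module pass packaged as a shared-library plugin for the new pass manager; the pass and pipeline name are made up, and you would still need to arrange for the plugin to be loaded by the compiler or linker (e.g. Clang's -fpass-plugin=; the exact LTO-time mechanism depends on your toolchain and LLVM version):

#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Passes/PassPlugin.h"

using namespace llvm;

namespace {
// Hypothetical whole-module pass; at LTO time "the Module" is the merged program.
struct MyIPOPass : PassInfoMixin<MyIPOPass> {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    // Inter-procedural analysis/transformations over M would go here.
    return PreservedAnalyses::all();
  }
};
} // end anonymous namespace

extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo llvmGetPassPluginInfo() {
  return {LLVM_PLUGIN_API_VERSION, "MyIPOPass", "0.1", [](PassBuilder &PB) {
            // Allow the pass to be requested by name in a pass pipeline string.
            PB.registerPipelineParsingCallback(
                [](StringRef Name, ModulePassManager &MPM,
                   ArrayRef<PassBuilder::PipelineElement>) {
                  if (Name == "my-ipo-pass") {
                    MPM.addPass(MyIPOPass());
                    return true;
                  }
                  return false;
                });
          }};
}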

COBOL -> COBOL/DB2 -> COBOL -> COBOL/DB2 pgm call

Let's say PGM1 (COBOL) calls PGM2 (COBOL-DB2), which calls PGM3 (COBOL), which calls PGM4 (COBOL-DB2).
1Q. PGM3, which is a pure COBOL program, is modified. Do we compile only PGM3 and promote it to production, or should we also do a BIND again, since it is called by and calls COBOL-DB2 programs?
2Q. If PGM4 is modified, what has to be done? (I'm using the PACKAGE -> PLAN concept.)
Also, can anyone please explain the bind-with-package concept when we have a COBOL to COBOL/DB2 call.
Ashok,
It's definitely a question of how you are making the calls. A call can be static or dynamic.
With a dynamic call you do not need to recompile the main program if the subprogram changes, but with a static call you need to recompile the main program too. (A short COBOL sketch of the two call styles follows at the end of this answer.)
Ans 1: If static calls are used everywhere, yes, you must recompile all the programs. If dynamic calls are used, just recompile the subprogram.
Ans 2: See the full details below for the package and plan concept.
If you bound the old versions of the DBRMs directly into your plan:
· Identify all the DBRMs that are bound directly into that plan, for both the changed programs and any unchanged programs, and bind them all into the plan again.
· While you are binding the DBRMs into the plan, applications cannot use the plan to access DB2.
If you bound the old versions of the DBRMs for the changed application programs into packages:
· You do not need to bind any other packages or directly-bound DBRMs into the plan again.
· You simply bind the new versions of the DBRMs for the changed application programs into packages with the same names as the old versions.
· You do not need to bind the plan again; it locates the new versions of the packages.
· While you are changing the packages, application programs can still use the other packages and directly-bound DBRMs in the plan.
Hope this helps!
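As a side note, this is roughly how the two call styles look in the COBOL source (data names are hypothetical, and whether a literal CALL is actually static also depends on the DYNAM/NODYNAM compiler option):
      *> Static call: PGM3 is link-edited into the caller's load module,
      *> so the caller's load module must be relinked (commonly recompiled)
      *> when PGM3 changes.
           CALL 'PGM3' USING WS-COMMAREA.

      *> Dynamic call: the target name is resolved at run time,
      *> so a recompiled PGM3 is picked up without relinking the caller.
           MOVE 'PGM3' TO WS-PGM-NAME
           CALL WS-PGM-NAME USING WS-COMMAREA.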
As a rule of thumb, if the "consistency token" changes, you should rebind. That is to say, if a new DBRM is produced. Draw a picture; it will help. Linking is really a red herring here. If you don't know what a consistency token is, you will after your -805. Ask a peer for help (in the first instance).
Also ask your peers about impact analysis. (What else am I not recompiling that I should?)
If the subroutine contains static SQL statements then it will produce a DBRM when compiled. This changes the consistency token and thus requires the module to be rebound to the database to avoid an 818 consistency token error. If the subroutine contains no SQL then it does not ever need to be bound to the database because no DBRM is ever created for it.
Even a program that contains only dynamic SQL will still create a DBRM that must be bound to the database. The DBRM itself will be pretty much empty apart from the consistency token.
This holds true regardless of whether this is mainframe COBOL or distributed COBOL using DB2 or LUW.
It's been a while since I had to write any COBOL, but we always had two relevant rules of thumb.
Only use static calls - your code should be performance tested, and unless there is a need for a dynamic call for a very specific purpose, avoid it at all costs.
Rebind everything when something is changed, and check the access paths created PRIOR to putting it live.
If you need to wait for an outage window to complete the task and flip the updated code into production, I would be patient and plan one in and complete the bind then...or get a DBA to do it and have them confirm it was successful in your outage window, or roll back immediately.
If your development environment is sufficiently sophisticated, complete the bind in a lower pre-production environment using the stats for the DB2 tables from production (copy the data in if you can, or get a DBA to do it), and check that none of the access paths for any of the DB2 calls have changed.
Hope this helps
First, use this DB2 SQL to get the CONTOKENs:
SELECT SUBSTR(COLLID,1,12)  AS COLLID,
       SUBSTR(NAME,1,8)     AS NAME,
       HEX(CONTOKEN)        AS CONTOKEN,
       SUBSTR(OWNER,1,8)    AS OWNER,
       SUBSTR(CREATOR,1,8)  AS CREATOR,
       PDSNAME, BINDTIME
  FROM SYSIBM.SYSPACKAGE
 WHERE NAME = 'program name';
Get the DB2 CONTOKEN (example below):
  1ADB70E30768F694   (CONTOKEN as stored by DB2)
  0768F6941ADB70E3   (the same token with its two 4-byte halves reversed)
Check #1: search the load library with the reversed token.
  Use token 0768F6941ADB70E3 against CONTROL.???????.CICSLIB.
  It should be found.
Check #2: search the DBRM library with the non-reversed token.
  Use token 1ADB70E30768F694 against CONTROL.????????.CIC.DBRMLIB.
  It should be found.
If both are found, your bind is good.

Filtering the directory out with Devel::Cover

I wanted to get coverage of my Perl based application running on CentOS with the Apache web server, and went for Devel::Cover to get it done. After some initial struggles, I got it installed. Since the PERL5OPT environment variable did not help me in getting the coverage, I tried to include use Devel::Cover inside the code (I know it's a bad idea, but it serves my purpose). The cover_db is generating its runs/structure after I restart the web server, but the data also includes the generic CPAN modules, which reduces the total coverage score.
For example: if I use a single method from Net::FTP, it reduces the total score by considering the total number of lines in that module. Likewise for all the modules from CPAN.
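For reference, what I tried amounts to roughly this (the database path is just an example):
# In the Apache environment:
#   PERL5OPT=-MDevel::Cover=-db,/var/www/myapp/cover_db
# or directly in the application code:
use Devel::Cover (-db => '/var/www/myapp/cover_db');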
What I need is the ability to select the files from a specific directory for coverage and ignore all the rest. From the description, it seems the +inc and -inc options are designed for this, but when I try to use them, I get the following error
Unknown option: inc
I would like to know a couple of things.
After cover_db is updated with the transactions, is it possible to filter files out of it using cover options when generating the report?
Is there any other way I could get the coverage of only specific paths?
Appreciate the response.

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been updated as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you can not generically get a list of modules that the app depends on by code analysis alone. E.g. if your apps does eval { require $module; $module->import() }, where $module is passed via command line, then this can ONLY be detected by actually running the specific command line version with ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of modules used. But you don't get version #s.
Print %INC at every single possible exit point in the code, as slu's answer said. This should probably be done in an END {} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the generic case if somewhere within the program your __DIE__ handler gets overwritten (see the sketch after this list).
Devel::Modlist (also mentioned by slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs like Devel::Cover does. On the plus side, it's purpose-built, so has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command line arguments (at first glance it seems to only allow you to execute the program with no arguments), and if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
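A minimal sketch of the %INC-dumping approach described above (the log path is a placeholder):
#!/usr/bin/perl
use strict;
use warnings;

# Append the currently loaded modules (and the files they came from) to a log.
sub dump_inc {
    open my $fh, '>>', '/tmp/used_modules.log' or return;
    printf {$fh} "%s => %s\n", $_, $INC{$_} for sort keys %INC;
    close $fh;
}

END { dump_inc() }                            # covers normal exits
$SIG{__DIE__} = sub { dump_inc(); die @_ };   # covers fatal exits, unless overridden later

# ... the rest of the application ...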
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. Just modules, I don't know of any exact way of verifying what versions from what distributions. You could get old packages from BackPan, or just package your entire chain of local dependencies up with PAR.
You could look at %INC, see http://www.perlmonks.org/?node_id=681911 which also mentions Devel::Modlist
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
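Assuming the usual invocation, that looks like this (the script name is a placeholder):
$ perl -d:TraceUse myscript.pl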