"No files emitted" warning - Coverity

I have a C/C++ application and I am trying to run cov-build, but I get a "NO FILES EMITTED" warning. Can you please help? We are doing a POC on Coverity for static code analysis.
C:\Users\Master\bamboo-agent-home\xml-data\build-dir\DEC-L11PROJ-JOB1>cov-build --dir cov-int IarBuild.exe MainApplication\EWARM\L11_P4_uC1.ewp -build *
Coverity Build Capture (64-bit) version 2019.03 on Windows 10 Enterprise, 64-bit (build 18362)
Internal version numbers: 2c0f9c8cf4 p-pacific1-push-35439.872
IAR Command Line Build Utility V8.4.8.6680
Copyright 2002-2020 IAR Systems AB.
Total number of errors: 0
Total number of warnings: 0
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
For more details, please look at:
C:/Users/Master/bamboo-agent-home/xml-data/build-dir/DEC-L11PROJ-JOB1/cov-int/build-log.txt

First, if you are involved in a pre-sales Proof of Concept (POC), then there should be a Coverity Sales Engineer assigned to help with the POC. That person's role includes providing instructions and information similar to what I'll offer below, as well as answering technical questions such as yours. There may have been a miscommunication somewhere. Get in contact with the Sales Engineer, as they will be able to help more reliably and completely than I can.
Now, what's going on? The primary purpose of cov-build is to watch the build process for invocations of compilers, and when one is found, compile the same code using the Coverity compiler (called cov-emit). But in order to recognize a compiler, cov-build needs to know its command line name, what kind of compiler it is, where its include files are stored, etc. This is accomplished by a helper tool called cov-configure that must be run before cov-build. If cov-configure has not been run, then no compiler invocations will be recognized, which appears to be the case for you, as indicated by "No files were emitted".
Synopsys has a page called CLI Integration Cheat sheet that gives these commands for use with IAR:
cov-configure --comptype iar:arm --compiler iccarm --template
cov-build --dir <intermediate directory> "c:\Program Files (x86)\IAR Systems\Embedded Workbench 6.5\common\bin\IarBuild.exe" sample_project.ewp -build Debug -log all
I can't personally vouch for these commands (I don't have IAR, nor access to the Coverity tools anymore; I'm a former employee), but something like that will be needed. Again, your assigned Sales Engineer should be able to help.
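Adapted to your own build command above, the sequence would look something like this (a sketch, assuming iccarm is the compiler your .ewp project actually invokes; adjust the --compiler name if your toolchain differs):
cov-configure --comptype iar:arm --compiler iccarm --template
cov-build --dir cov-int IarBuild.exe MainApplication\EWARM\L11_P4_uC1.ewp -build *
If the configuration took effect, the cov-build summary should report emitted compilation units instead of the "No files were emitted" warning.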
Finally, for new Coverity users, I recommend using the cov-wizard tool. cov-wizard is a graphical front-end to the command line tools, and has help text explaining the concepts and procedures, along with a convenient interface for performing them. There are several steps even after cov-build, and cov-wizard will walk you through all of them. Its final screen shows exactly what command lines it used in case you want to script them.

Related

"Error applying transforms. Verify that the specified transform path are valid." when uninstalling

I am trying to uninstall Crystal Reports for Visual Studio 2011 and install Crystal Reports for Visual Studio 2019. I got the error message "Error applying transforms. Verify that the specified transform paths are valid." when uninstalling the program. Therefore, the newer version of Crystal Reports for VS 2019 can't be installed.
I searched around and some posts say that Windows registry entries caused the problem, but I can't find a solution for what to do.
Highly appreciate your response.
Crystal Reports: I am wondering if you have installed from a network share that is no longer available, or one where the UNC path specified to your transform is blocked or in other ways incorrect. What you need is a proper log file. Please see below. And have a quick peek here: https://apps.support.sap.com/sap/support/knowledge/en/1220433
There are also some issues with secure transforms and complications arising from this and newer Windows settings and security features. Maybe have a quick look here. Just a few links on the topic.
How many machines do you see this on? Just your own?
Preparation: A couple of things first:
Reboot: Do a reboot before attempting the next uninstall, just to have a clean slate. Allow the system to settle after the reboot (give it a couple of minutes).
Corrupt installation files: Re-download your new setup to make sure its installation file is not corrupted. Try to malware scan it too. And finally set it unblocked as shown here.
Admin Rights: Second, make sure you run with proper admin rights. Run the msiexec.exe command from an elevated command prompt: launch an elevated cmd.exe (right click => run as administrator).
Anti-Virus: Disable your anti-virus first to prevent any locks from failing your uninstall.
Debugging "Ideas Lists": Common causes of setup runtime issues
Logging: Now the most important part: you must also ensure proper logging for the uninstall effort. You can either enable logging by policy or define it at the command-line level. It would be best to enable the logging policy so the log file is automatically created in the temp folder. Please see this answer for more on logging.
When you have done the "Preparation" above, please run the uninstall and create a proper log file. Here is the command line (prefer the policy):
msiexec.exe /x "mysetup.msi" /L*V "C:\Temp\msilog.log"
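If you go the policy route instead, it can be enabled with a single registry value; this is the documented Windows Installer logging policy, and the logs then show up as MSI*.LOG files in the %TEMP% folder (a sketch; run it from an elevated prompt, and remove the value when you are done debugging, since policy logging slows down all MSI operations):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx /f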
Please open the log you created (or get the log from the temp folder if you have the logging policy enabled). Then inspect the log and match it against this sample log here:
https://www.itninja.com/question/transform-issues
What do you see? Just read line by line and don't be intimidated by all the "line noise".
Common Technical Issues: If you have problems with all other MSI packages and their operation, you could have a bigger problem, and you should rule out some runtime issues. Note that some issues are commonly caused by malware (or just technical glitches that occur at random):
Visual C++ Runtime - reinstall it. There are many versions. See what your package needs. The latest supported Visual C++ downloads.
Unregister / re-register msiexec.exe (can be necessary because of malware or normal technical glitches).
Run chkdsk.exe and sfc.exe to check for file-system errors and corrupt OS files; example commands for these repair steps follow below.
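For reference, those repair steps look roughly like this from an elevated command prompt (a sketch of standard Windows commands; chkdsk may schedule its scan for the next reboot if the drive is in use):
:: re-register the Windows Installer service
msiexec.exe /unregister
msiexec.exe /regserver
:: check the disk and protected OS files for corruption
chkdsk C: /f
sfc /scannow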
Microsoft FixIt: You can use the Microsoft FixIt method as a last resort to clean out your existing installation. This generally works (unless you have hacked too much already), but it is not ideal: http://support.microsoft.com/mats/Program_Install_and_Uninstall/ - this approach does not clean up or uninstall anything; it just unregisters the installed package and leaves all its files and registry settings in place. You can try to install the new version, but some interference from the garbage left behind is likely. Yes, you can try to clean up manually, but I would just try to overwrite first.
Links:
What is the root cause of "Error Applying Transforms. Verify that the specified transform paths valid"?
All About the Four Types of MSI Transforms
"Error applying transforms. Verify that the specified transform paths are valid."

Code Coverage Visualization for Dart/Flutter (Especially for Windows and VS Code)

This was originally a Github Issue in the Dart-Code repository.
1. Context
I've been working on a package that has hundreds of tests, so an easy way of visualizing code coverage would be incredibly handy.
I would like to run my tests with, say, a .vscode configuration producing an lcov.info output which would automatically be recognized by VS Code and highlighted in the respective editors in either red or green.
2. What I've Already Tried
I've tried many different solutions in the past few days — months actually — but none of them worked as the ideal one described above:
flutter test --coverage --coverage-path=lcov.info does work to generate the necessary file, but it's clunky to have to visualize it through a 3rd party program such as genhtml, all the more if you're on Windows.
And it does need Flutter in the end, which should not be necessary if you're working on pure Dart...
IntelliJ would supposedly work ideally, but I just can't seem to enable the Run with Coverage button on mine, even after installing the test_coverage package.
Though one person on Gitter told me he has it working on his IntelliJ.
Both the coverage and the test_coverage packages offer something close to what I described above, but their solutions are way clunkier — and on Windows they are tough to set up...
codecov.io is an alternative with a 3rd party, but it's annoying to have to handle this externally when the editor offers a much more flexible and faster experience.
And there is also the problem of ambiguous coverage, which codecov.io does not make clear: if tests in one folder exercise code that indirectly calls another folder, does that count as coverage for the indirectly called folder as well? That's almost always undesirable.
3. Other Resources
There's this old question on StackOverflow that was helpful initially.
You can take the genhtml.perl script here.
If you have Git for Windows installed on your machine, you already have Perl installed, it should be here: <git-install-dir>\usr\bin\perl.exe
Replace backslash characters (\\) with slash characters (/) in all file path lines (prefixed with SF:) in the lcov.info file.
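If the file is large, a one-liner can do the replacement for you (a sketch calling PowerShell from cmd; it rewrites every backslash in the file, which in practice only affects the SF: path lines):
powershell -Command "(Get-Content .\coverage\lcov.info) -replace '\\', '/' | Set-Content .\coverage\lcov.info"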
Run the genhtml.perl script. For example, assuming the current working directory is the root directory of your project:
<git-install-dir>\usr\bin\perl.exe C:\Scripts\genhtml.perl -o .\coverage\html .\coverage\lcov.info
Note: it may also be useful to add the --prefix option.
As a result of these actions, you should get a generated HTML report in the .\coverage\html directory. Open the .\coverage\html\index.html file in your browser to see the report.
I hope this helps — at least, it worked for me.

How does one 'Extract COM Information' from an OCX without InstallShield?

In one of the projects our team is working on, we are trying to make an automated deployment system for an existing desktop application. In order to do that we need to understand how InstallShield installs the application to begin with.
We have access to the InstallShield manifest, but there is an OCX file that we cannot figure out how to install manually (without InstallShield). This particular OCX file is set to 'Extract COM Information'.
The other OCXs in this application are self-registering, so they can be registered with Regsvr32.exe. But the OCX we are having problems with cannot be registered in that fashion.
How would one manually install an OCX file that is set to 'Extract COM Information' in an InstallShield manifest?
RegSvr32.exe calls the LoadLibrary API to load your DLL and then invokes the DllRegisterServer entry point inside your DLL. The code inside that function does the actual COM registration. If RegSvr32 is failing, that typically means a dependency of your DLL is missing or invalid.
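To see this in action, you can try the registration yourself and inspect the OCX's import table from a Visual Studio developer command prompt (a sketch; dumpbin ships with Visual Studio, and your.ocx is a placeholder for the actual file):
:: attempt self-registration; an error here points at LoadLibrary or DllRegisterServer failing
regsvr32 your.ocx
:: list the DLLs the OCX depends on, then check that each one is present on the target machine
dumpbin /dependents your.ocx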
InstallShield does all of this along with some really low level bit hacking to virtualize all of this and then harvest it. An old article on the subject is:
Spying on Registry Entries
InstallShield doesn't actually use this technique per se (they have several techniques, most of which are not documented, plus various filters and transform engines to clean up the data). If you are just looking for a way to do it without InstallShield, then look at Windows Installer XML's "Heat" command line tool. This can "harvest" COM metadata into WxS XML elements.
Also WiX is open source so if you are really curious you could go looking at their code.
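For instance, harvesting a single file with Heat looks something like this (a sketch; heat.exe ships with the WiX Toolset, and the file names are placeholders):
heat file your.ocx -out your.wxs
The resulting .wxs then contains the registry entries the OCX would have written during self-registration, expressed as WiX XML elements.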
As Christopher mentioned, InstallShield extracts COM information from your .ocx by seeing what it registers when invoked similarly to how regsvr32.exe would invoke it. Its various forms of redirection (for capturing purposes) have the added benefit of working around several potential permissions problems while the file is registering in your build environment. However, if I'm not missing the point of your question, it's "why doesn't regsvr32.exe your.ocx work on the target machine?"
This is a bit of a stab in the dark, as you haven't included enough information. While missing dependencies can cause this, I'm going to guess you only see this failure on Windows Vista/Server 2008 or higher. If this is the case, there's a good chance your application is trying to write to registry keys that are protected by Windows Resource Protection (WRP), or is being tripped up by a per-user typelib registration problem.
When a poorly behaved self-registration routine encounters WRP, it attempts to write to a registry key it lacks permission to modify, and then fails the entire registration. I'm uncertain what happens to the keys it wrote before that point, but all the ones after it definitely never make it to the machine. You should be able to confirm whether this is the case with a tool like Process Monitor.
What do you do if this is the case? Well, you can stick with an extraction approach like that of InstallShield (which you say you want to leave). You can fix the file to not attempt to write to protected keys (which you say you cannot modify). Or you might be able to use the Application Compatibility Toolkit (ACT) to shim things, but I don't see how you can generally do that downstream. Generally speaking, I would recommend fixing the file, or continuing to use a working approach.

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries into source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler / IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / libs / JARs, so that everything is ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally they want to just check out, and everything should be ready to go. But then the IDE/compiler will not be set up properly yet, so the compilation will fail...
What do you think?
Backup everything including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Backup the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work great if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME' not set, please set it and try again), as in the sketch below.
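The fail-fast check can be as small as this in a Windows batch entry point (a sketch; CORBA_COMPILER_HOME is the hypothetical variable from the example above):
@echo off
if not defined CORBA_COMPILER_HOME (
    echo ERROR: CORBA_COMPILER_HOME not set. Please set it and try again.
    exit /b 1
)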
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that that is applicable in the general case; how would you feel about that type of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file starts NAnt, and the NAnt script can be made to do anything you need.
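That bat file could be as simple as this (a sketch; build.build is a hypothetical NAnt build file name):
@echo off
rem build.bat - the single entry point a developer has to run
nant -buildfile:build.build %*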
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages for us of this approach is:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff, if so, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform c/c++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewrote it in Python at my current job) where third-party libraries are compiled separately (by someone), stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the stuff it needs, and anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzip, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 gb)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 gb)
%work%/_content <- textures/3d models and other big files (synchronized manually; this is 5 gb today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using bat files from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
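Boiled down, such a Visual Studio.cmd amounts to something like this (a sketch with hypothetical paths; /useenv makes Visual Studio take INCLUDE/LIB/PATH from the environment instead of its own settings):
@echo off
rem point the compiler at the unzipped third-party version this branch uses
set INCLUDE=%work%\_unzip\boost-1.37;%INCLUDE%
set LIB=%work%\_unzip\boost-1.37\lib;%LIB%
devenv MySolution.sln /useenv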
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 gb of packed data, which is unzipped to 10 gb (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately http-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or stuff. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes some time, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on bitbucket but it needs more work before it's ready for the public. Apart from doc and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.

Is it possible to compile projects with "IDE-Managed Components" through the command line?

I've been trying to build some huge projects in BCB5 for some time now. I want to use the command line tools because it would cut build time by more than 50% (it already takes 4 hours in the IDE). Often, projects will build just fine in the IDE but fail miserably in the command line. I did some digging and discovered this nice little comment in a header file:
__published: // IDE-managed Components
Is this saying that the components that follow can only be built with the IDE open? Please tell me there is a way around this. BCB5 is starting to make me depressed.
Extra info:
Make.exe gives a pile of errors claiming ambiguity between the header file and an imported file. I'm pretty sure the header file is supposed to be referencing the imported file, though, rather than comparing with it.
In the header file:
#include <ComCtrls.hpp>
ComCtrls.hpp contains the type TTreeNode.
Error from make:
[exec] Error E2015 .\TMain.h 876: Ambiguity between 'TTreeNode' and 'Comctrls::TTreeNode'
__published: // IDE-managed Components
Is this saying that the components that follow can only be built with the IDE open? Please tell me there is a way around this. BCB5 is starting to make me depressed.
No, this does not mean that you can only build the source in the IDE. It just means that this section is automatically populated by the IDE (the form designer).
While there are good third party solutions (as mentioned by the others) C++Builder 2007 and above made huge improvements in the build system. IDE build times are very similar to command line builds and the MSBuild integration now makes it possible to be sure that the same parameters are passed to the command line tools as are used by the IDE.
Have you tried installing the C++ Compiler Enhancements plugin by Andreas Hausladen, which improves the compilation speed? I would also recommend installing the DelphiSpeedUp plugin.
I think you need to export the project as a makefile to compile from the command line, because C++Builder 5 project files are XML. Have a look at this article from the C++Builder Developer's Journal.
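If I recall correctly, that export is done with the bundled bpr2mak tool, after which the build looks something like this (a sketch; the project name is a placeholder):
bpr2mak MyProject.bpr
make -f MyProject.mak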
If none of the above works, try the official C++Builder Forum.
I've more or less given up on the BCB5 command line tools. It appears that they are fundamentally broken.
I did, however, manage to find a nice open source tool, ProjectMaker, that uses the command line tools effectively. You can find it here: http://projectmaker.jomitech.com.
ProjectMaker fixes up a few of the problems with BPR2MAK, but it's not perfect. Most projects build perfectly with ProjectMaker; some still require the IDE. It's not a perfect solution, but it does alright.