Can I run doxygen on a URL / online repository?

I have a large project that I would like to build with a top-level doxygen run linked to lots of smaller doxygen builds (connected through tagfiles). Would it be possible to run doxygen at the top level, without having to check out the entire project, by pointing doxygen at a virtual Subversion repository?

Related

How can one view line-by-line code coverage in GitHub?

I have a Node.js project, which includes Istanbul, a tool that can generate a code coverage report. I would like to see which lines are not covered and which are covered by tests in my GitHub Pull Requests for this project.
Istanbul allows me to view the coverage locally in the coverage/lcov-report directory, but I would like to see it in the "Files Changed" tab of my Pull Requests on GitHub.
There are some tools that provide visual line-by-line coverage:
Codecov does this, but requires that I upload a coverage report to their servers.
Coveralls does as well, but requires that you give them access to your repository.
Jest coverage report uses GitHub Actions to add annotations, but cannot add visual indications (e.g. highlighting) of line-by-line test coverage.
Is there any other way to visualize line-by-line test coverage in a GitHub Pull Request?

Industry standard for managing Nuget packages within the enterprise

I have a situation where we have multiple C# projects that use a similar set of NuGet packages (e.g. Newtonsoft.Json, Microsoft Compilers and CodeDom, Owin, log4net, jQuery, EntityFramework, etc.).
I am trying to see how we can use a shared location for all NuGet packages, centralizing them in a single place to reduce the footprint of those binaries in Git.
One option I found is to use a Nuget.config in each project with repositoryPath set to point at the shared location. This works great for adding/upgrading/restoring NuGet packages in a project, but it is not very clean when a package gets removed from one project while still being required by a different one. Basically the package gets removed from the shared location and the change is committed to Git; then, when the other project requires it, it gets restored and added back to Git. Not a perfect solution in my mind.
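For reference, a minimal Nuget.config sketch of the setup described above (the shared folder path is hypothetical):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <!-- hypothetical shared folder; restored packages land here instead of a project-local packages folder -->
    <add key="repositoryPath" value="..\..\SharedPackages" />
  </config>
</configuration>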
I have a two-part question:
1. Is there a way to improve the above workflow when packages get removed?
2. What is the industry standard for handling third-party libraries delivered via NuGet? Or, if there is none, can you share your experience handling NuGet packages across multiple projects?
If the concern lies with the footprint/organization of the Git repository, you could add the dependency folders to .gitignore to prevent Git from committing them into the repositories. When you need to build the projects from source, just run dotnet restore (or nuget restore) to fetch the dependencies from the source you configured in Nuget.config.
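A minimal sketch of that approach; the folder name and solution file are hypothetical:

# .gitignore – keep restored NuGet packages out of the repository
SharedPackages/

# restore dependencies on a fresh clone
dotnet restore MySolution.sln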
Not sure if it is the industry standard, but we host our own Nuget server to control the libraries that the different teams can use. Microsoft has an article on Hosting your own NuGet feeds
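If you go that route, clients point at the internal feed through Nuget.config; the URL below is a placeholder:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- placeholder URL for an internally hosted feed -->
    <add key="InternalFeed" value="https://nuget.example.com/v3/index.json" />
  </packageSources>
</configuration>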
Hope it helps.

Should I publish the nuspec file in a repository?

When creating an open source library on GitHub or another public website, should I publish the .nuspec file that describes the corresponding NuGet package?
I've done this a couple of times (since no API key or other sensitive information is included in the .nuspec file) in order to allow myself to easily publish subsequent versions without keeping a private copy of the file, and to allow other people to fork it and add their own descriptions easily. However, the developers of many top packages don't seem to publish .nuspec files in their repositories (sometimes they publish NuGet.exe along with a .targets file, and so on), so I'm wondering whether I'm doing something wrong.
The package authoring should be considered part of the source code, since it is a required asset for building the fully usable output.
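As an illustration, a minimal .nuspec that would live alongside the source; the package id and metadata are placeholders:

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- placeholder metadata; id, version, authors and description are the required fields -->
    <id>MyLibrary</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>Placeholder description for the package.</description>
  </metadata>
</package>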
Some projects use special MSBuild-based tooling to create the nuspec file during the build, so it looks as if there is none in the repository. The new "SDK-based" projects (e.g. .NET Standard libraries) have integrated NuGet tooling that can create a nupkg file from the csproj without the need for a nuspec file. This tooling is also being adopted by some popular packages (e.g. Newtonsoft.Json).
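For example, with an SDK-style project the package metadata can live directly in the csproj (the values below are placeholders) and dotnet pack produces the nupkg:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- placeholder package metadata; replaces the separate .nuspec -->
    <PackageId>MyLibrary</PackageId>
    <Version>1.0.0</Version>
    <Authors>Your Name</Authors>
  </PropertyGroup>
</Project>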

Can a VIPM package repository be set up on GitHub?

VIPM stands for Virtual Instrument Package Manager. It is a manager of installable packages for NI LabVIEW, published by JKI Software; a free version of it is distributed with LabVIEW.
Registered (paying) users can set up public or private VI Package repositories. I would like to set one up on GitHub.
I attempted to do so by first creating a VI Repository on my local hard drive, publishing some packages to it, then making a remote clone on GitHub. Using the VIPM Repository Manager, I added the repository by browsing to the index.vipr file on my remote GitHub clone. However, VIPM gives me an error saying that the repository was not found.
Has anyone managed to set up and subscribe to a VI package repository on GitHub?
The short answer is that GitHub and a VIPM repository are fundamentally different, and unless VIPM adds support for git repositories and GitHub, I doubt it will be possible.
If you want to manage the dependencies of a project using GitHub as the source for your shared libraries, you might want to look at a package manager like yarn.
Yarn (and others like npm and bower) can fetch (clone) from GitHub, and they follow the common practice in the web-developer world (and others) of keeping all of a project's dependencies contained within the project. This is a departure from the VIPM view, where you update your development environment (LabVIEW) by installing the packages 'globally'.
A list of the project's installed libraries and their versions is stored in a human-readable file called package.json, which provides a portable way of setting the project up on another machine.
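A sketch of what such a package.json could look like, assuming a hypothetical shared library hosted on GitHub and tagged v1.2.0 (all names below are placeholders; JSON does not allow comments):

{
  "name": "my-labview-project",
  "version": "1.0.0",
  "dependencies": {
    "shared-lvlibp": "github:example-org/shared-lvlibp#v1.2.0"
  }
}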
As new releases of the libraries happen, you can choose when to update each library in your project.
This approach works well with LabVIEW packed libraries (.lvlibp), as opposed to VIPM packages, because there is no install-into-LabVIEW-IDE step with packed libraries. If you have a hierarchy of packed libraries, they can also specify their dependent libraries using a package.json, and yarn can then install all the libraries recursively.
It is possible to configure Yarn to place libraries into a folder of your own choosing instead of the default node_modules (as used by Node.js).
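With classic Yarn (v1) this can be done per project via a .yarnrc entry or a command-line flag; the folder name here is a placeholder:

# .yarnrc
--modules-folder ./lv_libs

# equivalent one-off invocation
yarn install --modules-folder ./lv_libs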
The advantages of this are:
You can choose which versions of libraries to use per project
Package managers integrate nicely with automatic testing and building setups
You can use GitHub or other git-providers for publishing your libraries
The disadvantages are:
More setup
It is not a common approach in the LabVIEW development world
Your VIs won't be installed into the LabVIEW palettes unless you explicitly install them

Simulink Project dependency management and dependency resolution

What is the best practice for managing dependencies within a Simulink Project when the project is worked on across a team and the project has dependencies on different models and libraries?
A parallel example would be building an application with Gradle and declaring the dependencies of a project, including the required version numbers. Gradle will resolve and download the versions that are required to build the project.
e.g. the following declares a dependency on version 2.1 of library and on version 1.0 or above of some-library, so that the latest available 1.x version (1.0, 1.1, 1.2...) will be downloaded and used.
dependencies {
    compile("com.example:library:2.1")
    compile("com.example:some-library:1.+")
}
The documentation for Simulink (and also here, covering manifests) talks about models within a project having version numbers, but it doesn't seem to mention libraries that are imported into the project. Models that are only used within a single project could all be contained in the overall project, but what happens if there are (for example) generic S-Functions defined within a separate project or library (or a library defined within a project) that are applicable across multiple projects?

This requirement is all with the aim of supporting an automatic build process triggered by a Continuous Integration server, such as Jenkins.
I'm interested in a workflow that will easily support dependency management and automatic dependency resolution with a Github Flow git branching policy.
I've spent a lot of time on this problem. In the end I didn't find an appropriate solution online, but I'd like to share the workflow we are using now, which fulfills our needs.
In short: We created our own dependency management by using git submodules.
Assumption: in fact, it is more a version management of persistent dependencies than a way to dynamically add new or remove old packages or libraries. That also works, but requires the git submodules to be added to or removed from the main git repository.
Objectives:
Consistent setup for everyone who works on the project.
Traceability of dependencies.
Continuous Integration with less effort.
How we do it (Example):
We have Project A and Project B which shall be used in Project C.
All three projects are under git version control and still under development.
We have set up additional release repositories for Project A and Project B, e.g. located on a network drive.
In Project C we add the release repositories of Project A and Project B as git submodules.
We have set up some kind of auto-deployment that pushes only the relevant files into these release repositories. For example, if we want to make changes of Project B accessible to Project C, we only create a version tag in Project B's repository and it gets pushed to its release repository.
In Project C we update our git submodules and can check out a new submodule version if needed (see the sketch below).
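A sketch of the corresponding git commands, with hypothetical URLs, paths and tags:

# one-time: add Project A's release repository as a submodule of Project C
git submodule add https://example.com/project-a-release.git deps/project-a

# later: move the submodule to a newer tagged version
cd deps/project-a
git fetch --tags
git checkout v1.2.0
cd ../..
git add deps/project-a
git commit -m "Bump Project A to v1.2.0"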
Advantages:
Since git stores the checked-out version (commit) of each submodule in the main project, we can ensure that everyone works with the same files.
Changing the commit of a submodule is traceable in the main project.
The relation between the main project and the dependencies is always consistent.
Continuous Integration should work "out of the box". We are using GitLab and GitLab Runner and only had to set up our runner to fetch submodules recursively (in case of nested submodules); see the snippet below.
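In GitLab CI that amounts to a single variable in .gitlab-ci.yml:

# .gitlab-ci.yml – make the runner clone nested submodules as well
variables:
  GIT_SUBMODULE_STRATEGY: recursive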
I think this approach works as long as the repositories don't get too big, since you fetch not only the version you need but also the whole version history.