Can a VIPM package repository be set up on GitHub?

VIPM stands for Virtual Instrument Package Manager. It is a manager of installable packages for NI LabVIEW. It is published by JKI Software, and a free version of it is distributed with LabVIEW.
Registered (paying) users can set up public or private VI Package repositories. I would like to set one up on GitHub.
I attempted to do so by first creating a VI Repository on my local hard drive, publishing some packages to it, then making a remote clone on GitHub. Using the VIPM Repository Manager, I added the repository by browsing to the index.vipr file on my remote GitHub clone. However, VIPM gives me an error saying that the repository was not found.
Has anyone managed to set up and subscribe to a VI package repository on GitHub?

The short answer is that GitHub and a VIPM repository are fundamentally different things, and unless VIPM adds support for git repositories (and GitHub specifically), I doubt it will be possible.
If you want to manage a project's dependencies using GitHub as the source for your shared libraries, then you might want to look at a package manager like yarn.
Yarn (and others like npm and bower) can fetch (clone) packages from GitHub and follows the common practice in the web-developer world (and others) of keeping all of a project's dependencies inside the project itself. This is a departure from the VIPM view, where you update your development environment (LabVIEW) by installing packages 'globally'.
A list of the project's installed libraries and their versions is stored in a human-readable file called package.json, which provides a portable way of setting the project up on another machine.
As new releases of the libraries appear, you can choose when to update each library in your project.
This approach works well with LabVIEW packed libraries (.lvlibp), as opposed to VIPM packages, because there is no install-into-the-LabVIEW-IDE step with packed libraries. If you have a hierarchy of packed libraries, each can declare its dependent libraries in its own package.json, and yarn can then install all the libraries recursively.
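As a rough sketch of what such a file could look like (the package names, repository paths and versions below are placeholders, assuming the libraries are published as tagged GitHub repositories):
{
  "name": "my-labview-project",
  "version": "1.0.0",
  "dependencies": {
    "acme-daq-toolkit": "acme/acme-daq-toolkit#v2.1.0",
    "acme-logging": "acme/acme-logging#v1.0.3"
  }
}
Both yarn and npm accept the user/repo#tag shorthand for GitHub-hosted dependencies, so pinning to a release tag keeps the install reproducible.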
It is possible to configure Yarn to place libraries into a folder of your own choosing instead of the default node_modules (as used by Node.js).
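For example, assuming Yarn 1.x (classic), the install target folder can be overridden on the command line (the folder name here is only illustrative):
yarn install --modules-folder ./labview-libs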
The advantages of this are:
You can choose which versions of libraries to use per project
Package managers integrate nicely with automatic testing and building setups
You can use GitHub or other git-providers for publishing your libraries
The disadvantages are:
More setup
It is not a common approach in the LabVIEW development world
Your VIs won't be installed into the LabVIEW palettes unless you explicitly install them

Related

Industry standard for managing Nuget packages within the enterprise

I have a situation where we have multiple C# projects that use a similar set of NuGet packages (e.g. Newtonsoft.Json, Microsoft Compilers and CodeDom, Owin, log4net, jQuery, EntityFramework, etc.).
I am trying to see how we can use a shared location for all NuGet packages to reduce the footprint of those binaries in Git, centralizing them in one place instead of duplicating them per repository.
One option I found is to use a NuGet.config in each project with repositoryPath set to point at the shared location. This works great for adding/upgrading/restoring NuGet packages in a project, but it is not very clean when a package is removed from one project while still being required by a different one: the package gets removed from the shared location and the change is committed to Git, and when the other project requires it again, it gets restored and added back to Git. Not a perfect solution in my mind.
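For illustration, the per-project NuGet.config I am referring to looks roughly like this (the shared path is just a placeholder):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositoryPath" value="..\..\SharedPackages" />
  </config>
</configuration>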
I have a two part question:
1. Is there a way to improve the above workflow when packages get removed?
2. What is the industry standard for handling third-party libraries delivered via NuGet? Or, if there is none, can you share your experience handling NuGet packages across multiple projects?
If the concern lies with the footprint/organization of the Git repository, maybe you can add the dependency folders to .gitignore to prevent Git from committing them into the repositories. When you need to build the projects from source, just run dotnet restore (or nuget restore) to get the dependencies from the source you configured in NuGet.config.
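As a minimal sketch (assuming the default packages folder name), the ignore rule could be:
# .gitignore - keep restored NuGet packages out of the repository
packages/
On a clean checkout, dotnet restore (or nuget restore) then re-downloads them from the configured sources.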
Not sure if it is the industry standard, but we host our own Nuget server to control the libraries that the different teams can use. Microsoft has an article on Hosting your own NuGet feeds
Hope it helps.

Nuget: Good idea to check in package folder

I'm currently weighing the pros and cons of using NuGet. In our current software we store each external reference in a common reference folder (which is committed to our version control system). Over time this approach has become more and more painful because we have to store different versions of the same library.
Since our devs are sometimes at a customer site (and not all customers offer internet connectivity...), we don't want to use NuGet directly, because the NuGet packages couldn't be restored there.
Based on that, I'm thinking about using NuGet and storing the packages folder in our version control system.
Does anybody know of any disadvantages to this solution? Does anybody have a better proposal?
Thanks.
I would argue against storing external nuget packages in your version control system.
It's not your application's responsibility to archive third-party packages. Should you need to cover that risk, then build a solution intended for it (for example: use a private NuGet repository that's properly backed up).
Avoid duplication in the code base - provided you use properly released packages, the packages.config file content is sufficient for reliably reproducing the exact dependencies your application needs (a sketch follows after this list).
Synchronization is an effort - keeping packages.config and the packages folder in sync means that, once you start including them in source control, every developer working with packages has to remember to add or remove packages in source control as well.
If devs ever forget to add a package, the local build still fails.
If they forget to remove a package that is no longer necessary, your checked-in set contains junk.
VCS dataset size - storing them needlessly enlarges your version control storage. Quite often the packages contain N different platform DLLs, tools and whatnot, which add up quite fast. Should you keep your dependencies constantly up to date, then after 10 years your VCS history would contain a huge amount of irrelevant junk. Storage is cheap, but still...
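For reference, the packages.config mentioned above is just a small XML manifest; the package IDs and versions here are only examples:
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="12.0.3" targetFramework="net472" />
  <package id="log4net" version="2.0.12" targetFramework="net472" />
</packages>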
Instead, consider having a private nuget repository with the purpose of serving and archiving the packages your application needs and set up your project to check your project nuget repository first. If your developers need offline compile support then they can set up project repository mirrors on their build boxes and configure the following fallback structure for repos:
Developer local project repository (ex: folder)
Shared project repository (ex: Nuget.Server)
(nuget.org)
A guide on how to configure multiple repositories can be found here: How to configure local Nuget Repository.
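A NuGet.config along those lines might look like this (the source names, local path and server URL are placeholders for your own setup):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="LocalProjectRepo" value="C:\NuGetLocal" />
    <add key="SharedProjectRepo" value="https://nuget.example.com/nuget" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
Sources are tried in the order they are listed, which gives you the developer-local, shared, then public fallback structure described above.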

Simulink Project dependency management and dependency resolution

What is the best practice for managing dependencies within a Simulink Project when the project is worked on across a team and the project has dependencies on different models and libraries?
A parallel example would be building an application using Gradle and declaring the dependencies of a project including the required version numbers. Gradle will resolve and download the versions that are required to build the project.
e.g. the following declares a dependency on version 2.1 of library and version 1.0 upwards of some-library, so that the latest version 1.x (1.0, 1.1, 1.2...) that is available will be downloaded and used.
dependencies {
    compile("com.example:library:2.1")
    compile("com.example:some-library:1.+")
}
The documentation for Simulink (and also here covering manifests) seems to talk about models within a project having version numbers. It doesn't seem to mention libraries that are imported into the project. Models that are only used within a single project could all be contained in the overall project, but what happens if there are (for example) generic S-Functions defined within a separate project or library (or library defined within a project) that are applicable across multiple projects? This requirement is all with the aim of helping to support an automatic build process triggered by a Continuous Integration server, such as Jenkins.
I'm interested in a workflow that will easily support dependency management and automatic dependency resolution with a Github Flow git branching policy.
I've spent much time on this problem. In the end I didn't find an appropriate solution online, but I'd like to share the workflow we are using now, which fulfills our needs.
In short: We created our own dependency management by using git submodules.
Assumption: in fact, this is more a version management scheme for persistent dependencies than a way to dynamically add new or remove old packages or libraries. That also works, but it requires the git submodules to be added to or removed from the main git repository.
Objectives:
Consistent setup for everyone who works on the project.
Traceability of dependencies.
Continuous Integration with less effort.
How we do it (Example):
We have Project A and Project B which shall be used in Project C.
All three projects are under git version control and still under development.
We have set up additional release repositories for Project A and Project B, e.g. located on a network drive.
In Project C we add the release repositories of Project A and Project B as git submodules.
We have set up some kind of auto-deployment to push only the relevant files into these release repositories. For example, if we want to make changes to Project B accessible to Project C, we only create a version tag in Project B's repository and it gets pushed to its release repository.
In Project C we update our git submodules and can check out a new submodule version (if needed); the git steps are sketched below.
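Under these assumptions, the commands in Project C might look roughly like this (repository URL, path and tag name are only placeholders):
git submodule add ssh://git.example.com/project-a-release.git libs/ProjectA
git submodule update --init --recursive
cd libs/ProjectA
git fetch --tags
git checkout v1.2.0
cd ../..
git add libs/ProjectA
git commit -m "Update ProjectA to v1.2.0"
The parent repository then records exactly which commit of the release repository Project C depends on.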
Advantages:
Since git stores the checked out version (commit) of git submodules in the main project, we can ensure that everyone works with the same files.
Changing the commit of a submodule is traceable in the main project.
The relation between the main project and the dependencies is always consistent.
Continuous Integration should work "out of the box". We are using GitLab and GitLab Runner and only had to set up our runner to fetch submodules recursively (in case of nested submodules); a minimal CI configuration is sketched below.
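As a sketch (the job name and build script are placeholders for your own pipeline), the relevant part of a .gitlab-ci.yml could be:
variables:
  GIT_SUBMODULE_STRATEGY: recursive   # fetch submodules, including nested ones

build:
  script:
    - ./build.sh   # placeholder for the actual build/test step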
I think this approach works as long as the repositories don't get too big, since you fetch not only the version you need but also the whole version history.

What are the advantages of using Nuget automatic package restore in the enterprise?

I've been trying to implement a NuGet policy inside my company and I was wondering what the real value of Automatic Package Restore is when working on internal projects on an internally hosted TFS.
I understand that in open-source projects, or when using externally hosted source control, not checking in external packages can save a lot of disk space. But apart from this advantage (saving disk space on the server) I cannot see any other benefit of automatic restore; in fact it gives us some problems, as the build machine doesn't connect to the internet, so to use that feature we'd either have to change firewall rules or keep a local cache of the NuGet repository.
Thank you
As Steven indicated in his answer the main reason for keeping the packages out of source control is to reduce the amount of data that needs to be stored by and transferred from/to the source control server. Obviously in a company where you control all the hardware neither the disk space nor the network transfer should be an issue, but why waste the disk space / time dealing with the packages if you don't need to. Note that the size of the package directory can be a quite considerable percentage of the total size of a workspace. In my case the packages directory takes up between 80% - 95% of the total workspace when using TFS (for my work workspaces) and between 50% - 75% for my private workspaces (which are based on git and thus have the .git directory which takes up some space). All in all a significant amount of space could be saved on your source control server if you use package restore.
One way to solve the access problem with Nuget.org is to have your own local package repository. This local package repository could be a local nuget web service or just a shared directory on a server. Because the repository lives inside your company LAN it should not be a big problem for the build server to get to the local nuget repository.
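For example, a shared directory can be registered as a package source with the nuget.exe CLI (the source name and server path are placeholders):
nuget sources add -Name "CompanyPackages" -Source \\fileserver\nuget-packages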
A side benefit of this approach is that your build process is independent from Nuget.org (in the very rare case it goes down) and, more importantly, that you know exactly which packages get pulled into the build (because they will be the approved packages in your local repository).
For us the decision whether to use a local nuget repository and the package restore option depended on our decision to package all our internal libraries as nuget packages (I have described our development process in an answer to another nuget question). Because of that we needed a local package repository to distribute those internal packages. Which then meant that adding the third-party packages to this repository was easy and thus using package restore made perfect sense.
If you don't package your internal libraries as NuGet packages, then you will put those in your source control alongside your solution and code files. In that case you may as well do the same for the third-party libraries.
In the end it is all a trade-off. Disk space vs ease of infrastructure set-up etc. etc. Pick the solution that suits your environment best.
From Nuget.org:
The original NuGet workflow has been to commit the Packages folder into source control. The reasoning is that it matches what developers typically do when they don't have NuGet: they create a Lib or ExternalDependencies folder, dump binaries into there and commit them to source control to allow others to build.
So the idea is that the packages would be restored on each machine that the project is built upon, so the binary data stays out of source control. And with distributed source control systems like Mercurial or Git, that is disk space and bandwidth that gets used on client machines as well as the server.
But that does pose a problem when your build machine can't connect to the internet to reach nuget.org (I'm assuming). I think you've hit on the major solutions: commit the packages to source control and avoid package restore, allow the build machine to connect to the internet, or set up a local mirror.
I'll stop short of saying there are no advantages of package restore in your environment. I don't see big harm in committing the packages. But it really depends on how your team functions, what your team expects and what type of enterprise environment you're working in.

How to manage shared libraries between applications?

We develop enterprise software and we wish to promote more code reuse between our developers (to keep this problem simple, let's assume everything is .NET). We are about to move to a new VCS system (most likely Mercurial) and I want to have a strategy in place for how we will share libraries.
What is the best process for managing shared libraries that meets the following use cases:
Black Box - only the public API of the library is known and there is no assumption that consuming developers will be able to "step into" or set breakpoints into the library. The library is a black box. Often a dev does not care about the details, just give me the version of the lib that has always "worked".
Debug - the developer should be able to at least "step into" the library during development. Setting breakpoints would be a bonus too.
Parallel Development - while most likely the minority, there are seemingly valid use cases for developing the library in parallel with the consuming application. Often the authors of the library and component are the same developer. For better or worse, the applications and libraries can often be tightly coupled. Being able to make changes and debug into both can be a very productive way for us to develop.
It should be noted that solving 3, may implicitly solve 2.
Solutions may involve additional tools (such as NuGet, etc.).
When sharing libraries, you must distinguish between:
source dependencies (you share the sources, implying a recompilation within your project)
binary dependencies (you share the deliverable, compiled from common sources, and link to it from your project).
Regarding both, NuGet (2.0) finally introduced "Package Restore During Build", in order not to commit to source control whatever is built in a Lib or ExternalDependencies folder.
NuGet (especially with its new hierarchical config, NuGet 2.1) is well suited for module management within a C# project, and will interface with both git and Mercurial.
Combine it with the Mercurial subrepos, and you should be able to isolate in its own repo the common code base you want to reuse.
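As a sketch of the subrepo wiring (the paths and repository URLs are placeholders), the .hgsub file in the parent repository would map each subrepo path to its source:
libs/shared-core = https://hg.example.com/shared-core
libs/math-utils = https://hg.example.com/math-utils
Mercurial then records the exact subrepo revisions in .hgsubstate on each commit of the parent repository.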
I have 2 possible solutions to this problem, neither of which seems ideal (and therefore why I posted the question).
Use the VCS to manage the dependencies. Specifically, use mercurial subrepos and always share by source.
Advantages:
All 3 usecases are solved.
Only one tool is required for source control and dependency management
Disadvantages:
The subrepos feature is considered a feature of last resort by the Mercurial developers and, from experimentation and reading, has the following issues:
Tags cannot be easily or atomically applied to multiple repos.
Root/Shell repos are inherently fragile (they can be broken if the pathing to subrepos changes). The Mercurial developers suggest mitigating this issue by including no content in the shell repo and only using it to define (and track the revisions of) the subrepos, therefore allowing a dev to manually recreate a moment in time even if the subrepo pathing is broken.
Branching cannot cross repo boundaries (most likely not a big issue as one could argue that branches should only occur in a given subrepo).
Use Ivy or NuGet to manage the dependencies. There are two ways this could work.
Dependencies/Packages can simply contain official binaries. A build server can be configured to publish a new dependency/package into the company repository when a developer submits a build for a new version. This solves case 1. NuGet seems to support symbol packages that may solve case 2. Case 3 is not solved and leaves developers in that case out to dry, having to come up with their own solution (there is basically no way to commit applications to the VCS that include dependencies by source). This seems to be the traditional way that dependency management tools are used.
Dependencies/Packages can contain a script that gets the source from Mercurial. The script could be executed automatically when the dependency/package is installed. Some magic has to be performed to have the .NET solution include the reference by project (rather than by browsing the filesystem), but in theory this could happen in the NuGet install script and be reversed in the uninstall script.
Switching between "source" and "binary" dependencies seems to be a manual step. I would argue devs should switch to binary dependencies for releases, and perhaps this could be enforced on the build server when creating a release. This is further complicated by the fact that the VS solution needs to be modified to reference a project vs. a binary.
How many source packages exist? Does every binary package contain the script to fetch the source that it was built with? Or do we create separate source packages that use the install-script magic to get the source? This leads to the question: is there a source package for every tag in Mercurial? Every changeset? Or simply one source package that just clones and updates to the tip, leaving the dev to update to a previous revision (but this creates the problem of knowing what revision to update to)?
If the dev then uses mercurial to change the revision of the source, how can this be reflected in the consuming application? The dependency/package that was used to fetch the source has not changed, but the source itself has...