Rush Monorepo - Shared dependency is duplicated in node_modules - rush

I'm using Rush to manage a monorepo in which two of my packages, Project A and Project B, depend on react-redux. They both specify the exact same version in their package.json files.
Project B also depends on Project A.
I would expect that, since they both use the exact same package version, they would both be linked to the same folder in common/temp/node_modules/.pnpm. Instead they are linked to two different folders, with some random string (maybe a hash?) appended.
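Roughly, the layout looks like this; the version number and hash-like suffixes below are made up, not the real ones from my repo:
    common/temp/node_modules/.pnpm/
        react-redux@7.2.0_a1b2c3d4e5f6/node_modules/react-redux   <- Project A links here
        react-redux@7.2.0_f6e5d4c3b2a1/node_modules/react-redux   <- Project B links here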
This breaks things: Project B creates the redux <Provider>, but when it calls functions from Project A, those functions look for the context created by one copy of react-redux while Project B initialised the provider with the other copy.
Why is Rush making two copies of this same version? How can I prevent it from doing this and make both packages point to the same copy?

I fixed this issue by disabling useWorkspaces in rush.json.
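A sketch of the relevant part of rush.json, assuming the setting lives under pnpmOptions as it did in my version of Rush (all other fields omitted):
    {
      "pnpmOptions": {
        // turning workspaces off made both projects link react-redux
        // to the same folder under common/temp/node_modules/.pnpm
        "useWorkspaces": false
      }
    }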

Integrating a remote Swift Package that has local Swift Packages: how to avoid invalidManifestFormat errors?

The situation:
I have a Swift Package, call it lib. lib lives in its own repository. In lib's repository there are a bunch of local packages; that is, packages that lib declares using the local-path dependency format, .package(path: "CursorPackage"), and whatnot.
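To make that concrete, lib's manifest looks roughly like this (everything except CursorPackage is a stand-in name):
    // swift-tools-version:5.5
    // Package.swift for lib (simplified sketch)
    import PackageDescription

    let package = Package(
        name: "lib",
        products: [
            .library(name: "lib", targets: ["lib"]),
        ],
        dependencies: [
            // local, path-based dependency -- this is what breaks remote consumption
            .package(path: "CursorPackage"),
        ],
        targets: [
            .target(
                name: "lib",
                dependencies: [.product(name: "CursorPackage", package: "CursorPackage")]
            ),
        ]
    )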
All of this is fine as long as I'm locally importing lib into my actual application repository. The moment I try to import lib into my repo using SPM's remote options, which is obviously the way to go for doing things with CI, it throws the following error:
invalidManifestFormat("'CursorPackage' is not a valid path for path-based dependencies; use relative or absolute path instead.")
This error persists whether I use CursorPackage or ./CursorPackage. Obviously I don't want to use an absolute path, because I'm on CI, so that would mean either hard-coding things or somehow pulling in an environment variable that contains PWD.
What am I missing? This seems like it should just work. Is this just a bug in SPM that I should be reporting to Apple?
This isn't intended to be possible. If you look at the 5th bullet in the Proposed Solution section of the local packages proposal (https://github.com/apple/swift-evolution/blob/main/proposals/0201-package-manager-local-dependencies.md) it says that it's not intended for a remote package to be allowed to depend on a local package.
I expect it's because there's the possibility that you could have both a versioned package with a given name and a local package with the same name; if so, how would SPM resolve the conflict?
It is unfortunate, though, since supporting this would give people more options in how they organize their packages, I agree.
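For contrast, a dependency that a remote consumer can resolve has to point at a repository URL, typically with a version requirement, e.g. (URL and version are placeholders):
    dependencies: [
        .package(url: "https://example.com/CursorPackage.git", from: "1.0.0"),
    ]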

How to override the "unsafeFlags" behavior of Swift Package Manager?

Swift Package Manager allows a package manifest (Package.swift) file to specify build settings for targets.
As a security measure, some build settings can only be specified using the "unsafeFlags" parameter. For example, specifying a framework search path outside the current directory using the -F build flag is considered "unsafe" because it could lead to code execution outside the package's own directory.
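A sketch of what that looks like in a manifest; the target name and path are made up:
    // excerpt from a Package.swift target declaration
    .target(
        name: "MyTarget",
        swiftSettings: [
            // -F adds a framework search path; pointing it outside the package
            // directory is only expressible via unsafeFlags
            .unsafeFlags(["-F", "/Library/SharedFrameworks"])
        ],
        linkerSettings: [
            .unsafeFlags(["-F", "/Library/SharedFrameworks"])
        ]
    )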
For packages downloaded from the internet, this could be considered an undesirable behavior. However, for locally-declared packages, this could be what we want to do.
However, the design of SPM is such that any package that uses "unsafeFlags" cannot be depended on by another package.
Is there any override for this, for example, if we want to use unsafeFlags somewhere in a dependency structure of various locally-declared Swift packages?
Like, is there a setting we can supply for a package, framework, or app, so that it's allowed to depend on packages that use "unsafeFlags"?
Swift Package Manager allows unsafeFlags for dependencies specified by a commit hash. They're not allowed for versioned dependencies.
Example here.
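A minimal sketch of that kind of dependency declaration (URL and commit hash are placeholders, and the exact spelling of the revision requirement depends on your tools version):
    dependencies: [
        // pinning to a specific commit instead of a version range is what
        // lifts the unsafeFlags restriction
        .package(url: "https://example.com/LocalBuildTools.git",
                 .revision("0123456789abcdef0123456789abcdef01234567")),
    ]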

Nuget package missing .target file in build folder

I have built a NuGet package and published it to a NuGet.Server site. But when I try to use the package from the server, the .targets file from the build folder is missing. But if I use the package from a local folder, it works as it should. How do I get it to work?
If I look at the package in its folder on the server, it looks OK.
It's not clear to me if you mean using (referencing and restoring) a package, or building (packing) a package.
If the problem is with packing the nupkg: NuGet requires the props and targets files to have specific filenames in specific folders, but if you got it to work at least once, you probably already know that. If that isn't the packing problem, you need to give us more information, because not following the filename convention is the most common problem and I can't guess what else it could be. In particular, if the package is packed differently on your local machine than on the server, something differs in how you pack on the two computers, so we need more information about how the build and pack work in your project.
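For reference, the convention is that the file is named after the package ID and placed in the build folder; a sketch, with MyPackage as a stand-in for your package ID:
    <!-- MyPackage.nuspec (excerpt) -->
    <package>
      <metadata>
        <id>MyPackage</id>
        <version>1.0.0</version>
        <authors>me</authors>
        <description>Example package with build logic</description>
      </metadata>
      <files>
        <!-- must end up as build\MyPackage.targets inside the nupkg -->
        <file src="build\MyPackage.targets" target="build\MyPackage.targets" />
      </files>
    </package>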
If the problem is with using (restoring) the package, there are a few possibilities. My best guess is that you once restored, on the server machine, a version of the package that did not yet contain the targets file. By design, NuGet packages are immutable, which means it's invalid for the contents of a package (same ID and version) to change. This allows NuGet to download a package from a remote feed once, save it in the global packages folder (not a cache; entries never expire), and the next time it needs to restore the same package (ID + version) it uses the copy in the global packages folder rather than downloading it again.
This means that if you once built a bad nupkg and restored it on a machine, then fixed the nupkg but kept the same version number, that machine will never pick up the fixed nupkg; you need to delete it from the global packages folder. I'm not 100% sure, but I think that if you restore from a local file feed with a project that uses packages.config, the nupkg does not get saved in the global packages folder, so that scenario doesn't hit the same problem. In short, I think you changed the nupkg contents without changing the version number, and one of the machines keeps using the old copy from its global packages folder.
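If that is what happened, you can delete the stale copy from the global packages folder; for example, assuming the dotnet CLI is available:
    :: show where the global packages folder is
    dotnet nuget locals global-packages --list
    :: clear it entirely (or delete just the offending id\version folder by hand)
    dotnet nuget locals global-packages --clear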
If that's not the problem, the next most likely cause is that the nupkg on the server feed has different contents from the nupkg in the local feed. I've never used NuGet.Server, but some NuGet repositories (like nuget.org) do not allow overwriting nupkgs. So, if you pushed a nupkg to your server, fixed a problem in it without changing the version, then tried to push again, the second push might have failed.
In summary, your question doesn't provide enough information for us to help you, but I wrote about the most common issues above. If it doesn't help, you need to provide us with more information. An example of the problem is the best way to give us enough information to help you.

How to setup a DotNetNuke Development Environment with Source Control?

My team is developing a new DotNetNuke web application and would like to know what is recommended for setting up a development environment with source control and automated builds? We would like to keep the DNN source code separate from our custom modules and extensions source code.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that references your target build. For example, in my projects folder, I have \bin460, \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA (Private Assembly) module approach
- it's not easy to shift the project to a different DNN version for development or testing
- it's easy to inadvertently source control parts of the DNN solution, particularly with integrated VS source control solutions.
This approach compiles quickly because you're not trying to compile the entire project. For test deployment I have a build script that copies the various parts of the module into a target website. This can be hooked into the compile, or just run from a cmd window after a successful compile. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration before this will work, but that is a one-time effort, and you can use the export feature to create your .dnn module manifest.
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory, and zip them into an install package. Keep all of the parts in your source control folder, and copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
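As a rough illustration, such a packaging script might look like the following Windows batch file; the module name, paths, and the zip tool (7za here) are placeholders for whatever you actually use:
    @echo off
    rem hypothetical packaging script -- adjust names and paths to your module
    set MODULE=MyModule
    set STAGE=%TEMP%\%MODULE%_pkg
    if exist "%STAGE%" rmdir /s /q "%STAGE%"
    mkdir "%STAGE%\bin"
    rem gather the pieces of the install package from the source tree
    copy /y "src\%MODULE%\*.ascx" "%STAGE%"
    copy /y "src\%MODULE%\%MODULE%.dnn" "%STAGE%"
    copy /y "src\%MODULE%\bin\%MODULE%.dll" "%STAGE%\bin"
    rem zip the staged files into an installable package (any command-line zip tool will do)
    cd /d "%STAGE%"
    7za a "%TEMP%\%MODULE%_Install.zip" *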
The benefits of this approach are:
- separation of module code from installed code
- simple way of keeping only the module code in source control (don't have to exclude all the website code)
- ability to quickly test out modules in different dnn versions
- packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- can't use the magic green 'go' button in VS (have to manually attach debugger)
- more setup time than developing in-place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder, and get the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
In response to bduke's answer: you shouldn't, and don't want to, build projects in the DesktopModules folder.
That's where all of the source code for the site out of the box goes.
That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behavior, because those files will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate project modules there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build them in release mode and install them through the Module/Extension filter.

How to manage external dependencies which are constantly being modified

Our development uses lots of open-source code, and I'm trying to figure out the best way to manage these external dependencies.
Our current configuration:
We are developing for both Linux and Windows.
We use svn for our own code.
External dependencies (boost, log4cpp, etc.) are not stored in svn. Instead I put them under ./extern (or c:\extern on Windows). I don't want to put them in our repository because I will not be able to update them that way. Some of these are constantly being updated.
My questions
What to do if I need to modify external code?
Currently I have created a folder in my svn repository called extern_hacks, and that is where I put the modified external code. I then link (or copy, on Windows) the files into the external directory structure. This solution is problematic since it is hard to keep track of copying the files, and very hard to update from svn when files are effectively sitting in two repositories (mine for the modified files, and the original project's repository, say SourceForge).
How to manage versions of external dependencies?
I'm interested to hear how others deal with these issues. Thanks.
I keep them in svn, and manage them as vendor branches. Keeping them loose externally makes it very hard to go back to a previous build, or to fix bugs in a previous build (especially if the bug is from a change to the external dependency).
Keeping them in svn has saved me lots of headache, and also allows you to get a new workstation able to work on your codebase quickly.
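A rough outline of the vendor-branch workflow, with placeholder repository URLs and boost as the example:
    # import the pristine upstream release into a vendor area (done once per drop)
    svn import boost-1.36.0 http://svn.example.com/repo/vendor/boost/current -m "import boost 1.36.0"
    # tag the drop so you can always diff against it later
    svn copy http://svn.example.com/repo/vendor/boost/current \
             http://svn.example.com/repo/vendor/boost/1.36.0 -m "tag boost 1.36.0"
    # copy it into the project tree; local fixes are then committed there like any other code
    svn copy http://svn.example.com/repo/vendor/boost/1.36.0 \
             http://svn.example.com/repo/trunk/extern/boost -m "pull boost 1.36.0 into trunk"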
I do not understand why you say
I don't want to put them in our repository because I will not be able to update them that way. Some of these are constantly being updated.
You really need to either:
1. include external dependencies in your source control, periodically update them, and then test, test, test; or
2. coordinate your build process with the updates for the external dependencies.
If your code depends upon something, then you really need to have control over when it gets updated/modified. Coding in a space where these dependencies can get updated at any time is too painful as you're no doubt finding out. I personally prefer option 1.
When I had to do something like this, I added the external source as an external (svn:externals) and then applied a patch to it. The patch contains my modifications to the external source. So, I actually only version control my patches. Most of the time this works, if there are no "dramatic" changes in the external code.
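A minimal sketch of that workflow, with made-up paths and log4cpp as the example:
    # inside ./extern: record local modifications as a patch against a pristine copy
    mkdir -p ~/patches
    diff -ru log4cpp-1.0.orig log4cpp-1.0 > ~/patches/log4cpp-local.patch
    # after pulling a fresh copy of the external code, re-apply the patch
    cd log4cpp-1.0
    patch -p1 < ~/patches/log4cpp-local.patch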
Have you considered Maven? It's a build system that has excellent support for managing dependencies. For each project you can specify the required dependencies in an xml file as part of that project. The external libraries are held in a dependency repository (in our case Artifactory), which is separate from your version control system and can just be a network drive. It also allows managing different versions of projects.
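For illustration, each project's pom.xml declares its dependencies along these lines (the coordinates are made up):
    <!-- pom.xml (excerpt) -->
    <dependencies>
      <dependency>
        <groupId>org.example</groupId>
        <artifactId>some-external-lib</artifactId>
        <version>1.2.3</version>
      </dependency>
    </dependencies>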
I would be careful considering Maven because:
it adds another repository to a system where you already have one: your current version control system;
it (Maven) is based on the only "common version control" every developer has, the file system (which means no metadata or properties attached to the files, and no proper history in terms of who modified what and when)
Now when dealing with third parties, you can consider having them in your version control system, but in a packaged way: that is, in a very compact way, with sources and documentation zipped, in order to have the least possible number of files.
That way, you will manage the deployment of those (many) third-party libraries easily since the number of files to deploy is low.
Plus, having them under source control allows you to make a branch (say, a 'hack' branch), in which you will store the packaged (or zipped) version of the hacked library.
What you can store externally is the unzipped, complete set of files representing those libraries, since there is no real development on them, only the occasional hack: normally, your job is not to develop existing libraries, but to use them (even slightly modified) to implement some features of your project faster.
If you need at some point to compare a hacked version with an official version, you just pull the appropriate 'hacked' version out of svn, unzip it and compare it with the official (and externally stored) version (with WinMerge, for instance).