So I would like to use NuGet to manage the various external libraries we use for a specific project my team and I are working on. Up to this point, I have placed my .js library files in the /Scripts directory of my web solution (ASP.NET MVC 2) and referenced those. Of course, this was manual and was annoying to manage during upgrades, etc.
Now that I am using NuGet, I realize that the entire goal of NuGet is to make this fairly painless. In addition, it appears that I shouldn't have to check my packages into my repository (AKA I don't need to manage my external libraries anymore). However, when I grab jQuery (for example) from NuGet, it places its specific files in the /Scripts directory of my project.
Here's where I get confused: what, if anything, should I check into source control at this point? Do I still check in the /Scripts directory?
In addition, if someone else is working on this project and checks out the solution from source control, are the packages automatically downloaded (assuming the solution comes with a valid packages.config)?
I'm just trying to clarify a couple points before we start using NuGet full-time.
There are two scenarios for NuGet vs VCS: to check-in or not to check-in, that's the question.
Both are valid in my opinion, but when using TFS as VCS, I'd definitely go for a no-checkin policy for NuGet packages.
That being said, even when using a no-checkin policy for NuGet packages, I'd still check in the content changes that those NuGet packages have made to my projects. The \Scripts folder would be checked in in its entirety (not selectively, not ignored).
The no-checkin policy for packages to me means: not checking in the \Packages folder (cloak it, ignore it), except for the \Packages\repositories.config file.
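In Git, for example, that policy boils down to an ignore file along these lines (a .tfignore file in a TFS 2012+ local workspace uses roughly the same pattern syntax; take this as a sketch, your folder casing may differ):

packages/*
!packages/repositories.config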
As such, you are effectively not committing any NuGet packages, and when using Enable-PackageRestore from NuGetPowerTools (this will be built into NuGet v1.6, just around the corner), any machine that checks out the code and builds will fetch all required NuGet dependencies in a pre-build step.
This is true for local development machines as well as build servers, as long as Enable-PackageRestore is enabled in your solution and points to the correct NuGet repositories (local, internal, external).
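For reference, enabling it is a one-time action in the Package Manager Console (these are the NuGetPowerTools commands as they existed before NuGet 1.6 made the feature built-in):

PM> Install-Package NuGetPowerTools
PM> Enable-PackageRestore

This adds a .nuget folder to the solution and wires the projects up to restore missing packages before each build.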
If you think about it, when installing a NuGet package that only adds references to some binaries, you'd already be doing the same thing in a no-checkin scenario: you would not commit the \Packages folder's subfolders, but you'd still commit the project changes (the added reference).
I'd say, be consistent (for any type of package), whether it contains binaries only, content only, or a mix. Do not commit the packages themselves; do commit the changes to your sources (if only to avoid the hassle of looking up what changed content-wise).
NuGet, like Nexus, is an artifact repository (an artifact being any type of deliverable, including potentially large binaries).
The side effect is that you do not store in a VCS (Version Control System) elements that:
wouldn't benefit from VCS features (branching, merging)
would increase significantly the size of the VCS repository (no delta or weak delta storage)
would be quite hard to remove from a VCS repository (designed primarily to keep the history)
But the goal is for you to declare what you need (and let NuGet fetch it for you) instead of storing it yourself.
So you can version /Scripts as a placeholder, but you no longer need to version any of its content, which is now fetched automatically.
Should we remove the .che folder from Git when we use Web IDE Full-Stack?
The rule of thumb is to never include IDE-specific files into a Git repository. There are several articles and blogs on this and I would point you to this one: IDE Project Files In Version Control - Yes or No? Of Course, Not!
The main drawbacks of having IDE specific files checked-in are the following:
Each IDE would add its own files. E.g. if some of your developers decided to use VSCode, then you would also have a .vscode folder in there.
The file structure may be different depending on the IDE version (if you use the SAP Web IDE Cloud, this should not be an issue, but it might be if one developer is using the local WebIDE).
The files change very frequently and lead to merge conflicts. E.g. if you do a deploy and also one of your colleagues does a deploy, then you will have a conflict when you want to merge your branch with his (assuming that you work on parallel branches).
The files may contain environment-specific settings. E.g. the name of the project folder, which may actually be different for each developer.
The only clear advantage is that setting up the project after a clone operation might be marginally faster (i.e. the developer doing the clone might otherwise have to adjust some settings locally on his copy).
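If you want to keep those folders out of the repository in the first place, a couple of ignore entries are usually all it takes (the folder names are the ones mentioned in this question and answer; add whichever IDEs your team actually uses):

.che/
.vscode/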
What are the benefits for enabling the Nuget Package Restore feature in my Visual Studio solution?
The "Using NuGet without committing packages to source control" page in the Nuget Docs site suggests one reason:
When using a DVCS like Mercurial or Git, committing binaries can grow the repository size like crazy over time, making cloning more and more painful
Do you find this reason compelling enough to enable the feature? Are there other reasons?
There are more benefits of not having your NuGet package binaries in source control which are explained on another page, which I have reproduced here:
Distributed version control systems (DVCS) include every version of every file within the repository, and binary files that are updated frequently can lead to significant repository bloat and more time required to clone the repository.
With the packages included in the repository, team members may add references directly to package contents on disk rather than referencing packages through NuGet.
It becomes harder to "clean" your solution of any unused package folders, as you need to ensure you don't delete any package folders still in use.
These benefits are still related to not having the NuGet packages in version control. So the reason you highlighted is still valid.
Also note that the "Enable package restore" feature, which uses MSBuild-based package restore, has been deprecated by the NuGet team in later versions of NuGet because it has problems with NuGet packages that include custom MSBuild targets. Instead, the NuGet team recommends the Visual Studio-based automatic package restore, which restores NuGet packages just before you build the project inside Visual Studio.
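On a build server or from the command line, the equivalent is the restore command (available from NuGet 2.7 onwards; the solution name here is just a placeholder):

nuget.exe restore MySolution.sln

Visual Studio runs the same restore automatically before a build when automatic package restore is enabled in its NuGet settings.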
Also, when NuGet was first released, the NuGet team expected you to put the binaries in source control so you did not have to rely on NuGet to build your project: a developer could clone the repository and build it straight away without having to use NuGet. However, you now have a choice.
When you are working with a large version control repository, such as MonoDevelop's, including binaries can make the repository very large. For smaller repositories, including the NuGet packages is not really a problem.
I have multiple projects with common in-house JavaScript library dependencies. I want to share these dependencies across multiple projects.
Unfortunately we are using TFS. I'd like something like svn:externals, whereby I can link a particular folder to a different folder elsewhere in the source control tree. So I want to have
ProjectA
    app
        js
            lib [should link to SharedProject/lib]
ProjectB
    app
        js
            lib [should link to SharedProject/lib]
SharedProject
    lib
        library1.js
        library2.js
I don't want to link across workspaces...I don't want a crazy custom per-developer setup. I just want developers to check out one project, and it knows "Oh, there are shared resources in this other project. I'll get those too." I don't care about it always getting a specific version; I'm just tired of copying files across projects.
Is this remotely possible in TFS? I have Googled and found nothing conclusive.
Just branch the shared project from its original location to where you want it to be.
Where you would have switched to the next revision with svn:externals, simply merge the changes up to that revision into the branched copy.
(frankly I prefer this way even on SVN)
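With TFVC, a rough command-line sketch of that approach (the server paths are hypothetical):

tf branch $/Team/SharedProject/lib $/Team/ProjectA/app/js/lib
tf checkin

Then, whenever the shared library moves forward and you want those changes in ProjectA:

tf merge $/Team/SharedProject/lib $/Team/ProjectA/app/js/lib /recursive
tf checkin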
Using external links in source control is not a good idea; it creates a lot of side effects. You can package your library with NuGet, publish it to a private NuGet server, and then consume the published packages in all the dependent projects.
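A minimal sketch of that flow, assuming you have written a .nuspec for the JavaScript library and run an internal feed (the names and URL are placeholders):

nuget pack MyJsLibrary.nuspec
nuget push MyJsLibrary.1.0.0.nupkg -Source http://nuget.internal.example.com/nuget -ApiKey <your-api-key>

Each dependent project then pulls the library in like any other dependency (Install-Package MyJsLibrary, and Update-Package MyJsLibrary when a new version is published).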
I am looking at putting a code base that runs several websites into version control. There are several instances of this code base running websites on different virtual servers.
The problem I'm grappling with is that each of these separate instances of more or less the same code have sub-directories with site-specific functions. But it seems that version control systems want to control the entire directory hierarchy.
For instance, each instance has the directory
/www/smarty/libs/plugins/
Where you'll find site-specific functions for smarty. When we are ready to put it into version control, the folder /www would be the root.
So one option is to have all the site-specific functions go out to all sites. I don't see a problem with that in and of itself, but it seems somehow architecturally 'wrong'. There would be a bunch of files that only belong to one deployment.
Another option is to have a separate repository for each site's specific files within the code base. But that sounds like it could quickly become a nightmare when trying to get new sites deployed properly.
What's the best way to do this? The version control system we're looking at is subversion.
Generally, source control systems should be used to control source. They are not at their best completely controlling file hierarchies, permissions, and other related things. These are best left to deployment configuration.
How about having each of the projects and directories you need represented once in the version control system? Then, in a separate directory (perhaps called /build/), keep the various configuration layouts. You might have an Ant file that builds each site, or use Maven. Or you can use tools like Capistrano or Fabric for more control over each deployment.
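As a rough illustration of that idea, the per-site build could be as simple as a shell script that overlays the site-specific pieces onto the common code base (all paths here are hypothetical):

#!/bin/sh
# build-site.sh <site-name>: assemble a deployable tree for one site
SITE=$1
rm -rf build/$SITE
mkdir -p build/$SITE
cp -R www/. build/$SITE/                                      # common code base
cp -R sites/$SITE/plugins/. build/$SITE/smarty/libs/plugins/  # site-specific Smarty plugins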
The tools are made to be flexible (generally), so here are some suggestions:
Most VCSs allow you to ignore files and directories through some mechanism (e.g. Mercurial's .hgignore file), so you should be able to target what you want/should control versus what you shouldn't.
Separate the files/directories into a common resource project and site-specific projects, and then use a build system to integrate them into a deployable package. The build system can be as simple as a shell script or a more sophisticated framework. If it's a really simple integration, the VCS may have some basic features for merging bases (e.g. Mercurial subrepositories).
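For the first suggestion, a Mercurial ignore file keeping the site-specific plugin folder out of the common repository could look like this (the path is simply the one from the question):

syntax: glob
www/smarty/libs/plugins/*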
With subversion, you could have a bunch of repositories:
www be in a general repository
plugins each be in a site-specific repository
Then have nested working copies:
svn co http://www_repo www
cd www/smarty/libs
svn co http://foo_plugins_repo plugins
Tip: add plugins to svn:ignore property of www/smarty/libs
svn propset svn:ignore "plugins" www/smarty/libs
You could certainly do that with git too (through .gitignore), and probably with other version control systems but I don't know them.
(Alternatively, you could skip the nested working copy part (which can freak some people out) and check things out side by side, using a symlink in place of smarty/libs/plugins; the svn:ignore tip still applies.)
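A sketch of that side-by-side variant, reusing the hypothetical repository URLs from above:

svn co http://www_repo www
svn co http://foo_plugins_repo foo_plugins
ln -s ../../../foo_plugins www/smarty/libs/plugins   # the link target resolves relative to www/smarty/libs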
You're missing a "build" step, which would take the source in source control and create the deployment bundles for the different sites. Only one source package is needed; different build configurations create the different deployment packages. Don't try to put the deployment set directly into source control, it is not the source!
I believe the best thing to do would be to create a top level directory in your repository for each site (Site-01, Site-02, etc) and inside those directories put the source tree. Then you can checkout the projects separately. I think it's acceptable and somewhat standard to use the same repository for all the projects your company is involved with.
My terminology might be off kilter, but the fundamental idea is sound, I believe.
Suppose I have the following (desired) folder structure:
CommonProject
Project#1
    CommonProject (link)
Project#2
    CommonProject (link)
Here CommonProject is the location of the source belonging to that project, and CommonProject (link) is merely a soft link to the main location. If we imagine this as a tree view in a visual client, expanding Project#1 shows CommonProject there as a subdirectory, even though the files are not actually stored there.
The purpose of this is to enable the following behaviour:
When I check out Project#1 I get the files associated with that project as well as a subfolder CommonProject containing all of its files (as if Project#1 contained a copy of the files in the version control repository). Now if I were to modify CommonProject's files inside Project#1 and submit my changes to the repository, the changes would go into the CommonProject location (no file is actually stored under Project#1 in the repository). And if I were to sync Project#2, since it also contains a symlink to CommonProject, it would now get my updates.
Essentially the duplication of files only exists on my machine, but in the repository there is only one version of CommonProject.
I know Perforce can't do this without juggling 3 specs, which is very complicated and error prone, especially when a lot of people do it. Is there a source control system out there that can do this? (A pointer to some docs on how it can be done is a plus.)
Thank you.
Subversion can directly store symlinks in the repository. This only works for operating systems that support symlinks though, as svn just stores the symlink the same way it would with any other file.
I think what you really want is to link to separate projects, though. Subversion supports this through externals and Git through submodules. Another alternative is to manage this sort of thing within your build process, so that some static resources are gathered when you initialize the build. Generally, updating a utilities library that changes often is going to cause stability problems, so you can do this manually (or with clever scripts) when you need to.
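For illustration, setting up either mechanism looks roughly like this (repository URLs and project names are placeholders):

cd Project1
svn propset svn:externals "CommonProject http://svn.example.com/repo/CommonProject/trunk" .
svn commit -m "Pull in CommonProject as an external"
svn update

Or the Git equivalent, run from inside the consuming repository:

git submodule add https://example.com/CommonProject.git CommonProject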
You'd probably be much better off just storing the projects in a flat directory (one directory per project, all at the same level), and using whatever build system or IDE you have to link everything together.