Why does NuGet put packages at the "solution level"?

When you add a NuGet package to a project, NuGet puts the assemblies in a /packages folder at the solution level.
I know that there are ways to change this, but I'm wondering why this is the default location, as it seems very unhelpful for these reasons:
1) If you have a project that is part of multiple solutions, the /packages folder won't necessarily be where the project expects it.
2) You are expected to check it into source control manually for other team members, which is much less convenient than if it were part of the project that needs it.
3) If you move the project somewhere else on the file system or to a different machine that doesn't have the full code base, it won't find the /packages folder where it expects to.
It seems all of these would be resolved if NuGet just used a /packages folder inside the project, not the solution. And that seems like a much more logical place to put packages that the project relies on anyway.
So... I'm assuming that there were/are some good reasons for doing it at the solution level, and I'm hoping someone can enlighten me.

You should have a read of this post, which explains how to use NuGet without committing packages to your source control, and as a side effect solves points 1 and 3 of your question: http://blog.davidebbo.com/2011/03/using-nuget-without-committing-packages.html

I think it's to save disk space. If you had a large solution with 50 projects and you used a package in every one of those, you would end up with 50 copies of that package, binaries and all. Whereas keeping them at solution level is far more efficient in that respect.
In terms of source control, you shouldn't be putting your actual packages folder in there. Just add the packages.config file and either do what David Ebbo suggests in the blog post mentioned by mathieu, or create a simple batch file to download all your packages based on the packages.config files it can find.
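Such a batch file can be very short; here is a minimal sketch, assuming nuget.exe is on the PATH and the script sits next to the .sln file:

@echo off
rem Restore every packages.config found below the solution folder
rem into the solution-level packages folder.
for /r %%f in (packages.config) do (
    if exist "%%f" nuget install "%%f" -OutputDirectory "%~dp0packages"
)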
It's not much effort to create your own company NuGet feed, so you can keep your private packages in there.

Related

How to create several flash application sharing common codebase in FlashDevelop/ActionScript 3.0?

Situation:
I need several swf/exe output files compiled in FlashDevelop from several projects. More than 60% of the ActionScript 3.0 source is common to all projects; the rest is project-specific. How can I organize that in FlashDevelop? I want a "one click to build all" setup without duplicating the common codebase (so when I need to fix something I do not need to copy-paste the fix into several files).
All sources are under development and will change very often.
A straightforward solution is to make an external classpath, for instance:
c:\dev\shared_src\
c:\dev\project1\
c:\dev\project2\
Then configure each project:
Project Properties > Classpath
Add Classpath > select '../shared_src'
PS: of course you should keep everything under source control.
Using svn:externals you could structure your repository in such a way that the commom parts are stored just once in the source control system, so changes made can be synchronised with just a single commit and update cycle.
For example, imagine that you have ^/ProjectA and ^/ProjectB, each of which requires ^/Common as a subdirectory.
Using svn:externals, pull ^/Common into both projects.
The exact nature of doing this will depend on the version of svn you use, and any client you use (such as TortoiseSvn). Refer to the relevant edition of the svn book for specifics.
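For example, with a Subversion 1.5+ command-line client it boils down to something like this (the repository layout is hypothetical):

cd ProjectA
svn propset svn:externals "^/Common Common" .
svn commit -m "Pull Common in as an external"
svn update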
The ease of implementing this will depend quite a lot on how separate the common code currently is in your application. Pulling in directories as directories is much more practical than trying to pull files into an existing directory, and unfortunately wildcards for file paths are not supported.
However, based on your description of your aim, this is the most straightforward solution I can imagine.
Hope this helps.

Should I put my output files in source control?

I've been asked to put every single file in my project under source control, including the database file (not the schema, the complete file).
This seems wrong to me, but I can't explain why. Every resource I find about source control tells me not to put generated output files in a source control system, and I understand the reasoning: they're not "source" files.
However, I've been presented with the following reasoning:
Who cares? We have plenty of bandwidth.
I don't mind having to resolve a conflict each time I get the latest revision, it's just one click
It's so much more convenient than having to think about good ignore files
Also, if I have to add an external DLL file to the bin folder now, I can't forget to check it into source control, since the bin folder is no longer ignored.
The simple solution for the last bullet point is to add the file to a libraries folder and reference it from the project.
Please explain if and why putting generated output files under source control is wrong.
You haven't explained what "the database file" is.
I would certainly include 3rd party libraries in source control, as they're necessary for the build and it's good to have a way of reproducing a build at a later time with the library versions you used at that particular moment. But yes, those libraries should be included from a "libraries" folder rather than the output directory.
I wouldn't generally include my own libraries built from sources elsewhere in the same repository, although I have been in situations where that's been worth doing: where some projects didn't use the "latest and greatest" version of a common library but only updated it occasionally.
The most important practical argument I'd give against including everything, in a world where disk, processor and network are considered free and instantaneous, is that it makes it harder to tell what really changed for any given commit. It's easier to look down a list of 3 source files than 3 source files and 150 binaries from the obj/bin directories.
Generated output files (in general) are "dangerous" in a VCS because:
what you need to version is how to regenerate them: the day you actually need to update them, chances are you won't remember how to do it
they can contain private generated settings which make them work on the committer's desktop, but not on a client's ("works on my machine" TM syndrome)
some generated files are not easily stored as deltas (binaries especially), making them consume a lot of space (and the topic of cleaning up that space will come up someday...)
External libraries are not generated directly by your project, and can be put in a VCS, although external repositories like a public Maven repo are better at this kind of management.
Do we also put in compiled object files such as class files, executables, and DLLs built from our source? What about when we're doing serious volume testing and that database becomes many gigabytes or terabytes in size?
The clue is in the name: it's Source Code Management System.
I can understand the simplicity of putting everything in; it makes it less likely that a developer forgets some important file. But if you're doing regular automated builds, surely that gets picked up anyway?
I think the key phrase is here:
It's so much more convenient than having to think about good ignore files
Are you explicitly forbidden from having good ignore files? My guess is that you are already excluding .exe and .class (or whatever) files. Suppose you did take the trouble to exclude your database: would that be a problem? Why? It's a conscious action that you are choosing to take for the common good. In Eclipse it's a couple of seconds' work to add a new file type to the workspace's CVS ignore rules for all projects.
A rule of "No Ignore Files" is almost self-evidently absurd. Once you have the freedom to have some ignore files, why not just use them intelligently to exclude the DB? Who is inconvenienced? Only yourself, if anyone, and you're prepared to do the extra work.
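For instance, a per-directory .cvsignore as short as this (the file names are made up) keeps the generated noise out while everything that matters stays versioned:

*.exe
*.class
bin
mydatabase.mdf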

Is there a way to automate importing projects into Eclipse?

For my current project, every time I set up a new workspace, I need to import hundreds of existing projects scattered in 20+ different directories. Is there a way to automate this step in Eclipse?
These projects are all checked into ClearCase.
This answer shows how to import an arbitrary set of projects into Eclipse using a custom plugin.
If I understand your question correctly, you would simply need to specify the paths of all the projects to import in the newprojects.txt file in the workspace root. You may want to remove the part that deletes existing projects though.
Could you import them all into an SCM and then check them out all at once? You might try this as an experiment using CVS (not because you want to start using CVS in 2009, but because it has the best Eclipse support). If CVS can't do it, the others probably can't either.
For snapshot views, we have a "template" workspace which references the .project and .classpath files in a "standard" way:
c:\ccviews\projectA\vob1\path\...
c:\ccviews\projectB\vob1\path\...
c:\ccviews\projectC\vob2\path\...
So by copying that workspace, we are able to quickly setup the projects for a new member of the team.
Each colleague will define their own snapshot views with:
a unique name (colleague1_projectA_snap, colleague1_projectB_snap, ...)
the same root directory for each view referring to a given project (c:\ccviews\projectA for colleague1_projectA_snap, colleague2_projectA_snap, colleague3_projectA_snap, ...)
Since a snapshot view can be located anywhere you like on your disk, you can define a standard path and scale that to a large number of snapshot views.
Of course, that would not be possible with dynamic views, since their paths would be:
m:\aUniqueName\vob1\path
You could ask each user to associate a view with a drive letter, but that does not scale to a large number of views.
Anyway, dynamic views are great for accessing and consulting data, not for compilation (the time needed to access any large jar or dll through the network is just too high).
Eclipse has the concept of project sets, but I'm pretty sure that's tied to using CVS. My team used this feature and it's how we shared the set of projects between us.
Two other alternatives I know of:
Buckminster
It's an Eclipse project which does component assembly, and one part of that is projects. Documentation was a bit crappy last time I played with it, but it does work. No idea if they have support for ClearCase, though it is extensible.
Jazz
Costs money and is also built on Eclipse. Covers similar ground to Buckminster but goes a whole lot further in team-orientated stuff.
I have created some scripts to do this for SVN. Currently the scripts are run from Vagrant, but you could run them standalone. The process for ClearCase should be similar.
See the answer here, which provides links to the source code: https://stackoverflow.com/a/21229397/1033422

Tool to list all SourceSafe link files

My client is migrating from SourceSafe to ClearCase. They need to list all the link files in the SourceSafe database so the links can be carried over to ClearCase, as apparently all the source must be checked into ClearCase on day 1, losing any existing links.
Are there any tools for creating this report, or perhaps even doing the full import into ClearCase?
My plan is to write a PowerShell script to recurse the SourceSafe folders, finding links using COM.
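For reference, the skeleton I have in mind looks roughly like this (an untested sketch: the server path and credentials are placeholders, and the ProgID and the Open/Items/Links/Type/Spec members are my reading of the VSS automation API):

# list-links.ps1 -- walk the VSS tree and print files shared into >1 place
$vss = New-Object -ComObject "SourceSafe"
$vss.Open("\\server\vss\srcsafe.ini", "builduser", "")

function Find-SharedFiles($item) {
    if ($item.Type -eq 0) {
        # 0 = project (folder): recurse into children
        foreach ($child in $item.Items($false)) { Find-SharedFiles $child }
    }
    elseif ($item.Links.Count -gt 1) {
        # a file living in more than one project is a share
        $item.Spec
    }
}

Find-SharedFiles $vss.VSSItem('$/')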
Thanks.
As I have mentioned in this question, clearexport_ssafe should be used for imports from SourceSafe to ClearCase.
However, the documentation for that tool explicitly mentions:
Shares. There is no feature in Rational ClearCase equivalent to a Visual SourceSafe share. clearexport_ssafe does not preserve shares as hard links during conversion. Instead, shares become separate elements
So your script would need to list all links, and create soft links between their initial directory and the newly created separate element.
But I believe you may want to consider another organization for the target ClearCase repository, one in which shared files are no longer directly used, as illustrated by this answer (for an SVN repository in this instance):
We have eliminated all of our linked files. All class files that were previously linked have been placed into class libraries which are shared to our other projects as shared project references in the solution. So in essence you share libraries, not class files.
There was a bit of an adjustment process getting used to this, but I haven't missed links since then. It really does promote a better design practice by having your code setup like this.
I work mainly with UCM, and all those "shares" are natural candidates for UCM components, with UCM baselines to refer to their different versions; you can then make your own "configuration" (list of labels) in order to select the different components you need, making them easily reusable across projects.
As VonC mentioned, the import from VSS to ClearCase is truly atrocious, as:
The export/import takes forever to complete, so much so that we opened a PMR against IBM for it (that didn't help, btw)
The SourceSafe shares are transformed into files, which creates duplicates all over the place (the horror!)
I work on ClearCase UCM myself, and we took the same decision as you (which, in my 10 years of experience in CM, is ALWAYS the best decision): leave the history behind for reference and import at most a couple of versions on top of each other, by hand (like current in development; current in test; current in live).
The way we solved the shares problem is as follows:
The "shares" were isolated from the source tree, to be imported independently from the other sources
The other sources were imported (without the history and without the shares) from scratch, say into a component called MAIN_SRC
The shares were imported (without the history) from scratch, say into a component called SHARE_SRC
A project was created containing both components: MAIN_SRC and SHARE_SRC.
Now, the problem is not solved yet, because your shares live beside your main source code while your IDE (e.g. Visual Studio) fully expects them to be in the same folders they were in before (i.e. in Visual Studio all your projects become broken if you don't solve this issue, and all the files would need to be relinked from within Visual Studio itself, etc. A lot of work).
This is resolved by using ClearCase VOB symbolic links:
Say that in MAIN_SRC you need to use a file called mySharedFile which lives in SHARE_SRC.
From within the folder that needs to use mySharedFile, use the command-line interface and run:
cleartool ln -s ..\..\SHARE_SRC\(myPath)\mySharedFile .
You need as many ..\.. as necessary to go up to the component folder level in ClearCase, and then down following your path (myPath) in the SHARE_SRC component folder.
Remember the ClearCase path is composed of:
M:\View_name\VOB_name\Component_name\Your first level of files and folders
(VOB_name\Component_name is the "root" of the component, except if you have a single-component VOB, in which case VOB_name\Component_name becomes just VOB_name)
The easiest way is to have a mapping of all the VOB symbolic links that need to be created, and to put all the necessary "cleartool ln -s" command lines in a script to run once, as sketched below.
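Such a script could be as simple as this (a sketch only: the view, VOB and path names are placeholders, and each parent directory must be checked out before cleartool will create a link in it):

@echo off
cd /d M:\myView\myVOB\MAIN_SRC\app\src
cleartool co -nc .
cleartool ln -s ..\..\SHARE_SRC\utils\mySharedFile .
cleartool ci -nc .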
After that, you should be fine, and your IDE will think the sources are where they used to be.
Cheers,
Thomas

Version control of deliverables

We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up-to-date environment to build and test in. Due to the nature of the project, updates must be done often and on demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get the latest before testing and check in updated DLLs after testing.
It works fine, but a version control client has a lot of features which don't make sense for us, and people occasionally get confused.
Are there any tools better suited for the task? Or may be a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project; it's more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those plugin modules work nicely with recent versions of each other and the core. A centralised build, as was suggested, was considered initially, but it's not an option.
I'd probably take a look at rsync.
Just create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart in deciding what part of files need to be transferred, so it'll be very fast even when large files are involved.
What rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.
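A minimal wrapper might look like this (a sketch: the daemon module name and local path are assumptions, using cwRsync-style paths on Windows):

@echo off
rem Pull the latest deliverables; rsync only transfers changed parts of files,
rem and --delete removes local files that no longer exist on the server.
rsync -avz --delete rsync://buildserver/deliverables/ /cygdrive/c/dev/deliverables/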
Another option is unison
You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.
Obviously that doesn't help with keeping your local files in sync, but I think you have bigger problems with your process.
Building the project should be a centralized process in order to allow for better control; otherwise your solution will be chaos in the long run. Anyway, here is what I'd do.
Create the usual repositories for source files, resources, documentation, etc. for each project.
Create a repository for resources. There will be the latest binary versions for each project as well as any required resources, files, etc. Keep a good folder structure for each project so developers can "reference" the files directly.
Create a repository for final builds, which will hold the actual stable releases. This will get the stable files, generated in an automatic way (if possible) from the checked-in sources. This will hold the real product, the real version for integration testing and so on.
While far from being perfect, you'll be able to define well-established protocols. Check in your latest DLL here; generate the "real" version from the latest source there.
What about embedding a 'what' string in the executables and libraries? Then you can synchronise the desired list of versions with a manifest.
We tend to use CVS id strings as a part of the what string.
const char cvsid[] = "@(#)INETOPS_filter_ip_$Revision: 1.9 $";
Entering the command
what filter_ip | grep INETOPS
returns
INETOPS_filter_ip_$Revision: 1.9 $
We do this for all deliverables so we can see if the versions in a bundle of libraries and executables match the list in an associated manifest.
HTH.
cheers,
Rob
Subversion handles binary files really well, is pretty fast, and scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.
You could set up a folder that's checked out from Subversion with all your binary files (that all developers can commit and update to), then just type "svn update" at the command line, or use TortoiseSVN: right-click on the folder, click "SVN Update" and it'll update all the files and tell you what's changed.
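The day-to-day usage then boils down to two commands (the repository URL here is made up): check out once, update as needed:

svn checkout http://server/svn/deliverables/trunk C:\dev\deliverables
svn update C:\dev\deliverables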