While debugging I can step into external dependencies and our own NuGet packages, provided they were compiled to symbol packages (MyProject.snupkg) and uploaded to the NuGet server. This is how we do it for our own NuGet packages on our private NuGet server.
I discovered the Source Link tool, whose readme states
Source Link enables a great source debugging experience for your users, by adding source control metadata to your built assets
What is the difference between that and the setup I've described above? And is there a benefit to using both at the same time?
(I've found existing SO questions for configuring Source Link, but none about the difference to symbol packages.)
I consider Source Link an evolution of symbol packages.
With the symbol packages, you always had two files:
MyPackage.nupkg (contains the assembly itself)
MyPackage.snupkg (contains the debug symbols)
So basically the debug symbols are shipped as a separate NuGet package (the snupkg file), and the link to the underlying source code is lost.
Microsoft lists some more constraints (see here).
Now since a lot of source code is open anyway (e.g. on GitHub), why not link the NuGet package directly to the sources? That's what Source Link does: it bridges the gap between the referenced assembly and its source code, without relying on an intermediate file (like snupkg).
Combined with deterministic builds, this also lets one verify which Git commit an assembly of a NuGet package was built from.
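For reference, enabling Source Link typically amounts to a few lines in the project file. A sketch assuming the sources are hosted on GitHub (use the matching Microsoft.SourceLink.* package for other hosts):

```xml
<PropertyGroup>
  <!-- Embed the repository URL and commit hash into the PDB -->
  <PublishRepositoryUrl>true</PublishRepositoryUrl>
  <!-- Also embed files not tracked by Git (e.g. generated sources) -->
  <EmbedUntrackedSources>true</EmbedUntrackedSources>
  <!-- You can still produce a portable symbol package alongside the nupkg -->
  <IncludeSymbols>true</IncludeSymbols>
  <SymbolPackageFormat>snupkg</SymbolPackageFormat>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.1.1" PrivateAssets="All"/>
</ItemGroup>
```

As the snupkg properties show, the two mechanisms are not mutually exclusive: the symbol package carries the PDBs, and Source Link metadata inside those PDBs tells the debugger where to fetch the matching sources.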
I am developing an Eclipse plugin which uses 3 native libraries, let's call them lib1.so, lib2.so and lib3.so. The dependency chain among these libraries is lib1.so --> lib2.so --> lib3.so.
I can load lib1.so using the System.loadLibrary() API, but I get an UnsatisfiedLinkError for lib2.so and lib3.so. I resolved this issue by setting the LD_LIBRARY_PATH environment variable to the path where I keep all 3 libraries.
I am not happy with this fix, as I have to set LD_LIBRARY_PATH every time I import these plugin projects on another machine :(
Is there any other way, such that lib1.so can load lib2.so and lib2.so can load lib3.so automatically, with these libraries always kept together in a single folder?
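One common workaround (a sketch; the NativeLoader class and the dependency map are invented for illustration) is to load the libraries yourself in reverse dependency order using System.load() with absolute paths. System.load() does not consult LD_LIBRARY_PATH, and once lib3.so and lib2.so are already mapped into the process, the dynamic linker can resolve lib1.so's references against them:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class NativeLoader {

    // Depth-first walk: a library's dependencies are emitted before the
    // library itself, so the result is a safe load order (deepest first).
    static List<String> loadOrder(Map<String, List<String>> deps, String top) {
        List<String> order = new ArrayList<>();
        visit(top, deps, order, new HashSet<>());
        return order;
    }

    private static void visit(String lib, Map<String, List<String>> deps,
                              List<String> order, Set<String> seen) {
        if (!seen.add(lib)) {
            return; // already loaded/visited
        }
        for (String dep : deps.getOrDefault(lib, List.of())) {
            visit(dep, deps, order, seen);
        }
        order.add(lib);
    }

    // System.load() takes an absolute path, so the folder can live anywhere
    // (e.g. resolved relative to the plugin's install location).
    static void loadAll(File dir, Map<String, List<String>> deps, String top) {
        for (String lib : loadOrder(deps, top)) {
            System.load(new File(dir, lib).getAbsolutePath());
        }
    }
}
```

For the chain in the question, `loadAll(libDir, Map.of("lib1.so", List.of("lib2.so"), "lib2.so", List.of("lib3.so")), "lib1.so")` loads lib3.so, then lib2.so, then lib1.so. An alternative that needs no Java code at all is to relink the libraries with an rpath of `$ORIGIN` (`-Wl,-rpath,'$ORIGIN'`), so each .so finds its siblings in its own folder.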
Is it possible to get Spring Data to scan separate modules for repositories?
I have created a repository in one maven module and wish to access it from another on which it has a dependency.
However I cannot figure out the configuration to tell it to scan in multiple modules/jar files.
In the logs I am seeing multiple references to scanning "core-engine", while the repository that I require sits in "test-model":
DEBUG main - core.io.support.PathMatchingResourcePatternResolver - Searching directory
[<path>\engine\core-engine\target\test-classes\] for files matching pattern
[<path>/engine/core-engine/target/test-classes/**/model/repository/**/*.class]
The project has a number of modules but there are only 2 that should have an impact in this case and they are "core-engine" and "test-model".
"test-model" contains all of the configuration i.e. the repository definitions, the entities and the repository interfaces.
"core-engine" has a dependency on "test-model".
I am using SpringRunner to run my tests and have tried referring to the ContextConfiguration in "test-model" itself or indirectly by importing the repository config xml into a separate "core-engine" config to no avail.
I have tests running within the "test-model" module which use the repositories, my problem is just getting access to these repositories from "core-engine".
--> test-model (maven module)
---->src/main/java
------>com.test.model.domain (various simple Entities)
------>com.test.model.repository (the repository interfaces)
---->src/main/resources
------>META-INF/datapump/dao-jpa-repository.xml
---->src/test/java
------>com.test.model.domain (various simple CRUD tests using the repositories)
---->src/test/resources
------>META-INF/test-context.xml (defines the application context and imports dao-jpa-repository)
dao-jpa-repository.xml contains the repository-scanning line, which is found and testable within the test-model module.
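(For reference, that repository-scanning line is presumably something along these lines; the namespace declarations are the standard Spring Data JPA ones, and the base-package matches the layout above:)

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jpa="http://www.springframework.org/schema/data/jpa"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/data/jpa
         http://www.springframework.org/schema/data/jpa/spring-jpa.xsd">

    <!-- scans this package, in whichever jar it lives, for repository interfaces -->
    <jpa:repositories base-package="com.test.model.repository"/>
</beans>
```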
core-engine has a dependency on test-model.
--> core-engine (maven module)
---->src/main/java
------>com.test.model.inject (classes which attempt to use the repositories defined in test-model)
---->src/test/java
------>com.test.model.inject (tests for the above classes)
---->src/test/resources
------>META-INF/test-context.xml (defines the application context and also imports dao-jpa-repository from the test-model)
From the above, I have a test in core-engine that tries to persist an entity from test-model using its repository. However I cannot get access to the repository (through autowiring or by manually looking it up), as it appears that the repository is not in the context.
If anyone could help I'd appreciate it.
Cheers
UPDATE:
So it turns out that we may have found a bug in Visual Studio 2003 (I know...no surprise). We found out that if the solutions were added to the repository using Visual Studio (using Add Solution to Source Control) everything went fine...go figure.
So we're converting our VSS repository (if it can be called that) to Perforce and we're running into issues with projects included in multiple solutions.
Our repository might look like this...
//Depot
DevMain
Solution1
Project1 (Builds to a DLL)
Solution2 (Has Project1 as a project reference)
Project2
Solution3 (Has Project1 as a project reference)
Project3
When using the integrated source control in Visual Studio for Solution2, it complains that the projects are not under the current solution's folder and suggests we may want to move them. Because multiple solutions reference Project1, we can never organize it so that some solution won't complain...
Is the best practice to just build Project1 to DLL and store it in a Lib folder? Or is there a better way?
Thanks.
We had a similar problem and were approaching it very much like @Toby Allen mentions in his answer, via client specs. However, in time this becomes very painful (setting up a new team member becomes more and more difficult as client specs become more and more convoluted; automation is also much more complicated because things are... "in flux" :-) ).
Eventually, we evolved our strategy to use a directory structure and branching instead. The directory structure is as follows:
//depot
/products
/(product_name)
/doc
/lib
/(3rd_party_libname)
(DLLs)
/(another_3rd_party_libname)
(DLLs)
/src
/Project1
(files, csproj, vbproj, etc)
/Project2
(files, csproj, vbproj, etc)
/Project3
(files, csproj, vbproj, etc)
Solution1.sln (includes src/Project1)
Solution2.sln (includes src/Project1, src/Project2)
Solution3.sln (includes src/Project1, src/Project3)
/(another_product_name)
/doc
/lib
/src
(solutions)
/shared
/(shared_lib_name)
/doc
/lib
/src
(solutions)
/(another_shared_lib_name)
/doc
/lib
/src
(solutions)
Note that the same layout (doc/lib/src/solutions) is repeated throughout the tree. Lib contains "external" libraries - 3rd-party libraries that are included in project references. Src contains a flat list of all projects that are part of a particular product. Solutions are then used to "combine" projects in any number of ways. I think of the src directory as a container with "what is available"; solutions then pick from this container and combine projects as needed.
Libraries that are shared among multiple products go into the shared directory. Once in the shared directory, they are treated as independent from products - they have their own release cycle and are never joined to products as source. Shared libraries are pulled into products by branching the shared library's release assembly/assemblies into the product's lib directory -> from the product's perspective there is no difference between a 3rd-party library and a shared library. This allows us to control which product is using which version of a shared library (when a product wants new features, it has to explicitly branch in a newer release of the shared library, just like it would include a new release of a 3rd-party library, with all the pros and cons that go with it).
In summary, our structure has concept of two "types" of shared libraries:
projects local to a product, used by multiple solutions (included in a flat list of projects in src directory, multiple solutions can reference them)
projects used by multiple products (added to the shared directory, treated as 3rd-party libraries with releases independent from products)
The solution should be to rebind Solution1 to the source control server each time it's added to a new project. (It's under File->Source Control->Change Source Control.)
This should only need to be done once per desktop per solution.
I don't know much about using Visual Studio with Perforce, but you might consider creating workspace views that map Project1 into Solution2, Solution3, etc., where needed.
If I understand correctly, you want the layout of your files on disk to differ from how they are stored in Perforce. If that is the case, and you can't simply redefine the reference within VS, then a cleverly designed client spec can do the trick.
Client: Client_Solution2
Description:
Created by Me.
Root: C:/Development/Solutions
View:
//depot/Devmain/Solution1/... //Client_Solution2/Solution2/Solution1/...
//depot/Devmain/Solution2/... //Client_Solution2/Solution2/...
This will give you a structure where Solution1 is a subdirectory of Solution2.
How do you store third party libraries that you use in your project in your source control?
When would you store binaries in your source control?
When would you store the code in your source control?
Would you ever store both? In what situations would you do this?
(Btw, I'm using .NET but it doesn't really matter for this question)
How: a vendor branch is generally a good approach
When (third parties): to minimize the number of repositories involved: you could add those libraries to a separate external repository (like Maven's), but that means you need access to that extra repository from each of your environments (development - integration - homologation - preproduction - production)
When (code): to manage the complexity of changes, when you know that updates and fixes for the current versions running in production will be needed while new development is in progress.
Why (store both): for deployment reasons: you can manage a complete configuration (the list of elements you need) in one repository and query it wherever and whenever you need it, for:
development (you query what you need to develop and execute your code, including thirdparties needed for compilation/execution)
tests (integration, homologation): you query the exact tags you want to update your testing workspace with
production: you identify exactly what goes into production from one source: your SCM.
For test and production environments, that also means your own product (the packaged result of what you are building) should go into your SCM as well (only the official releases, not the intermediate ones used internally).
If other projects depend on your product, they will build their own projects against your packaged version stored in the SCM, not against source code of yours that they somehow recompiled.
Why is this important?
Because in the end, what will run in production is that packaged version of your product, not your "source code re-compiled". Hence the importance of doing all your tests with the final form of your product, clearly stored and tagged in your SCM.
Martin Lazar raises a legitimate point in his answer
Source control is called "source" control, because it is supposed to control sources.
While that may have been historically true, every current RCS has evolved toward SCM (Source Code Management), which does not just control sources but also manages changes to documents, programs, and other information stored as computer files.
Binaries can then be stored too (even with binary deltas).
Plus, that allows some of those SCMs to offer an S"C"M feature (as in Source Configuration Management).
Such a (configuration-oriented) SCM not only stores any kind of "set of files", but also the relationships (aka dependencies) between those sets, so that you can query one set of files and "pull" every other delivery that the set depends on (to build, to deploy, or to run).
How do you store third party libraries that you use in your project in your source control?
As binary or source or both. Depends on the library.
When would you store binaries in your source control?
For a third-party library for which we don't have the source, for an internal library which we haven't made any changes to, or when the library is too huge to build ourselves.
When would you store the code in your source control?
Say we use an internal library A but have made some bug fixes specific to product X. Then product X's depot will keep the source and build it.
Would you ever store both? In what situations would you do this?
Yes, all the latest binaries are stored in source control.
So the tree looks like this:
product
|-- src
|-- build
|-- lib
|- 3rdparty
|- internal
...
Assuming you are using .Net:
I create a "Libraries" folder in my project and source control that contains any third party assemblies.
My solution then references those assemblies and my build process pulls that folder down to our build server.
Any one pulling your code from source control should be able to compile it without having to hunt down references and assemblies.
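In a csproj, that setup is just a file reference with a hint path pointing into the checked-in folder; a minimal sketch (the assembly name and relative path are placeholders):

```xml
<ItemGroup>
  <Reference Include="ThirdParty.Utils">
    <!-- resolve the assembly from the Libraries folder in source control -->
    <HintPath>..\Libraries\ThirdParty.Utils.dll</HintPath>
  </Reference>
</ItemGroup>
```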
Source control is called "source" control, because it is supposed to control sources.
In Java it's a common pattern to use a version control system to store sources and other resources, like configuration XML files or pictures, and then to use a dependency management tool like Apache Maven, which will store, download and manage your project's dependencies on 3rd-party libraries. Then when you reinstall your OS, Maven can automatically download your dependencies from the central repository (or your own private repositories as well) and store them in a local cache on your disk. You don't even have to know where the cache is :)
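A dependency is declared in the pom.xml and Maven fetches and caches the jar for you; the artifact below is just an example:

```xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.12.0</version>
</dependency>
```

Pinning an exact version like this gives you reproducible builds without a single binary in your repository.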
Maven can be also used with other languages and as far as I know plugins for .net and C/C++ are available, but I haven't used it with anything else than Java.
I don't put 3rd party source or binaries in SC. My rationale was that I would not have to update SC just to update the libraries. I am starting to regret this though. When I have had to recreate the project I find myself running around looking for the lib sources instead of just syncing to SC.
On a recent Java project I switched to using Maven - this was quite nice, since it meant I didn't need to store any third-party jars in a lib/ directory. At compile time Maven would pull in the dependencies. One nice side effect is that the jars have the version number in their filename.
My experience has been to create a "lib" folder and keep all 3rd party binaries in there. I will create a totally separate tree for the Source Code to these third parties if it is available.
One place where this might differ is open-source vs. retail 3rd-party components; with open-source solutions I tend to just include the code in my projects and not check in the binaries.
You don't need to store third party libraries in your source control repository. Those libraries (think of SDL, libcurl, etc.) should always be available on the web.
Just two recommendations:
make sure to state clearly in your code which version of the library you should compile against
be sure that that specific version is always available on the web
Generally speaking, I would do roughly what has been prescribed by other users.
In the case of Subversion, and I admit I can't speak to the inclusion of the feature in the case of other systems, one can use an External to link in a repository maintained elsewhere. In a checkout/export, you'll get a full copy of the source including the updated code from that external repository all in one go.
This was particularly nice for me with PHP/JS development as there is no issue regarding storing binaries. I'd keep an external of the Zend Framework, Doctrine ORM, and jQuery all in a /lib/ folder of the project. Every update would give me a complete, updated copy of -all- the necessary files without any more work than adding that repository URL to an svn property.
If you are using git (which I recommend), and you have the source of the third party library, then storing the library in its own git repository and including it as a submodule is an excellent setup.
http://book.git-scm.com/5_submodules.html
http://git-scm.com/docs/git-submodule
If the source library is also using git then you can clone their library and push it to your own server so you'll never lose it.
Git submodules allow you to specify which revision of a library is required for a project, which is great for maintaining compatibility.
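A minimal sketch of the workflow (the URLs and paths are placeholders):

```shell
# inside your project: pin a specific revision of the library
git submodule add https://example.com/somelib.git lib/somelib
git commit -m "Add somelib as a submodule"

# after a fresh clone of the project: fetch the pinned revisions
git submodule update --init --recursive
```

The superproject records the submodule's exact commit, so every clone checks out the same library revision until you deliberately update it.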
Conceptually you need to store at least the binaries (and the headers if you do C/C++)
That's usually the only way with third party libraries where you don't have the source.
If you have the source you can opt to store the source and build the third party libraries in your build process.
You should be able to install a fresh OS, get your sources from source control, build and run. So yes, you should put them in source control.
It depends on how big they are. When binaries or installers are too big, it can cause havoc for remote users. The upside of storing binaries and installers is that everything a developer needs to get up and running is in source control and the versions are correct. If you have a separate installation location, versions can get messed up. So, in general I like to store small or moderate binaries in source control, but larger ones I leave out.
Edit: Oh, and I call mine "BinRef" :)
When would you store binaries in your source control?
I store binaries in source control when I want the ability to quickly revert to an old version of an application. There are a number of reasons I would do this, including the following:
Application testing in an environment that exactly matches production is not always possible.
The previous programmer may not have used source control or may have used it incorrectly. So it can be difficult to be sure the source code you are changing is the version that matches what the user is working with.
On source code and compiled libraries (most likely standalone):
If I don't use the third-party components (compiled libraries) or their provided source to build in-house software components, I simply take them off the shelf and install them as prescribed (which might include a compile - PL/SQL code, for example). I would not install them in a dependency-management repo or any source control system, for the simple reason that I don't want them incorporated, accidentally or otherwise, in components I'm building, and I don't want to track them in any software cycle. This is an asset (a software asset) that should be tracked with other tools. If I'm not using them for software development, I don't and shouldn't see them in the tools I use for software development.
If I depend on them for building my own software, I would store them all day long.