I have downloaded CommonDomain from GitHub and I can see a directory /src/tests/CommonDomain.AcceptanceTests which contains the file "When_an_aggregate_is_persisted.cs", but there is no project that uses it. The base class "in_the_event_store" also seems to be missing, and there are references to FakeBus, IDomainEvent, SynchronousDispatcher, etc.
CommonDomain is not under any kind of test. There are some old test files that covered it at one point, but they were removed from the solution. The CommonDomain project was originally spike code (a proof of concept) that worked a little too well and made it into production. Interestingly enough, quite a few users rely on it now because it's extremely lightweight and tries to be as unobtrusive as possible.
The future of this project is to merge its essence into the EventStore project, because they are two sides of the same coin. The new iteration, however, will be under a full suite of tests.
Using Matlab for development and Mercurial for version-control, how do I properly version all code for each of my projects, when they share some common classes and functions?
My current scheme addresses this imperfectly; I have a repository for each project and a repository for the common library. This necessitates a manual manipulation protocol, including:
Manually referencing the project name/version in the commit description for the library
Manually updating the changeset for both the project and library, if reverting to a previous state
This has worked reasonably well so far, but it does run the risk of human error in following the protocol, and of unintended consequences of a library modification on another project. The latter can be addressed with hg update -r on the library, but that is error-prone since I have to remember to switch back as I move between projects.
Searching here (and elsewhere), I thought I had found salvation in sub-repository branches, only to discover the practice is basically frowned upon and considered a feature of last resort.
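For reference, the mechanics would be just a one-line .hgsub file in each project (the path and URL below are invented):

# .hgsub in the project repository (hypothetical path and URL)
lib/common = https://hg.example.com/common-library

Mercurial would then pin the library revision in .hgsubstate on every project commit, which seems to be exactly the tight coupling people object to.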
I then found that some folks eschew versioning the library directly with the project, in favor of treating it as a package for the build software to manage. Taking the library off the Matlab path, creating version clones, and telling the builder which one to use for any particular project is a brilliant idea - except that I also use the Matlab interpreter to run and debug my code, and I use the library in various scripts, so I need the library on the Matlab path - which means the builder will automatically pick up whichever library version is on the path.
The only other scheme I can think of is to copy the library dependencies into the Project folder for revision control by the project repository. A change protocol would have to include copying the affected library class/function back to the library folder and typing an appropriate commit message. The trick here would be in manually updating project copies of library files, unless there's a Mercurial command to selectively pull from a foreign repo.
Does anybody have a better, more robust way to manage shared library code among projects in both interpreter and build environments?
Thanks to everyone who commented and to those who took the time to read my question. I am loath to ask questions, since I never think my queries are so novel as to be previously un-asked. But in this case I was finding it hard to come up with the right search terms or phrase; hence the less-than-concise phrasing of my question.
I still don't know if there's a standard approach to managing software configuration for a project when it includes non-project-specific dependencies, but the scheme I've decided to adopt is outlined below. I should say that the development framework I'm using is Matlab, which arguably isn't a terribly good framework for developing a GUI application, but it's the only one I have for now. If I move to .NET or some other framework, some of the issues I'm having might be much more readily resolved.
I decided the ability to version a project in its entirety took precedence and so I copied all of the project-agnostic dependencies (that is, functions and classes that I've developed) from a central library repo to a folder within the project repo.
It just means I have to be disciplined in managing the Matlab search path, as well as the protocol for copying changes made to these dependencies back to the central library - and for polling the library for any changes that originated from another project.
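As a rough sketch, with made-up paths and file names, the round trip looks something like this:

# copy the changed dependency back to the central library repo
cp ProjectX/lib/shared_fn.m ~/central-library/shared_fn.m
cd ~/central-library
hg commit -m "shared_fn: change made while working on ProjectX"
# later, poll the library for changes that originated elsewhere
hg incoming && hg pull -u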
This doesn't seem elegant, but it does make me think more carefully about the functions and interfaces that I put into the library, which should be a good thing.
https://github.com/MobileChromeApps/mobile-chrome-apps allows Chrome Apps to work on mobile.
Their getting-started wiki is really good for getting things working, but the process generates a lot of files with absolute paths, and nothing is said about which files to keep under source control.
At the moment, for each build I'm running $ cca create YourApp --link-to=path/to/manifest.json, which seems just wrong (for example, the config.xml is lost).
TL;DR: www/ is by far the most important. For the rest, just put under source control what you edit, and trust that cca create --link-to= will re-create the project in a good state.
The files generated during cca create fall into two main buckets:
1. Your application: this obviously includes everything in the www/ folder, but also config.xml, merges/ (optional), and hooks/ (optional).
2. cordova/cca build artifacts: this includes platforms/ and plugins/ and, well, anything else :)
You should absolutely version control #1. Many developers don't actually use merges/ or hooks/ (at least at first), and config.xml is auto-generated during cca create using values from your www/manifest.json, so it's fine not to version it unless you have made manual edits. We realize it's common to add <preference> entries there, so we are working on adding support for importing merges/, hooks/, and config.xml using --link-to=path/to/config.xml. Sorry if you need this feature today; please follow this issue to find out when it is resolved in cca.
As for #2, that depends on your preference. If you are making edits directly to the native bits of the platforms, then you should absolutely add those to version control. Or, if you want 100% control over how those bits evolve and you are 100% happy with the way the projects are working for you today, then sure, add them to version control.
However, we (cca and cordova developers) are constantly fixing, evolving, and improving platforms/ and plugins/, and by far the easiest way to "upgrade" your project right now is to just re-create it. We try very hard to be backwards compatible (and to yell loudly when we aren't), so you can be reasonably confident that a project that works today will work at least as well when re-created next week.
Personally, I keep only #1 in version control and re-create projects often (whenever the tools update; hey, it's quick!). It hasn't been an issue yet. I think the cca create --link-to=path-to-app syntax really helps here, and we are considering adding support for a cca update command to make this even easier, eventually.
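For what it's worth, if your project lives in git, the ignore rules for keeping only #1 can be as small as this (just a sketch, not an official recommendation):

# .gitignore for a cca project: keep www/, config.xml, merges/ and hooks/
# and let cca create --link-to= regenerate everything else
platforms/
plugins/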
Finally, one developer working with cca has blogged about his experience, and one of the topics he covers is what to check in. He came to the same conclusion as suggested here.
Good Luck!
Our software project's build process uses various libs (popular as well as specialized ones) and a framework. The libs never change, whereas the framework is updated to a newer version from time to time.
For further development we want to use a version control system. Which of these solutions is the most elegant?
1. The full project, with all libs and the framework, goes into the version control repository. Everyone has exactly the same files, but the repository becomes enormous.
2. Only the project artifacts that actually change over time (the main program) are in the repository. The libs and the framework are stored on a central NAS, so those files can be used by other projects, too.
3. Like 2, but everyone has a copy of the libs and the framework in their local workspace.
To my taste, solution 2 or 3 sounds better, because I think the repository should be as light as possible. What do you recommend?
This is obviously a matter of opinion, but my opinion is that the most important characteristic of version control is the ability to reproduce source at a particular point. That includes libraries. There are downsides (boost is huge, for example), but it guarantees that everyone gets the same code, especially in the case of obsolete or unsupported libraries.
You can have both; structure your source control so that it separates your source and your lib/framework. People can put them in different places locally if they so choose, but everybody will have the same codebases.
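Concretely, a layout along these lines (directory names are only illustrative) keeps checkouts reproducible while still separating what changes from what doesn't:

repo/
  src/         <- your own code, changes constantly
  lib/         <- third-party libs, pinned and rarely touched
  framework/   <- the framework, swapped for an updated version occasionally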
Disks are cheap; time wasted trying to figure out why people aren't all seeing the same thing isn't. The most important thing is that everyone stays in sync.
A bit of context: I am practicing with past editions of the Google Code Jam, trying to solve a lot of these puzzles in Java. For each puzzle I create a specific project in Eclipse.
I also built a little "Sample" solver project containing the usual operations on input/output files, handling of test cases, script files to run the program on a file quickly, and so on. Now I use this framework for every puzzle, simply modifying a core "Solver" class which contains the main algorithm. All the other files stay the same in every project.
My problem is that I am versioning my work, but clearly the only relevant source code to version for each project is this Solver class (and some input/output files). All the rest is duplicated, and I would like it to be easily updated when I modify something in the sample project.
On the other hand, I want to be able to easily checkout a project and get it fully working.
I was thinking of using SVN externals to do this, but external definitions apply only to subdirectories - and my relevant files are in the same folders as the duplicated ones.
SVN ignore does not fulfill my purpose either, because I would still have to manually replicate any change to my sample project across each project.
Do you know of a good way to handle this? Thanks!
Code reuse is typically not accomplished through the version control system, but through polymorphism or libraries. One disadvantage of using the version control system is that you have to do an svn update to pull the new externals from the repository, which strikes me as awkward if you have many projects checked out.
Another thing to consider is the development workflow when modifying the reused code: to test your changes, you will probably want to run them with a particular solver, but to do that you need to svn update - and I am pretty sure you will forget to every once in a while, and wonder why your bugfix has no effect. Therefore I recommend one of the following two approaches:
Polymorphism
Put all your solvers in the same project, making reuse rather trivial. To invoke the right solver, you could do something like:
interface Solver {
    // your methods
}

class Ex1Solver implements Solver {
    // your solution
}

// enclosing class added so the snippet compiles; name it whatever you like
public class Runner {
    public static void main(String[] args) throws Exception {
        // instantiate the requested solver by reflection from its class name
        Solver solver = (Solver) Class.forName(args[0]).newInstance();
        // work with solver
    }
}
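You would then run a puzzle with the solver class name as the argument, e.g. java Runner Ex1Solver (Runner being the name assumed here for the enclosing class; use whatever fits your project).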
Library
Define an Eclipse project for the reused test harness, and a project for each solution. Declare the reused project as a dependency of the solution project (in Eclipse: right-click the project -> Build Path -> Configure Build Path -> Projects -> Add). The test harness would create the solver in the same way as in the polymorphism solution.
You can use svn:externals with files as well (starting with Subversion 1.6), but I would think about a library-based solution, because it sounds like your "framework" is exactly that kind of thing.
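If you do go the externals route anyway, a file external is set per file; here is a sketch with made-up paths (the target has to live in the same repository):

cd my-puzzle-project/src
svn propset svn:externals "^/trunk/sample-solver/src/Harness.java Harness.java" .
svn commit -m "pull Harness.java from the sample project as a file external"
svn update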
The following is a question that was posted on http://dev.eclipse.org in April 2003. The original question is:
Hi all,

In Eclipse I have created several Java projects representing different modules for one web application. I'd like to configure one output folder for all of these projects. Any time I build a subproject, the content of the output folder is deleted, so I lose the classes of all the other subprojects. I think there must be a switch or something like that to tell Eclipse not to clear the content of the output folder when it builds a project, but I just can't find it.

Thanks for your help!
Alex
I am trying to see if I can get a definitive answer to this question. I have tried to find out whether it has already been addressed, and the only answer I was able to find is the following:
Window-->Preferences-->Java-->Compiler-->Build Path
The above answer did not help me much.
Hmm... I think this approach will bring more trouble than it's worth. Sure, at first glance it's a quick and dirty fix for integrating your projects, but you are only pushing the problem forward. It is good practice to keep your modules as isolated from each other as possible; trying to merge the compiled code into a single location works against the way the IDE was designed and will only bring trouble.
I would recommend that you look into Maven to build and package your modules. Referencing a module is then just a matter of adding a declaration in the project that requires it, and you are integrated. Of course you will need to learn it, but it provides a good base of conventions that, when followed, yield almost effortless integration. Plus, reusing modules in another project becomes trivial, so you gain on all fronts.
To answer the other question in the thread, about arranging a tree of related projects: it is possible, though somewhat clumsy. Eclipse will always present projects as a flat list; however, the folders can still be arranged in a tree. Just specify a custom location when creating a project, or import the project from the sub-folder. Again, Maven can help a lot here with its concept of modules.
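As a minimal sketch of what that looks like (group and module names here are invented), a parent POM ties the module tree together; each module then declares the others it needs as ordinary dependencies, so Maven works out the build order and classpaths and no shared output folder is required:

<!-- parent pom.xml aggregating the web application's modules (names are hypothetical) -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.webapp</groupId>
  <artifactId>webapp-parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <module>webapp-core</module>
    <module>webapp-ui</module>
  </modules>
</project>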
As eugener mentioned in his comment, there are plugins for Maven that will make most of these tasks trivial. You may find all you are looking for just by exploring the GUI; that said, reading the Maven documentation will give you good insight into how it works and what it can do for you.