NServiceBus command location

All,
A quick question, if you will, related to the location of commands. We have two hosts: the first will issue commands, the second will receive them.
The hosts exist in different ecosystems/bounded contexts, and therefore I'm trying to determine the best location for the commands.
Do you think the commands project should reside with the sender (in the sender's solution) or with the receiver?
They could be kept entirely independent in a separate solution, but that doesn't solve the location issue, as they're hosted on an internal NuGet instance.
Thoughts?

With either commands or events, we tend to place them outside the consuming projects in a common area and build them separately after the initial development. We have the build generate the NuGet packages and then reference those from the consuming projects. Enabling package restore ensures the consumers' builds work correctly.
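In practice that can be as simple as two extra build steps; the project name, version, feed URL, and API key below are purely illustrative:

    nuget pack MyCompany.Messages.csproj -Build -Properties Configuration=Release
    nuget push MyCompany.Messages.1.0.0.nupkg -Source http://internal-nuget/nuget -ApiKey yourApiKey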

As Adam stated, messages (commands and events) are contracts and should be located in a common project; the two consuming projects have a dependency on the messages they send/publish and handle. You can put the messages in separate projects (and/or namespaces) based on the service that owns them.
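As a minimal sketch of such a contract, assuming NServiceBus's marker interfaces; the assembly, namespace, and command names here are invented for illustration:

    using System;

    namespace MyCompany.Sales.Messages.Commands
    {
        // Lives in its own "messages" assembly, published to the internal
        // NuGet feed; both the sending and the receiving host reference
        // this package rather than each other.
        public class PlaceOrder : NServiceBus.ICommand
        {
            public Guid OrderId { get; set; }
            public string CustomerName { get; set; }
        }
    }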

Can two Java programs running in two different Eclipse workspaces communicate with each other?

I have one common app (let's call it CommonApp) which needs to be up continuously for other, more specific apps (let's say one of them is SpecificApp) to run.
I need to switch between different Git repositories during development and have to perform a series of steps, such as Clean Build and Maven Update, to get the workspace into a runnable/clean state; then, on top of that, every time I have to start CommonApp in each of those repository workspaces.
If I run CommonApp from one repository in Eclipse 1 and run SpecificApp from another repository in Eclipse 2, SpecificApp is not able to reach CommonApp.
The medium of communication between these apps is REST APIs, so I assumed this would work, but it does not.
Is this possible, or is it too wishful? I hope I haven't confused things.

Plugin Architecture - One host with many Pipeline folders

We currently have an application that is a plugin host and thus has the "Pipeline" folder in its application directory. All of the plugins managed through this host relate to a running Windows service, and for the purposes of this example that Windows service basically manages one county.
What we want to achieve is to be able to install multiple instances of this Windows service and to manage each of them through the host application. Our original thought was to have several "Pipeline" folders, one for each county, each managing its own instance of the Windows service. However, I don't see how we are going to do this, since it seems like the "Pipeline" folder naming convention is set in stone and there is no way to dynamically point your application at a specific "Pipeline" folder.
Any thoughts?
Seems like I always dig up the answer after posting...
There is a parameter on the FindAddIns method used to pass the pipeline root. This should work just fine.
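In other words, something like the following minimal sketch, assuming the System.AddIn/MAF hosting API; the host view type and the county folder layout are invented for illustration:

    using System.AddIn.Hosting;
    using System.IO;

    // Each county gets its own pipeline root instead of the default
    // "Pipeline" folder next to the host executable.
    string countyName = "ExampleCounty"; // illustrative
    string pipelineRoot = Path.Combine(@"C:\PluginHost\Counties", countyName, "Pipeline");

    // Rebuild the pipeline cache for that root, then discover the add-ins in it.
    AddInStore.Update(pipelineRoot);
    var tokens = AddInStore.FindAddIns(typeof(IServiceHostView), pipelineRoot);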

NuGet error in TeamCity: The process cannot access the file because it is being used by another process

We're using TeamCity (9.0) as our CI server to build, test, and deploy several applications. Recently we have been seeing occasional (one in every 30-40 builds or so) NuGet (2.8.3) errors such as the following:
[restore] The process cannot access the file 'C:\BuildAgent\work\e32cbd0940f38bf.....\packages\Newtonsoft.Json.5.0.6\Newtonsoft.Json.5.0.6.nupkg' because it is being used by another process.
where the actual package seems to differ from time to time.
We suspected it had something to do with the same package being referenced in multiple projects within the same solution, but I would expect NuGet to handle this correctly by filtering out duplicates instead of attempting to retrieve the same package multiple times and ending up with write locks when restoring the packages to the work folder.
As the first step of each build configuration we have a 'NuGet Installer' step set to 'restore'. I've tried fiddling with its settings (different 'Update modes', '-NoCache', an older NuGet version (2.8.0)), but to no avail.
Has anyone else experienced similar issues and, if so, do you have any suggestions on how to ensure this error does not occur?
Any help would be greatly appreciated!
I had the same issue with Jenkins and fixed it by adding "-DisableParallelProcessing" to the nuget restore command; the final command looks like this:
nuget restore "%WORKSPACE%\Solutions\App\App.sln" -DisableParallelProcessing
Excluding NuGet package files from our anti-malware products resolved this issue for us.
I used the SysInternals Process Explorer utility on the build agents to search for file handles for any *.nupkg files while the builds were running. After several builds I observed the anti-malware products briefly locking these files during the NuGet restore operations. Adding an exclusion to the anti-malware scanning rules prevented these locks as the files were no longer being scanned.
In our environment we use two different anti-malware products on different build agent servers. We encountered this issue with both products.
I also came across this error message.
I debugged the "nuget restore" process, breaking at the point where the .nupkg is copied to the local repository, and then freezing the thread while the file was open for writing. Sure enough, I got the exception in another task, due to the fact that two packages had IDs where one was a prefix of the other. I filed an issue for this: https://nuget.codeplex.com/workitem/4465.
However, this is probably not exactly your problem, since in my case the error occurs when reading the .nupkg of the package with the "long" name, and I don't think there is a package with an ID that is a prefix of Newtonsoft.Json (whereas the other way around is very possible: there are, for instance, Newtonsoft.JsonResult and Newtonsoft.Json.Glimpse).
I installed a newer Newtonsoft.Json and the problem disappeared.
You can turn on the Swabra build feature with the "Locking processes" option (requires handle.exe) and check whether any files are still locked after the build finishes.
If there are no locked files, try running NuGet via a command-line build step instead of the NuGet Installer step. If the issue still reproduces, it most probably means the problem is related to NuGet itself.
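For reference, a quick way to look for such locks by hand, assuming the Sysinternals handle.exe utility is available on the agent:

    handle.exe Newtonsoft.Json

This lists every process holding an open handle whose path contains the given substring, which is how you can catch, for example, an anti-malware scanner briefly holding a .nupkg file open during restore.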

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that needs carrying out when setting up a new service is to implement monitoring for it.
This involves adding the service to one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate such a thing? It seems that the Nagios configuration is laid out so that the files are split up per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue for an automated solution, as I'm setting up the monitoring at the same time as I'm deploying the application to the environment (i.e., the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could argue that you really only need to set up the monitor once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file if you want, or split it up into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a given directory.
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
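Putting those pieces together, a minimal sketch; the paths below are the stock source-install defaults and may differ on your system:

    # in nagios.cfg: pick up every .cfg file below this directory, so a
    # deployment only has to drop a new service definition file in place
    cfg_dir=/usr/local/nagios/etc/services

    # validate the new configuration before touching the live instance
    /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

    # reload by writing a RESTART_PROGRAM command to the external command file
    printf "[%lu] RESTART_PROGRAM\n" $(date +%s) > /usr/local/nagios/var/rw/nagios.cmd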

Sharing a fabfile across multiple projects

Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:
copying the fabfile.py from one Django project to another and
modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).
One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt I have a recreatable virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:
being able to pip install the common tasks defined in the fabfile.py and
having a fab_config file containing the host configuration information for each project and overriding any tasks as needed
Any recommendations on how to increase the DRYness of my Fabric workflow?
I've done some work in this direction with class-based "server definitions" that include connection info and can override methods to do specific tasks in a different way. Then my stock fabfile.py (which never changes) just calls the right method on the server definition object.
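To make that concrete, here is a minimal sketch of the pattern; the module, class, and task names are invented, and it assumes Fabric 1.x's fabric.api:

    # fab_common/servers.py -- the shared, pip-installable part
    from fabric.api import env, sudo

    class ServerDefinition(object):
        host_string = None  # e.g. 'deploy@example.com:22'

        def configure(self):
            # Point Fabric at this server's connection info.
            env.host_string = self.host_string

        def restart_webserver(self):
            sudo('service apache2 restart')

    class NginxServer(ServerDefinition):
        # Override only what differs for this kind of server.
        def restart_webserver(self):
            sudo('service nginx restart')

    # fabfile.py -- the only per-project file, kept in the project's Git repo
    from fab_common.servers import NginxServer

    server = NginxServer()
    server.host_string = 'deploy@example.com:2222'

    def deploy():
        server.configure()
        server.restart_webserver()

The stock tasks never change; each project only supplies a server-definition object with its own connection details and overrides.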