Using ClickOnce for multiple deployment configurations

I have a ClickOnce deployment with web service endpoints and strings in Settings.settings that need to be changed per environment. Right now I only have to deal with one localized development version built in house and one version that I push out to the customer for their UAT. Now I need four versions of this application: in-house dev, in-house test, customer test, and customer production. I also need these four deployments to be installable alongside each other.
I have discovered that I can change the name (i.e. APP -- INTERNAL -- TEST, APP -- INTERNAL -- DEV, APP -- CUST -- TEST, APP -- CUST -- PROD) and that will allow them all to be installed alongside each other. But having to remember every place a string needs to be changed in each build's Settings.settings, swapping the endpoints, changing the application names, changing the certificate, and changing the deploy address and the URL for each different build is time consuming and cumbersome.
Is there a way to just say "publish internal test build" and have it do the right thing? I was going to write various mage scripts, but I don't think that gets me around having to mess with the Settings.settings stuff. I didn't write this application and don't maintain it, but I suppose I could go in and use some sort of conditional logic; the connection strings, for instance, are wired to reports, table adapters, etc.
P.S. I hate ClickOnce.

OK, here is a useful answer rather than a critique of my writing style: mage.exe is severely lacking in options for what it can and cannot do; it is also poorly documented and does not work as advertised. To accomplish what I wanted, I had to download sed for Windows and write .bat files to manually rename files to .deploy. I used sed to edit the manifest files, flip options on and off, and keep track of the different deployments. In short: write a batch file using mage.exe and sed, and have a very good understanding of the contents of a manifest file. Feel free to contact me and I can send scripts that automate multiple ClickOnce deployments, add the .deploy extension, require a specific version number before start-up, etc. None of these are possible using the tools MSFT provides.
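For a flavor of what those scripts look like, here is a minimal sketch (the paths, publish directory, manifest name, and certificate are placeholders; adjust the sed expression to whatever options your manifests need flipped):
@echo off
set PUBLISH=publish\InternalTest
rem give every payload file the .deploy extension ClickOnce expects
for /r "%PUBLISH%\Application Files" %%F in (*) do if /i not "%%~xF"==".deploy" ren "%%F" "%%~nxF.deploy"
rem flip a manifest option with sed, e.g. turn on extension mapping
sed -i "s/mapFileExtensions=\"false\"/mapFileExtensions=\"true\"/" "%PUBLISH%\App.application"
rem re-sign the deployment manifest after editing it
mage -Update "%PUBLISH%\App.application" -CertFile cert.pfx -Password secret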

Related

Flyway Oracle Deployment

I just started working on a new project. We are building a new application from scratch, and the team started with a brand-new schema. I wanted to automate the database build process, so I started looking at the options. Flyway seems to be a good one. I have been playing around with it a bit and have found some limitations of the tool. Perhaps someone will be able to help.
We have the following directory structure for SQL files:
SQL
-- DDL
-- DML
-- PACKAGES
We are doing agile development, so file names are based on the sprint number. The file naming convention we are using is:
Sprint#_script#_userstory#_description
For example:
S1_01_US123_CreateNewTable.sql
S1_02_US123_AddConstraint.sql
Next sprint:
S2_01_US456_AddColumn.sql
And so on...
I set up the JDBC parameters and I am able to connect. I tested basic things like clean, repair, info, and migrate with a couple of test scripts, and that worked like a charm. I started to run into issues when I tried deploying all the scripts. Issues like:
- It didn't like the single underscore.
- It didn't like the file names starting with S1_01_*; the rest of each file name is different, and the files are in different folders.
I have the following questions:
Can I build using Flyway without having to rename the files?
How can I get it to deploy in this order:
DDLs
DMLs
Packages (every time I deploy). And we have separate header and body files, so deploy the header first as well.
Can I change the structure of schema_version table?
Can I do a selective clean? Like flagging some objects not to be dropped?
My main concern is running DDLs before everything else. If I can accomplish that, then I can start using Flyway and learn as I go.
Thanks in advance.
Harbinder
Can I build using Flyway without having to rename the files?
Maybe. Experiment with the flyway.sqlMigrationSeparator property. Try "_US", which will break the name after the script number. You'll also need to set flyway.sqlMigrationPrefix=S.
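For instance, a flyway.conf sketch under those assumptions (the URL is a placeholder):
flyway.url=jdbc:oracle:thin:@//dbhost:1521/ORCL
flyway.sqlMigrationPrefix=S
flyway.sqlMigrationSeparator=_US
# S1_01_US123_CreateNewTable.sql then parses as version 1.01, description 123_CreateNewTable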
How can I get it to deploy in this order: DDLs, DMLs, Packages (every time I deploy)? And we have separate header and body files, so deploy the header first as well.
Specify multiple locations (separated by commas) and ensure the version numbering makes sense as if these files were all in the same directory. If running from the command line, turn on debug with -X to see how Flyway collects the migrations.
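For example, assuming the directory layout from the question (paths are relative to wherever you run Flyway):
flyway.locations=filesystem:SQL/DDL,filesystem:SQL/DML,filesystem:SQL/PACKAGES
Running flyway -X migrate will then print exactly which files were collected and in which order.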
Additionally, if possible, you should consider recreating your packages as Repeatable migrations (default prefix: R) so that you only need to change the contents of the file for Flyway to pick it up.
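With the default repeatable-migration settings, that might look like this (file name invented for illustration):
R__Recreate_packages.sql
Flyway reapplies a repeatable migration whenever its checksum changes, i.e. whenever you edit the file.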
Can I change the structure of schema_version table?
No. This table is managed by Flyway.
Can I do selective clean? Like flag some of the objects to not to be dropped?
No. In this situation it might be best to set flyway.cleanDisabled=true to prevent accidental mistakes. There are callbacks before and after clean if you wish to do extra cleaning, but I don't think you can restrict clean itself without delving into the code.
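For example, in flyway.conf:
flyway.cleanDisabled=true
# extra cleanup can go into SQL callbacks named beforeClean.sql / afterClean.sql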
Good luck!

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that must be carried out when setting up a new service is implementing monitoring for it.
This involves adding the service in one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate such a thing? It seems that the Nagios configuration is laid out so that the files are split up per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring as I'm deploying the application to the environment (i.e. host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could say that I really only need to set the monitor up once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file or split it up into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a directory.
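For example (paths and names are illustrative, and check_myapp is a hypothetical command your deploy task would also define):
# in nagios.cfg: pick up every *.cfg file under this directory
cfg_dir=/etc/nagios/conf.d
# /etc/nagios/conf.d/myapp.cfg, dropped in by the Ant deploy task
define service{
        use                     generic-service
        host_name               apphost01
        service_description     MyApp health check
        check_command           check_myapp
        }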
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
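Putting those together, a deploy step could validate first and reload only on success (the command-file path varies by distribution):
nagios -v /etc/nagios/nagios.cfg && \
  printf '[%s] RESTART_PROGRAM\n' "$(date +%s)" > /var/lib/nagios/rw/nagios.cmd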

How to create your own server profile in JBoss AS 5 and 6

I am using JBoss AS 5 and 6 as an application server, but only as a simple EJB3/web container with EAR and WAR deployment, without special capabilities such as clustering, EJB2, or HornetQ.
JBoss AS provides server profiles for different uses, but I did not find any documentation on how to create my own or customize an existing profile. How can this be achieved, and where is it documented on the internet?
If you want your own profile, you have to create one, which can be based on one of the standard JBoss profiles: minimal or default (if you want clustering you can also use the all or production profile).
If you choose the minimal profile, you have to copy the necessary services into it (for example from the default profile). You have to remember the config files, deployers, and so on.
If you choose the default profile, you simply remove unnecessary services.
In my opinion it is much easier to remove services.
And the most important point: there is JBoss documentation on what you have to remove from a profile to disable a given service: JBoss 5.x Tuning/Slimming.
I haven't seen any documentation on this, because I'm not sure it's something you're really supposed to do.
Having said that, I've been doing it for years, and it works great for me :)
It's a bit of a hit-and-miss task, though. You need to go through the deploy and deployers directories, removing any services or deployers that you don't need. You'll find that they have inter-dependencies, though, and it's not always obvious what depends on what.
Take it one step at a time: start with an existing profile (e.g. default), copy it (e.g. to myprofile), then remove one thing you don't need (e.g. the deploy/messaging directory), start the server with that profile (i.e. run.bat -c myprofile), and see if it starts up OK. Try this with each service you want to remove. If you remove something it needs, it'll complain and tell you what depends on it.
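A sketch of that loop on Windows (the profile name and removed service are just examples):
cd /d %JBOSS_HOME%\server
xcopy /e /i default myprofile
rem remove one service you don't need, e.g. messaging
rmdir /s /q myprofile\deploy\messaging
cd /d %JBOSS_HOME%\bin
run.bat -c myprofile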

How do you deploy a website and database project using TFS 2010?

I've been trying to figure this out and so far haven't found a simple solution. Is it really that hard to deploy a database project (and a web site) using TFS 2010 as part of the build process?
I've found one example that involved lots of complicated checks and editing the workflow (which is a giant workflow btw).
I've even purchased the book "Professional Application Lifecycle Management with Visual Studio 2010", but apparently professionals don't deploy their applications, since it isn't even mentioned in the book.
I know I'm clueless when it comes to TFS, but it seems like there should be an easy way to do this. Is there?
I can't speak for the database portion, but I just went through this on the web portion. The magic part is a not-very-well-documented component, namely the MSBuild parameters.
In your build definition:
Process on the Left
Required > Items to Build > Configurations to Build
Edit, add a new one, for this example
Configuration: Dev (I cover how to create a configuration below)
Platform: Any CPU
Advanced > MSBuild Process
Use the following arguments (at least for me, your publish method may vary).
MsBuild Params:
/p:MSDeployServiceURL="http://myserver"
/p:MSDeployPublishMethod=RemoteAgent
/p:DeployOnBuild=True
/p:DeployTarget=MsDeployPublish
/p:CreatePackageOnPublish=True
/p:username=aduser
/p:password=adpassword
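For reference, a sketch of the same switches as a local command line (the project file name is a placeholder):
msbuild MyWebSite.csproj /p:Configuration=Dev /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:CreatePackageOnPublish=True /p:MSDeployServiceURL="http://myserver" /p:MSDeployPublishMethod=RemoteAgent /p:username=aduser /p:password=adpassword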
Requirements:
You need to install the MSDeploy Remote Agent Service on the destination web server. MSDeploy needs to be on the build/deployer server as well, but this should be the case by default.
The account you use in the params above needs admin access, at least to IIS... I'm not sure what the minimum permission requirements are.
You configure which web site / virtual directory the site goes to in the web project you're deploying. Personally, I have a build configuration for each environment; this makes the builds very easy to handle and organize. For example, we have Release, Debug, and Dev (there are more, but for this example that's it). Only the web project has a Dev configuration.
To do this, right-click the solution and choose Configuration Manager.... On the web project, click the configuration drop-down and click New.... Give it a name, "Dev" for this example, and copy settings from Debug or Release, whichever matches your deployment server environment most closely. Make sure "Create new solution configurations" is checked (it is by default). After creating this, change the configuration drop-down on the solution to the new Dev one, with Any CPU... and make sure your projects are all correct; I had some flipping to x86 and x64 randomly, and I'm not sure of the exact cause of that.
In your web project, right-click and choose Properties. On the left, click Package/Publish Web (you'll also want to look at the Package/Publish SQL tab, but I can't speak to that). In the options on the right, check Create deployment package as a zip file. The default location is fine. The next textbox I didn't find documented anywhere; the format is WebSite/VirtualDirectory, so if you have a site called "BuildSite" in IIS with no virtual directory (app == site root), you would put just BuildSite in this box. If it was in a virtual directory, you might have Default Web Site/BuildVirtualDirectory.
After you set all that, make sure to check in the solution and web project so the build server has the configuration changes you made, then kick off a build :)
If you have more questions, I recommend you watch this video by Vishal Joshi, specifically around 22 and 59 minutes in; he covers the database portion as well... but I have no actual experience trying it, since we're on top of a non-MSSQL database.

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts asking how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store the header files / lib / JAR... so that it's ready to be linked.
Typically, you simply link libraries to your app and you are good. But what about middleware/frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically, these frameworks/middleware need to generate their own code before your application can link to it.
From the developer's point of view, ideally they would just check out and have everything ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd-party libraries in source control. I have a branch for all the libraries.
2) Back up the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware/VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way, because I often have to wade through Visual Studio 6 code that doesn't build properly.
I think a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.
Of course, this does not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set and try again").
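As a tiny illustration of that last point, a Windows bootstrap guard might look like this (the variable name is taken from the example above):
@echo off
if "%CORBA_COMPILER_HOME%"=="" (
    echo 'CORBA_COMPILER_HOME' not set, please set and try again
    exit /b 1
)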
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that type of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a .bat file. The developer would only have to execute one .bat file; the .bat file could start NAnt, and the NAnt script could be made to do anything you need.
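Something like this sketch (the tool and build-file paths are placeholders):
@echo off
rem build.bat: the single entry point a developer runs
tools\nant\bin\nant.exe -buildfile:build\default.build %*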
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked into our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change the middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity, which allows developers to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven-replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies the IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), stored, and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the things it needs, and if something needs installing and licensing, such as Visual Studio, Matlab, or Maya, it must already be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
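The launcher itself can be very small; a sketch (the solution name is a placeholder, and the Boost version is the example from above):
@echo off
rem Visual Studio.cmd: point the environment at this checkout's third-party
rem versions, then start the IDE with /useenv so it uses them.
set INCLUDE=%work%\_unzip\boost-1.37;%INCLUDE%
set LIB=%work%\_unzip\boost-1.37\lib;%LIB%
devenv.exe /useenv MySolution.sln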
In the repo, only a small set of bootstrap binaries needs to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (PDB files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
- extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
- script the entire process, from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo)... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.