Deployments for multiple environments in Jenkins - deployment

I want to use Jenkins to deploy various WARs, using our single script, to multiple servers.
Could you please suggest how to pass a server name to a job, so that our script can take it as an argument and start deploying to the selected server? The solution will be used to deploy the same code to 10-20 servers, using our customized Ant script to build these projects.
EDIT: We are using AIX servers. We want a drop-down menu from which the user can select an environment (IP, port). How should I approach this?
Maintaining txt files of environments
Using a choice parameter
On selection of an environment, our shell script will use that environment variable to deploy.

To have one job start another, just use the Parameterized Trigger plugin. In addition, I like to run the deployment jobs on the target machine; for this I defined a slave for every target server. To be able to run a job on a specific slave, and to choose the slave as a parameter, I use the NodeLabel Parameter Plugin.
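To make the choice-parameter idea from the question concrete, here is a minimal sketch of a wrapper the deploy job could run. It assumes a Jenkins choice parameter named TARGET_ENV holding "IP,Port" values and an Ant target named deploy; every name here is illustrative, not taken from the original setup.

#!/usr/bin/env python
# deploy_wrapper.py -- illustrative sketch, not the poster's actual script.
import os
import subprocess
import sys

def main():
    # Jenkins exposes build parameters to the job as environment variables.
    target = os.environ.get('TARGET_ENV')  # e.g. "10.1.2.3,9080"
    if not target:
        sys.exit('TARGET_ENV not set; was the job started with parameters?')
    ip, port = target.split(',')
    # Hand the selection to the existing Ant build as properties.
    subprocess.check_call(
        ['ant', '-Dserver.ip=' + ip, '-Dserver.port=' + port, 'deploy'])

if __name__ == '__main__':
    main()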
If you want more specific tips, say which application servers you use. It would also help to know whether you operate under Windows, Linux, or some other environment. The more information you give, the better and more fitting the answers.


Visual Studio Online / Azure stopping and starting web applications using PowerShell

I'm using Visual Studio Online's build tools to deploy web applications from a single solution. I've occasionally been running into file-locking issues:
Error: Web Deploy cannot modify the file 'Microsoft.CodeAnalysis.CSharp.dll' on the destination because it is locked by an external process.
After some Googling, I believe the "fix" is to stop the web application on Azure before deployment and start it back up after. Sounds legit.
However, there does not seem to be a straightforward way to do this directly in VSO's build definitions. I've created an "Azure PowerShell" build task, but it wants a PS1 file from the repository; it doesn't seem to let me just run Azure PowerShell commands (e.g., Stop-AzureWebsite) from there. My team has created a workaround: a "run.ps1" that simply executes the command you pass as a parameter, but none of us are satisfied with that.
What are we missing? There has got to be an easier way to do this without having a PS1 script checked into source control.
I solved this by installing the Azure App Services - Start and Stop extension from the Visual Studio Marketplace.
Once installed, it lets you wrap the Deploy Website to Azure task in your release definition with Azure AppServices Stop and Azure AppServices Start tasks, effectively eliminating the locking issues.
Check whether you are using "/" as the folder separator in the "Web Deploy Package" path instead of "\". I.e., change
$(System.DefaultWorkingDirectory)/My Project/drop/MyFolder/MyFile.zip
to
$(System.DefaultWorkingDirectory)\My Project\drop\MyFolder\MyFile.zip
I noticed that was the only difference between the deployment that produced the error and the ones that worked (the Restart step I had added was not helping). Once I modified the path, it worked.
Sounds crappy, but it fixed my issue.
Did you use the Build Deployment Template that sets the correct MSBuild parameters for your package? You can see how here. I would create a build using that template and see if you have the same issues. If so, ping me on Twitter (@DonovanBrown) and I will see if I can figure out what is going on.
As a rule, it is good practice to have any scripts or commands required to deploy your software checked into source control as part of your build. They can then be run repeatedly with little configuration at the build level, which provides consistency and transparency.
Even better is to have the deployment scripts produced as build output and to use a release-management tool to control the actual deployment.
Regardless, configuration as code is a mantra that all Dev and Ops teams should live by.

Plugin Architecture - One host with many Pipeline folders

We currently have an application that is a plugin host and thus has the "Pipeline" folder in its application directory. All of the plugins managed through this host relate to a running Windows service; for the purposes of this example, that Windows service manages one county.
What we want is to install multiple instances of this Windows service and manage each of them through the host application. Our original thought was to have several "Pipeline" folders, one per county, each managing its own instance of the Windows service. But I don't see how we are going to do this, since the "Pipeline" folder naming convention seems to be set in stone and there is no way to dynamically point the application at a specific "Pipeline" folder.
Any thoughts?
Seems like I always dig up the answer after posting...
The FindAddIns method (AddInStore.FindAddIns) takes a parameter for the pipeline root, so each host instance can point at its own pipeline folder. This should work just fine.

How to parameterize Bamboo builds?

Please note that although my specific example involves Java/Grails, this really applies to any type of task available in Bamboo.
I have a task that is part of a Bamboo build, where I run a Java/Grails app like so:
grails run-app -Dgrails.env=<ENV>
Where "<ENV>" can be one of several values (dev, prod, staging, etc.). It would be nice to "parameterize" the plan so that, sometimes, it runs like so:
grails run-app -Dgrails.env=dev
And other times, it runs like so:
grails run-app -Dgrails.env=staging
etc. Is this possible, and if so, how? Also, does the REST API allow me to specify parameter information so I can kick off differently parameterized builds using cURL or wget?
This may seem like a workaround, but I believe it can help resolve your issue. Atlassian has a free plugin called the Bamboo Inject Variables Plugin. With it, you can create an "Inject Bamboo variables from file" task that reads variables from a file.
So the idea is to have your script write the variable to a specific file and then kick off the build; the build itself reads the variable from that file and uses it in the grails task, along the lines of the sketch below.
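As a rough illustration of that flow (the file name, variable name, and grails reference are assumptions, not taken from the plugin's documentation):

# write_env.py -- run by whatever kicks off the build; writes the
# properties file that the inject task is configured to read.
env_name = 'staging'  # dev, prod, staging, ...
with open('bamboo-variables.properties', 'w') as f:
    f.write('grails_env=%s\n' % env_name)

The grails task would then reference the injected variable, something like -Dgrails.env=${bamboo.inject.grails_env}, where the exact namespace depends on how the inject task is configured.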
UPDATE
After some searching, I found that you can use the REST API to change plan variables (NOT global ones). This makes your task simpler: just define a plan variable (in Plan Configuration -> Variables tab) and change it whenever you need to. Information on how to do that is available in the Bamboo Knowledge Base.
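A hedged sketch of driving this from a script: a common pattern is to queue the plan with a variable override by passing bamboo.variable.<name> parameters to the queue REST resource. The server URL, plan key, credentials, and variable name below are placeholders; check the REST documentation for your Bamboo version.

# queue_build.py -- queue PROJ-PLAN with a plan-variable override (sketch).
import requests

resp = requests.post(
    'https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN',
    params={'bamboo.variable.grails_env': 'staging'},  # override the variable
    auth=('build-user', 'secret'),
    headers={'Accept': 'application/json'},
)
resp.raise_for_status()
print(resp.json())  # describes the queued build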

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that needs to be carried out when setting up a new service is to implement monitoring for it.
This involves adding the service to one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate such a thing? The Nagios configuration seems to be laid out so that the files are split up per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring while deploying the application to the environment (i.e., the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
You could argue that the monitoring really only needs to be set up once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file or split it up into several files as you see fit. The cfg_dir directive can be used to have Nagios pick up any .cfg files found in a directory.
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
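Putting those three pieces together, a deployment step can drop an application-based .cfg file into a cfg_dir directory, validate the result, and then restart Nagios through the external command pipe. A minimal sketch, assuming default Nagios paths and a generic-service template; every path and name below is an assumption to adapt:

# add_service_check.py -- illustrative sketch of the Nagios side of a deploy.
import subprocess
import time

CFG_DIR = '/usr/local/nagios/etc/services'        # referenced via cfg_dir
MAIN_CFG = '/usr/local/nagios/etc/nagios.cfg'
CMD_PIPE = '/usr/local/nagios/var/rw/nagios.cmd'  # external command file

SERVICE_TEMPLATE = """define service {
    use                 generic-service
    host_name           %(host)s
    service_description %(desc)s
    check_command       %(cmd)s
}
"""

def deploy_check(host, desc, cmd):
    # Write an application-based file into the directory Nagios scans...
    path = '%s/%s_%s.cfg' % (CFG_DIR, host, desc.replace(' ', '_'))
    with open(path, 'w') as f:
        f.write(SERVICE_TEMPLATE % {'host': host, 'desc': desc, 'cmd': cmd})
    # ...validate the whole configuration before touching the live daemon...
    subprocess.check_call(['nagios', '-v', MAIN_CFG])
    # ...then ask Nagios to restart via the external command pipe.
    with open(CMD_PIPE, 'w') as pipe:
        pipe.write('[%d] RESTART_PROGRAM\n' % int(time.time()))

deploy_check('apphost01', 'My New Service', 'check_my_service')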

Sharing a fabfile across multiple projects

Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:
copying the fabfile.py from one Django project to another and
modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).
One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt I have a reproducible virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:
being able to pip install the common tasks defined in the fabfile.py and
having a fab_config file containing the host configuration information for each project and overriding any tasks as needed
Any recommendations on how to increase the DRYness of my Fabric workflow?
I've done some work in this direction with class-based "server definitions" that include connection info and can override methods to do specific tasks in a different way. Then my stock fabfile.py (which never changes) just calls the right method on the server definition object.
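A minimal sketch of that layout, using the Fabric 1.x API; the module, class, and host names are illustrative, and the base class would live in the pip-installable package the question asks for:

# servers.py -- the shared, pip-installable part (hypothetical module name)
from fabric.api import env, sudo, task

class ServerDefinition(object):
    """Connection info plus deployment steps that projects can override."""
    host_string = None  # e.g. 'deploy@example.com:2222', set per project

    def activate(self):
        env.host_string = self.host_string

    def webserver_restart(self):
        sudo('service apache2 restart')  # default: Apache

# fab_config.py -- checked into each project's repository
class MyProjectServer(ServerDefinition):
    host_string = 'deploy@myproject.example.com:2222'

    def webserver_restart(self):
        sudo('service nginx restart')  # this project runs Nginx instead

# fabfile.py -- the stock file that never changes
server = MyProjectServer()
server.activate()

@task
def restart_webserver():
    server.webserver_restart()

Run with fab restart_webserver; because the stock fabfile only talks to the ServerDefinition interface, swapping Apache for Nginx or changing the SSH port is a one-line change in the project's own fab_config.py.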