Implement PowerShell DSC with an existing continuous deployment process

I am doing a PowerShell DSC POC. I have configured the pull server and one client machine. It is working fine and I am very happy with the PowerShell DSC feature.
Now I want to integrate this feature with our continuous integration process. We are using Nolio for MSI deployment and the other configuration steps. For now I want to use DSC only for configuration, and Nolio will continue to handle deployment (to reduce the migration complexity). Later we plan to replace Nolio with DSC, including deployment. Here are my questions.
1) We have monthly releases. As I understand it, I need to install the MSI (which deploys the websites) on all machines, including the pull server and the nodes, and then apply the configuration settings via the pull server configuration. Once the pull server is configured, how do I handle the next monthly deployment? Will the pull server or node cause any problems at deployment time, such as reverting the newly installed files back to the old configuration? Is there any way to suspend the pull server settings while a deployment is running?
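For context on question 1: whether a pull cycle "reverts" files depends on the Local Configuration Manager's ConfigurationMode (ApplyAndAutoCorrect re-enforces the last configuration, ApplyAndMonitor only reports drift). A minimal meta-configuration sketch, assuming a WMF 4-style LCM and a hypothetical node name, might look like the following; switching the mode (or pushing an updated configuration) before the release window is one way to keep the pull cycle from fighting the installer:

Configuration LcmDuringRelease
{
    Node 'WebNode01'                                    # hypothetical node name
    {
        LocalConfigurationManager
        {
            # The existing pull settings (ConfigurationID, download manager / server URL)
            # are omitted from this sketch and would need to be kept in the real meta-configuration.
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndMonitor'       # report drift only; do not auto-correct mid-release
        }
    }
}

LcmDuringRelease -OutputPath 'C:\DscMeta'               # assumed output path
Set-DscLocalConfigurationManager -Path 'C:\DscMeta' -Verbose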
2) If I also want to install the MSI from DSC, I am planning to do it like this (a rough sketch of the install step follows below):
Change the pull server configuration from the other configuration settings to one that installs the MSI.
Install the MSI on the pull server and all node machines.
Do all the other configuration on the pull server.
Change the configuration back from installing the MSI to applying the pull server configurations to the nodes.
Is this a good process?
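For illustration only, the "install the MSI" step described above could be expressed with the built-in Package DSC resource, roughly like this (the node name, share path, product name and ProductId are placeholders, not values from the question):

Configuration WebsiteMsiInstall
{
    Node 'WebNode01'                                              # hypothetical node name
    {
        Package WebsiteMsi
        {
            Ensure    = 'Present'
            Name      = 'Contoso Website'                         # display name the MSI registers
            Path      = '\\fileshare\releases\Website.msi'        # assumed UNC path reachable by every node
            ProductId = '00000000-0000-0000-0000-000000000000'    # the MSI's ProductCode GUID
        }
    }
}

The resulting MOF would then be published on the pull server like any other node configuration.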
Could anyone please help me achieve this? Please also share any other best practices you may have.
Thanks in advance.

Related

Azure DevOps IIS deployment without WinRM

What options are there to deploy a web application to a heavily locked down machine without WinRM?
The situation is as follows:
Code is in the Azure DevOps cloud.
The release server is in a semi-secured area with access to download artifacts from DevOps.
The target server is in a very locked-down zone.
If the release server can only copy files to a specific temporary folder on the target machine, is there a way to do a deployment to it without WinRM?
My initial thought is to have a script on the target machine watch for the artifact showing up and deploy it. I want to know if there's a better way, or if that's my best option.
If the release server can only copy files to a specific temporary folder on the target machine, is there a way to do a deployment to it without WinRM?
If you've read the document Deploy your Web Deploy package to IIS servers using WinRM, you will have seen the notice below the title:
A simpler way to deploy web applications to IIS servers is by using deployment groups instead of WinRM.
So you can consider using a deployment group as the simpler route. And here are some discussions (#1, #2) which may help you choose between WinRM and deployment groups depending on your needs.
Update 1:
My initial thought is to have a script on the target machine watch for the artifact showing up and deploy it. I want to know if there's a better way, or if that's my best option.
In your specific scenario, that is a reasonable choice when the target server has no line of sight to the Azure DevOps/TFS server and you can't (or would rather not) use WinRM.
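As a rough illustration of that watcher idea: a small script run on the target machine (for example from Task Scheduler) could poll the temporary folder and deploy whenever a new drop appears. The folder, site name and archive name below are assumptions, not details from the question:

# Poll the drop folder the release server is allowed to copy into and deploy any new package.
$dropFolder = 'C:\Temp\Drop'                      # assumed temporary folder
$siteFolder = 'C:\inetpub\wwwroot\MyApp'          # assumed IIS site folder
$package    = Join-Path $dropFolder 'deploy.zip'  # assumed artifact name

if (Test-Path $package) {
    Import-Module WebAdministration
    Stop-Website -Name 'MyApp'                    # assumed site name
    Expand-Archive -Path $package -DestinationPath $siteFolder -Force
    Start-Website -Name 'MyApp'
    Remove-Item $package                          # consume the drop so it is not deployed twice
}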

OnPrem TFS 2015.1 vNext - What step to Release to on premises IIS server?

I'm trying to use TFS 2015.1 on premises to build a CI pipeline for our dev & UAT environments. I've created a vNext CI build, which builds fine. But when I want to add a deploy step for an on-prem IIS server, I only see Azure Web Deployment options.
Ideally I wanted to add a step which uses the existing deploy (MS Deploy) profiles that I'm able to use from VS2015 directly via 'Publish'. However, I see no option to do so.
How can I deploy the latest build to internal dev servers (not Azure)? I would like to use the MS Deploy option, unless there's a better way of doing it.
The fact that there is no option for this makes me think there's probably a different way to accomplish it!
Thanks.
If you're able to upgrade to TFS 2015.2, web-based Release Management came out with it and works similarly to Build vNext, with flexible and open-source tasks. You can also customize tasks.
Here's a link to IIS Web App Deployment in the vso-agent-tasks GitHub repo, where Microsoft stores the updated versions of the tasks you can download for web-based Build and Release Management.
I'll be publishing a blog post about web-based RM with TFS 2015 Update 2 or VSTS on my website in the next few weeks. To give you an idea, though, the starting point (for a web application) is a folder in your web project called WebDeploy (no significance, any name will do) that contains a PowerShell DSC script which configures the server, deploys the web files and then replaces any tokenised configs. To see how to use DSC to configure servers, see this post. (It only covers part of the final script, though!) The next steps are:
In the build hub, create a Website artifact containing your web files and the DSC script.
In the release hub, for each environment, use a Windows Machine File Copy task to deploy the artifact to a temp folder on the target node.
Then use a PowerShell on Target Machines task to execute the DSC script. After configuring the server, the script copies the web files to their proper location, sorts out the config using xReleaseManagement and cleans up the WebDeploy folder (a stripped-down sketch of such a script follows below).
See this article for general details of the route I'm taking, but watch out, as it has some errors, e.g. the firewall instructions are incomplete (file and print sharing needs to be allowed through the firewall).
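A stripped-down sketch of the kind of DSC script the steps above refer to, using only built-in resources; the site folder, source path and output path are assumptions, and the xReleaseManagement token replacement mentioned in the answer is omitted:

Configuration WebDeploy
{
    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        File WebContent
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = 'C:\Temp\Website'            # where the Windows Machine File Copy task dropped the artifact
            DestinationPath = 'C:\inetpub\wwwroot\MyApp'   # assumed site folder
            DependsOn       = '[WindowsFeature]IIS'
        }
    }
}

WebDeploy -OutputPath 'C:\Temp\WebDeploy'
Start-DscConfiguration -Path 'C:\Temp\WebDeploy' -Wait -Verbose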
I can thoroughly recommend the PowerShell DSC route - I've had a few glitches but on the whole it feels very productive and the right way to be going.

How to deploy EAR/WAR in Jboss using Puppet

For a project I am working on, we have CI set up using Jenkins.
We now want to set up Continuous Delivery (CD) using Puppet.
Here are our dev environment specs:
Windows Server 2008
JBoss SOA-P (JBoss AS 5.1 app server) - 2 instances
Jenkins for CI
Puppet Learning VM installed (as we are evaluating, we don't have a license to install Puppet Enterprise)
My question is: how can I automate deployment of my application(s) onto the already installed JBoss servers (on Windows machines) using Puppet?
For my organization, I am using the following tools to achieve Continuous Delivery and Continuous Integration:
Foreman: provisioning and ENC (External Node Classifier)
Puppet master: running on our main server
Puppet agents: on the rest of the servers
Jenkins: on the main server
Nexus repository maintaining the staging and release repositories on another server
A Nexus repository module installed on the Puppet master. It has the logic to connect to the Nexus repository and fetch the latest release from the "release" repository.
Flow:
I have a Jenkins job defined whose purpose is to make sure the build doesn't break; at release time I perform a Maven release, which in turn uploads the latest version to the Nexus release repository.
I load the Nexus repository module in Foreman and map the "Nexus" class to all my servers. This is the crucial part that allows me to perform cloud deployment with a single button click in Foreman. To do this, you need your own Puppet files that perform the DB migration, undeploy and deployment. I always write a deploy Maven module for all my projects containing only these Puppet files, which lets me perform deployment on all servers in one shot.
Since you are already familiar with CI and CD, I hope my statements are self-explanatory.
Without knowing how you want your application deployed, it's hard to answer exactly how to do it. But I'll see if I can point you in the right direction:
There are existing modules on the Forge to deploy JBoss with Puppet. I'd recommend looking through them to see if you can find one that suits your requirements.
You could then integrate your source control to automatically deploy changes to your JBoss instance as they get checked in.
There's an example of using Puppet with Tomcat and Maven for continuous delivery here; it's a few years old, but the concepts still apply: http://www.slideshare.net/carlossg/2013-02-continuous-delivery-apache-con
Here's also an example of continuous delivery with Puppet and Jenkins from CloudBees (which employs a lot of the engineers behind Jenkins): https://www.cloudbees.com/event/continuous-delivery-jenkins-and-puppet-debug-bad-bits-production
Plus a general Puppet Labs one here: https://puppetlabs.com/blog/whats-continuous-delivery-get-speed-these-great-puppetconf-decks
Also: I don't want to sound too salesy, and in full disclosure I work at Puppet Labs, but you can use Puppet Enterprise with up to 10 nodes for free, so I'd recommend that over the Learning VM, which isn't really designed for hosting applications.

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical. As a result, I'm not sure how to accomplish all of this. It seems like it wants to check out my code and copy it to the remote machines without building it first, for example.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "deploying with capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy a large number of files and modify them (CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload the resulting binary to the deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required and symlink the release to "current"
As a side effect, you end up with a full history of deployed binaries.

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: the EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: the EC2 instance hosting our Subversion repository.
CISERVER: the EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, unit test it if the build is successful, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test and produce artifacts, but I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for the help.
P.S.: I have read this question and am not satisfied.
Update: there is a Deployer plugin for TeamCity which allows you to publish artifacts in a number of ways.
Old answer:
Here is a workaround for the fact that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can:
create a configuration which produces the build artifacts
create a configuration which publishes the artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 on configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity server whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity part only involves retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools; a small sketch follows below.
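For illustration, a pull-by-URL script might look like the sketch below. The server address, build configuration ID and artifact name are placeholders, and the guestAuth/.lastSuccessful URL form assumes guest access is enabled on the TeamCity server:

# Download the latest successful build's artifact from TeamCity and unpack it.
$server    = 'http://teamcity.example.com'     # placeholder TeamCity URL
$buildType = 'MyProject_Release'               # placeholder build configuration ID
$artifact  = 'MyApp.zip'                       # placeholder artifact name
$url       = "$server/guestAuth/repository/download/$buildType/.lastSuccessful/$artifact"

Invoke-WebRequest -Uri $url -OutFile "C:\Deploy\$artifact"
Expand-Archive -Path "C:\Deploy\$artifact" -DestinationPath 'C:\Deploy\MyApp' -Force
# ...then stop/start the application as your environment requires.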
If you want an automated deployment, you can schedule a cron job (or a Windows scheduled task, as sketched below) to run the script at regular intervals. If nothing has changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
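A minimal sketch of the Windows scheduler option, assuming the pull script above was saved as a hypothetical C:\Deploy\Deploy-FromTeamCity.ps1:

# Register a nightly scheduled task that runs the deployment script.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\Deploy\Deploy-FromTeamCity.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'NightlyDeploy' -Action $action -Trigger $trigger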
The solution of having TeamCity push the changes as they happen is not something TeamCity does out of the box (as far as I know), but you could roll your own, for example by triggering something via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to explicitly request the new version.