Deploy using artifact from Artifactory - deployment

Is there a way in Bamboo to deploy artifacts from Artifactory rather than only locally published artifacts? I've found the Artifactory Plugin but, as far as I can see, it only allows for deploying into Artifactory.
I'm using Bamboo 5.4.2

While you can use your build server to deploy from Artifactory to your application server, that's a very roundabout way to go. You already uploaded all the binaries to Artifactory; why would you want to download them to the build server again?
You have a number of ways to get the needed files to your application server straight from Artifactory, without involving the CI server, and the choice depends on how complicated your requirements are. If all you need is to get the latest version of some artifact from Artifactory to the app server, tools like LiveRebel are a great match. If you need to do more, e.g. deploy to a sophisticated clustered topology with a sharded data schema upgrade and no downtime, you might need something more free-style like Puppet, Chef, Ansible, or Salt.
In any case, Artifactory Properties and the REST API for working with them are your best friends. Using properties in your REST queries for artifacts allows expressing queries like "Give me all the artifacts that were produced by a certain Bamboo build, but only those that were staged, have a QA level of 'production', and match the target deployment environment".
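As a rough illustration, such a query could be issued against Artifactory's property search endpoint (the host, repository, and property names below are made up for the example, not taken from the question):

    # Hypothetical property search: artifacts from a given Bamboo build
    # that carry the QA level "production" (all names are placeholders).
    curl -u deployer:secret \
      "http://artifactory.example.com/artifactory/api/search/prop?build.name=MyPlan&build.number=42&qa.level=production&repos=libs-release-local"

The response lists the URIs of the matching artifacts, which your deployment tooling can then download directly, without the CI server in the middle.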

Using artifact repository for storing full releases?

I've been looking into artifact repositories for something that our release team can use for storing the outputs of full builds from multiple projects. From what I've read, artifact repositories are mostly used for storing the library files required for a build. My assumption is that their intended use is to ensure developers and build servers use exactly the same binary dependencies during the build process.
A few questions:
Is it possible to store the build output of entire projects into an artifact repository (A full release), a place to store artifacts ready for deployment?
Is this common practice?
Is it possible to have analytics of what was changed since the last build? Ex: can I see which artifacts have changed since the last release?
So, the short answers to your questions are: yes, yes, and mostly yes.
While it is true that binary managers such as Artifactory are used for dependency management, they are also used to host entire builds.
In Artifactory this can be easily achieved through the Build Integration features. If you are not using a CI server such as Jenkins, you can use the JFrog CLI to upload your builds and their corresponding Build Info.
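As a hedged sketch, uploading a build's output and publishing its Build Info with the JFrog CLI might look like this (the repository, path, and build name are illustrative, not from the question):

    # Upload the build output and associate it with a named build.
    jfrog rt upload "build/libs/*.war" libs-release-local/myapp/ \
      --build-name=myapp --build-number=42
    # Publish the collected Build Info to Artifactory.
    jfrog rt build-publish myapp 42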
In addition, with regard to analytics: not exactly as such, but in Artifactory you have the option to perform a Build Diff and see the changes between builds.
Hope I helped,
Eran
p.s. I work for JFrog
Using Sonatype Nexus works for what you need: you are able to deploy not just Java artifacts (e.g. .ear, .jar, .war files) but any kind of binaries. We are using it for storing reports for Oracle BI Publisher and .exe binaries.
Is it possible to store the build output of entire projects into an artifact repository (A full release), a place to store artifacts ready for deployment?
Yes, as I said before, you can store any kind of binaries you want.
Is this common practice?
I don't know if it is a common practice, but in my case it helped us keep things in order. Just evaluate whether it works for you.
Is it possible to have analytics of what was changed since the last build? Ex: can I see which artifacts have changed since the last release?
Sonatype Nexus keeps a version for each artifact (or binary), so you are able to store the whole "history" of your deployments. It can also enforce a deployment policy: for example, you cannot deploy the same binary twice with the same version; it forces you to deploy a new version instead. This way you can verify when an artifact changed, the date, and who uploaded it.
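As an illustrative example only (the coordinates, repository ID, and URL are placeholders), publishing an arbitrary binary to a Nexus releases repository can be done with Maven's deploy-file goal; with the repository's deployment policy set to disable redeploy, pushing the same version twice is rejected:

    # Deploy a non-Java binary (an .exe) under an explicit, unique version.
    mvn deploy:deploy-file \
      -DgroupId=com.example.reports \
      -DartifactId=bi-publisher-report \
      -Dversion=1.0.3 \
      -Dpackaging=exe \
      -Dfile=report-generator.exe \
      -DrepositoryId=nexus-releases \
      -Durl=http://nexus.example.com/nexus/content/repositories/releases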

Built-in Octopus Deploy repository

We're using TeamCity CI for builds and Octopus Deploy for deployment.
We want to use the built-in Octopus Deploy repository for storing artifacts instead of the TeamCity repo. What are the differences between them?
Can you help me evaluate the built-in Octopus repository? Pros/cons, and any complications you might have faced.
Thanks.
One of the key differences is that TeamCity can be used as an externally accessible NuGet server, but Octopus Deploy can't expose any packages it knows about. If you're building components in TeamCity that are exposed as NuGet packages and reused within applications then Octopus Deploy won't be able to handle that scenario.
If you're just building applications and exposing them for Octopus Deploy, then my advice would be to push them to Octopus Deploy to manage; otherwise you end up using twice the disk space, as there'll be a copy of the package in TeamCity and a copy in Octopus Deploy once it has downloaded the package from the TeamCity NuGet feed.
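For instance, pushing a package straight to the Octopus built-in repository might look like the following sketch (the server URL and API key are placeholders):

    # Push the application package to the Octopus built-in feed.
    octo push --package MyApp.1.2.3.nupkg \
      --server https://octopus.example.com --apiKey API-XXXXXXXX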
Hope this helps.
The inbuilt Octopus Deploy repository allows you to automatically create and deploy a release as soon as it is packaged and published (usually during a server build). This is great if you want to schedule nightly builds so that your development/test/integration environment is always up to date.
External package repositories cannot be used to automatically create releases; only the built-in package repository is supported.
It also maintains packages through a retention policy so you don't have to worry about running out of disk space.
We use two NuGet repositories. One for application packages deployed through Octopus Deploy, and one for shared packaged reusable components using NuGet.Server.
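As a sketch of that split (all URLs, package names, and keys below are invented for illustration):

    # Application package: goes to the Octopus built-in feed.
    nuget push MyApp.1.2.3.nupkg -Source https://octopus.example.com/nuget/packages -ApiKey API-XXXXXXXX
    # Shared reusable component: goes to the internal NuGet.Server feed.
    nuget push Shared.Utils.2.0.0.nupkg -Source https://nuget.example.com/nuget -ApiKey s3cr3t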

How to deploy EAR/WAR in Jboss using Puppet

For a project I am working on, we have CI set up using Jenkins.
We now want to set up Continuous Delivery (CD) using Puppet.
Here are our dev environment specs:
Windows 2008 Server
JBoss SOA-P (JBoss AS 5.1 app server) - 2 instances
Jenkins for CI
Puppet Learning VM (we are evaluating, so we don't have a license to install Puppet Enterprise)
My question is: how can I automate deployment of my application(s) to the already installed JBoss servers (on Windows machines) using Puppet?
For my organization, I am using the following tools to achieve Continuous Delivery and Continuous Integration:
Foreman: a provisioning tool and ENC (External Node Classifier)
Puppet master: running on our main server
Puppet agents: on the rest of the servers
Jenkins: on the main server
Nexus: repository manager maintaining staging and release repositories, on another server
A Nexus repository module installed on the Puppet master, with the logic to connect to the Nexus repository and fetch the latest release from the "release" repository.
Flow:
I have a Jenkins job defined whose purpose is making sure the build doesn't break; at release time, I perform a Maven release, which in turn uploads the latest version to the Nexus release repository.
I load the Nexus repository module in Foreman and map the "nexus" class to all my servers. This is the crucial part that allows me to perform cloud deployment with a single button click in Foreman. To do this, you need your own Puppet files that perform DB migration, undeployment, and deployment. I always write a deploy Maven module for each project, containing only these Puppet files, which allows me to deploy to all servers in one shot (see the sketch below).
Since you are already familiar with CI and CD, I hope my statements are self-explanatory.
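For a feel of what the module's fetch step boils down to, here is a rough sketch against the Nexus 2 artifact-resolution endpoint (host, coordinates, and target path are assumptions, not from the setup above):

    # Resolve and download the latest release of the application WAR.
    curl -L -o /opt/jboss/server/default/deploy/myapp.war \
      "http://nexus.example.com/nexus/service/local/artifact/maven/redirect?r=releases&g=com.example&a=myapp&v=LATEST&e=war"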
Without knowing how you want your application deployed, it's hard to answer exactly how to do it. But I'll see if I can point you in the right direction:
There are existing modules on the Forge to deploy JBoss with Puppet. I'd recommend looking through them to see if one suits your requirements.
You could then integrate your source control to automatically deploy changes to your JBoss instance as they get checked in; a minimal sketch follows.
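As a hedged sketch of that idea (the paths, URL, and resources below are assumptions, shown with Unix paths for brevity even though the question targets Windows; a Forge JBoss module would replace the raw exec):

    # Write a one-off manifest that fetches the WAR into JBoss's hot-deploy
    # directory, then apply it masterlessly.
    cat > deploy_myapp.pp <<'EOF'
    exec { 'fetch-war':
      command => 'curl -o /opt/jboss/server/default/deploy/myapp.war http://ci.example.com/artifacts/myapp.war',
      path    => '/usr/bin:/bin',
    }
    EOF
    puppet apply deploy_myapp.pp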
There's an example of using Puppet with Tomcat and Maven for continuous delivery here. It's a few years old, but the concepts still apply: http://www.slideshare.net/carlossg/2013-02-continuous-delivery-apache-con
There's also an example from CloudBees (which employs a lot of the engineers behind Jenkins) on continuous delivery with Puppet and Jenkins: https://www.cloudbees.com/event/continuous-delivery-jenkins-and-puppet-debug-bad-bits-production
Plus a general PuppetLabs one here: https://puppetlabs.com/blog/whats-continuous-delivery-get-speed-these-great-puppetconf-decks
Also: I don't want to sound too salesy, and full disclosure, I work at PuppetLabs. But you can use Puppet Enterprise with up to 10 nodes for free, so I'd recommend that over the Learning VM, which isn't really designed for hosting applications.

Suggested way of working - Jenkins promotions or Artifactory releases will deploy a WAR

We have a Jenkins job that packages a WAR snapshot on every commit to SVN.
We also use the Release plugin, which generates a versioned WAR in Artifactory.
Example: web:1.1-SNAPSHOT >> 1.1
We want to include the deployment task in the Jenkins workflow. On a different project we also work with the Promote plugin.
We are not sure which is the better approach for the automated deployment task, given the number of problems we could run into later.
The first solution we planned is:
Use the Release plugin to generate a staged release.
Use the Promotion plugin to authorize the automated deployment.
The promotion launches a different job that downloads the last available WAR file from Artifactory and deploys it.
We have discussed whether we can do it in the same "promotion action" or should find a different solution.
Which solution is most common for these cases? How can we prevent accidental deployment of unauthorized versions?
Don't deploy the latest version, since you'll unintentionally deploy the wrong version, sooner or later. Use parameterized builds to deploy a particular version. The deploy-to-artifactory job sets the parameter and uses the parameterized trigger plugin to kick off all deploy-to-machine jobs.
You may want to parameterize all jobs in the pipeline after the deploy-to-artifactory job. I think there are other plugins that put the parameter into an entire pipeline, but I can't see them at the moment. There is a wide range of plugins that you can leverage in this workflow to suit your needs, such as the BuildResultTrigger plugin and the Build Flow plugin. And matrix builds are great for deploying to a range of machines, OSes, etc.
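A sketch of what the parameterized deploy job's shell step could look like (the Artifactory URL, coordinates, and target host are placeholders):

    # VERSION is supplied by the parameterized build - never "latest".
    VERSION="${VERSION:?must be set by the parameterized build}"
    # Fetch exactly that version from Artifactory...
    curl -f -o "/tmp/web-${VERSION}.war" \
      "http://artifactory.example.com/artifactory/libs-release-local/com/example/web/${VERSION}/web-${VERSION}.war"
    # ...and push it to the application server.
    scp "/tmp/web-${VERSION}.war" deployer@appserver:/opt/tomcat/webapps/web.war

Restricting who may run this job (or the promotion that triggers it) is one way to keep unauthorized versions from going out by accident.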

How to deploy artifacts from TeamCity to an Amazon EC2 server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, run the unit tests if the build succeeds, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts. But I couldn't figure out how to deploy the artifacts to TESTSERVER.
Do you have any suggestions or a procedure to accomplish this?
Thanks for the help.
P.S.: I have read this question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the fact that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can:
create a configuration which produces build artifacts;
create a configuration which publishes artifacts via FTP;
set an artifact dependency in TeamCity from configuration 2 on configuration 1;
use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
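Such a step could be as small as this sketch (host, credentials, and artifact path are placeholders):

    # Command-line build step: upload the artifact over FTP with curl.
    curl -T dist/myapp.zip --user deployer:secret ftp://ftp.example.com/releases/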
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
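For example (assuming guest access is enabled on the TeamCity server; the build configuration ID and artifact name are placeholders), such a script can fetch the latest successful build's artifact like so:

    # Download the artifact of the last successful build from TeamCity.
    wget "http://ciserver:8111/guestAuth/repository/download/MyApp_Build/lastSuccessful/myapp.war"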
If you want automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.