Link two instances of Apache Archiva

We have two distributed dev teams, each with its own Archiva instance. Each team deploys artifacts into their own repository. I'd like to configure each of these Archiva instances so that they can each discover artifacts from the other.
I set up Proxy Connectors on each Archiva.
Archiva A -> Archiva B
Archiva B -> Archiva A
But when someone requests an artifact, one of the two Archiva instances hangs, requiring the service to be restarted.
Is this Proxy Connector configuration creating an infinite loop? Perhaps I'm confused by how proxy connectors work. If they shouldn't be configured in this manner, how would I solve my original problem of two Archiva instances sharing artifacts?

Are you using black/white lists to limit the proxying to only your company's groupId?
It hangs, but nothing shows up in the logs?

CI/CD for multi-tenant application with single repository but multiple clients

I have a database-driven application with a single code base configured for multiple clients using database settings and config files.
The main code base consists of common/core code/files that are being used by all the clients and some client-specific code/files. Both types of files are in different folders of the same repository.
We have been planning to integrate CI/CD using GitHub and Jenkins. I am new to Jenkins.
In GitHub, we have a single repository that contains all the code/files. I want to use Jenkins to deploy to different client environments while making sure that only files related to a specific client are deployed to that client's environment.
What could be the best way or possible solutions for this?
Edit: Basically, I want to deploy only the client-specific files to the corresponding client environment; a sketch of what I mean follows below.
Any and all suggestions would be highly appreciated.
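To make the idea concrete, here is a sketch of the selection logic I have in mind; the folder names (common/, clients/<name>/) and the staging step are just examples, not our actual layout:

# Sketch: stage common files plus one client's files for deployment.
import shutil
import sys
from pathlib import Path

def stage(client, repo=Path("."), out=Path("deploy")):
    if out.exists():
        shutil.rmtree(out)
    # Common/core files go to every client environment...
    shutil.copytree(repo / "common", out)
    # ...then overlay only this client's specific files (needs Python 3.8+).
    shutil.copytree(repo / "clients" / client, out, dirs_exist_ok=True)

stage(sys.argv[1])  # e.g. called from a parameterized Jenkins job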

How to deploy an EAR/WAR to JBoss using Puppet

For a project I am working on, we have CI set up using Jenkins.
We now want to setup Continuous Delivery (CD) using Puppet.
Here are our dev environment specs:
Windows Server 2008
JBoss SOA-P (JBoss AS 5.1 app server) - 2 instances
Jenkins for CI
Puppet Learning VM (we are evaluating, so we don't have a license to install Puppet Enterprise)
My question is: how can I automate deployment of my application(s) to the already installed JBoss servers (on Windows machines) using Puppet?
For my organization, I am using the following tools to achieve Continuous Delivery and Continuous Integration:
Foreman: a provisioning tool and ENC (External Node Classifier)
Puppet Master: runs on our main server
Puppet Agents: run on the rest of the servers
Jenkins: runs on the main server
Nexus repository: maintains the staging and release repositories on another server
A Nexus repository module installed on the Puppet master. It holds the logic to connect to the Nexus repository and fetch the latest release from the "release" repository.
Flow:
I have a Jenkins job whose purpose is to make sure the build doesn't break; at release time I perform a Maven release, which in turn uploads the latest version to the Nexus release repository.
I load the Nexus repository module in Foreman and map the "Nexus" class to all my servers. This is the crucial part that allows me to perform cloud deployment with a single button click in Foreman. To do this, you need your own Puppet files that perform the DB migration, undeploy, and deployment. I always write a deploy Maven module for each of my projects that contains only these Puppet files, which lets me deploy to all servers in one shot (a sketch of the fetch step follows below).
Since you are already familiar with CI and CD, I hope these statements are self-explanatory.
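Here is a minimal sketch of that "fetch the latest release from Nexus" step, assuming Nexus 2.x and its artifact redirect endpoint; the host, repository id, and Maven coordinates are placeholders:

# Sketch: fetch the latest release of an artifact from Nexus 2.x
# via /service/local/artifact/maven/redirect.
import shutil
import urllib.request

NEXUS = "http://nexus.example.com/nexus"  # placeholder host

def fetch_latest(group_id, artifact_id, packaging="war", repo="releases", dest="app.war"):
    # v=RELEASE asks Nexus to resolve the newest release version;
    # the endpoint answers with a redirect to the actual file.
    url = (f"{NEXUS}/service/local/artifact/maven/redirect"
           f"?r={repo}&g={group_id}&a={artifact_id}&v=RELEASE&p={packaging}")
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)
    return dest

fetch_latest("com.example", "myapp")  # hypothetical coordinates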
Without knowing how you want your application deployed, it's hard to answer exactly how to do it. But I'll see if I can point you in the right direction:
There are existing modules on the forge to deploy JBoss with Puppet. I'd recommend looking through and seeing if you can find one that suits your requirements.
You could then integrate your source control to automatically deploy changes to your JBoss instance as they get checked in.
There's an example of using Puppet with Tomcat and Maven for continuous delivery here. It's a few years old, but the concepts still apply: http://www.slideshare.net/carlossg/2013-02-continuous-delivery-apache-con
Here's also an example from CloudBees, which employs a lot of the engineers behind Jenkins, on continuous delivery with Puppet and Jenkins: https://www.cloudbees.com/event/continuous-delivery-jenkins-and-puppet-debug-bad-bits-production
Plus a general PuppetLabs one here: https://puppetlabs.com/blog/whats-continuous-delivery-get-speed-these-great-puppetconf-decks
Also: I don't want to sound too salesy, and full disclosure, I work at PuppetLabs. But you can use Puppet Enterprise with up to 10 nodes for free, so I'd recommend that over the Learning VM, which isn't really designed for hosting applications.

Deploy using artifact from artifactory

Is there a way in Bamboo to deploy artifacts from Artifactory rather than only locally published ones? I've found the Artifactory Plugin, but as far as I could see it only allows deploying stuff into Artifactory.
I'm using Bamboo 5.4.2
While you can use your build server to deploy from Artifactory to your application server, that's a very roundabout way to go. You already uploaded all the binaries to Artifactory; why would you want to download them to the build server again?
You have a number of ways to get the needed files to your application server straight from Artifactory, without involving the CI server, and the choice depends on how complicated your requirements are. If all you need is to get the latest version of some artifact from Artifactory to the app server, tools like LiveRebel are a great match. If you need to do more, e.g. deploy to a sophisticated topology of clustered environments with a sharded data schema upgrade without downtime, you might need something more free-style like Puppet, Chef, Ansible, or Salt.
In any case, Artifactory's Properties and the REST API for working with them are your best friends. Using properties in your REST queries for artifacts allows expressing requests like "give me all the artifacts that were produced by a certain Bamboo build, but only those which were staged, have a QA level of 'production', and match the deployment target"; see the sketch below.
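As an illustration, here is a small sketch of such a property query, assuming Artifactory's property search endpoint (GET /api/search/prop); the host, repository, and property names are placeholders:

# Sketch: find artifacts in Artifactory by their properties.
import json
import urllib.request

BASE = "http://artifactory.example.com/artifactory"  # placeholder host

def find_artifacts(props, repos="libs-release-local"):  # hypothetical repo
    # Builds a query like /api/search/prop?build.number=42&qa.level=production&repos=...
    query = "&".join(f"{k}={v}" for k, v in props.items())
    url = f"{BASE}/api/search/prop?{query}&repos={repos}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]

for hit in find_artifacts({"build.number": "42", "qa.level": "production"}):
    print(hit["uri"])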

Indexed repositories within Artifactory

Apologies if this is obvious to everyone else...
I've deployed the Artifactory war file within Tomcat 6 and started the server: all looks great.
Now I want to navigate around the preconfigured repositories, for instance repo1-cache. However, it appears to be empty; there are no tree elements to expand. This seems to be the story for all the listed repositories. Consequently, I can't run any searches for particular artifacts.
Am I missing a stage here? Do I need to force it to index itself? What should I be expecting once I've deployed the war file and when I first log in?
I guess my expectation was that once having deployed the war file, Artifactory would automatically index the remote repositories. I'd then configure Eclipse to point at the Artifactory install, so that it can index the repositories within the IDE. Then when I declare a new dependency, Artifactory would download and cache it locally, allowing for faster resolution next time. Is this a valid expectation?
Any feedback will be most appreciated, particularly any pointers to user documentation that covers this that I've overlooked.
Your repositories all look empty because nothing has been deployed to or requested through them yet.
Once you deploy artifacts to the local repositories or request artifacts from remote repositories, you'll see them in the browser.
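For example, a single request through a remote repository is enough to populate its cache; afterwards the artifact shows up in the tree browser under repo1-cache. This is just a sketch: the host and artifact path are placeholders, and repo1 is the default remote repository key:

# Sketch: trigger caching of one artifact through the "repo1" remote repo.
import urllib.request

BASE = "http://localhost:8080/artifactory"  # placeholder host
# Requesting through the remote repo key ("repo1") makes Artifactory
# download the artifact; the cached copy then appears in "repo1-cache".
PATH = "repo1/junit/junit/4.13.2/junit-4.13.2.jar"

with urllib.request.urlopen(f"{BASE}/{PATH}") as resp:
    data = resp.read()
print(f"fetched {len(data)} bytes; now browsable under repo1-cache")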
If you'd like to browse through artifacts not yet cached in remote repositories, you can use Artifactory's simple browser (see the Remote Browsing section).
Maven Indexes can be created and retrieved manually or as a recurring task; the Indexer can be configured in Artifactory's admin UI in Admin->Services->Indexer (also see the Indexer's wiki page).

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: the EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: the EC2 instance hosting our Subversion repository.
CISERVER: the EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, run the unit tests if the build succeeds, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have completed configuring CISERVER to pull the code, build, test, and produce artifacts. But I haven't managed to figure out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S: I have read this Question and am not satisfied.
Update: there is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 to configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
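For instance, such a step could be a small script run as an additional command-line build step; this is only a sketch using Python's standard ftplib, with the host, credentials, and paths as placeholders:

# Sketch: publish build artifacts via FTP as an extra build step.
import os
from ftplib import FTP

HOST, USER, PASSWORD = "ftp.example.com", "deploy", "secret"  # placeholders
ARTIFACT_DIR = "build/artifacts"  # hypothetical local artifact directory

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    for name in os.listdir(ARTIFACT_DIR):
        path = os.path.join(ARTIFACT_DIR, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {name}", f)  # upload under the same name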
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact. That is why my answer references getting the artifacts by URL: that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
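As a sketch of that retrieval step (the wget-style approach), this assumes TeamCity's repository/download URL scheme with guest auth enabled; the server, build configuration id, and artifact name are placeholders:

# Sketch: pull the latest successful build's artifact from TeamCity by URL.
import shutil
import urllib.request

SERVER = "http://teamcity.example.com"  # placeholder
BUILD_TYPE = "MyProject_Build"          # hypothetical build configuration id
ARTIFACT = "myapp.war"                  # hypothetical artifact name

url = f"{SERVER}/guestAuth/repository/download/{BUILD_TYPE}/.lastSuccessful/{ARTIFACT}"
with urllib.request.urlopen(url) as resp, open(ARTIFACT, "wb") as out:
    shutil.copyfileobj(resp, out)
print(f"downloaded {ARTIFACT}; the rest of the deploy script takes over here")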
If you want an automated deployment, you can schedule a cron job (or Windows scheduler) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this given that it may mess up someone testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.