Multiple installations of my app - how do I handle deployment?

I have an app written in PHP, MySQL, etc. The app has a few dependencies such as beanstalkd, Solr and a few PHP extensions.
For each customer we have a separate installation of the app, either on a server shared with other customers or on a server with only that customer.
For now we're using a Puppet script to bootstrap new customers, and then we manually go to each customer's installation to do a git pull, update the database, and so on, whenever something changes.
What we're looking for is really a tool that has as many of the following features as possible:
Web interface that allows us to see all customers and their current revision
Ability to bootstrap new installations
Ability to update existing installations to a specific revision or branch
We're not looking for a tool to bootstrap new servers - we still do that manually. Instead we're looking for a way to automate the setup of clients on an existing server.
Would Chef or Puppet be sufficient for this, is there a more suitable tool, or would you recommend rolling something ourselves?

I'm a full-time developer working on Puppet at Puppet Labs. I'm also the co-author of Pro Puppet.
Puppet is certainly sufficient for your goals. Here's one way to solve this problem using Puppet. First, I'll address the dependency management since these should only be managed once regardless of how many instances of the application are being managed. Then, I'll address how to handle multiple installations of your app using a defined resource type in Puppet and the vcsrepo resource type.
First, regarding the organization of the Puppet code to handle multiple installations of the same app: the dependencies you mention, such as beanstalkd, Solr, and the PHP extensions, should be modeled using a Puppet class. This class will be included in the configuration catalog only once, regardless of how many copies of the application are managed on the node. An example of this class might be something like:
# <modulepath>/site/manifests/app_dependencies.pp
class site::app_dependencies {
  # Make all package resources in this class default to
  # being managed as installed on the node
  Package { ensure => installed }

  # Now manage the dependencies
  package { 'php': }
  package { 'solr': }
  package { 'beanstalk': }

  # The beanstalk worker queue service needs to be running
  service { 'beanstalkd':
    ensure  => running,
    require => Package['beanstalk'],
  }
}
Now that you have your dependencies in a class, you can simply include this class on the nodes where your application will be deployed. This usually happens in the site.pp file or in the Puppet Dashboard if you're using the web interface.
# $(puppet config print confdir)/manifests/site.pp
node www01 {
  include site::app_dependencies
}
Next, you need a way to declare multiple instances of the application on the system. Unfortunately, there's no easy way to do this from a web interface right now, but it is possible using Puppet manifests and a defined resource type. This solution uses the vcsrepo resource to manage the Git repository checkout for the application.
# <modulepath>/myapp/manifests/instance.pp
define myapp::instance($git_rev='master') {
  # Resource defaults. The owner and group might be the web
  # service account instead of the root account.
  File {
    owner => 0,
    group => 0,
    mode  => 0644,
  }

  # Create a directory for the app. The resource title will be copied
  # into the $name variable when this resource is declared in Puppet
  file { "/var/lib/myapp/${name}":
    ensure => directory,
  }

  # Check out the Git repository at a specific version
  vcsrepo { "/var/lib/myapp/${name}/working_copy":
    ensure   => present,
    provider => git,
    source   => 'git://github.com/puppetlabs/facter.git',
    revision => $git_rev,
  }
}
With this defined resource type, you can declare multiple installations of your application like so:
# $(puppet config print confdir)/manifests/site.pp
node www01 {
  include site::app_dependencies

  # Our app instances always need their dependencies to be managed first.
  Myapp::Instance { require => Class['site::app_dependencies'] }

  # Multiple instances of the application
  myapp::instance { 'jeff.acme.com': git_rev => 'tags/1.0.0' }
  myapp::instance { 'josh.acme.com': git_rev => 'tags/1.0.2' }
  myapp::instance { 'luke.acme.com': git_rev => 'tags/1.1.0' }
  myapp::instance { 'teyo.acme.com': git_rev => 'master' }
}
Unfortunately, there's currently no easy-to-use, out-of-the-box way to make this information visible from a web GUI. It is certainly possible, however, using the External Node Classifier API. For more information about pulling external data into Puppet, please see these resources:
External Nodes Documentation
R.I. Pienaar's Hiera (Hierarchical data store)
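As an illustration of the ENC approach: an external node classifier is just an executable that receives a node's certname as its argument and prints a YAML classification to stdout; Puppet is pointed at it via the node_terminus = exec and external_nodes settings in puppet.conf. Here's a minimal sketch in Ruby, where the script path and the per-customer lookup are hypothetical and site::app_dependencies is the class from above:

#!/usr/bin/env ruby
# /etc/puppet/enc.rb -- hypothetical external node classifier
# Puppet invokes it as: enc.rb <certname>
require 'yaml'

certname = ARGV[0] or abort 'usage: enc.rb <certname>'

# In a real setup this lookup would query the database behind
# your customer-facing web interface.
classification = {
  'classes'    => ['site::app_dependencies'],
  'parameters' => { 'customer' => certname.split('.').first },
}

puts classification.to_yaml

A web application that stores each customer's desired revision could emit exactly this YAML, which is one route to the "see all customers and their current revision" interface the question asks for.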
Hope this information helps.

Related

Is it possible to notify a service on a different host with Puppet?

I have a puppet module for host-1 doing some file exchanges.
Is it possible to inform another Puppet agent on host-2 (e.g. with a notify) about a change made on host-1?
And if it is possible, what would be a best-practice way to do that?
class fileexchangehost1 {
  file { '/var/apache2/htdocs':
    ensure  => directory,
    source  => "puppet:///modules/${module_name}/var/apache2/htdocs",
    owner   => 'root',
    group   => 'root',
    recurse => true,
    purge   => true,
    force   => true,
    notify  => Service['restart-Service-on-host-2'],
  }
}
Many have asked this question, and at various times there has been talk of implementing a feature to make it possible. But it's not possible, and not likely to be any time soon.
Exported resources were an early solution to problems like this, although some (e.g. here) have argued it is not a good solution, and I don't see exported resources used often nowadays.
I think, nowadays, the recommended approach would be to keep it simple and use something like Puppet Bolt to run commands on node A, and then on node B, in order.
If not Puppet Bolt, you could also use MCollective's successor, Choria, or even Ansible for this.
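For example, a minimal Bolt plan for this scenario might look like the following sketch, where the plan name, host names, paths, and service are all made up:

# site-modules/mymodule/plans/sync_and_restart.pp -- hypothetical Bolt plan
plan mymodule::sync_and_restart() {
  # First push the changed files from host-1 to host-2
  run_command('rsync -a /var/apache2/htdocs/ host-2.example.com:/var/apache2/htdocs/', 'host-1.example.com')
  # Then restart the dependent service on host-2
  run_command('systemctl restart apache2', 'host-2.example.com')
}

You would run it with bolt plan run mymodule::sync_and_restart, and the ordering between the two hosts is guaranteed by the plan itself.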
Puppet has no direct way of notifying a service on one host from the manifest of another.
That said, could you use exported resources for this? We use exported resources with Icinga, so one host generates Icinga configuration for itself, then exports it to the Icinga server, which restarts the daemon.
For example, on the client host:
@@file { "/etc/icinga2/conf.d/puppet/${::fqdn}.conf":
  ensure => file,
  [...]
  tag    => "icinga_client_conf",
}
And on the master host:
File <<| tag == "icinga_client_conf" |>> {
  notify => Service['icinga2'],
}
In your case there doesn't appear to be a resource being exported, but would this give you the tools to build something to do what you need?
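Sketching that out for your case (the marker path, tag, variable, and service name are all hypothetical, and this assumes PuppetDB/storeconfigs is set up): host-1 could export a small marker resource whose content changes whenever the htdocs content does, and host-2 could collect it and notify its local service:

# On host-1: export a marker that changes with each content update
@@file { "/etc/myapp/sync-markers/${::fqdn}":
  ensure  => file,
  content => $htdocs_version,   # hypothetical variable tracking the change
  tag     => 'htdocs_sync',
}

# On host-2: collect the marker and restart the local service when it changes
File <<| tag == 'htdocs_sync' |>> {
  notify => Service['apache2'],
}

Bear in mind the change only propagates after host-1 has completed a run and host-2 runs afterwards, so this is eventually consistent rather than an immediate notification.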

How to read multiple config file from Spring Cloud Config Server

Spring Cloud Config Server supports reading property files named ${spring.application.name}.properties. However, I have two properties files in my application.
a.properties
b.properties
Can I get the config server to read both these properties files?
Rename your properties files in the Git repository or file system that your config server reads from:
a.properties -> <your_application_name>.properties
b.properties -> <your_application_name>-<profile-name>.properties
For example, if your application name is test and you are running your application with the dev profile, the two property files below will be used together:
test.properties
test-dev.properties
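For reference, the application name and active profile that drive this lookup are typically set on the client in bootstrap.properties; all three properties below are standard Spring Cloud Config client settings, and the URI matches the example that follows:

spring.application.name=test
spring.profiles.active=dev
spring.cloud.config.uri=http://yourconfigserver.com:8888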
You can also specify additional profiles in the bootstrap file of your config client to retrieve more property files. For example, in bootstrap.yml:
spring:
  profiles: dev
  cloud:
    config:
      uri: http://yourconfigserver.com:8888
      profile: dev,dev-db,dev-mq
With a configuration like the above, all of the files below will be used together:
test.properties
test-dev.properties
test-dev-db.properties
test-dev-mq.properties
Note that the provided answer assumes your property files address different execution profiles. If they don't, i.e. your properties are split into different files for some other reason (maintenance purposes, separation by business/functional domain, or whatever else suits your needs), then by defining a profile for each such file you are just "abusing" the profile feature to achieve your goal of multiple property files per app.
You could then ask: "OK, so what is the problem with that?" The problem is that you restrain yourself from various possibilities you would otherwise have. If you actually want to customize your application configuration by profile, you will have to create pseudo sub-profiles, since each file name is already a profile. Example:
Your application configuration could be customized by different profiles, which you use inside your Spring Boot application (e.g. in @Profile() annotations); let them be dev, uat, and prod. You can boot your application with different profiles active, e.g. dev vs uat, and get the group of properties you desire. For your a.properties, b.properties, and c.properties files, if different file names were supported, you would have a-dev.properties, b-dev.properties, and c-dev.properties vs a-uat.properties, b-uat.properties, and c-uat.properties for the dev and uat profiles.
Nevertheless, with the provided solution, you have already defined three profiles just to name the files appname-a.properties, appname-b.properties, and appname-c.properties: namely a, b, and c. Now imagine you have to create a different profile for each... profile (which already shows something has gone wrong here). You would end up with a lot of profile permutations, which only gets worse as files increase: the files would be appname-a-dev.properties, appname-b-dev.properties, and appname-c-dev.properties vs appname-a-uat.properties, appname-b-uat.properties, and appname-c-uat.properties, while the profiles would have grown from ['dev', 'uat'] to ['a-dev', 'b-dev', 'c-dev', 'a-uat', 'b-uat', 'c-uat'].
Even worse, how are you going to cope with all these profiles inside your code, and more specifically in your @Profile() annotations? Will you clutter the codebase with "artificial" profiles just because you want to add one or two more property files? It should be sufficient to define your dev or uat profiles where applicable, and to define the applicable property file names somewhere else (where they could still be varied per profile without any further configuration), just as happens with the externalized properties configuration of individual Spring Boot apps.
For completeness: if you want to switch to .yml property files one day, the profile-based naming solution also costs you the ability to define different "YAML document sections per profile" inside the same .yml file. (Yes, in .yml you can have one property file yet define multiple logical YAML documents inside it, which is usually done to customize properties per profile while keeping all related properties in one place.) You lose that ability because the profile is already used up in the file name (appname-profile.yml).
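To illustrate that last point, a single application.yml can hold one default document plus profile-specific overrides; the property values here are placeholders:

# application.yml -- one file, several logical YAML documents
server:
  port: 8080          # default for all profiles
---
spring:
  profiles: dev       # this document applies only when 'dev' is active
server:
  port: 8081

With the profile already consumed by the file name (appname-profile.yml), this per-document mechanism is no longer available to you.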
I have issued a pull request with a minor fix for spring-cloud-config-server 1.4.x, which allows defining additional supported file names (apart from the currently supported "application[-profile]" and "{appname}[-profile]") via a spring.cloud.config.server.searchNames environment property, analogous to spring.config.name for Spring Boot apps. I hope it gets reviewed and accepted.
I came across the same requirement lately, with the additional constraint that I was not allowed to play around with the environment profiles, so I couldn't follow the accepted answer. I'm sharing how I did it as an alternative for those who might have the same case as me.
In my application, I have properties such as:
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
application.properties // for my use, contains local properties only
bootstrap.properties // for my use, contains management properties only
In my application, the following property settings are what make this work. Note that I have the rest of the needed config as well (enabling cloud config, actuator refresh, Eureka service discovery, and so on); I'm just highlighting these for emphasis:
spring.application.name=appxyz
spring.cloud.config.name=appxyz-data-soures,appxyz-interfaces,appxyz-feature
You can see that I didn't play around with my application name; instead I used it as a prefix for my config property files.
In my configuration server I configured application.yml to capture the pattern 'appxyz-*':
spring:
  cloud:
    config:
      server:
        git:
          uri: <git repo default>
          repos:
            appxyz:
              pattern: 'appxyz-*'
              uri: <another git repo if you have 1 repo per app>
              private-key: ${git.appxyz.pk}
              strict-host-key-checking: false
              ignore-local-ssh-settings: true
          private-key: ${git.default.pk}
In my Git repository I have the following. No application.properties and bootstrap because I didn't want those to be published and overridden/refreshed externally but you can do if you want.
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
It is the pattern: 'appxyz-*' entry that captures and returns the matching files from my Git repository. The profile also applies, so the correct per-profile property files are fetched, and the usual value prioritization is preserved.
Furthermore, if you wish to add another file to your application (say appxyz-circuit-breaker.properties), you only need to:
Add the name to the list: spring.cloud.config.name=...,appxyz-circuit-breaker
Add copies of the file locally and also externally (in the Git repo).
There's no need to change anything else or restart your configuration server. For a new application, it's a one-time registration: add an entry under repos in application.yml.
Hope it helps in one way or another!
In your application's bootstrap.properties, you can specify both names like below:
spring.application.name=a,b

Installing a dependent module that my team wrote

How do I tell DSC to install a resource/module that comes from our internal code repository (not from a private Gallery feed) first?
Do I just use a basic Script resource and bring the files down (somehow) into $PSModulePath and import them?
Update
There's a cmdlet called Get-DscResource that will list the resources available on a system (i.e. those that reside in the correct paths) and provide some information that can be used with Import-DscResource, a 'dynamic keyword' placed within a Configuration block in a DSC script to declare dependencies.
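For example, both of these are standard calls (output details vary by PowerShell version):

# List every DSC resource found on the system's module paths
Get-DscResource

# Show the declaration syntax for a single resource
Get-DscResource -Name File -Syntax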
As for getting the resources/modules down to the target system, I'm not sure yet.
If you are using a DSC pull server, then you just need to make sure that your custom module(s) are on that server. I usually put them in Program Files\WindowsPowerShell\Modules.
In the configuration you can just specify that you want to import your custom module, and then proceed with the custom DSC resource:
Configuration myconfig {
    Import-DscResource -ModuleName customModule
    Node somenode {
        customresource somename {
        }
    }
}
If you don't have a pull server and you want to push configurations, then you have to make sure that your custom modules are on all target systems. You can use the DSC File resource to copy the modules, or maybe just a PowerShell script or any other means to copy them, and then use DSC for your custom configurations.
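A sketch of the File-resource approach, where the share path, module name, and node name are hypothetical:

Configuration DeployTeamModule {
    Node 'targetnode' {
        # Copy the module from an internal share into a default module path
        File TeamModule {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\fileserver\DscModules\MyTeamModule'
            DestinationPath = 'C:\Program Files\WindowsPowerShell\Modules\MyTeamModule'
        }
    }
}

Once the module is in place on each target, the configurations that import it can be pushed as usual.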

How to deploy with Release Management to remote datacenter

We are running TFS and Release Management on premises, and I want to deploy my applications to a remote datacenter.
Access is over the internet, so no Windows shares are available.
I am using the vNext templates, and as far as I know RM only supports UNC paths over Windows shares.
How can I use Release Management to deploy software to this datacenter?
I'm working on this solution:
Use WebDAV on an IIS server located inside the datacenter.
The RM server and the target can then use the WebDAV client built into Windows and access it via a UNC path.
I haven't gotten this to work yet, as RM won't use the correct credentials to log on to the WebDAV server.
Updated with my solution
This is only a proof of concept and is not production tested.
Set up a WebDAV site accessible from both the RM server and the target server
Install the "Desktop Experience" feature on both servers
Build the following DLL
using System;
using System.ComponentModel.Composition;
using System.Diagnostics;
using System.IO;
using Microsoft.TeamFoundation.Release.Common.Helpers;
using Microsoft.TeamFoundation.Release.Composition.Definitions;
using Microsoft.TeamFoundation.Release.Composition.Services;

namespace DoTheNetUse
{
    [PartCreationPolicy(CreationPolicy.Shared)]
    [Export(typeof(IThreadSafeService))]
    public class DoTheNetUse : BaseThreadSafeService
    {
        public DoTheNetUse() : base("DoTheNetUse")
        {}

        protected override void DoAction()
        {
            Logger.WriteInformation("DoAction: [DoTheNetUse]");
            try
            {
                Logger.WriteInformation("# DoTheNetUse.Start #");
                Logger.WriteInformation("{0}, {1}", Environment.UserDomainName, Environment.UserName);
                {
                    Logger.WriteInformation("Net use std");
                    var si = new ProcessStartInfo("cmd.exe", @"/c ""net use \\sharedwebdavserver.somewhere\DavWWWRoot\ /user:webdavuser webdavuserpassword""");
                    si.UseShellExecute = false;
                    si.RedirectStandardOutput = true;
                    si.RedirectStandardError = true;
                    var p = Process.Start(si);
                    p.WaitForExit();
                    Logger.WriteInformation("Net use output std:" + p.StandardOutput.ReadToEnd());
                    Logger.WriteInformation("Net use output err:" + p.StandardError.ReadToEnd());
                }
                //##########################################################
                Logger.WriteInformation("# Done #");
            }
            catch (Exception e)
            {
                Logger.WriteError(e);
            }
        }
    }
}
Name it "ReleaseManagementMonitor2.dll"
Place it in a subfolder of the "ReleaseManagementMonitor" service's folder
Configure the shared path as the solution below states.
DO NOT OVERWRITE THE EXISTING "ReleaseManagementMonitor2.dll"
The reason this works is MEF.
The ReleaseManagementMonitor service tries to load the DLL "ReleaseManagementMonitor2.dll" from all subfolders.
This DLL implements a service interface that RM recognizes.
It then runs "net use" to apply the credentials to the session the service runs under, thereby granting access to the otherwise inaccessible WebDAV server.
This solution is certified "works on my machine".
RM does work only with UNC; you are right about that.
You can leverage that to make your scenario work -
In Theory
Create a boundary machine on the RM domain where your drops can be copied.
The deploy action running in your datacenter can then copy bits from this boundary machine, using credentials that have access on that domain. (You provide these credentials in the WPF console.)
How this works
1. Have a dedicated machine on the RM server domain (say D1) that will be used as a boundary machine.
2. Define this machine as a boundary machine in RM by specifying a shared path that your datacenter will use. Go to the Settings tab in your WPF console and create a new variable: { Key = RMSharedUNCPath, Value = \\BoundaryMachine\DropsLocation }. RM now understands you want to use this machine as your boundary machine.
3. Make sure you take care of these permissions:
The RM server should have write permissions on the \\BoundaryMachine\DropsLocation share.
Pass credentials for domain D1 down to the target machine in the datacenter (domain D2) so it can access the share.
4. Credentials can be passed down from the WPF console; you will have to define the following two config variables in the Settings tab as well:
Key = RMSharedUNCPathUser ; Value = domain D1 user name
Key = RMSharedUNCPathPwd ; Value = password for the user defined above
PS - Variable names are case sensitive.
Also, to let RM know that you want to use the shared UNC mechanism, check the corresponding checkbox for the RM server and connect to it via IP rather than DNS name, since the two machines are in different domains.
Try using Get-Content on the local server and then Set-Content on the remote server, passing the file contents over; you could package everything into an archive of some kind first.
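A rough sketch of that idea over PowerShell remoting, where the host name, paths, and archive name are made up (on PowerShell 5.0+, Copy-Item -ToSession achieves the same more simply):

# Hypothetical: push an artifact to the remote datacenter over WinRM
$cred    = Get-Credential
$session = New-PSSession -ComputerName 'target.datacenter.example' -Credential $cred

# Read the artifact as raw bytes; shipping one archive avoids encoding issues
$bytes = [System.IO.File]::ReadAllBytes('C:\Drops\release.zip')

Invoke-Command -Session $session -ScriptBlock {
    param($data)
    [System.IO.File]::WriteAllBytes('C:\Deploy\release.zip', $data)
} -ArgumentList (,$bytes)

Remove-PSSession $session

Note that serializing large byte arrays over remoting is slow, so this is best suited to modest payloads.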
Release Management copies VisualStudioRemoteDeployer.exe to the C:\Windows\DtlDownloads\VisualStudioRemoteDeployer folder on the target server, then copies the scripts from the specified location to the target server using robocopy.
So you have to grant the target server access to your scripts location.
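Depending on which account actually performs that copy, granting read access on the scripts folder might look like this, where the path and account are hypothetical:

REM Allow the deployment account read access to the scripts location
icacls "D:\Deploy\Scripts" /grant "CORP\rm-deploy:(OI)(CI)R"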
Release Management update 4 supports "Build drops stored on TFS servers"
http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx

Setting :deploy_to from server config in Capistrano3

In my Capistrano 3 deployment, I would like to use set :deploy_to, -> { "/srv/www/#{fetch(:application)}" } so that :deploy_to is different for each server it deploys to.
In my staging.rb file I have:
server 'dev.myserver.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/path'
server 'dev.myserver2.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/other/path'
My question is: would it possible to use the "install_path" I defined, in my :deploy_to? If that's possible, how would you do it?
Finally, after looking around, I came across a GitHub issue from one of the developers of Capistrano stating specifically that it can't be done.
Quote from the GitHub issue:
Not possible, sorry. fetch() (as is documented widely) reads values
set by set(), the only reason to use set() and fetch() over regular
ruby variables is to provide a consistent API between plugins and
extensions, and because set() can take a Proc to be resolved later.
The variables you are setting in the host object via the server()
command belong to an individual host, some of them, user, roles, etc
have special meanings. For more information see
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md#do-something-different-on-one-host-or-another-depending-on-a-host-property.
If you specifically need to deploy to a different directory on each
machine you probably should not be using the built-in tasks (they
don't fit your needs), and rather copy the deploy.rake from the Gem
into your own project, and modify it as you need. Which in this case
might be to not take fetch(:deploy_to), but to read that from a host
property.
You could try to do something where before doing anything that relies
on calling fetch(:deploy_to), you set() it using the value from
host.someproperty but I'm pretty sure that'll break in exciting and
interesting ways.
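Following that last suggestion, here is a sketch of reading the per-host property (the task is hypothetical; custom keys passed to server(), like install_path above, are exposed via host.properties):

# In a custom task you can read each host's own install_path
task :show_install_paths do
  on roles(:all) do |host|
    info "#{host.hostname} -> #{host.properties.install_path}"
  end
end

A custom deploy flow built this way can use the per-host value where the built-in tasks would have used fetch(:deploy_to).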