How do I access the current host's roles in Capistrano? - capistrano

I would like to access the current host/remote machine's roles in a YAML template file when deploying with Capistrano.
What I've tried
I've tried experimenting to see if there's a host variable available, but that doesn't seem to have any effect.
$CAPISTRANO:HOST$ seems to be for older versions of Capistrano.
How could I do something like this pseudocode in a templated YAML file?
production:
  app_name: <%= TODO: if host A/B/C -- app 1; if host Z -- app 2 %>

If I understand correctly, I have handled a similar problem programmatically.
As you may know, for each environment there is a .rb file in the config/deploy/ directory.
In our case, we had a YAML file with host and role information, like the one below:
A:
  - web
  - db
  - mongo
B:
  - web
  - app
To apply the roles specific to each host, we iterated over this YAML file (Ruby makes that easy) and added a role line for each entry. Here is the pseudocode, fleshed out into Ruby:
require 'yaml'

# e.g. { "A" => ["web", "db", "mongo"], "B" => ["web", "app"] }
host_roles = YAML.load_file('config/host_roles.yml')

host_roles.each do |host, roles|
  roles.each do |role_name|
    role role_name.to_sym, %W{user@#{host}}
  end
end
Here, all the role mappings are applied to the hosts. Similarly, you can put this code in your environment's .rb file (e.g. config/deploy/production.rb).
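If you are on Capistrano 3, another option is to render the template per host inside an on() block; the host object yielded there exposes the roles assigned to it in the stage file, so the template logic can branch on them. Below is only a minimal sketch - the task name, template path, target path and the :special role are assumptions, not something from the question:

# config/deploy.rb - sketch; adjust paths, role names and upload target to your setup
require 'erb'
require 'stringio'

namespace :config do
  desc "Render app.yml for each host based on that host's roles"
  task :render do
    on roles(:all) do |host|
      # host.roles is the set of role symbols assigned to this server
      app_name = host.roles.include?(:special) ? 'app2' : 'app1'
      yaml = ERB.new(File.read('config/templates/app.yml.erb')).result(binding)
      upload! StringIO.new(yaml), "#{shared_path}/config/app.yml"
    end
  end
end

The ERB template then simply references <%= app_name %> (or host.roles directly) where the question's TODO sits.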

Related

Specify postgresql database name in cloud foundry manifest.yml

Is there a way to specify a postgresql database name to connect to in the cloud foundry manifest.yml file? I've been raking through the documentation and haven't yet found this specific information.
I'm imagining something like this:
applications:
- name: my-app
  routes:
  - route: my-app.mybluemix.net
  services:
  - postgres
  dbname: database2
With that approach, a PostgreSQL connection could be made using just the connection string provided by VCAP_SERVICES parsing modules (cfenv in the case of Node).
If this is not possible, I will just set a dbname environment variable and build my own connection string.
There is nothing like that in a Cloud Foundry application manifest.yml.
The manifest.yml only takes a list of service instance names, and the services with those names will be bound to your app. It does not allow you to set other metadata.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block
I don't know if these will help, but when you cf bind-service directly there are two additional provisions you can make use of (these are not supported by manifest.yml as of this writing):
Arbitrary bind parameters. These probably won't help unless your service broker supports them, but they are a way to pass additional info to the service broker. If your broker supported it, you could in theory say "give me a database named XYZ" by passing it some config this way.
Named service bindings. This provides what amounts to a second name. The intent is that you can create the service with a name of X, but your application can look for a service binding with name Y. You can use this to swap in differently named services, but still expose the same binding name to the application so it will always find the service. (Example commands for both provisions follow below.)
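For instance, with a recent cf CLI the two provisions above look roughly like this; the app, service and parameter names are made up, and the broker has to actually honor the dbname parameter for it to have any effect:

cf bind-service my-app postgres -c '{"dbname": "database2"}'
cf bind-service my-app postgres --binding-name postgres-db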
If you are trying to pass some other service-instance-related metadata to your application, you'd need to do it some other way, for example if you want to tell it the database name or the connection pool size. Using environment variables like you mentioned is one option. You could use a config file, or CLI arguments passed to your application. What you pick is probably a matter of preference/support in the library/framework you're using.
For what it's worth, most service brokers I've seen pass in and tell you a specific database name to use. If the broker said connect to db XYZ and you made your connection to myCoolDb, the connection would fail. Just wanted to mention this. Your mileage may vary.

Hyperledger Fabric CA - Storing the identity materials the correct way

Currently I have a VM running and installed the binaries needed for fabric-ca. I have a docker-compose file looking like this:
I have some questions regarding this:
The docker-compose file will create one container. If I want it for more organizations, do I need to copy/paste this and change the port number? (I don't want to use intermediate CAs.)
When registering/enrolling an identity, it will override the default materials because it will always put the materials from the new identity in /etc/hyperledger/fabric-ca-client. So when creating multiple identities (orderer, peers, users, etc.), how do I need to organize them? What's the best practice?
In the image you can see that the server and clients are specified; is this a good approach? Or should the client and the server be in different containers?
More than one CA in a Docker Compose file - you can look at the Build your first network tutorial in the Fabric docs, which has a two-org network and various configuration files, including Docker Compose.
Combined client/server container - this might be convenient for testing, but definitely not in a production scenario, for security and operational-integrity reasons.
Overwriting identities - the enroll command writes a tree of data to the location specified by the environment variable FABRIC_CA_CLIENT_HOME, but you can use --home to redirect the tree to a different location:
fabric-ca-client enroll -u http://Jane:janepw@myca.example.com:7054 --home /home/test/Jane/
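As a sketch of how you might keep the materials for multiple identities (orderer, peers, users) apart, give each identity its own client home; the user names, hostnames and paths below are placeholders:

fabric-ca-client enroll -u http://orderer0:ordererpw@myca.example.com:7054 --home /etc/hyperledger/clients/orderer0
fabric-ca-client enroll -u http://peer0:peerpw@myca.example.com:7054 --home /etc/hyperledger/clients/peer0
fabric-ca-client enroll -u http://alice:alicepw@myca.example.com:7054 --home /etc/hyperledger/clients/alice

Each enroll then writes its MSP tree under its own directory instead of overwriting /etc/hyperledger/fabric-ca-client.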

Ansible deploy security certificates to remote servers

I'm a bit stumped on an Ansible issue. I've gotten a portion of my setup script working for my database servers, and I would like Ansible to be able to manage each server's postgresql.conf file. I currently have it pushing out an up-to-date copy of the config file, but this has presented a problem.
Our security certificates are unique to each server, and the postgresql.conf has parameters for setting these up for each server. I've currently got Ansible calculating the proper initial values for things like shared_buffers and effective_cache_size, but do not know how to get it to push a unique certificate out to each remote server, or to uniquely set the name in the config file to match the certificate name.
Are these even possible with Ansible?
I had a similar requirement recently to deploy a specific file (Java keystore) per host, along with the relevant keystore password to decrypt and use it.
For the per-host keystore password, I used host_vars: inside the group host file inventory/demoGroupName/hosts:
hostname1.com keystore_password="{{ vault_keystore_password_hostname1.com }}"
hostname2.com keystore_password="{{ vault_keystore_password_hostname2.com }}"
and then in inventory/demoGroupName/vault:
vault_keystore_password_hostname1.com=superSecurePassword
vault_keystore_password_hostname2.com=nonRepeatedPassword
(Note: the use of vault_ as a prefix is recommended in Ansible's best practices, but feel free to modify this to suit your scenario)
and then in the job, simply place {{ keystore_password }} in the relevant spot, as an example:
- name: arbitrary tasks
  command: configure-keystore --password="{{ keystore_password }}"
Now in my scenario, I placed the individual keystores into the roles/role_name/files directory and copied it across by substituting in the {{ inventory_hostname }} as part of the filename, but this answer gives a better solution in my opinion. Either will work for your situation, but the latter is probably better long-term. If the name of the certificate can be made the same for all hosts, that will also simplify your situation somewhat.
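For the certificate files themselves, a sketch of the per-host copy approach described above might look like the tasks below; the paths, file naming and the ssl_cert_file parameter are assumptions for illustration, not from the original setup:

- name: push this host's certificate (file expected in the role's files/ directory)
  copy:
    src: "{{ inventory_hostname }}.crt"
    dest: "/etc/postgresql/certs/{{ inventory_hostname }}.crt"
    mode: "0600"

- name: point postgresql.conf at the host-specific certificate
  lineinfile:
    path: /etc/postgresql/postgresql.conf
    regexp: '^#?ssl_cert_file'
    line: "ssl_cert_file = '/etc/postgresql/certs/{{ inventory_hostname }}.crt'"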

How to read multiple config file from Spring Cloud Config Server

Spring Cloud Config Server supports reading property files named ${spring.application.name}.properties. However, I have two properties files in my application.
a.properties
b.properties
Can I get the config server to read both these properties files?
Rename your properties files in the Git repository or file system that your config server points at:
a.properties -> <your_application_name>.properties
b.properties -> <your_application_name>-<profile-name>.properties
For example, if your application name is test and you are running your application with the dev profile, the two properties files below will be used together.
test.properties
test-dev.properties
Also, you can specify additional profiles in the bootstrap configuration of your config client to retrieve more properties files, like below. For example:
spring:
  profiles: dev
  cloud:
    config:
      uri: http://yourconfigserver.com:8888
      profile: dev,dev-db,dev-mq
If you specify it like above, all of the files below will be used together.
test.properties
test-dev.properties
test-dev-db.properties
test-dev-mq.properties
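When the same key appears in more than one of these files, the more specific profile file wins. A hedged illustration with a made-up key: if test.properties sets db.pool.size=10 and test-dev.properties sets db.pool.size=25, a client running with the dev profile active ends up with 25.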
Note that the provided answer assumes your property files address different execution profiles. If they don't, i.e. your properties are split into different files for some other reason (maintenance purposes, division by business/functional domain, or any other reason that suits your needs), then by defining a profile for each such file you are just "abusing" the profile feature to achieve your goal (multiple property files per app).
You could then ask, "OK, so what is the problem with that?" The problem is that you restrain yourself from various possibilities that you would otherwise have. If you actually want to customize your application configuration by profile, you will have to create pseudo sub-profiles for that, since the file name is already a profile. Example:
Your application configuration could be customized by different profiles, which you use inside your Spring Boot application (e.g. in @Profile() annotations); let them be dev, uat, prod. You can boot your application with different profiles active, e.g. 'dev' vs 'uat', and get the group of properties that you desire. For your a.properties, b.properties and c.properties files, if different file names were supported, you would have a-dev.properties, b-dev.properties and c-dev.properties files vs a-uat.properties, b-uat.properties and c-uat.properties files for the 'dev' and 'uat' profiles.
Nevertheless, with the provided solution, you have already defined three profiles, one per file (appname-a.properties, appname-b.properties, and appname-c.properties): a, b, and c. Now imagine you have to create a different profile for each... profile (which already shows something is going wrong here)! You would end up with a lot of profile permutations (and it gets worse as the number of files increases): the files would be appname-a-dev.properties, appname-b-dev.properties, appname-c-dev.properties vs appname-a-uat.properties, appname-b-uat.properties, appname-c-uat.properties, but the profiles would have grown from ['dev', 'uat'] to ['a-dev', 'b-dev', 'c-dev', 'a-uat', 'b-uat', 'c-uat']!
Even worse, how are you going to cope with all these profiles inside your code, and more specifically in your @Profile() annotations? Will you clutter the code with "artificial" profiles just because you want to add one or two more property files? It should be sufficient to define your dev or uat profiles where applicable, and to define the applicable property file names somewhere else (which could then be further supported per profile, without any other configuration action), just as happens in the externalized properties configuration for individual Spring Boot apps.
For completeness, I will just add that if you want to switch to .yml property files one day, the provided profile-based naming solution also loses you the ability to define different "YAML document sections per profile" inside the same .yml file (yes, in .yml you can have one property file yet define multiple logical YAML documents inside it, which is usually done to customize the properties for different profiles while keeping all related properties in one place). You lose that ability because you have already used the profile in the file name (appname-profile.yml).
I have issued a pull request with a minor fix for spring-cloud-config-server 1.4.x, which allows defining additionally supported file names (apart from "application[-profile]" and "{appname}[-profile]", which are currently supported) by providing a spring.cloud.config.server.searchNames environment property - analogous to spring.config.name for Spring Boot apps. I hope it gets reviewed and accepted.
I came across the same requirement lately, with the additional constraint that I am not allowed to play around with the environment profiles, so I couldn't do what the accepted answer suggests. I'm sharing how I did it as an alternative for those who might have the same case as me.
In my application, I have properties such as:
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
application.properties // for my use, contains local properties only
bootstrap.properties // for my use, contains management properties only
In my application, I have these particular properties set that allow me to achieve what I needed. Note that I have the rest of the needed config as well (enabling cloud config, actuator refresh, Eureka service discovery and so on) - I'm just highlighting these for emphasis:
spring.application.name=appxyz
spring.cloud.config.name=appxyz-data-soures,appxyz-interfaces,appxyz-feature
You can observe that I didn't want to play around with my application name, but instead used it as a prefix for my config property files.
In my configuration server's application.yml I configured it to capture the pattern 'appxyz-*':
spring:
  cloud:
    config:
      server:
        git:
          uri: <git repo default>
          repos:
            appxyz:
              pattern: 'appxyz-*'
              uri: <another git repo if you have 1 repo per app>
              private-key: ${git.appxyz.pk}
              strict-host-key-checking: false
              ignore-local-ssh-settings: true
          private-key: ${git.default.pk}
In my Git repository I have the following. There is no application.properties or bootstrap.properties, because I didn't want those to be published and overridden/refreshed externally, but you can include them if you want.
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
It is the pattern 'appxyz-*' that captures and returns the matching files from my Git repository. The active profile also applies and fetches the correct property files accordingly. The prioritization of values is also preserved.
Furthermore, if you wish to add another file to your application (say appxyz-circuit-breaker.properties), you only need to:
Add the name in spring.cloud.config.name=...,appxyz-circuit-breaker
Add copies of the file locally and also externally (in the Git repo).
There is no need to add or modify anything else, or to restart your configuration server, later on. For a new application, it's a one-time registration: add an entry under repos in application.yml.
Hope it helps in one way or another!
In your application's bootstrap.properties, you have to specify it like below:
spring.application.name=a,b

Chef LWRP, Definition, or Cookbook for abstracting creation of Nginx virtual hosts

I'm trying to figure out the correct way to architect a solution to automatically configure new Rails App servers.
I've looked at the chef-rails cookbook and it seems a little verbose. In our case we always deploy Nginx a certain way, always perform backups a certain way, etc., so much of the configuration would be redundant from one node definition to the next.
My goal is to be able to create a new Rails App server by defining just the following information.
wh_webhead "test_app" do
  ssl :enable
  backups :enable
  passenger :enable
  ruby_version "2.0.0"
  db_type :mysql
  db_user "testuser"
  db_pass "3207496r9w6"
  nagios_ssl_string_match "login"
end
Then I would like Chef to perform the following actions:
Create user accounts
Setup box and install
Install Nginx w/wildcard SSL cert
Configure log rotation
Setup firewall rules to allow traffic to ports 80 and 443
Install Passenger and RVM with Ruby 2.0.0
Create Rails app dirs following template (e.g. /opt/local/test_app)
Create new database on MySQL server, grant access, and setup firewall rules
Create firewall rules for Nagios and configure Nagios to monitor:
  port 80 for redirection to port 443
  port 443 for HTTP 200 status
  port 443 for the text "login"
Configure backups for app dir (e.g. /opt/local/test_app)
I'm already using the community cookbooks for Nginx, Nagios, Ufw, etc and have created recipes in a custom cookbook to configure Mysql and Nginx. There's just a lot of duplicate code from one app's Nginx/Mysql cookbook to the next.
What I'm struggling with is where to use Cookbooks, Recipes, LWRPs and Definitions to properly abstract this.
Should I put the default configuration for Nginx and Mysql in Definitions and then use those in recipes or create custom wrapper cookbooks with the defaults?
First, take a look at the application_ruby and artifact cookbooks, both of which can automate these workflows for you.
I specifically enjoy using the artifact cookbook, as it provides a lot of flexibility, but the application_ruby cookbook has built-in support for Passenger, Unicorn and other tools you'd normally find in a Rails application requirements.
As for your question regarding Cookbooks, Recipes, LWRPs and Definitions I would definitely look at #sethvargo's answer at https://stackoverflow.com/a/21733093/747032. It provides a good guide on what to use when, from an employee at Opscode (now called Chef (the company)), and someone who is constantly involved in the Chef community and thus has excellent knowledge on this topic.
As far as my advice (which I'll keep concise):
Use LWRPs to wrap resources that are always called together; for example, we use an "AWS EBS" LWRP to create, mount and format new EBS volumes (see the sketch after this list).
Use recipes to call all your LWRPs (both custom and public) and resources.
Don't use definitions; in my opinion they are effectively superseded by LWRPs.
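To make the LWRP suggestion concrete, here is a rough sketch of what a wrapping resource for the wh_webhead example above could look like in the classic resources/ plus providers/ layout. The resource name, attributes and template are hypothetical and trimmed down; it shows the shape, not a working cookbook:

# resources/webhead.rb
actions :create
default_action :create

attribute :app_name, kind_of: String, name_attribute: true
attribute :ssl, kind_of: Symbol, default: :enable
attribute :ruby_version, kind_of: String, default: "2.0.0"

# providers/webhead.rb
action :create do
  # each wrapped resource is one step of the standard app setup
  template "/etc/nginx/sites-available/#{new_resource.app_name}" do
    source "nginx_vhost.erb"
    variables app_name: new_resource.app_name, ssl: new_resource.ssl
  end
  # ...user accounts, firewall rules, database, backups and Nagios checks
  # would follow here, so every wh_webhead call gets the same defaults
end

A recipe then just calls wh_webhead "test_app" with the per-app overrides, which keeps each node definition as short as the example in the question.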