Once I have done my deploy I need to update the database structure if any patches need to be applied.
My hosts are:
[qa]
qa1
qa2
[prod]
prod1
prod2
I only want this to be run once per environment based on which environments are being deployed to.
Scenarios:
- All : db patches should be applied once for each environment, e.g. qa1 + prod1
- Prod : db patches should be applied to just production, e.g. prod1
- QA : db patches should be applied to just qa, e.g. qa1
I can use the delegate_to option but how would I cover all scenarios above?
For example if I write: delegate_to: "{{ groups['prod'][0] }}" then qa wouldn't get updated etc.
Thanks
You can write separate playbooks to cover the updates of different environments, specifying which one(s) you want in the hosts directive. For example, you would have three playbooks, one for each permutation, with hosts: qa, hosts: prod, and hosts: qa:prod respectively.
The other option is to make one playbook to target all groups, hosts: qa:prod, and then use the limit option (--limit/-l) for ansible-playbook to decide which groups to target.
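A minimal sketch of the second option, combined with run_once so the patches are applied only once per environment (the patch script path is a hypothetical placeholder):

```yaml
# site.yml -- one play per environment; run_once applies the task
# on only the first host that each play targets
- hosts: qa
  tasks:
    - name: Apply DB patches once for QA
      command: /opt/app/bin/apply_db_patches   # hypothetical patch script
      run_once: true

- hosts: prod
  tasks:
    - name: Apply DB patches once for prod
      command: /opt/app/bin/apply_db_patches   # hypothetical patch script
      run_once: true
```

Running ansible-playbook site.yml covers both environments (qa1 + prod1); ansible-playbook site.yml --limit qa (or --limit prod) simply skips the other play, since it matches no hosts.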
Related
We would like to have some recommendations, since we want to integrate helmfile in our deployment process...
Our infrastructure has the following details:
we have many customers
all customers have the same installed services
(each customer gets its own services, no sharing between customers)
credentials are different for each customer
we prefer a separate deployment process (we don't want to upgrade all customers at the same time)
all customer-config data is separated into separate config files, like:
config/customer1.yaml
config/customer2.yaml
config/customer3.yaml
So I'm wondering if we should use "Environment" with the customer name to upgrade it, or would you recommend another variable?
And do you think it's better to create multiple helmfiles for this process, or just one?
Thank you!
do you think it's better to create multiple helmfiles for this process, or just one?
Using one helmfile for multiple environments is quite practical and saves you from writing multiple helmfiles.
we should use "Environment" with the customer name?
For a similar setup (deploying to multiple environments with different values and configurations), I have the following in my Helmfile:
- name: my-app
  namespace: "{{ .Namespace }}"
  chart: k4r-distance-matrix-api
  values:
    - my-app/values.yaml ## here go the common values, if any exist
    - my-app/values.{{ .Environment.Name }}.yaml ## here go the environment-specific values
In the deploy step in my CI I have:
.deploy:
  stage: deploy
  variables:
    ENVIRONMENT: ""
    CONTEXT: ""
    NAMESPACE: ""
  before_script:
    - kubectl config use-context $CONTEXT
  script:
    - helmfile -e "$ENVIRONMENT" --namespace "$NAMESPACE" sync
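Applied to the per-customer layout in the question, the environments block of a single helmfile could map each customer to its own values file. This is only a sketch; the environment names mirror the question's config/ paths:

```yaml
# helmfile.yaml -- sketch: one environment per customer,
# each loading that customer's config file
environments:
  customer1:
    values:
      - config/customer1.yaml
  customer2:
    values:
      - config/customer2.yaml
  customer3:
    values:
      - config/customer3.yaml
```

Each customer can then be upgraded independently with helmfile -e customer2 sync, which fits the requirement of not upgrading all customers at the same time.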
The production environment has 2 VMs. I want to apply the cron role to only one of them. What am I doing wrong?
(Ansible is running in Azure DevOps during a release. All VMs are gathered together in one deployment group and the ansible playbook runs on both of them.)
ansible-playbook -i production/inventory provision_cron.yml -b
production/inventory file:
[all]
127.0.0.1 ansible_connection=local ansible_user=admin
[cron]
100.100.100.100 ansible_connection=local ansible_user=admin # VM where I want to apply cron role
provision_cron.yml file:
- hosts: cron
user: root
roles:
- cron
- analytics
Run ansible playbook on only one of two machines during release in Azure DevOps
From your description, your VMs are in the same deployment group.
To run the ansible playbook on only one of them, you need to add tags for each VM target under Pipelines -> Deployment groups -> (your deployment group) -> Targets.
When you use the deployment group in a release pipeline, you can then add tags in the Required tags field to filter the target VM.
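The same tag-based filtering exists in YAML pipelines via environments with virtual machine resources. A rough sketch, assuming the VM target was registered with a "cron" tag and the environment is named "production":

```yaml
# Deployment job that only runs on VMs tagged "cron"
# in the "production" environment (names are assumptions)
jobs:
  - deployment: apply_cron
    environment:
      name: production
      resourceType: VirtualMachine
      tags: cron   # filters the environment's VM resources by tag
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ansible-playbook -i production/inventory provision_cron.yml -b
```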
We have set up an agent pool with 3 agents attached to it for running tests in parallel. We would like to use various input values for the .runsettings file to override test run parameters (overrideTestrunParameters) & distribute our test runs across various agents. e.g.,
Lets assume that the agentpool P1 have associated agents A1, A2, A3.
We need agent A1 to configure a test run parameter executeTests = Functionality1, agent A2 to configure a test run parameter executeTests = Functionality2 etc.,
Please let us know if it is possible to use executionPlan with options Multiagent or Multi Configuration to achieve it.
So if I did not misunderstand, you want to run the tests with multiple configurations across multiple agents?
If yes, I'd suggest you use a matrix in your pipeline to achieve this.
Note: the matrix strategy is only supported in YAML pipelines. If you want to make use of a matrix, you have to configure your pipeline with YAML.
For how to apply matrix in this scenario, you could refer to below simple sample:
strategy:
  matrix:
    execTest1:
      agentname: "Agent-V1"
      executeTests: "Functionality1"
    execTest2:
      agentname: "Agent-V2"
      executeTests: "Functionality2"
    execTest3:
      agentname: "Agent-V3"
      executeTests: "Functionality3"
  maxParallel: 3
pool:
  name: '{pool name}'
  demands:
    - Agent.Name -equals $(agentname)
...
...
With such a YAML definition, the jobs run at the same time, each with a different configuration, and each configuration runs on its specified agent.
Note: please ensure your project supports parallel jobs.
For more details, see this.
I was able to find a solution for my case by doing the below:
Add a variable group in the pipeline named executeTests & assign names and values for the respective variable group as Functionality1, Functionality2, etc.
Added multiple agent jobs in the same pipeline & assigned the Override test run parameters with -(test.runsetting variable) $(Functionality1) etc. across agents A1, A2, A3
The above does run tests in parallel based on the settings available at each agent job
Using different runsettings or even override settings is not supported. The test task expects it to be consistent across all the agents. It will use whichever is configured for the first to start the test task. For example, if you were to pass an override variable $(Agent.Name), it would use the first agent name regardless of which agent picked it up.
The only way we found to manage this was to handle it in our test framework code. Instead of loading from runsettings, we set environment variables on the agent in a step prior to the test task. Then our test framework will load from the environment variable.
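A sketch of that workaround as pipeline steps (the variable name EXECUTE_TESTS and the value are assumptions for illustration):

```yaml
steps:
  # Set an agent-local variable before the test task; the test
  # framework reads the EXECUTE_TESTS environment variable instead
  # of relying on a runsettings override
  - script: echo "##vso[task.setvariable variable=EXECUTE_TESTS]Functionality1"
    displayName: Set test filter for this agent
  - task: VSTest@2
    inputs:
      testAssemblyVer2: '**/*Tests.dll'   # assumed test assembly pattern
```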
I've been experimenting with writing playbooks for a few days and I'm writing a playbook to deploy an application right now. It's possible that I may be discovering it's not the right tool for the job.
The application is deployed HA across 4 systems on 2 sites and has a worst-case SLA of 1 hour. That's being accomplished with a staggered cron that runs every 15 minutes, i.e. s1 runs at 0, s2 runs at 30, s3 runs at 15, ...
I've looked through all kinds of looping and cron and other modules that Ansible supports and can't really find a way that it supports incrementing an integer by 15 as it moves across a list of hosts, and maybe that's a silly way of doing things.
The only communication that these 4 servers have with each other is a directory on a non-HA NFS share. So the reason I'm doing it as a 15 minute staggered cron is to survive network partitions and the death of the NFS connection.
My other thoughts are ... I can just bite the bullet, make it a */15, and have an architecture that relies on praying that NFS never dies which would make writing the Ansible playbook trivial. I'm also considering deploying this with Fabric or a Bash script, it's just that the process for getting implementation plans approved, and for making changes by following them is very heavy, and I just want to simplify the steps someone has to take late at night.
Solution 1
You could use host_vars or group_vars, either in separate files, or directly in the inventory.
I will try to produce a simple example, that fits your description, using only the inventory file (and the playbook that applies the cron):
[site1]
host1 cron_restart_minute=0
host2 cron_restart_minute=30
host3 cron_restart_minute=15
host4 cron_restart_minute=45
[site2]
host5 cron_restart_minute=0
host6 cron_restart_minute=30
host7 cron_restart_minute=15
host8 cron_restart_minute=45
This uses host variables, you could also create other groups and use group variables, if the repetition became a problem.
In a playbook or role, you can simply refer to the variable.
On the same host:
- name: Configure the cron job
  cron:
    # your other options
    minute: "{{ cron_restart_minute }}"
On another host, you can access other hosts variables like so:
hostvars[host2].cron_restart_minute
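For instance, a task running on a different host could reference it like this (host2 comes from the inventory above):

```yaml
- name: Show host2's cron minute from another host
  debug:
    msg: "host2 restarts at minute {{ hostvars['host2'].cron_restart_minute }}"
```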
Solution 2
If you want a more dynamic solution, for example because you keep adding and removing hosts, you could set a variable in a task using register or set_fact, and calculate it, for example, from the number of hosts in the only group that the current host is in.
Example:
- name: Set fact for cron_restart_minute
  set_fact:
    cron_restart_minute: "{{ (60 / (groups[group_names[0]] | length) * groups[group_names[0]].index(inventory_hostname)) | int }}"
It's Python / Jinja2. group_names is a 1-element array, given the above inventory, since no host is in two groups at the same time. groups[group_names[0]] contains all hosts in the current host's group; dividing 60 by its length gives the step size (15 for 4 hosts), and multiplying by the index of the current host's inventory_hostname (0, 1, 2, 3) yields the minutes 0, 15, 30, 45.
Links to relevant docs:
Inventory
Variables, specifically this part.
Here's some quotes I've found on the web:
Stages:
From Beanstalk blog
"allows you to setup one recipe to deploy your code to more than one
location."
From Github
"we have a production server and a staging server. So naturally, we
would like two deployment stages, production and staging. We also
assume you're creating an application from scratch."
Roles:
From SO (accepted answer)
Roles allow you to write capistrano tasks that only apply to certain
servers. This really only applies to multi-server deployments. The
default roles of "app", "web", and "db" are also used internally, so
their presence is not optional (AFAIK)
In my naivety, these sound like the same thing. Could someone please explain the difference in a way your grandmother could understand?
P.S I'm deploying PHP if that helps.
Stages are used to deploy different branches to different groups of servers (where a group may be one or more servers).
Roles are used to deploy the same branch to different servers in the same group, and allow you to run certain capistrano commands on certain servers in that group. For example, if you run a DB update task during deploy, you could specify to run it for the :db role only, where :db represents a single server, instead of wasting resources running the same command on two servers for the same result.
This is only really useful when you have multiple servers in a server group (for example, staging1 and staging2, prod1 and prod2). If you have single servers for staging and production, you don't need to worry about roles.
Note that I've also simplified the definition of stages here. You can actually deploy multiple stages to a single server if you need to, by making :deploy_to dependent on the stage.