How to handle changes to init scripts in Ansible?

I'm relatively new to Ansible and I've created a playbook that can install a Tomcat configuration on a 'bare' server. I'm wondering how to update the init.d script while avoiding the service stop at the start of the playbook when there is no change to the script. Here's the basic playbook:
- name: stop tomcat service
  service: name=my_service state=stopped
- name: copy init.d script
  template: src=script.j2 dest=/etc/init.d/my_service
- name: do other tasks here
- name: start tomcat service
  service: name=my_service state=restarted
This playbook will always stop and start the service, even if there are no changes.
What I want the playbook to do is only stop and start the service when there are actual changes.
I know I could use handlers (need to look into that more), but I need to stop the service using the OLD init.d script, before copying the NEW script. AFAIK the handlers respond to the result of a task AFTER the action has taken place, which would mean the new script is already copied over the old one and might prevent the service from stopping and restarting.
How do I handle that?

Any task that is set to notify a handler will do exactly that at the end of the play.
http://docs.ansible.com/playbooks_best_practices.html#task-and-handler-organization-for-a-role
- name: Copy init.d script
  template: src=script.j2 dest=/etc/init.d/my_service
  notify: start tomcat service

handlers:
  - name: start tomcat service
    service: name=my_service state=restarted
You may want one play that works with the old script, with a handler that stops the service via the old script, and a separate play that copies the new script with its own handlers.

From what I've learned from the comments above, I guess the best layout for this playbook is something like the one below. I still don't see how to stop the service before the 'copy init.d script' task runs, and only when that task will actually make a change.
tasks:
  - name: do various tasks here
    notify: restart tomcat service
  - name: stop tomcat service
    service: name=tomcat state=stopped
    when: indicator_init_script_task_will_fire
  - name: copy init.d script
    template: src=script.j2 dest=/etc/init.d/my_service
    notify: restart tomcat service

handlers:
  - name: restart tomcat service
    service: name=my_service state=restarted
I haven't found what the indicator should be, so feel free to update.
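One possible indicator, as a hedged sketch: run the template in check mode first (check_mode is a per-task keyword since Ansible 2.2) and register the result; its changed flag then gates the stop. The variable name init_script_check is mine, not an established one:

- name: check whether init.d script would change (makes no changes)
  template: src=script.j2 dest=/etc/init.d/my_service
  check_mode: yes
  register: init_script_check
- name: stop tomcat service
  service: name=my_service state=stopped
  when: init_script_check is changed
- name: copy init.d script
  template: src=script.j2 dest=/etc/init.d/my_service
  notify: restart tomcat service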

Related

Zero downtime Google Compute Engine (GCE) deployment

I'm trying to deploy this Docker GCE project via a deploy.yaml, but every time I update my git repository the server goes down because (1) the original instance is deleted and (2) the new instance hasn't finished starting up yet (or at least the web app hasn't).
What command should I use, or how should I change this, so that I get a canary-style deployment that destroys the old instance only once a new one is up (I only run one instance at a time)? I have no health checks on the instance group, only on the load balancer.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action', 'replace', 'a-group', '--max-surge', '1']
Thanks for the help!
As John said, you can set the max-unavailable and max-surge options to alter the behavior of your deployment during updates.
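As a hedged sketch of what that could look like in the build step from the question (the values are assumptions: with a single-instance group, '--max-surge 1' together with '--max-unavailable 0' should make the replacement VM come up before the old one is deleted, though "up" means the VM is running, not necessarily that the web app is ready):

- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action', 'replace', 'a-group',
         '--max-surge', '1', '--max-unavailable', '0']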

Exclude services from starting with docker-compose

Use Case
The docker-compose.yml defines multiple services which represent the full application stack. In development mode, we'd like to dynamically exclude certain services, so that we can run them from an IDE.
As of Docker Compose 1.28 it is possible to assign profiles to services, as documented here, but as far as I understand it only allows specifying which services shall be started, not which ones shall be excluded.
Another way I could imagine is to split "excludable" services into their own docker-compose.yml file but all of this seems kind of tedious to me.
Do you have a better way to exclude services?
It seems we both overlooked a certain very important thing about profiles and that is:
Services without a profiles attribute will always be enabled
So if you run docker-compose up with a file like this:
version: "3.9"
services:
database:
image: debian:buster
command: echo database
frontend:
image: debian:buster
command: echo frontend
profiles: ['frontend']
backend:
image: debian:buster
command: echo backend
profiles: ['backend']
It will start the database container only. If you run docker-compose --profile backend up, it will bring up the database and backend containers. To start everything you need docker-compose --profile backend --profile frontend up, or you can assign a single shared profile to several services so one flag enables them all.
That seems to me the best way to keep docker-compose from running certain containers: you just mark them with a profile and you're done. I suggest you give the profiles reference a second look as well; apart from some good examples, it explains how the feature interacts with service dependencies.

How to terminate another container in the same pod when your main container finishes its job

Using OpenShift 3.9, I run a daily CronJob that consists of 2 containers:
A Redis server
A Python script that uses the Redis server
When the Python script finishes, its container terminates normally, but the Redis server container stays up.
Is there a way to tell the Redis server container to terminate automatically when the Python script exits? Is there an equivalent of docker-compose's depends_on?
Based on Dawid Kruk's comment, I added this line at the end of my Python script to shut down the server:
os.system('redis-cli shutdown NOSAVE')
It effectively terminates the container.
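For context, a minimal sketch of how the two containers could be declared in the CronJob (image names, schedule, and script path are placeholders, not from the question; batch/v1beta1 is the CronJob API version of clusters from that era). The worker image needs redis-cli installed, and because the containers share the pod's network namespace, 'redis-cli shutdown NOSAVE' reaches the Redis container over localhost; once both containers have exited, the Job completes:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-job
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: redis
              image: redis:5                        # exits when the worker script shuts it down
            - name: worker
              image: my-python-image                # placeholder; contains the script and redis-cli
              command: ["python", "/app/job.py"]    # script ends with os.system('redis-cli shutdown NOSAVE')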

Google container startup script

I have created a /usr/startup.sh script in a Google container which I would like to execute on startup of every pod.
I tried doing it through command in the YAML, like below.
command: "sh /usr/start.sh"
command: ["sh", "-c", "/usr/start.sh"]
Please let me know if there is any way to execute the defined script at startup in a Google container/pod.
You may want to look at the postStart lifecycle hook.
An example can be found in the kubernetes repo:
containers:
  - name: nginx
    image: resouer/myapp:v6
    lifecycle:
      postStart:
        exec:
          command:
            - "cp"
            - "/app/myapp.war"
            - "/work"
Here are the API docs:
// Lifecycle describes actions that the management system should take in response to container lifecycle
// events. For the PostStart and PreStop lifecycle handlers, management of the container blocks
// until the action is complete, unless the container process fails, in which case the handler is aborted.
type Lifecycle struct {
    // PostStart is called immediately after a container is created. If the handler fails, the container
    // is terminated and restarted.
    PostStart *Handler `json:"postStart,omitempty"`
    // PreStop is called immediately before a container is terminated. The reason for termination is
    // passed to the handler. Regardless of the outcome of the handler, the container is eventually terminated.
    PreStop *Handler `json:"preStop,omitempty"`
}
Startup scripts run on node startup, not for every pod. We don't currently have a "hook" in kubelet to run whenever a pod starts on a node. Can you maybe explain what you're trying to do?

Ansible custom tool, retry and redeploy

I am trying to use Ansible as a deployment tool for a set of hosts and I'm not able to find the right way of doing it.
I want to run a custom tool that installs an rpm on a host.
Now I can do
ansible dev -i hosts.txt -m shell -a "rpmdeployer --install package_v2.rpm"
But this doesn't give a retry file (of failed hosts).
To get the retry file, I tried a simple playbook:
---
- hosts: dev
  tasks:
    - name: deployer
      command: rpmdeployer --install package_v2.rpm
I know it's not in the spirit of Ansible to execute custom commands and scripts. Is there a better way of doing this? Also, is there a way to keep trying until all hosts succeed?
Is there a better way of doing this?
You can write a custom module. The custom module could even be the tool, so you get rid of installing that dependency. Modules can be written in any language but it is advisable to use Python because:
Python is a requirement of Ansible anyway
When using Python you can use the API provided by Ansible
If you had a custom module for your tool, your task could look like this:
- name: deployer
  deployer: package_v2.rpm
Also, is there a way to keep trying until all hosts succeed?
Ansible can automatically retry tasks.
- name: deployer
  command: rpmdeployer --install package_v2.rpm
  register: result
  until: result is success
  retries: 42
  delay: 1
This works, provided your tool returns correct exit codes (0 on success, >0 on failure). If not, you can apply any custom condition, e.g. search stdout for specific content.
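For example, a sketch that keys off the tool's output instead of its exit code (the marker string 'install ok' is hypothetical):

- name: deployer
  command: rpmdeployer --install package_v2.rpm
  register: result
  until: "'install ok' in result.stdout"
  retries: 42
  delay: 1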
I'm not aware of a tool to automatically retry when the playbook itself failed, but it shouldn't be too hard to create a small wrapper script that checks for the retry file and re-runs the playbook with --limit @foo.retry until the file is no longer re-created.
But I'm not sure that makes sense: if installing an rpm with your tool fails, it will presumably fail on any retry as well, unless there are flaky components in the play, like downloading the rpm in the first place. A download could fail transiently, and then a retry might succeed.
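If the flaky part really is fetching the rpm, one option is to split the download into its own task and retry only that step; a sketch with a placeholder URL and path:

- name: download rpm (retries cover transient network failures)
  get_url:
    url: http://repo.example.com/package_v2.rpm   # placeholder URL
    dest: /tmp/package_v2.rpm
  register: download
  until: download is success
  retries: 5
  delay: 10
- name: deployer
  command: rpmdeployer --install /tmp/package_v2.rpm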