How can a single variable have different values per host instance in Ansible? - deployment

I have "appInstanceName": "{{appname_from_template}}" in my Ansible template, and I maintain appname_from_template in my group_vars/all file.
When deploying, I want {{appname_from_template}} to be replaced with "endpoint_1" on host1 and "endpoint_2" on host2.
But I am not sure how a single variable can take different values depending on the host it is being deployed to.
Please let me know if there is a way to do this.
Thanks

Given the group_vars/all
shell> cat group_vars/all
appname_from_template: endpoint_maintained_in_groupvars_all
you can override the variable appname_from_template at a precedence higher than 7. See Understanding variable precedence. "The variable can have different values based on the instance" if you put it into:
host_vars in the inventory file (precedence 8), or
host_vars in the inventory's directory (precedence 9), or
host_vars in the playbook's directory (precedence 10).
For example
shell> cat host_vars/host1
appname_from_template: endpoint_1
shell> cat host_vars/host2
appname_from_template: endpoint_2
Then the playbook
- hosts: host1,host2,host3
  tasks:
    - debug:
        var: appname_from_template
uses host_vars for host1 and host2, and group_vars for host3:
ok: [host1] =>
appname_from_template: endpoint_1
ok: [host3] =>
appname_from_template: endpoint_maintained_in_groupvars_all
ok: [host2] =>
appname_from_template: endpoint_2
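For orientation, a typical layout when the variables live next to the inventory (a sketch; the inventory file name is illustrative):
inventory/
    hosts          # lists host1, host2, host3
    group_vars/
        all        # appname_from_template: endpoint_maintained_in_groupvars_all
    host_vars/
        host1      # appname_from_template: endpoint_1
        host2      # appname_from_template: endpoint_2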

I solved this by defining my variables for host1 and host2 in host_vars/host1 and host_vars/host2.
Example:
Inside the host_vars/host1 file:
appname_from_template: endpoint_1
Inside the host_vars/host2 file:
appname_from_template: endpoint_2
And this worked like a charm!
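For completeness, a minimal sketch of how the per-host value ends up in a rendered file (the template task, file names and destination path below are illustrative, not from the original question):
- hosts: host1,host2
  tasks:
    - name: Render the app descriptor with the per-host value
      template:
        src: app.json.j2          # contains "appInstanceName": "{{ appname_from_template }}"
        dest: /opt/app/app.json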

Ansible Strange Type Conversion When Using Inventory Files vs. Setting Vars on command line [duplicate]

I have an Ansible playbook that first initializes a fact using set_fact, and then runs a task that consumes the fact and generates a YAML file from it.
The playbook looks like this:
- name: Test yaml output
  hosts: localhost
  become: true
  tasks:
    - name: set config
      set_fact:
        config:
          A12345: '00000000000000000000000087895423'
          A12352: '00000000000000000000000087565857'
          A12353: '00000000000000000000000031200527'
    - name: gen yaml file
      copy:
        dest: "a.yaml"
        content: "{{ config | to_nice_yaml }}"
Actual Output
When I run the playbook, the output in a.yaml is
A12345: 00000000000000000000000087895423
A12352: 00000000000000000000000087565857
A12353: '00000000000000000000000031200527'
Notice that only the last line has its value in quotes.
Expected Output
The expected output is
A12345: '00000000000000000000000087895423'
A12352: '00000000000000000000000087565857'
A12353: '00000000000000000000000031200527'
All values should be quoted.
I cannot, for the life of me, figure out why only the last line has its value printed in single quotes.
I've tried this with Ansible versions 2.7.7 and 2.11.12, both running against Python 3.7.3. The behavior is the same.
It's because 031200527 is an octal number, whereas 087895423 is not. The octal-looking scalar therefore needs quoting, but the other values do not, because their leading zeros are interpreted in YAML exactly the same way 00hello would be: just the ASCII character 0 followed by other ASCII characters.
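A tiny illustration of the difference (hypothetical keys; the comments describe how a YAML 1.1 loader resolves each unquoted scalar):
only_octal_digits: 031200527        # every digit is 0-7, so it matches the octal-integer pattern and must be quoted to stay a string
contains_eight_and_nine: 087895423  # 8 and 9 cannot occur in an octal literal, so this resolves to a plain string anyway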
If it really bothers you that much, and having quoted scalars is obligatory for some reason, to_nice_yaml accepts the same kwargs as does pyyaml.dump:
- debug:
    msg: '{{ thing | to_nice_yaml(default_style=quote) }}'
  vars:
    quote: "'"
    thing:
      A1234: '008123'
      A2345: '003123'
In this case it will also quote the keys, because it unconditionally quotes every scalar.
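For illustration, the rendered output of that debug example would look roughly like this (a sketch, not captured output):
'A1234': '008123'
'A2345': '003123'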

Referring to local variables in Concourse credentials file

I have the following credentials.yml file:
test: 123
test2: ((test))
When I upload a pipeline and feed it the credentials file, the test2 variable is not interpolated wherever it is used, and I get an "undefined vars : test" error in Concourse.
Is it even possible to refer to another variable in the very same YAML, or do you always have to refer to variables in a configured credentials manager (e.g. Vault)?
Solved it using anchors and aliases. Sadly, keys containing dots or hyphens do not work at all.
For example:
test: &test 123
test2: *test

q - cannot load log4q

I would like to use log4q. I downloaded the log4q.q file to my %QHOME% directory. When I try to load the script:
C:\Dev\q\w32\q.exe -p 5000
q) \l log4q.q
I get
'
[0] (<load>)
)
When I try the same in qpad after connecting to the localhost server, I get
'.log4.q
(attempt to use variable .log4.q without defining/assigning first (or user-defined signal))
which I find strange because I can switch to non-existing namespaces in the console without any issues.
Thanks for the help!
It looks like a typo in the first line, stemming from a recent change of namespace from .l to .log4q.
I think the first line should be:
\d .log4q
not
\d .log4.q

How can I hide the output of skipped tasks in Ansible?

I have an Ansible role, for example:
---
- name: Deploy app1
  include: deploy-app1.yml
  when: 'deploy_project == "{{app1}}"'
- name: Deploy app2
  include: deploy-app2.yml
  when: 'deploy_project == "{{app2}}"'
But I deploy only one app per role call. When I deploy several apps, I call the role several times. Every time, there is a lot of output from skipped tasks (tasks that do not pass the condition), which I do not want to see. How can I avoid it?
I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file:
display_skipped_hosts = false
Note: it will still output the name of the task, although it will not display "skipped" anymore.
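For reference, the setting lives under the [defaults] section; a minimal ansible.cfg sketch:
[defaults]
display_skipped_hosts = false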
UPDATE: by the way, you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file:
ansible will read ANSIBLE_CONFIG, ansible.cfg in the current working directory, .ansible.cfg in the home directory or /etc/ansible/ansible.cfg, whichever it finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
Let me know how you go
Since Ansible 2.4, a callback plugin named full_skip was added to suppress the names of skipped tasks and the "skipping" keyword in the Ansible output. You can try the Ansible configuration below:
[defaults]
stdout_callback = full_skip
Ansible allows you to control its output by using custom callbacks.
In this case you can simply use the skippy callback, which will not output anything for a skipped task.
That said, skippy is now deprecated and will be removed in ansible v2.11.
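If you do opt for it, the configuration mirrors the full_skip example above (a sketch):
[defaults]
stdout_callback = skippy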
If you don't mind losing colours, you can elide the skipped tasks by piping the output through sed:
ansible-playbook whatever.yml | sed -nr '/^TASK/{h;n;/^skipping:/{n;b};H;x};p'
If you are using roles, you can use when to cancel the include in main.yml:
# roles/myrole/tasks/main.yml
- include: somefile.yml
  when: somevar is defined

# roles/myrole/tasks/somefile.yml
- name: this task will only run (and be seen in the output) if somevar is defined
  debug:
    msg: "Hello World"

How to copy a directory from one host to another host?

I want to copy a directory from one host to another host using SCP.
I tried the following syntax:
my $src_path="/abc/xyz/123/";
my $BASE_PATH="/a/b/c/d/";
my $scpe = Net::SCP::Expect->new(host=> $host, user=>$username, password=>$password);
$scpe->scp -r($host.":".$src_path, $dst_path);
I am getting an error like "no such file or directory". Can you help in this regard?
According to the example given in the manpage, you don't need to repeat the host in the call if you have already passed it as an option.
From http://search.cpan.org/~djberg/Net-SCP-Expect-0.12/Expect.pm:
Example 2 - uses constructor, shorthand scp:
my $scpe = Net::SCP::Expect->new(host=>'host', user=>'user', password=>'xxxx');
$scpe->scp('file','/some/dir'); # 'file' copied to 'host' at '/some/dir'
Besides, is this "-r" a typo? If you want to copy recursively, you need to set recursive => "yes" in the options hash.
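Putting the answer's two points together, a hedged Perl sketch of what the corrected call could look like. It follows the manpage's Example 2 shorthand (local source copied to the destination on the constructor's host); the host, user and password are placeholders, the paths are taken from the question, and recursive is the option the answer mentions:
use strict;
use warnings;
use Net::SCP::Expect;

# Placeholder values -- substitute your own host, credentials and paths.
my $host     = 'remote.example.com';
my $username = 'user';
my $password = 'secret';
my $src_path = '/abc/xyz/123/';
my $dst_path = '/a/b/c/d/';

# Host, user and password go into the constructor, so scp() does not repeat the host.
my $scpe = Net::SCP::Expect->new(
    host      => $host,
    user      => $username,
    password  => $password,
    recursive => 'yes',    # copy directories recursively, as suggested in the answer
);

# Following the manpage's Example 2 shorthand: copy the source directory
# to the destination directory on the host given to the constructor.
$scpe->scp( $src_path, $dst_path );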