Ansible inventory.
Today it looks like this:
hosts:
  xxxxxxx.ysp.se:
I want to add the host via a variable, like this:
hosts:
  "{{ host_1 }}:"
with the value defined in another YAML file:
host_1: xxxxxxx.ysp.se
action.yml:
hosts:
  host_one:
    brokers:
hosts.yml (same directory), defining the host alias:
hosts:
  host_one:
    ansible_host: uxxxxx.ii.sys.blue.com
Use of aliases solved the problem.
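For completeness, a minimal sketch of how the alias approach fits together (the group layout and the ping task are illustrative assumptions; the hostnames are taken from the question):

hosts.yml:
all:
  hosts:
    host_one:
      ansible_host: xxxxxxx.ysp.se

playbook.yml:
- hosts: host_one
  tasks:
    - ping:

The playbook always refers to the stable alias host_one, while the real hostname can be changed in one place in the inventory.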
I have a playbook and only want to run this play on the first master node. I tried moving the list into the role, but that did not seem to work. Thanks for your help!
## master node only changes
- name: Deploy change kubernetes Master
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
  delegate_to: "{{ groups['masters'][0] }}"
ERROR! 'delegate_to' is not a valid attribute for a Play
The error appears to be in '/mnt/win/kubernetes.playbook/deploy-kubernetes.yml': line 11, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:

## master node only changes
- name: Deploy change kubernetes Master
  ^ here
In one playbook, create a new group with this host in the first play and use it in the second play. For example,
shell> cat playbook.yml
- name: Create group with masters.0
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ groups.masters.0 }}"
        groups: k8s_master_0

- name: Deploy change kubernetes Master
  hosts: k8s_master_0
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
(not tested)
Fix the role name.
If files_location is a variable that shall be used in the role's scope, put it into vars. For example:
roles:
  - role: gd.kubernetes.master.role
    vars:
      files_location: ../files
In my hosts file I have about 10 different groups, each with devices in it. Each customer deployment should go to a specific region, and I want to specify that in a customer config file.
In my playbook I tried to use a variable for hosts, and my plan was to specify the hosts group in the config file.
master_playbook.yml
hosts: "{{ target_region }}"
vars:
  custom_config_file: "./app_deployment/customer_config_files/xx_app_prod.yml"
xx_app_prod.yml
customer: test1
env: prod
app_port: 25073
target_region: dev
Error message I get:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'target_region' is undefined
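Note that custom_config_file only stores the path to the customer config; nothing in the playbook ever loads its contents, so target_region is never defined anywhere. A minimal sketch of one way to supply it, assuming the customer file from the question is passed as extra vars on the command line:

$ ansible-playbook master_playbook.yml -e @./app_deployment/customer_config_files/xx_app_prod.yml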
To determine which groups a host (other than the running host) belongs to, you have to use a little helper.
Create a script:
#!/usr/bin/env ansible-playbook
# call like: ./show_groups -i develop -l jessie.fritz.box
- hosts: all
  gather_facts: no
  tasks:
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"
After that, you can run a playbook like:
- name: "get group memberships of host"
  shell: "{{ role_path }}/files/scripts/show_groups -i {{ fullinventorypath }} -l {{ hostname }}"
  # register under its own name; 'groups' is a reserved magic variable
  register: group_output

- name: "create empty list of group memberships"
  set_fact:
    memberships: []

- name: "fill list"
  set_fact:
    memberships: "{{ memberships + [item] }}"
  with_items: "{{ group_output.stdout_lines }}"
I'm writing an Ansible playbook to insert a list of Secret objects into Kubernetes.
I'm using the k8s_raw module and I want to import this list from a group_vars file.
I can't find the right syntax to import the list of secrets into my data field.
playbook.yml
- hosts: localhost
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
            SKRT: "c2trcnIK"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
  vars_files:
    - "varfile.yml"
varfile.yml
secrets:
  TAMAGOTCHI_CODE: "MTIzNAo="
  FRIDGE_PIN: "MTIzNAo="
First, what does it actually say when you attempt the above? It would help to have the result of your attempts.
Just guessing, but try moving vars_files to before the place where you use the variables. Also, be sure that your indentation is exactly right when you do.
- hosts: localhost
  vars_files:
    - /varfile.yml
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data: "{{ secrets }}"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
Side note: I would debug this immediately without attempting the full task. Remove your main task and, after loading vars_files, directly print the secrets with the debug module. That lets you fine-tune the syntax until you get it right, without having to run and wait for the more complex play that follows.
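A minimal sketch of that debug-first approach, using the same vars_files as above:

- hosts: localhost
  vars_files:
    - varfile.yml
  tasks:
    - name: Print the imported secrets
      debug:
        var: secrets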
To import this list from a group_vars file:
Put localhost into a group, for example a group test.
> cat hosts
test:
  hosts:
    localhost:
Put the varfile.yml into the group_vars/test directory:
$ tree group_vars
group_vars/
└── test
    └── varfile.yml
Then running the playbook below
$ cat test.yml
- hosts: test
  tasks:
    - debug:
        var: secrets.TAMAGOTCHI_CODE

$ ansible-playbook -i hosts test.yml
gives:
PLAY [test] ***********************************
TASK [debug] **********************************
ok: [localhost] => {
"secrets.TAMAGOTCHI_CODE": "MTIzNAo="
}
PLAY RECAP *************************************
localhost: ok=1 changed=0 unreachable=0 failed=0
The problem was the SKRT: "c2trcnIK" field just under the "{{ secrets }}" line. I deleted it and now it works! Thank you all.
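For reference, the corrected data block then looks like this; the combine-filter variant is only an assumption for the case where the extra SKRT key is still needed:

data: "{{ secrets }}"
# or, to keep the extra key as well:
data: "{{ secrets | combine({'SKRT': 'c2trcnIK'}) }}"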
I am setting up a Grafana server on my local kube cluster using Helm charts. I am trying to get it to work on a subpath, in order to later add TLS in a production environment, but I am unable to access Grafana at http://localhost:3000/grafana.
I have tried almost all the recommendations out there on the internet about adding a subpath to the ingress, but nothing seems to work.
The Grafana login screen shows up on http://localhost:3000/ when I remove root_url: http://localhost:3000/grafana from values.yaml.
But when I add root_url: http://localhost:3000/grafana back into values.yaml, I see the error attached below (towards the end of this post).
With root_url: http://localhost:3000/grafana and the ingress as:
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana
  hosts:
    - localhost
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
I expect the http://localhost:3000/grafana URL to show me the login screen; instead I see the errors below:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
4. Sometimes restarting grafana-server can help
Can you please help me fix the ingress and root_url in values.yaml to get the Grafana URL working at /grafana?
As the documentation for running Grafana behind a reverse proxy explains, root_url should be configured in the grafana.ini file under the [server] section. You can modify your values.yaml to achieve this:
grafana.ini:
  ...
  server:
    root_url: http://localhost:3000/grafana/
Your ingress in values.yaml should also look like this:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana/
  hosts:
    - ""
Hope it helps.
I followed the exact steps mentioned by #coolinuxoid; however, I still faced an issue when trying to access the UI at http://localhost:3000/grafana/.
I got redirected to http://localhost:3000/grafana/login with no UI displayed.
A small modification helped me access the UI through http://localhost:3000/grafana/.
In the grafana.ini configuration I added serve_from_sub_path: true, so my final grafana.ini looked something like this:
grafana.ini:
  server:
    root_url: http://localhost:3000/grafana/
    serve_from_sub_path: true
The ingress configuration was exactly the same. I cannot be sure whether it is a version-specific issue, but I'm using Grafana v8.2.1.
You need to tell the Grafana application that it is served not under the root URL / (the default), but under a subpath. The easiest way is via GF_-prefixed env vars:
grafana:
  env:
    GF_SERVER_ROOT_URL: https://myhostname.example.com/grafana
    GF_SERVER_SERVE_FROM_SUB_PATH: 'true'
  ingress:
    enabled: true
    hosts:
      - myhostname.example.com
    path: /grafana($|(/.*))
    pathType: ImplementationSpecific
The above example works for the Kubernetes nginx-ingress-controller. Depending on the ingress controller you use, you may need
    path: /grafana
    pathType: Prefix
instead.
I want to force Ansible to gather facts about hosts inside the playbook (to use that data inside a role) regardless of --limit, but I don't know how.
I have a playbook like this:
- hosts: postgres_access
  tasks:
    - name: Gathering info
      action: setup

- hosts: postgres
  roles:
    - postgres
Inside the 'postgres' role I have a template that iterates over the hosts' default IPs:
{% for host in groups['postgres_access'] %}
host all all {{hostvars[host].ansible_default_ipv4.address}}/32 md5
{% endfor %}
This works like magic, but only if I run the playbook without --limit. With --limit it breaks, because some hosts in the host group have no gathered facts.
ansible-playbook -i testing db.yml --limit postgres
failed: [pgtest] (item=pg_hba.conf) => {"failed": true, "item": "pg_hba.conf", "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'ansible_default_ipv4'"}
How can I use --limit to reconfigure only the postgres hosts, while still having the network data from the other hosts (without doing all of their other configuration)?
Try this, please!
- hosts: postgres
  pre_tasks:
    - setup:
      delegate_to: "{{ item }}"
      delegate_facts: true   # assign the gathered facts to the delegated host, not to the current one
      with_items: "{{ groups['postgres_access'] }}"
  roles:
    - postgres
You can run setup for the hosts in the postgres_access group as a task and save the facts using register:
- name: setup hosts
  setup:
    filter: ansible_default_ipv4
  delegate_to: "{{ item }}"
  with_items: "{{ groups.postgres_access }}"
  register: ip_v4

- name: create template
  template: src=[your template] dest=[your dest file]
Just keep in mind that the template needs to change how you reference each host's IPv4 address. I tried something like this:
{% for item in ip_v4.results %}
host all all {{ item.ansible_facts.ansible_default_ipv4.address }}/32 md5
{% endfor %}
For printing just the IP of each host in the group
Try this:
- hosts: postgres
  pre_tasks:
    - setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['postgres_access'] }}"
  roles:
    - postgres
Use the same role you have defined, as it is, with ignore_errors: true, so that hosts which do not have gathered data will not make the play fail.
And if you want data to be gathered for all the hosts in both groups, postgres_access and postgres, then add gather_facts: true to get facts for the postgres group; for postgres_access you already have a task written.
- hosts: postgres_access
  tasks:
    - name: Gathering info
      action: setup

- hosts: postgres
  gather_facts: true
  roles:
    - postgres
  ignore_errors: true