It seems that the Ansible lookup plugins do not adhere to privilege escalation, and it is not clear to me whether this is by design.
I have looked for an answer to this, and though I have found many similar questions, I haven't yet seen one which seems to answer why the following playbook behaves the way it does.
---
- hosts: localhost
  become: 1
  tasks:
    - name: cat file
      command: cat /home/bob/.ssh/id_rsa.pub
      register: cat

    - debug:
        msg: |
          dog: {{ cat.stdout }}

    - name: add the variable
      set_fact:
        rsa_key: "{{ lookup('file', '/home/bob/.ssh/id_rsa.pub') }}"
      delegate_to: localhost
The result of running this play is that the command module "works" while the lookup module does not:
PLAY [localhost] *************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [cat file] **************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug] *****************************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "dog: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl+xAFC2hwsLaWvCEOFHEz96AU8ltF1fA8ZNQp9Mkl6FFZUEFu2rAl+imSXm+xAPrWhqOoLgkYZKq6qAsqG3SqSisrr4uHGdC4F/5NBlgR7OqfAU76VfJRmcq4F01caXBJVuciZ0EX7KQcC6ixNpZweLPoRDBNntDJnDKVIbx8h7w3qAYRbYOsLv6OT7BLgldSrJSOYBOJ0/SLZIUDAvewPnPppkwZgMAMV12bXHzn5Imsn9S6K5riZ/n3oenOgW787w5XQI0xKsxO6g4NjzciMELafXfoq07+Gz53NMyo9/DHag2w8y6m+Js4axazMFFgcnS3Hrbc/tSejvarEynEktN1/+JTu8eEdKxtZYr2ez55SW+MOxZr14isQJDc0btduO4yJfXvJ6KooULVbqZyVnmun6pKgecsCDTy6kYQVV0oJgpixiquoLAMPN+nKzufaSgGTRbKnQuf+7w6X94ci3iIkpS7qxvQsZ/P61q7uQjhtsmG6qsk6/M9nIruJY0= ansible-generated on rh1.local.home\n"
}
TASK [add the variable] ******************************************************************************************************************************************************************************************************************************************************
[WARNING]: Unable to find '/home/bob/.ssh/id_rsa.pub' in expected paths (use -vvvvv to see paths)
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/bob/.ssh/id_rsa.pub"}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I'm running this as a user which is NOT "bob", with become_user = root and become_method = sudo. Any ideas, or a confirmation that the lookup plugin indeed ignores privilege escalation statements, would be appreciated.
Looking into the actual code of the standard Ansible plugins, I found this inside the password lookup plugin:
As all lookups, this runs on the Ansible host as the user running the playbook, and "become" does not apply, the target file must be readable by the playbook user, or, if it does not exist, the playbook user must have sufficient privileges to create it. (So, for example, attempts to write into areas such as /etc will fail unless the entire playbook is being run as root).
Hence, lookup plugins indeed ignore "become" escalation directives, and this behavior is "by design", though I find it rather counter-intuitive and not well documented, and I dislike it since it forces me to write ugly code to get around it ;-).
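One possible workaround (a sketch only, not necessarily the "ugly code" referred to above) is to read the file with a regular task instead of a lookup, since modules do honor become; the slurp module returns the file content base64-encoded:

# sketch: variable names are illustrative; slurp runs as a task on the host and honors become
- name: read the key with privilege escalation
  become: true
  slurp:
    src: /home/bob/.ssh/id_rsa.pub
  register: rsa_key_raw

- name: add the variable
  set_fact:
    rsa_key: "{{ rsa_key_raw.content | b64decode }}"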
We try to configure an Azure VM using an Azure DevOps pipeline. We first create the machine using Terraform and then we need to configure it. Right now the pipeline is functional when we use a customized Ubuntu Azure DevOps agent (a VM we setup ourselves in Azure).
We would prefer to use a Microsoft Hosted Ubuntu Agent. When we try to run our pipeline on the Microsoft Hosted Ubuntu agent, it fails with the message "winrm or requests is not installed".
We have done a lot of research and attempts to install the needed components, but none have been fruitful.
All the examples and documentation we can find on the internet don't mention our specific use case: Ansible configuration of Windows VMs in Azure from a Microsoft Hosted Ubuntu agent. Is it perhaps not possible for some reason?
If it is, any pointers in the right direction will be much appreciated!
The error we see in the Azure DevOps pipeline is this:
ansible-playbook -vvvv -i inventory/hosts.cfg main.yml --extra-vars '{"customer_name": "<REMOVED>" }'
ansible-playbook [core 2.12.5]
config file = None
configured module search path = ['/home/vsts/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vsts/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/vsts/.ansible/collections:/usr/share/ansible/collections
executable location = /home/vsts/.local/bin/ansible-playbook
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
auto declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
yaml declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
Parsed /home/vsts/work/1/s/ansible/inventory/hosts.cfg inventory source with ini plugin
Loading collection ansible.windows from /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/ansible/windows
Loading collection community.windows from /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/community/windows
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
Loading callback plugin default of type stdout, v2.0 from /home/vsts/.local/lib/python3.8/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: main.yml *************************************************************
Positional arguments: main.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/vsts/work/1/s/ansible/inventory/hosts.cfg',)
extra_vars: ('{"customer_name": "<REMOVED>"}',)
forks: 5
1 plays in main.yml
PLAY [windows:pro] *********************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/vsts/work/1/s/ansible/main.yml:1
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Using module file /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
fatal: [51.144.125.149]: FAILED! => {
"msg": "winrm or requests is not installed: No module named 'winrm'"
}
PLAY RECAP *********************************************************************
51.144.125.149 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
We tried to fix the problem by installing various potentially relevant components in the pipeline just before running the ansible-playbook command, for instance:
pip3 install pywinrm
Later, based on input on this SO question, we tried this in the pipeline:
python3 -m pip install --ignore-installed pywinrm
find / -name winrm.py
ansible-playbook -vvv -i inventory/hosts.cfg main.yml
The find command finds winrm.py here:
/opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible/plugins/connection/winrm.py
The ansible-playbook configuration we are using is:
ansible-playbook [core 2.12.5]
config file = None
configured module search path = ['/home/vsts/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible
ansible collection location = /home/vsts/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/pipx_bin/ansible-playbook
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
The error we get is:
task path: /home/vsts/work/1/s/ansible/main.yml:1
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Using module file /opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
fatal: [13.73.148.141]: FAILED! => {
"msg": "winrm or requests is not installed: No module named
'winrm'"
}
You can try the solution in this Red Hat knowledge base article:
https://access.redhat.com/solutions/3356681
The last comment there suggests the following (replace the yum commands with their apt equivalents on Ubuntu):
I was getting this error even if python2-winrm version 0.3.0 is already installed via yum:

yum list installed | grep winrm
python2-winrm.noarch    0.3.0-1.el7    #epel

pip install "pywinrm>=0.2.2" only resulted in "Requirement already satisfied".
I ran this to resolve the error -
yum autoremove python2-winrm.noarch
pip install "pywinrm>=0.2.2"
Then ping: pong worked just fine over https, port=5986
ram#thinkred1cartoon$ ansible all -i hosts.txt -m win_ping
172.16.96.135 | SUCCESS => {
"changed": false,
"ping": "pong" }
Conversely, if you don't want to run the yum autoremove command, then the plain pip install won't work for you. In that case, run the --ignore-installed variant instead:
pip install --ignore-installed "pywinrm>=0.2.2"
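Translated to the apt-based hosted Ubuntu agent, a hedged sketch of a pipeline step that mirrors the quoted workaround could look like this (the python3-winrm package name and the step layout are assumptions, not taken from the question):

# Azure DevOps pipeline step (sketch, assuming an Ubuntu hosted agent).
# Removes a possibly conflicting distro package, then reinstalls pywinrm via pip.
- script: |
    sudo apt-get remove -y python3-winrm || true
    python3 -m pip install --ignore-installed "pywinrm>=0.2.2"
  displayName: Ensure pywinrm is installed for Ansible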
I am new to SaltStack and I am trying to run a job using SaltStack Config.
I have a master and a minion (a Windows machine).
The init.sls file is the following:
{% set machineName = salt['pillar.get']('machineName', '') %}

C:\\Windows\\temp\\salt\\scripts:
  file.recurse:
    - user: Administrator
    - group: Domain Admins
    - file_mode: 755
    - source: salt://PROJECTNAME/DNS/scripts
    - makedirs: true

run-multiple-files:
  cmd.run:
    - cwd: C:\\Windows\\temp\\salt\\scripts
    - names:
      - ./dns.bat {{ machineName }}
and the dns.bat file:
Get-DnsServerResourceRecordA -ZoneName corp.local
The main idea is to create a DNS record, but for now I am only trying to run this command to check things out. However, I get the following info message when I run the job:
"The minion has not yet returned. Please try again later."
I went to the master, ran the command salt-run manage.status, and got the following:
salt-run manage.status
/usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.23) or chardet (4.0.0) doesn't match a supported version!
RequestsDependencyWarning)
down:
- machine1
- machine2
- machine3
up:
- saltmaster
I tried some commands to restart the machines, but still no success.
Any help would be appreciated! Thanks in advance!
I have the following code in one of my roles:
- name: Create user group
  group: name={{ user.group }} state=present
  tags:
    - user
When I run ansible-playbook -i localhost --ask-become-pass playbook.yml I get an error:
ERROR! conflicting action statements: group, tags
The error appears to have been in '/home/user/spark/roles/base/tasks/main.yml': line 12, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Create user group
^ here
...
exception type: <class 'ansible.errors.AnsibleParserError'>
exception: conflicting action statements: group, tags
...
Full error log: https://www.pastiebin.com/59cb4faf9ae35
If I remove tags it runs without error.
It was completely OK all the time, but now it behaves like this without any identifiable reason. I tried to use a previous working version of the playbook (through git checkout), but the error persists anyway :(
I would be grateful for any hint on this.
P.S. I have a fork of an Ansible playbook for an Arch Linux machine.
P.P.S.
ansible 2.4.0.0
config file = /home/user/spark/ansible.cfg
configured module search path = [u'/home/user/spark/library/ansible-aur']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Sep 20 2017, 01:25:59) [GCC 7.2.0]
The ansible-aur submodule (from .gitmodules):
[submodule "library/ansible-aur"]
path = library/ansible-aur
url = https://github.com/pigmonkey/ansible-aur.git
ansible.cfg
[defaults]
library = ./library/ansible-aur
UPD: If I remove the group keyword like this:
- name: Create user group
  tags:
    - user
I get the error "unexpected parameter type in action" after leaving only name and tags. Full error log: pastiebin.com/59cb5b2996a10
That's a pretty funny story.
I have a vim plugin that creates ctags tags for a project, and it created a file named spark/library/ansible-aur/tags.
It wasn't a Python file or anything like that; it was just an ordinary ctags file.
I found out that it was the root of all the trouble by using the pdb debugger on the Ansible code.
I found where one of the error strings I saw, conflicting action statements, is raised in the Ansible code:
> /usr/lib/python2.7/site-packages/ansible/parsing/mod_args.py(293)parse()
-> raise AnsibleParserError("conflicting action statements: %s, %s" % (action, item), obj=self._task_ds)
I injected pdb.set_trace() there and then did some investigation:
(Pdb) item
u'tags'
(Pdb) item in module_loader
True
(Pdb) module_loader.find_plugin(item)
u'/home/user/spark/library/ansible-aur/tags'
The last line was the hint. After deleting that tags file, everything was completely OK.
Corresponding bug report: https://github.com/ansible/ansible/issues/31070
I have an Ansible role, for example:
---
- name: Deploy app1
  include: deploy-app1.yml
  when: 'deploy_project == "{{app1}}"'

- name: Deploy app2
  include: deploy-app2.yml
  when: 'deploy_project == "{{app2}}"'
But I deploy only one app per role call. When I deploy several apps, I call the role several times. Every time, there is a lot of "skipped" task output (from tasks which do not pass the condition), which I do not want to see. How can I avoid it?
I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file.
display_skipped_hosts = false
Note: it will still output the name of the task, although it will not display "skipped" anymore.
UPDATE: by the way, you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file.
ansible will read ANSIBLE_CONFIG, ansible.cfg in the current working directory, .ansible.cfg in the home directory or /etc/ansible/ansible.cfg, whichever it finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
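For example, a minimal ansible.cfg placed next to the playbook would look like this (a sketch; only the standard [defaults] section is needed for this setting):

[defaults]
# minimal sketch: suppress "skipped" output
display_skipped_hosts = false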
Let me know how you go
Since Ansible 2.4, a callback plugin named full_skip has been available to suppress skipped task names and the "skipping" keyword in the Ansible output. You can try the Ansible configuration below:
[defaults]
stdout_callback = full_skip
Ansible allows you to control its output by using custom callbacks.
In this case you can simply use the skippy callback, which will not output anything on a skipped task.
That said, skippy is now deprecated and will be removed in ansible v2.11.
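For reference, a minimal sketch of that configuration in ansible.cfg (valid only on versions where skippy still ships, per the deprecation note above):

[defaults]
# sketch: use the (deprecated) skippy stdout callback
stdout_callback = skippy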
If you don't mind losing colours you can elide the skipped tasks by piping the output through sed:
ansible-playbook whatever.yml | sed -nr '/^TASK/{h;n;/^skipping:/{n;b};H;x};p'
If you are using roles, you can use when to skip the include in main.yml:
# roles/myrole/tasks/main.yml
- include: somefile.yml
  when: somevar is defined

# roles/myrole/tasks/somefile.yml
- name: this task will only run (and be seen in the output) if somevar is defined
  debug:
    msg: "Hello World"
I've split my Gruntfile into multiple files using load-grunt-tasks, but I seem to get an error when using ftp-deploy. I've tried a few different things, and my guess is that the hyphen (-) in "ftp-deploy" might be causing the problem.
I'm getting the following error:
Running "ftp-deploy:theme" (ftp-deploy) task
Verifying property ftp-deploy.theme exists in config...ERROR
>> Unable to process task.
Warning: Required config property "ftp-deploy.theme" missing. Use --force to continue.
When running:
grunt "ftp-deploy:theme" --verbose
My ftp-deploy script looks as follows:
# FTP DEPLOY
# -------
module.exports =
  'ftp-deploy':
    theme:
      auth:
        host: 'host.com'
        port: 21
        authKey: 'key'
      src: 'drupal/sites/all/themes/theme_name'
      dest: 'www/host.com/sites/all/themes/theme_name'
I've tried running it without encapsulating it inside the "theme:" object, which works, but that is essentially not what I want to do, as I have different folders I want to transfer.
Any ideas to what a solution might be?
I found the answer myself: the 'ftp-deploy': key should of course not be included in the file.
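For reference, a sketch of the corrected file with that key removed, assuming the split config files are keyed by their file name (the values are the same as above, so the task config is picked up as ftp-deploy.theme):

# FTP DEPLOY (sketch of the corrected file; values as in the question)
# -------
module.exports =
  theme:
    auth:
      host: 'host.com'
      port: 21
      authKey: 'key'
    src: 'drupal/sites/all/themes/theme_name'
    dest: 'www/host.com/sites/all/themes/theme_name'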