rundeck sudo: no tty present and no askpass program specified

I am working on a Rundeck server, where I added a remote node and am trying to run this script on it:
#!/bin/bash
cat /etc/os-release
sed -i '/#DNS=/c DNS=8.8.8.8' /etc/systemd/resolved.conf && sudo systemctl restart systemd-resolved.service
When I run this job, it gets stuck, and I end up killing it manually.
Output:
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Failed: Interrupted: Connection was interrupted
[sudo] password for anas:
resource.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="node-1" always-set-pty="true" description="Rundeck server node" tags="" hostname="64.23.123.189" osArch="amd64" osFamily="unix" osName="Linux" osVersion="4.15.0-189-generic" sudo-command-enabled="true" sudo-command-pattern="^sudo .+? sudo .*$" sudo-password-option="option.sudoPassword" username="anas" ssh-authentication="password" ssh-password-storage-path="keys/Proxmox/88.password"/>
</project>
I also tried many other attributes like:
sudo-prompt-pattern="^.*password.*"
sudo-password-option="option.sudoPassword"
sudo-command-pattern="^sudo .+? sudo .*$"
sudo-command-enabled="true"
always-set-pty="true"
sudo2-command-enabled="true"
sudo2-command-pattern="^sudo .+? sudo .*$"
Rundeck version
Rundeck 4.7.0
Can anyone explain what I'm missing?

I replicated your scenario and your issue. Looking at your model source entry, I can see that you want to use a job option as the password to authenticate and run the sudo commands. Let me share a node entry and a job definition example:
Node definition example (tested on remote Ubuntu 22.04 and Rocky Linux 8 servers):
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="ubuntu"
description="ubuntu"
tags="prod"
hostname="192.168.56.12"
osArch="amd64"
osFamily="unix"
osName="Linux"
osVersion="5.11.0-49-generic #55-Ubuntu SMP"
always-set-pty="true"
username="vagrant"
ssh-authentication="password"
ssh-password-storage-path="keys/sudopasswd"
sudo-command-enabled="true"
sudo-command-pattern="^sudo$"
sudo-prompt-pattern="^\[sudo\] password for .+: .*"
sudo-password-option="option.sudoPassword" />
</project>
As you can see, the sudo-command-pattern is different, and you need to add the sudo-prompt-pattern attribute, which matches the sudo prompt so Rundeck can supply the password automatically.
Job Definition Example:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 57262967-00f1-4e5e-b872-57ace765daee
  loglevel: INFO
  name: TestSUDO
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: ubuntu '
  nodesSelectedByDefault: true
  options:
  - name: sudoPassword
    required: true
    secure: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: 'first test: using the command step'
      exec: sudo whoami
    - description: 'second test: on script step'
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: |
        sudo whoami
      scriptInterpreter: sudo /bin/bash
    keepgoing: false
    strategy: sequential
  uuid: 57262967-00f1-4e5e-b872-57ace765daee
Some things to consider:
The option named in the sudo-password-option node attribute (sudoPassword in the job) must be a Secure Remote Authentication option.
The sudo attributes work flawlessly on command steps; script steps are a little different (next point).
On script steps (like your use case), you need to define the sudo command in the Invocation String textbox (edit your job > go to your script step > click the Advanced link > Invocation String). In my case, I used sudo /bin/bash with .sh as the File Extension.
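For reference, those two Advanced settings correspond to keys already visible in the script step of the job definition YAML above:
scriptInterpreter: sudo /bin/bash
fileExtension: .sh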

Related

Passing Multiple Kubectl Commands to a Pod through Ansible

I am having some trouble passing multiple commands to a pod, running on a Rancher machine, through Ansible. Currently, I am trying to execute two kubectl commands in the same task: change to the /tmp directory of the pod and then execute an ls. The problem is that if I run the commands in different tasks, the ls lists not the /tmp directory but the default directory I land in every time I run a kubectl command. It is as if every time I access the pod with kubectl, I am running an isolated task, not dependent on the task run before. Of course, I could simply run ls /tmp to list the /tmp directory, needing only one command, and that would be fine, but that does not fulfill my objective with what I am trying to understand here.
I've assembled the following playbook to try to run both cd /tmp and ls in the same command. Take the following playbook as an example:
---
- hosts: localhost # group of hosts on host file
  connection: local
  remote_user: root
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Change to /tmp and ls
      command: |
        kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "cd /tmp"
        kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "ls"
Ansible version:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
What could possibly be wrong?
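A note on what is happening here: each kubectl exec starts a fresh shell in the pod, so state such as the current directory does not survive between invocations. A minimal sketch of one way to run both commands in a single shell, chaining them with && inside one exec (same pod and namespace as in the playbook above):
- name: Change to /tmp and ls in a single exec
  command: kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "cd /tmp && ls"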

Gitlab-runner failed to remove permission denied

I'm setting up a CI/CD pipeline with Gitlab. I've installed gitlab-runner on a Digital Ocean Ubuntu 18.04 droplet and gave permissions in /etc/sudoers to the gitlab-runner as:
gitlab-runner ALL=(ALL:ALL)ALL
The first commit to the associated repository correctly builds the docker-compose stack (the app itself is Django + Postgres), but subsequent commits are not able to clean previous builds and fail:
Running with gitlab-runner 12.8.0 (1b659122)
on ubuntu-s-4vcpu-8gb-fra1-01 52WypZsE
Using Shell executor...
00:00
Running on ubuntu-s-4vcpu-8gb-fra1-01...
00:00
Fetching changes with git depth set to 50...
00:01
Reinitialized existing Git repository in /home/gitlab-runner/builds/52WypZsE/0/lorePieri/djangocicd/.git/
From https://gitlab.com/lorePieri/djangocicd
* [new ref] refs/pipelines/120533457 -> refs/pipelines/120533457
0072002..bd28ba4 develop -> origin/develop
Checking out bd28ba46 as develop...
warning: failed to remove app/staticfiles/admin/img/selector-icons.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/search.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-alert.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/tooltag-arrowright.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-unknown-alt.svg: Permission denied
This is the relevant portion of the .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

stages:
  - test
  - deploy_staging
  - deploy_production

step-test:
  stage: test
  before_script:
    - export DYNAMIC_ENV_VAR=DEVELOP
  only:
    - develop
  tags:
    - develop
  script:
    - echo running tests in $DYNAMIC_ENV_VAR
    - sudo apt-get install -y python-pip
    - sudo pip install docker-compose
    - sudo docker image prune -f
    - sudo docker-compose -f docker-compose.yml build --no-cache
    - sudo docker-compose -f docker-compose.yml up -d
    - echo do tests now
    - sudo docker-compose exec -T web python3 -m coverage run --source='.' manage.py test
...
What I've tried:
usermod -aG docker gitlab-runner
sudo service docker restart
The best solution for me was adding
pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
into /etc/gitlab-runner/config.toml
Even if a previous job left files you don't have permission on, it will set the correct permissions before cleaning up the workdir and cloning the repo.
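For context, pre_clone_script lives under the [[runners]] entry in config.toml; a minimal sketch (the name and executor are taken from the asker's log, and the placement is an assumption to verify against your generated config):
[[runners]]
  name = "ubuntu-s-4vcpu-8gb-fra1-01"
  executor = "shell"
  pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."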
I would recommend setting GIT_STRATEGY to none in the afflicted job.
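A minimal sketch of that, assuming the step-test job from the asker's .gitlab-ci.yml:
step-test:
  variables:
    GIT_STRATEGY: none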
I have had the exact same problem. Therefore I will explain how it was resolved in detail.
Try finding your config.toml file and running the gitlab-runner command with root privileges, since permission denied is a very common error on UNIX-based operating systems.
After finding the location of config.toml, pass it:
sudo gitlab-runner run --config <absolute_location_of_config_toml>
P.S. You can find all config.toml files easily using the locate config.toml command. Make sure mlocate is already installed by executing sudo apt-get install mlocate
After facing the permission denied error, I tried using sudo gitlab-runner run instead of gitlab-runner, but it has its own problem:
ERROR: Failed to load config stat /etc/gitlab-runner/config.toml: no such
file or directory builds=0
while executing gitlab-runner without root permissions doesn't have any config file problem.
I tried implementing the ways and solutions that @Grumbanks and @vlad-Mazurkov mentioned, but they didn't work properly.
It MAY be because you write a file into the cloned-out codebase. What I do is simply create another directory outside of the gitlab-runner directory:
WORKSPACE_DIR="/home/abcd_USER/a/b"
rm -rf $WORKSPACE_DIR
mkdir -p $WORKSPACE_DIR
cd $WORKSPACE_DIR
ls -la
git clone ..................
and do whatever you need.
I never faced the issue again.

OWASP/ZAP dangling when trying to scan

I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work, I don't know what I am doing wrong, and the documentation really does not help. What I am trying to do is run a scan on my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1,
but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, my ZAP Docker container is in an unhealthy state, and after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am specifically following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker
Edit 1
My latest try, where I target my host IP address directly and the port that I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with: docker run -v $(pwd):/zap/wrk/:rw ...
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine in which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hope it resolves the issue.
docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file in the directory you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason for that is that if you use the -r parameter, ZAP will attempt to generate the report.html file at /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is the working solution using a GitLab CI YAML. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted to the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk
Pass the current user and group on the host machine to the docker container, so the process runs as the same user/group. This allows writing the reports to the directory mounted from the local host. This is done by -u $(id -u ${USER}):$(id -g ${USER})
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

Ansible: have sudo but no root

I'd like to use Ansible to manage the configuration of our Hadoop cluster (running Red Hat).
I have sudo access and can manually ssh into the nodes to execute commands. However, I’m experiencing problems when I try to run Ansible modules to perform the same tasks. Although I have sudo access, I can’t become root. When I try to execute Ansible scripts that require elevated privileges, I get an error like this:
Sorry, user awoolford is not allowed to execute '/bin/bash -c echo
BECOME-SUCCESS- […] /usr/bin/python
/tmp/ansible-tmp-1446662360.01-231435525506280/copy' as awoolford on
[some_hadoop_node].
Looking through the documentation, I thought that the become_allow_same_user property might resolve this, and so I added the following to ansible.cfg:
[privilege_escalation]
become_allow_same_user=yes
Unfortunately, it didn't work.
This post suggests that I need permissions to sudo /bin/sh (or some other shell). Unfortunately, that's not possible for security reasons. Here's a snippet from /etc/sudoers:
root ALL=(ALL) ALL
awoolford ALL=(ALL) ALL, !SU, !SHELLS, !RESTRICT
Can Ansible work in an environment like this? If so, what am I doing wrong?
Well, you simply cannot execute /bin/sh or /bin/bash, as your /etc/sudoers shows. What you could do is change Ansible's default shell to something else (the executable variable in ansible.cfg).
Since your sudo policy allows everything by default (which does not seem really secure to me), and I suppose Ansible expects an sh-compatible shell, as a really dirty hack you could copy /bin/bash to some other path/name and set the executable variable accordingly (not tested).
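A minimal sketch of that hack, assuming /bin/bash has been copied to /usr/local/bin/notashell (a hypothetical path) on each managed node; in ansible.cfg:
[defaults]
# point Ansible's remote shell at the copied binary (path is an assumption)
executable = /usr/local/bin/notashell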
In the playbook file (runthisplaybook.yml), set:
---
- hosts: label_which_will_work_on_some_servers
  sudo: yes
  roles:
    - some_role_i_want_to_run
Next, in the role's tasks/main.yml, for the action which you have to run as sudo, use something like become_user (where common_user is a variable defined in the role's defaults/main.yml file as common_user: "this_user_can_sudo"):
- name: Run chkconfig on init script
  command: "sudo -u root /sbin/chkconfig --add tomcat"

# Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"

# OR Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "sudo -u firstuser sudo -u seconduser chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"

# OR Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"
PS: While running ansible-playbook,
ansible-playbook runthisplaybook.yml --sudo-user=this_user_can_sudo -i hosts.yml -u user_which_will_connect_from_source_machine --private-key ${DEPLOYER_KEY_FILE} --extra-vars "target_svr_type=${server_type} deploy_environment=${DEPLOY_ENVIRONMENT} ansible_user=${ANSIBLE_USER}"
After researching the subject: as of Ansible 2.8, there doesn't seem to be a way to run commands as a different user using become without root permissions.
There's another way to achieve what you were asking without being so, how to put it, 'hacky'.
You can use the shell module with sudo su - <user> -c "COMMAND" to execute a command as a different user, without the need for root access to the original user.
For example,
---
- hosts: target_host

  tasks:
    - shell: 'sudo su EXEC_USER -c "whoami"'
      register: x

    - debug:
        msg: "{{ x.stdout_lines }}" # This returns EXEC_USER
However, if your play is complex, you will need to break it down and wrap only the commands that are required to be executed as a different user.
This isn't best practice (using sudo + shell instead of become), but it is a solution, and in my opinion a better one than creating a dummy shell on every node you manage.
I think sudo: yes is now deprecated and replaced with become: yes:
---
- hosts: servers_on_which_you_want_to_run
  become: yes
  roles:
    - some_role
The simplest solution is to just create an ansible.cfg in your playbook directory with the following content, if it doesn't accept the root user:
[defaults]
sudo_user = UsernameToWhichYouWantToUse
Hope this will solve your problem.

Run Python 2.7 by default in a Dotcloud custom service

I need to make Python 2.7 the default version of Python for running a Jenkins build server. I'm trying to use python_version to do this, but Python 2.6 remains the default version. I'm probably missing something really simple. Any suggestions?
dotcloud.yml
jenkins:
  type: custom
  buildscript: jenkins/builder
  ports:
    www: http
  config:
    python_version: v2.7
  processes:
    sshagent: ssh-agent /bin/bash
    jenkins: ~/run
db:
  type: postgresql
builder
#!/bin/bash
if [ -f ~/jenkins.war ]
then
echo 'Found jenkins installation.'
else
echo 'Installing jenkins.'
wget -O ~/jenkins.war http://mirrors.jenkins-ci.org/war/latest/jenkins.war
fi
echo 'Installing dotCloud scaffolding.'
cp -a jenkins/. ~
echo 'Setting up SSH.'
mkdir -p ~/.ssh
cp jenkins_id ~/.ssh/id_rsa
chmod 0600 ~/.ssh/id_rsa
ssh-keygen -R bitbucket.org
ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts
I'm still not sure why my build file didn't solve the problem, but I was able to work around it by using the --python=/usr/bin/python2.7 option for virtualenv in my Jenkins build script.
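For reference, a minimal sketch of that workaround as it might appear in a Jenkins build shell step (the env directory name is an assumption for illustration):
# create a Python 2.7 virtualenv and activate it before running the build
virtualenv --python=/usr/bin/python2.7 env
. env/bin/activate
python --version  # should now report 2.7.x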