Rundeck user fails to switch user - rundeck

I am using Rundeck. I created a node with the following settings:
<attribute name="sudo-command-enabled" value="true"/>
<attribute name="sudo-prompt-pattern" value=""/>
<attribute name="sudo-command-pattern" value="^sudo.*"/>
I have added the key to authorized keys and made sure no password was required on the server to switch to "myuser".
When I run the command to switch to "myuser" as first step in my workflow, I get the following message/output:
Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
Failed: NonZeroResultCode: Remote command failed with exit status -1
myuser@<server_ip_address_here>:~$
The command run step looks like this:
sudo su - myuser
It's worth noting that manually SSH-ing into the server and typing above command yields no error whatsoever.
As can be seen in the last line, it DOES successfully switch to "myuser". However, since the exit status is -1, it will not run any other commands following this.
What do I need to do or configure in order for my node to stop asking for the password and thus failing, especially considering the switch is apparently successful?

I set up the same passwordless sudo configuration, and it works this way.
Node definition:
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="node00"
description="Node 00"
tags="user"
hostname="192.168.33.20"
osArch="amd64"
osFamily="unix"
osName="Linux"
osVersion="3.10.0-1062.4.1.el7.x86_64"
username="vagrant"
ssh-key-storage-path="keys/rundeck"
sudo-command-enabled="true"
sudo-command-pattern="^\[sudo\] password for .+: .*" />
</project>
And now you can go to the "commands" section, select the target node and switch to root user with:
sudo whoami
Now, any command with a sudo statement is elevated.
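As a quick sanity check outside Rundeck, you can verify that the regex used in the node definition above really matches the prompt text your target's sudo prints (this is just an illustrative shell test, not a Rundeck feature; the sample prompt is an assumption):

```shell
# Hypothetical local check: does the pattern from the node definition match
# the password prompt the target's sudo actually prints?
prompt='[sudo] password for vagrant: '
if printf '%s\n' "$prompt" | grep -qE '^\[sudo\] password for .+: .*'; then
  echo "prompt matches"
else
  echo "prompt does NOT match"
fi
```

If your target prints a different prompt (a custom sudoers `passprompt`, for example), adjust the pattern accordingly.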

Related

Rundeck 4.0.0 - Remote node command execution using ssh

I am having an issue with the most basic of Rundeck functions - namely, running a command over SSH on a remote node. I have generated an RSA key and added it via the Key Storage function. I have also created a YAML file for node definitions:
root@rundeck:/var/lib/rundeck# cat nodes.yml
mynode:
nodename: mynode
hostname: mynode
description: 'Some description'
ssh-authentication: privateKey # added - unsure if really required
ssh-keypath: /var/lib/rundeck/.ssh/id_rsa # added - unsure if really required
username: rundeck
osFamily: linux
The node is showing up correctly and command line ssh works just fine:
root@rundeck:/var/lib/rundeck/.ssh# ssh -i id_rsa rundeck@mynode date
Mon Apr 4 16:19:33 UTC 2022
The project settings are as below:
#Mon Apr 04 16:23:36 UTC 2022
#edit below
project.description=someproject
project.disable.executions=false
project.disable.schedule=false
project.execution.history.cleanup.batch=500
project.execution.history.cleanup.enabled=false
project.execution.history.cleanup.retention.days=60
project.execution.history.cleanup.retention.minimum=50
project.execution.history.cleanup.schedule=0 0 0 1/1 * ? *
project.jobs.gui.groupExpandLevel=1
project.label=somelabel
project.name=someproject
project.nodeCache.enabled=true
project.nodeCache.firstLoadSynch=true
project.output.allowUnsanitized=false
project.ssh-authentication=privateKey
project.ssh-command-timeout=0
project.ssh-connect-timeout=0
project.ssh-key-storage-path=keys/project/someproject/rundeck_id_rsa
resources.source.1.config.file=/var/lib/rundeck/nodes.yml
resources.source.1.config.format=resourceyaml
resources.source.1.config.requireFileExists=true
resources.source.1.config.writeable=true
resources.source.1.type=file
service.FileCopier.default.provider=jsch-scp
service.NodeExecutor.default.provider=jsch-ssh
Yet, when I try and run a Command from the UI, it fails:
Failed: SSHProtocolFailure: invalid privatekey: [B@7d7d0b2d
What am I doing incorrectly, and how do I successfully run a command over ssh on a remote node?
Your node definition needs the ssh-key-storage-path attribute pointing to the Rundeck user private key (created earlier in the Rundeck Key Storage); also, the osFamily attribute must be set to unix (not linux; Rundeck only accepts two values there: unix and windows).
To add an SSH node follow these steps:
If you're using a WAR-based installation, execute: ssh-keygen -t rsa -b 4096. That generates two keys (private and public) in the .ssh directory of the user that launches Rundeck. If you're using an RPM/DEB installation, these keys are already created under the /var/lib/rundeck path.
Go to the remote SSH node (the account that you want to connect to from Rundeck), then add the Rundeck server user's public key to the authorized_keys file. Then you can test the connection with ssh user@xxx.xxx.xxx.xxx from the Rundeck server user account.
Launch Rundeck and then add the rundeck user's private key to the Rundeck key storage (remember to include the first and last lines, "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----"). In my case I use the path keys/rundeck.
Create a new Project and then create the resources.xml file with the remote node information. To generate that file, go to Project Settings > Edit Nodes > click the "Configure Nodes" button > click "Add Sources +" > select the "+ File" option > in the "Format" field select resourcexml and fill in the "File Path" field (put the file name at the end, usually "resources.xml"). Also select the "Generate", "Include Server Node" and "Writeable" checkboxes and click the "Save" button.
Now you can edit that file to include the remote node, which in my case is "node00" (a Vagrant test image); in the ssh-key-storage-path attribute I used the same path created in step 3:
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="hyperion" description="Rundeck server node" tags="" hostname="hyperion" osArch="amd64" osFamily="unix" osName="Linux" osVersion="4.15.0-66-generic" username="ruser"/>
<node name="node00" description="Node 00" tags="" hostname="192.168.33.20" osArch="amd64" osFamily="unix" osName="Linux" osVersion="3.10.0-1062.4.1.el7.x86_64" username="vagrant" ssh-key-storage-path="keys/rundeck"/>
</project>
In the Rundeck GUI, check your nodes in the "Nodes" section of the sidebar.
Go to "Commands" (sidebar), put the SSH remote node name as a filter, and launch any command.
You can follow an entire guide here.
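Regarding step 3 above, a small stand-alone sketch (the file is a stand-in, not a real key) for checking that the key you are about to paste into Key Storage still carries its header and footer lines:

```shell
# Example only: write a stand-in key file, then confirm the first and last
# lines are the PEM header/footer that the key storage expects to see.
KEY=$(mktemp)
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----\n' > "$KEY"
head -n1 "$KEY" | grep -q -- '-----BEGIN RSA PRIVATE KEY-----' &&
tail -n1 "$KEY" | grep -q -- '-----END RSA PRIVATE KEY-----' &&
echo "key has header and footer"
```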
Alternatively, you can re-generate the key pairs with the following command: ssh-keygen -p -f /var/lib/rundeck/.ssh/id_rsa -m pem.
The key storage saves the private key with CRLF line endings; this was the issue I found with version 4.2.1.
As a dirty fix for ssh-exec.sh, after the line
echo "$RD_CONFIG_SSH_KEY_STORAGE_PATH" > "$SSH_KEY_STORAGE_PATH"
insert these lines:
sed -i 's/\r$//' "$SSH_KEY_STORAGE_PATH"
SSHOPTS="$SSHOPTS -i $SSH_KEY_STORAGE_PATH"
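The effect of that inserted sed line can be reproduced on its own (a self-contained sketch on a throwaway file, not Rundeck code):

```shell
# Simulate a key file written with CRLF line endings, then strip the trailing
# CR characters exactly as the sed line added to ssh-exec.sh does.
KEYFILE=$(mktemp)
printf -- '-----BEGIN RSA PRIVATE KEY-----\r\nabc\r\n' > "$KEYFILE"
sed -i 's/\r$//' "$KEYFILE"
if grep -q "$(printf '\r')" "$KEYFILE"; then
  echo "CR still present"
else
  echo "CRLF stripped"
fi
```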

Rundeck script works fine but nothing happens on the remote server

I'm new to Rundeck, and I designed a job to execute a script on a remote machine. In the UI I get the correct log and everything looks fine, including the service PID, but nothing happens and there is no process with that PID on the remote host.
Does anybody have the same experience?
I am trying to get ActiveMQ running on a remote host.
Make sure that the job is dispatching to the remote node, follow these steps to configure and test it:
Go to the remote SSH node (the account that you want to connect to from Rundeck) and add the Rundeck server user's public key to the authorized_keys file.
On the Rundeck side, add the Rundeck server user's private key to the key storage (remember to include the first and last lines, "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----").
Create a new Project and then create the resources.xml file with the remote node information. To generate that file, go to Project Settings > Edit Nodes > click the "Configure Nodes" button > click "Add Sources +" > select the "+ File" option > in the "Format" field select resourcexml and fill in the "File Path" field (put the file name at the end, usually "resources.xml"). Also select the "Generate", "Include Server Node" and "Writeable" checkboxes and click the "Save" button.
Now you can edit that file to include the remote node, which in my case is "node00" (a Vagrant test image); in the ssh-key-storage-path attribute I used the same path created in step 2:
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="hyperion" description="Rundeck server node" tags="" hostname="hyperion" osArch="amd64" osFamily="unix" osName="Linux" osVersion="4.15.0-66-generic" username="ruser"/>
<node name="node00" description="Node 00" tags="" hostname="192.168.33.20" osArch="amd64" osFamily="unix" osName="Linux" osVersion="3.10.0-1062.4.1.el7.x86_64" username="vagrant" ssh-key-storage-path="keys/rundeck"/>
</project>
In the Rundeck GUI, check your nodes in the "Nodes" section of the sidebar.
Go to "Commands" (sidebar), put the SSH remote node name as a filter, and launch any command against the remote node.
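A trivial way to confirm where a step actually runs (an illustrative one-liner to paste into the "Commands" page, not a Rundeck built-in): if dispatch is misconfigured, the output shows the Rundeck server's own hostname instead of the remote node's.

```shell
# Prints the host and user this step executes as; on a correctly dispatched
# job this shows the remote node, not the Rundeck server itself.
echo "running on $(hostname) as $(whoami)"
```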

Rundeck not able connect the domain servers

I am running Rundeck 3.0.20-20190408 on Red Hat Enterprise Linux 8, and I am connecting to the node using a PEM key.
resource.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="10.10.101.100" description="MyServer" tags="" hostname="10.10.101.100" osArch="amd64" osFamily="unix" osName="Linux" osVersion="3.10.0-693.2.2.el7.x86_64" username="rundeckuser" ssh-keypath="/home/username@domain.com/rundeckuser.pem" />
</project>
I get the error below while executing commands:
Execution failed: 3138 in project test_default: [Workflow result: ,
step failures: {1=Dispatch failed on 1 nodes: [10.10.101.100:
ConfigurationFailure: SSH Keyfile does not exist:
/home/username@domain.com/rundeckuser.pem +
{dataContext=MultiDataContextImpl(map={}, base=null)} ]}, Node
failures: {10.10.101.100=[ConfigurationFailure: SSH Keyfile does not
exist: /home/username@domain.com/rundeckuser.pem +
{dataContext=MultiDataContextImpl(map={}, base=null)} ]}, status:
failed]
ssh-keypath must be set to a local path readable by the rundeck user (or the user that launches Rundeck if you're using a WAR-based installation), as in this example.
But the best approach is to use Rundeck's key storage (adding the Rundeck server public key to your target servers; follow this guide if you like). Also, you can see the best practices here (the guide uses Rundeck 2.11, but the same principle applies to all versions).
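To rule out the ConfigurationFailure above, you can check that the file exists and is readable by the account running Rundeck (an illustrative sketch; the path and stand-in key below are hypothetical):

```shell
# Hypothetical path; substitute the ssh-keypath value from your node entry.
KEYPATH=/tmp/rundeckuser.pem
printf -- '-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n' > "$KEYPATH"
if [ -r "$KEYPATH" ]; then
  echo "key file is readable"
else
  echo "key file missing or unreadable"
fi
```

Run the check as the same user that runs the Rundeck service, since its permissions, not yours, are what matter.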

How to configure job execution for sudo users with the NOPASSWD option?

I'm trying to run a command as another user remotely from Rundeck jobs.
Rundeck provides a sudo login/password mechanism for privilege escalation.
On my remote server, the sudoers file has the NOPASSWD option. How can I configure Rundeck jobs in this case?
My node configuration:
<project>
<node name="testServer"
type="Node"
description="testNode"
hostname="IP_ADDRESS"
username="${option.Login}"
ssh-authentication="password"
sudo-command-enabled="true"
ssh-password-option="option.Password"
/>
</project>
Example command in the job to become another user:
sudo /bin/su suuser -
Errors after attempted executions:
Remote command failed with exit status -1
08:57:13 Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
08:57:13 Failed: NonZeroResultCode: Remote command failed with exit status -1
I added the following three attributes to my node and it appears to work:
sudo-command-enabled="true"
sudo-prompt-pattern=""
sudo-command-pattern="^sudo.*"
You have to set the attributes below in the Edit Project Configuration File:
project.ssh-authentication=password
project.ssh-keypath=/home/rundeck/.ssh/id_rsa
project.ssh-password-storage-path=keys/xyz.password
project.sudo-command-enabled=true
project.sudo-password-option=option.jobPassword
project.sudo-password-storage-path=keys/xyz.password
project.sudo-prompt-pattern=^Password\:.*
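You can check the project.sudo-prompt-pattern value against the prompt your remote sudo actually prints with a quick local test (illustration only; the backslash before the colon in the config is unnecessary in the shell regex, and the sample prompt is an assumption):

```shell
# Does the configured prompt pattern match the prompt text? Replace the
# sample with whatever your remote sudo really prints.
printf 'Password:\n' | grep -qE '^Password:.*' && echo "prompt pattern matches"
```

If this prints nothing, Rundeck will likewise time out waiting for a prompt it never recognizes.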

Create Virtual Machine using libvirt error related to AppArmor

I am trying to create a virtual machine using libvirt using the command:
virsh create file
Contents of "file":
<domain type='qemu' id='3'>
<name>testvm</name>
<memory>100</memory>
<vcpu>1</vcpu>
<os>
<type arch='i686'>hvm</type>
</os>
<devices>
<disk type='file' device='disk'>
<source file='/libtmp/VM-linux.0.2.img'/>
<target dev='hdc'/>
</disk>
</devices>
<on_reboot>restart</on_reboot>
<on_poweroff>preserve</on_poweroff>
<on_crash>restart</on_crash>
</domain>
Here is the error that occurs:
error: Failed to create domain from file
error: internal error cannot load AppArmor profile 'libvirt-9cb01efc-ed3b-ff8e-4de5-7227d311dd15'
I am able to create the VM without loading the image file.
The profile name changes every time. I tried stopping it and creating the VM again, but I got the same error.
Any pointers will be very helpful.
I had the same problem; the cause was that I had unwisely placed a read-only CD-ROM image under /etc, like this:
<disk type="file" device="cdrom">
<driver name='qemu' type="raw" />
<source file="/etc/libvirt/qemu/cdrom.iso" />
<target dev='hdb' bus='virtio'/>
<readonly/>
</disk>
Moving the image to /var removed the error message and allowed the virtual machine to start. The source line becomes:
<source file="/var/lib/libvirt/images/cdrom.iso" />
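Putting both pieces together, the working disk definition looks like this:

```xml
<disk type="file" device="cdrom">
  <driver name='qemu' type="raw" />
  <!-- image moved out of /etc into the standard libvirt images directory -->
  <source file="/var/lib/libvirt/images/cdrom.iso" />
  <target dev='hdb' bus='virtio'/>
  <readonly/>
</disk>
```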
This is a bug in libvirt. See https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/665531
Edit the xml definition of the virtual domain with "virsh edit domainname" command. Replace type='host_device' with type='raw' in the xml definition.
This is a workaround, not the correct fix: set AppArmor to complain mode using the following command:
sudo aa-complain /usr/sbin/libvirtd