How do I configure job execution for sudo users with the NOPASSWD option? - rundeck

I am trying to run a command as another user remotely from Rundeck jobs.
Rundeck provides a sudo login/password mechanism to escalate privileges.
On my remote server the sudoers file uses the NOPASSWD option. How can I configure Rundeck jobs in this case?
My node configuration:
<project>
<node name="testServer"
type="Node"
description="testNode"
hostname="IP_ADDRESS"
username="${option.Login}"
ssh-authentication="password"
sudo-command-enabled="true"
ssh-password-option="option.Password"
/>
</project>
Example command in a job to become another user:
sudo /bin/su suuser -
Errors after attempting execution:
Remote command failed with exit status -1
08:57:13 Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
08:57:13 Failed: NonZeroResultCode: Remote command failed with exit status -1

I added the following three attributes to my node and it appears to work:
sudo-command-enabled="true"
sudo-prompt-pattern=""
sudo-command-pattern="^sudo.*"
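For the NOPASSWD route the question describes, the sudoers entry on the remote host usually looks something like the following sketch (the user name and the blanket ALL scope are assumptions; scope it down for real use, and always edit via visudo):

```
# hypothetical /etc/sudoers.d/rundeck entry -- edit with visudo
rundeckuser ALL=(ALL) NOPASSWD: ALL
```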

You have to set the attributes below in the Edit Project Configuration File:
project.ssh-authentication=password
project.ssh-keypath=/home/rundeck/.ssh/id_rsa
project.ssh-password-storage-path=keys/xyz.password
project.sudo-command-enabled=true
project.sudo-password-option=option.jobPassword
project.sudo-password-storage-path=keys/xyz.password
project.sudo-prompt-pattern=^Password\:.*
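As a quick sanity check, you can confirm that the prompt pattern configured above actually matches what sudo prints. The sample prompt string here is an assumption; substitute whatever your remote sudo actually emits:

```shell
# Simulate the prompt sudo would print and test it against the
# regex configured in project.sudo-prompt-pattern (^Password:.*)
prompt='Password: '
if printf '%s' "$prompt" | grep -Eq '^Password:.*'; then
  echo "pattern matches"
else
  echo "pattern does NOT match"
fi
```

If the pattern never matches, Rundeck times out waiting for the prompt, which is exactly the "Expected input was not seen in 5000 milliseconds" error above.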

Related

Failed to parse remote port from server output

I want to connect to my organization's JupyterHub server with VS Code's Remote SSH extension, but it crashes with this report:
[17:48:27.180] Running script with connection command: ssh -T -D 56752 "jupyterhub.whatever.com" bash
[17:48:27.183] Terminal shell path: C:\WINDOWS\System32\cmd.exe
[17:48:27.575] > The system cannot execute the specified program.
> ]0;C:\WINDOWS\System32\cmd.exe
[17:48:27.576] Got some output, clearing connection timeout
[17:48:28.857] "install" terminal command done
[17:48:28.858] Install terminal quit with output: ]0;C:\WINDOWS\System32\cmd.exe
[17:48:28.858] Received install output: ]0;C:\WINDOWS\System32\cmd.exe
[17:48:28.858] Failed to parse remote port from server output
[17:48:28.859] Resolver error: Error:
at g.Create (c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:585222)
at t.handleInstallOutput (c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:583874)
at t.tryInstall (c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:681023)
at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
at async c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:643908
at async t.withShowDetailsEvent (c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:647224)
at async t.resolve (c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:644958)
at async c:\Users\k6789\.vscode\extensions\ms-vscode-remote.remote-ssh-0.90.1\out\extension.js:1:726917
[17:48:28.863] ------
And this is my SSH config:
Host jupyterhub.whatever.com
HostName jupyterhub.whatever.com
IdentityFile ~/.ssh/id_rsa
PreferredAuthentications publickey
User MyUserName
I don't think this question helps in my case, so how can I solve the problem?
Try deleting the contents of your C:/Users/user/.ssh/known_hosts and connecting to the same host again.
I had a similar problem, and it looks like Visual Studio Code creates a new config file.
I had the same issue with VS Code. I was able to resolve it by using SSH from a terminal and running the following on the host:
rm -f .vscode-server/*.log

vscode remote-ssh : server status check failed - waiting and retrying

This case where I can't connect to the remote because of "server status check failed - waiting and retrying" has happened several times.
However, when I delete the "data" directory and the files with the '.log', '.pid', or '.token' suffix under the ".vscode-server" directory on the remote server, the problem is solved.[1]
[1]: https://i.stack.imgur.com/pwEwf.png
On your remote server, check whether vscode-server daemon processes are still hanging around from the last connection; kill them all and retry:
$ ps aux | grep vscode-server
$ kill -2 pid
I tried rebooting the remote machine and it worked.

Rundeck user fails to switch user

Using rundeck. I created a node with the following settings:
<attribute name="sudo-command-enabled" value="true"/>
<attribute name="sudo-prompt-pattern" value=""/>
<attribute name="sudo-command-pattern" value="^sudo.*"/>
I have added the key to authorized keys and made sure no password was required on the server to switch to "myuser".
When I run the command to switch to "myuser" as first step in my workflow, I get the following message/output:
Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
Failed: NonZeroResultCode: Remote command failed with exit status -1
myuser@<server_ip_address_here>:~$
The command run step looks like this:
sudo su - myuser
It's worth noting that manually SSH-ing into the server and typing above command yields no error whatsoever.
As can be seen in the last line, it DOES successfully switch to "myuser". However, since the exit status is -1, it will not run any further commands after this step.
What do I need to do or configure for my node to stop asking for the password and failing, especially considering the switch is apparently successful?
I did the same (passwordless sudo configuration) and it works this way.
Node definition:
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="node00"
description="Node 00"
tags="user"
hostname="192.168.33.20"
osArch="amd64"
osFamily="unix"
osName="Linux"
osVersion="3.10.0-1062.4.1.el7.x86_64"
username="vagrant"
ssh-key-storage-path="keys/rundeck"
sudo-command-enabled="true"
sudo-command-pattern="^\[sudo\] password for .+: .*" />
</project>
And now you can go to the "commands" section, select the target node and switch to root user with:
sudo whoami
Now, any command with a sudo statement is elevated.
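To double-check the regex given in the node definition above, you can test it locally against sudo's default prompt format (the sample prompt string is an assumption based on sudo's usual "[sudo] password for <user>: " format):

```shell
# Default sudo prompt looks like: "[sudo] password for vagrant: "
printf '[sudo] password for vagrant: ' \
  | grep -Eq '^\[sudo\] password for .+: .*' \
  && echo "regex matches the prompt" \
  || echo "regex does not match"
```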

Rundeck not able to connect to the domain servers

I am running Rundeck 3.0.20-20190408 on Red Hat Enterprise Linux 8 and I am connecting to the node using a PEM key.
resource.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
<node name="10.10.101.100" description="MyServer" tags="" hostname="10.10.101.100" osArch="amd64" osFamily="unix" osName="Linux" osVersion="3.10.0-693.2.2.el7.x86_64" username="rundeckuser" ssh-keypath="/home/username#domain.com/rundeckuser.pem" />
</project>
I get the error below while executing commands:
Execution failed: 3138 in project test_default: [Workflow result: ,
step failures: {1=Dispatch failed on 1 nodes: [10.10.101.100:
ConfigurationFailure: SSH Keyfile does not exist:
/home/username#domain.com/rundeckuser.pem +
{dataContext=MultiDataContextImpl(map={}, base=null)} ]}, Node
failures: {10.10.101.100=[ConfigurationFailure: SSH Keyfile does not
exist: /home/username#domain.com/rundeckuser.pem +
{dataContext=MultiDataContextImpl(map={}, base=null)} ]}, status:
failed]
ssh-keypath must be defined with a local path reachable by the rundeck user (or the user that launches Rundeck, if you're using a WAR-based installation), like this example.
But the best approach is to use Rundeck's key storage (adding the Rundeck server's public key to your target servers to access them; follow this if you like). Also, you can see the best practices here (it uses Rundeck 2.11, but the principle is the same for all versions).
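A minimal sketch of the key-storage variant of the node entry (the storage path and user name here are assumptions; the key must first be uploaded to Rundeck's key storage under that path):

```
<node name="10.10.101.100"
      description="MyServer"
      hostname="10.10.101.100"
      username="rundeckuser"
      ssh-key-storage-path="keys/project/rundeckuser.pem" />
```

This avoids filesystem paths entirely, so there is nothing for the rundeck user to be unable to read.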

Chef : Opsworks : run rake task

My objective is to execute a rake task on my apps running in Opsworks.
It appears to me that my opsworks cookbook is not running rake from the correct directory.
How can I tell the cookbook to run in the app home dir (so it can pick up the Gemfile)?
Do I need to specify RAILS_ENV?
My cookbook's default.rb:
Chef::Log.info("****** Audit Photo URLS : Running Rake Task ******")
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
end
Errors from Opsworks log:
[2014-11-28T18:36:33+00:00] INFO: Running queued delayed notifications before re-raising exception
[2014-11-28T18:36:33+00:00] ERROR: Running exception handlers
[2014-11-28T18:36:33+00:00] ERROR: Exception handlers complete
[2014-11-28T18:36:33+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2014-11-28T18:36:33+00:00] ERROR: execute[rake audit:audi_image_urls] (auditphoto::default line 3) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '10'
---- Begin output of bundle exec rake audit:audi_image_urls ----
STDOUT: Could not locate Gemfile
STDERR:
---- End output of bundle exec rake audit:audi_image_urls ----
Ran bundle exec rake audit:audi_image_urls returned 10
[2014-11-28T18:36:33+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The execute resource can take a cwd attribute for the working directory from which the command is run.
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
cwd '/over/there'
environment 'RAILS_ENV' => 'production'
end
OpsWorks Deploy events and Execute Recipes commands
Chef 11
OpsWorks populates node[:deploy]['appshortname'] for Deploy events and Execute Recipes stack commands to house each application's configuration. With this data you could use:
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
cwd node[:deploy]['appshortname'][:deploy_to]
user node[:deploy]['appshortname'][:user]
group node[:deploy]['appshortname'][:group]
environment( { 'RAILS_ENV' => node[:deploy]['appshortname'][:rails_env] } )
end
You may want to source :environment_variables for the environment if you have anything related configured there.
Chef 12
From the AWS stack settings documentation:
In Chef 12 Linux, stack settings are available as Chef data bags and are accessed only through Chef search. Data bags are stored on AWS OpsWorks Stacks instances in a set of JSON files in the /var/chef/runs/run-ID/data_bags directory, where run-ID is a unique ID that AWS OpsWorks Stacks assigns to each Chef run on an instance. Stack settings are no longer available as Chef attributes, so stack settings can no longer be accessed through the Chef node object. For more information, see the AWS OpsWorks Stacks Data Bag Reference.
app = search("aws_opsworks_app").first
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
cwd app['app_source']['deploy_to']
user app['app_source']['user']
group app['app_source']['group']
environment( { 'RAILS_ENV' => app['app_source']['rails_env'] } )
end
Other events and commands
It looks like OpsWorks runs a little differently from a normal Chef server: it supplies its own JSON blob to a local Chef instance for each run, which means (as you mentioned) the :deploy attributes will be missing for other events/commands Amazon chooses not to supply JSON for.
It might be possible, but very hacky and prone to breakage, to populate the :deploy attributes from the last JSON file that contains deploy state ({"deploy": { "app_name": { "application": "app_name" } } }) in /var/lib/aws/opsworks/chef.
You would also need to source the deploy::default attributes after that JSON load to fill in any defaults.
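The hacky approach above could be sketched roughly as follows. Note the file location (/var/lib/aws/opsworks/chef) and the exact JSON layout are assumptions for illustration; the sample string below just mirrors the structure quoted earlier:

```ruby
require 'json'

# Hypothetical sketch: pull the :deploy attributes back out of the last
# OpsWorks run's JSON blob so they can be merged into node attributes.
def deploy_attrs(json_text)
  JSON.parse(json_text).fetch('deploy', {})
end

# In practice you would read the newest file under /var/lib/aws/opsworks/chef;
# here a sample string stands in for its contents.
sample = '{"deploy":{"app_name":{"application":"app_name","deploy_to":"/srv/www/app_name/current"}}}'
attrs = deploy_attrs(sample)
puts attrs['app_name']['deploy_to']  # => /srv/www/app_name/current
```

As noted, you would still need to load the deploy::default attributes afterwards to fill in any defaults the JSON omits.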