Issues while creating a Rundeck Windows node and using key-based authentication - rundeck

I followed http://www.techpaste.com/2015/08/rundeck-windows-nodes-configuration/ and created a Windows node, but when I run a job it says "Password is not set". When I provide the password as an option named winrmPassword, it works.
I have written this node entry:
<node name="win_node" connectionType="WINRM_NATIVE" node-executor="overthere-winrm" winrm-password-option="winrmPassword"
winrm-protocol="http" winrm-auth-type="basic" username="winrmuser"
hostname="ec2-54-213-198-191.us-west-2.compute.amazonaws.com"/>
and I am giving winrmPassword as an option; that works.
So:
1. How do I run a job on multiple nodes if each node's password is different?
2. How can I use keys for Windows authentication? Can anyone share a resources.xml file for this?

I have resolved the above issue:
1. We must use key-based auth if we want to run the same command on multiple Windows nodes at a time.
2. For key-based auth on Windows:
i. First, follow all the steps from http://www.techpaste.com/2015/08/rundeck-windows-nodes-configuration/ and configure OpenSSH and WinRM for all Windows nodes. Make sure the firewall rules for WinRM are set properly on each Windows node.
ii. Follow http://www.techpaste.com/2015/06/windows-ssh-server-setup-and-configuration/ and make sure you can run commands on the target Windows node from the Rundeck server without giving a password, only through keys. Make sure the private key is readable, e.g. by setting its permissions with chmod (see the sketch after these steps).
iii. Update the resources.xml file:
<node name="node_name" username="winrmuser"
hostname="hostname_for_windows_node" ssh-keypath="full_path_to_private_key"
ssh-authentication="privateKey"
ssh-key-passphrase-option="option.sshKeyPassphrase"/>
You are done! You can run any command through Rundeck on the target node, as winrmuser has admin access.
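For step ii above, a minimal sanity check from the Rundeck server might look like this (a sketch reusing the placeholders from the node entry, assuming the key is owned by the user Rundeck runs as):
# Restrict the private key so the SSH client accepts it and the Rundeck user can read it
chmod 600 full_path_to_private_key
# Confirm key-only login and remote command execution work before wiring up Rundeck
ssh -i full_path_to_private_key winrmuser@hostname_for_windows_node ipconfig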

Related

How do I automatically load PowerShell profiles with a Jenkins pipeline when running Jenkins as a service?

First off, I didn't have this issue until setting up my agent to run as a Windows service.
My company has custom cmdlets we have built that are part of the default profile loaded when running PowerShell. I am using Jenkins to execute a batch file that iterates a command over a series of machines. After setting up Jenkins as a service, it no longer has access to those cmdlets, leading me to believe the profile isn't being loaded. If I load the profile manually by running the profile script, it only seems to work on the first machine.
When setting up Jenkins as a service, I configured it to run as the same user I would manually run these scripts as if I were to log in to the computer. I have verified it is using the proper user with $env:UserName.
I am at a loss as to why setting up Jenkins as a Windows service broke this. I could revert to using the command line to connect to Jenkins, but that doesn't always connect after server maintenance or a power outage.
Did I configure something wrong, or is there a way to load profiles instead of Jenkins always running with -NoProfile?
Update - I noticed that when running $PROFILE it was set to a default profile location that did not exist. It seems that when opening PowerShell manually on the machine it loads the AllUsersCurrentHost profile, but this doesn't happen when using PowerShell from Jenkins running as a service. I created the file location where it said it was looking for the profile, copied the default profile there, and it works. I am still not sure why the behavior differs, but at least I found a solution.
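In case it helps someone else hitting this, here is a rough sketch, run from a Jenkins batch step, of comparing the profile paths and seeding the missing per-user profile from the shared one (only standard PowerShell is used; it assumes the AllUsersCurrentHost profile is the one that holds the custom cmdlets, as described above, and the exact paths are whatever your machine reports):
rem Print the per-user profile path the service account resolves, plus the shared AllUsersCurrentHost path
powershell -NoProfile -Command "$PROFILE; $PROFILE.AllUsersCurrentHost"
rem Create the per-user profile file (and its folder) and copy the shared profile into it
powershell -NoProfile -Command "New-Item -ItemType File -Path $PROFILE -Force; Copy-Item $PROFILE.AllUsersCurrentHost $PROFILE -Force"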

How can I set (add) a label to a Jenkins node from itself (PowerShell, Windows 2017)

How can I set (add) a label to a Jenkins node from itself (PowerShell, Windows 2017)?
I have no access to Jenkins admin mode via the HTTPS GUI (that would be the standard way to do this).
I have both:
access to the node I need, as a remote PowerShell console (administrator)
access to the Jenkins master's SSH Linux account (root)
I would like to add another label to the client node from one of those CLIs.
Neither the official Jenkins documentation nor googling has turned up a procedure for doing this.
How can I do that? (So that I can build a script around it afterwards.)
As an alternative method, I had to change it in the node's own XML file, as
<label>label addmynewlabel</label>
then restarted the Jenkins client (possibly unnecessary?) and restarted the Jenkins service on the main Jenkins server.
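If the Jenkins master's CLI is reachable from the SSH account mentioned above, a sketch of the same change using the standard Jenkins CLI get-node/update-node commands could look like this (the jenkins-cli.jar location, URL, node name and any required credentials are placeholders for your instance):
# Dump the node's current config to a file
java -jar jenkins-cli.jar -s http://localhost:8080/ get-node mynode > node.xml
# Edit node.xml so the <label> element contains the extra label, e.g. "label addmynewlabel"
# Then push the modified config back to the master
java -jar jenkins-cli.jar -s http://localhost:8080/ update-node mynode < node.xml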

Kerberos: kinit on Windows 8.1 leads to empty ticket cache

I installed Kerberos for Windows on a newly set-up Windows 8.1 machine.
Domain: not set
Workgroup: WORKGROUP
I edited the krb5.ini file in the C:\ProgramData\MIT\Kerberos5 directory like this:
[libdefaults]
default_realm = HSHADOOPCLUSTER.DE
[realms]
HSHADOOPCLUSTER.DE = {
admin_server = had-job.server.de
kdc = had-job.server.de
}
After a restart, I ran kinit -kt daniel.keytab daniel to authenticate against the realm via the console. Getting a ticket by user and password via the Kerberos Ticket Manager also seems to work fine, as the ticket is shown in the UI.
What I'm wondering about is that when I call klist I get an empty list back, which says something like "cached tickets: 0".
This doesn't seem normal to me, as my Ubuntu machine shows valid tickets via klist after a kinit.
What am I doing wrong? Is there some more configuration to do? Sometimes I read about a ksetup tool, but I don't know which settings are necessary here and which are not...
Update:
After I set
[libdefaults]
...
default_ccache_name = FILE:C:/ProgramData/Kerberos/krb5cc_%{uid}
in my krb5.conf, the kinit command via console and via Kerberos Ticket Manager creates a file in the specified path. So far everything looks good.
But: the kinit command creates ticket files with very different names (long vs. short), depending on whether I run the console as admin (short name) or not (long name). The Kerberos Ticket Manager only shows one of the tickets:
If run as admin:
Shows the ticket I created via admin console
Creates ticket files with short file names
If run as normal:
Shows the ticket I created via "normal" console
Creates ticket files with long file names
The klist command still doesn't show the cached tickets, regardless of whether the console was opened as admin or not.
The MIT Kerberos documentation states that...
There are several kinds of credentials cache supported in the MIT Kerberos library. Not all are supported on every platform ...
FILE caches are the simplest and most portable. A simple flat file format is used to store one credential after another. This is the default...
API is only implemented on Windows. It communicates with a server process that holds the credentials in memory... The default credential cache name is determined by ...
The KRB5CCNAME environment variable...
The default_ccache_name profile variable in [libdefaults]
The hardcoded default, DEFCCNAME
But AFAIK, on Windows the hard-coded default cache is API: and that's what you can manage with the UI. kinit also uses that protocol by default.
I personally never could get klist to use that protocol, even with the "standard" syntax, i.e. either
  klist -c API:
or
  set KRB5CCNAME=API:
  klist
On the other hand, if you point KRB5CCNAME to a FILE:***** cache, then you can kinit and then klist the ticket; but it will not show in the UI and will not be available to web browsers and the like.
If the klist command doesn't show the tickets even after setting the KRB5CCNAME environment variable (e.g. set KRB5CCNAME=C:\kerberos_cache\cache\krb5cache; note that this is a file, not a directory, and you'll have to create the parent directory manually), then chances are that the klist you're running is not the one from the MIT Kerberos Windows installation in C:\Program Files\MIT\Kerberos\bin but rather the klist command from Windows itself in C:\Windows\system32.
You can check this by running which klist if you have the Cygwin tools. In that case, the simplest solution is to copy klist.exe within the MIT Kerberos installation's bin directory to a new file, e.g. klist_mit.exe. Cache entries should then be shown when you run the klist_mit command.
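To make the distinction concrete, here is a sketch of a cmd session that calls the MIT binaries by full path so the System32 klist cannot shadow them (the cache path is only an example and its parent directory must already exist; the principal matches the question above):
rem Point MIT Kerberos at a FILE: cache instead of the default API: cache
set KRB5CCNAME=FILE:C:\kerberos_cache\cache\krb5cache
rem Use full paths so the MIT kinit/klist are used, not the Windows klist in System32
"C:\Program Files\MIT\Kerberos\bin\kinit.exe" daniel@HSHADOOPCLUSTER.DE
"C:\Program Files\MIT\Kerberos\bin\klist.exe"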

How can I set up a cell and collective in Bluemix

I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for a generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Log in to the Admin Console as wsadmin.
Create a server.
Open all the ports on each host for each server you created by running the openFirewallPorts.sh script. Below you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin, or SSH to your host using the wsadmin user and its password.
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at http://yourhostname:9080/appname.
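A quick reachability check from any machine that can see the host might be the following (hostname and context root are placeholders; -I only fetches the response headers):
# Expect an HTTP 200 (or a redirect) if the server is up and the application is deployed
curl -I http://yourhostname:9080/appname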

Automating gsutil commands

I'm trying to automate some gsutil commands, but I'm struggling to see where the authentication files are kept and how to reuse them (if that's what happens).
I've gone through the gcloud init process in bash...
curl https://sdk.cloud.google.com | bash
gcloud init
All works well when I run
'gsutil ls'
Now I'm trying to automate the process so it would work on a new server, added into a crontab there (rather than creating a new config each time).
I saw a mention of setting the GOOGLE_APPLICATION_CREDENTIALS environment variable, so I copied my credentials from the web login to a file and tried it, e.g. trying as a different user to test:
export GOOGLE_APPLICATION_CREDENTIALS=/home/user/.gsutil/mycreds
and then ran gsutil ls, but it fails.
So I assume I've got the whole credentials thing a bit wrong. I'm assuming there is a file somewhere that was originally created by gcloud which I could use, but I can't see it anywhere.
I've looked at the answer here, but it doesn't seem up to date now, as per the last comment.
Edit: I have followed Zachary's steps: gcloud auth activate-service-account --key-file=myfilelocation
However, with 'gsutil ls' I now get...
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
So my next question would be: where is it looking for the project ID? If I run gsutil config, it seems to create a new set of auth credentials, which then causes another error, so I have removed that.
You should be able to do this without diving too deeply into the implementation of authentication for gsutil.
If you're using standalone gsutil (if you installed via this method), the instructions in the linked question are still valid (as Travis points out).
If you'd like to continue using the gsutil supplied via the Cloud SDK, you should use service accounts. Service accounts are the preferred method of authenticating on headless machines or in non-interactive contexts.
Your flow would look something like the following:
Create a service account via the Google Cloud Developers Console.
On the remote machine, install the Cloud SDK and gsutil. If you're not installing interactively, it's better to skip the curl ... | bash method. Instead, download this install archive, extract it, and run the install.sh script. This script has options (visible with --help); if you specify choices to all of these options, it won't prompt you.
Copy the service account to the remote machine. Run gcloud auth activate-service-account --key-file=/path/to/service-account.json.
Run gsutil. You should be appropriately authenticated.
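Concretely, the last two steps might look roughly like this on the remote machine (the key file path and project ID are placeholders; setting a default project also avoids the "requires a project id" error mentioned in the question):
# Authenticate gcloud/gsutil using the service account key copied to this machine
gcloud auth activate-service-account --key-file=/path/to/service-account.json
# Set a default project so project-scoped gsutil operations work non-interactively
gcloud config set project your-project-id
# This should now work from a shell or a crontab entry without prompting
gsutil ls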
You have to set the default project and user for gsutil. Run the following command:
gcloud init
Choose option 1. It shows you the different users; select the user and then select the project.
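You can then confirm which account and project are active with, for example:
# Shows the active account and the default project gcloud/gsutil will use
gcloud config list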
I was trying to create a bucket with the project ID as its name:
$ gsutil mb -l eu gs://PROJECT-ID
Creating gs://root****/...
Error: You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
Steps that resolved it for me:
gcloud auth login
gcloud config set project <PROJECT-ID>
gsutil mb -l eu gs://<PROJECT-ID>
Creating gs://root***/...
The error is gone and it works as expected.