No write access to $HOME in tmux after logout and login - kerberos

I am not able to write to files in $HOME (on an Andrew File System) in tmux after logging out and logging in again.
(.lobster)[earth] ~/lobster >touch test
touch: setting times of `test': Permission denied
My problem seems similar to the one described here except that for me, the permissions look fine:
(.lobster)[earth] ~/lobster >ls -ld
drwxr--r-- 7 awoodard campus 2048 Mar 28 15:55 .
I've tried checking KRB5CCNAME outside of tmux and updating it to the same value inside of tmux, to no avail.
Thanks!

AFS file system implementations such as OpenAFS and AuriStorFS use AFS tokens for authentication, not Kerberos tickets. AFS tokens can be obtained from Kerberos via the aklog command. When executed without parameters, aklog uses the Kerberos ticket-granting ticket stored in the current Kerberos credential cache to acquire an AFS token for the default workstation cell. The workstation cell can be determined with the fs wscell command.
host# fs wscell
This workstation belongs to cell 'auristor.com'
To determine if you have an AFS token for a cell use the 'tokens' command.
host# tokens
Tokens held by the Cache Manager:
Rxgk Tokens for auristor.com [Expires Apr 03 12:43]
User's (AFS ID 103) rxkad tokens for auristor.com [Expires Apr 03 12:43]
If you wish to obtain AFS tokens for a cell other than the workstation cell, name it explicitly:
host# aklog grand.central.org
Finally, you can obtain debugging output from aklog with the -d parameter.
I hope this helps.
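The answer above can be sketched as a small script: decide from the `tokens` output whether a cell still holds a token, and re-run kinit && aklog if not. This is a hedged sketch: the helper name has_token_for_cell is mine, and the sample line below is copied from the answer's own output in place of a live `tokens` call.

```shell
# Hypothetical helper: succeeds if the `tokens` output fed on stdin lists a
# token for the given cell.
has_token_for_cell() {
  grep -q "tokens for $1 \[Expires" -
}

# Sample `tokens` output from the answer above, standing in for a live call.
sample="User's (AFS ID 103) rxkad tokens for auristor.com [Expires Apr 03 12:43]"

if printf '%s\n' "$sample" | has_token_for_cell auristor.com; then
  result="token present"
else
  result="no token; run: kinit && aklog"   # fresh TGT, then fresh AFS token
fi
echo "$result"
```

In the tmux scenario from the question, you would run this check inside the old tmux session; a "no token" result means the session's cached token died at logout and kinit && aklog must be re-run there.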

Related

Can Kerberos (klist) show two tickets from the same principal?

I'm attempting to write a script that checks whether my Kerberos tickets are valid or expiring soon. To do this, I use klist --json or klist to produce a list of currently active tickets (depending on the version of Kerberos installed), then I parse the results as JSON or with regular expressions.
The end result is that I get a list of tickets that looks like this:
Issued Expires Principal
Aug 19 16:44:51 2020 Aug 22 14:16:55 2020 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Aug 20 09:05:06 2020 Aug 20 19:05:06 2020 ldap/abc-dc101.example.com@EXAMPLE.COM
Aug 20 09:32:18 2020 Aug 20 19:32:18 2020 krbtgt/DEV.EXAMPLE.COM@EXAMPLE.COM
With a little bit of work, I can parse these results and verify them. However, I'm curious whether Kerberos can ever hold two tickets for the same principal. Reading the MIT page on Kerberos usage, it seems like there is only ever one ticket that would be the "initial" ticket.
Can I rely on uniqueness by principal, or do I need to check for the possibility of multiple tickets from the same principal?
It's a bit more complicated than that.
TL;DR: your second TGT seems related to cross-realm authentication; see below.
klist shows the tickets that are present in the default system cache:
an error message if there is no such cache to query (e.g. a FILE cache that does not exist, the KEYRING kernel service not started, etc.)
possibly 1 TGT (Ticket Granting Ticket) that asserts your identity in your own realm
possibly N service tickets that assert you are entitled to contact service X on server Z (which may belong to another realm, see below)
in the case of cross-realm authentication, some intermediate tickets that allow you to convert your TGT in realm A.R into a TGT in realm R, which in turn lets you get a service ticket in realm B.R (that would be the default, hierarchical path used with e.g. Active Directory, but custom paths may be defined in /etc/krb5.conf under [capaths], depending on the trusts defined between realms)
But note that not all service tickets are stored in the cache -- it is legit for an app to get the TGT from the cache, get a service ticket, and keep it private in memory. That's what Java does.
And it is legit for an app (or group of apps) to use a private cache, cf. the env variable KRB5CCNAME (pretty useful when you have multiple services running under the same Linux account and don't want to mix up their SPNs), so you can't see their tickets with klist unless you point it at that custom cache explicitly.
And it is legit for an app to not use the cache at all, and keep all its tickets private in memory. That's what Java does when provided with a custom JAAS config that mandates to authenticate with principal/keytab.
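The practical consequence for the asker's script: group tickets by principal instead of assuming uniqueness. A sketch using the question's own sample listing (with '#' read as '@'); a real script would pipe `klist` output in instead of the sample string.

```shell
# Sketch: count tickets per principal in a klist-style listing. The sample
# below is the question's output, standing in for live `klist` output.
klist_sample='Issued Expires Principal
Aug 19 16:44:51 2020 Aug 22 14:16:55 2020 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Aug 20 09:05:06 2020 Aug 20 19:05:06 2020 ldap/abc-dc101.example.com@EXAMPLE.COM
Aug 20 09:32:18 2020 Aug 20 19:32:18 2020 krbtgt/DEV.EXAMPLE.COM@EXAMPLE.COM'

# The principal is the last whitespace-delimited field; tally per principal
# so duplicates (e.g. cross-realm TGTs) are detected rather than assumed away.
counts=$(printf '%s\n' "$klist_sample" | awk 'NR > 1 { seen[$NF]++ }
  END { for (p in seen) print seen[p], p }' | sort)
printf '%s\n' "$counts"
```

Any count greater than 1 flags a principal that appears more than once, which the grouping handles gracefully where a uniqueness assumption would not.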

Is it possible to have multiple Kerberos tickets on the same machine?

I have a use case where I need to connect to 2 different DBs using 2 different accounts, and I am using Kerberos for authentication.
Is it possible to create multiple Kerberos tickets on the same machine?
kinit account1@DOMAIN.COM (first ticket)
kinit account2@DOMAIN.COM (second ticket)
Whenever I do klist, I only see the most recent ticket created. It doesn't show all the tickets.
Next, I have a job that needs to first use ticket for account1 (for connection to DB1) and then use ticket for account2 (for DB2).
Is that possible? How do I tell in DB connection what ticket to use?
I'm assuming MIT Kerberos and linking to those docs.
Try klist -A to show all tickets in the ticket cache. If there is only one, try switching your ccache type to DIR as described here:
DIR points to the storage location of the collection of the credential caches in FILE: format. It is most useful when dealing with multiple Kerberos realms and KDCs. For release 1.10 the directory must already exist. In post-1.10 releases the requirement is for parent directory to exist and the current process must have permissions to create the directory if it does not exist. See Collections of caches for details. New in release 1.10. The following residual forms are supported:
DIR:dirname
DIR::dirpath/filename - a single cache within the directory
Switching to a ccache of the latter type causes it to become the primary for the directory.
You do this by specifying the default ccache name as DIR:/path/to/cache on one of the ways described here.
The default credential cache name is determined by the following, in descending order of priority:
The KRB5CCNAME environment variable. For example, KRB5CCNAME=DIR:/mydir/.
The default_ccache_name profile variable in [libdefaults].
The hardcoded default, DEFCCNAME.
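Putting the answer together, here is a sketch of the two-account workflow under a DIR ccache collection (assumes MIT Kerberos 1.10 or later; the kinit/kswitch lines need a reachable KDC, so they are shown as comments rather than executed):

```shell
# Sketch: a DIR collection lets both principals' tickets coexist.
export KRB5CCNAME="DIR:$HOME/.krb5cc.d"
mkdir -p "$HOME/.krb5cc.d"

# kinit account1@DOMAIN.COM        # first ticket joins the collection
# kinit account2@DOMAIN.COM        # second ticket, same collection
# klist -A                         # now lists both
# kswitch -p account1@DOMAIN.COM   # make account1 primary before the DB1 job
# kswitch -p account2@DOMAIN.COM   # switch to account2 before the DB2 job

echo "collection: $KRB5CCNAME"
```

The job then simply runs kswitch before each DB connection; GSSAPI clients pick up whichever principal is primary in the collection.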

Does the Vault server in -dev mode store data on Windows?

I tried running the Vault server locally with the dev mode option. I got a root token, which I exported to the environment variables. But once I stopped the server and started it again, it failed with an "invalid request" error and refused to start with my token.
Also, does the in-memory Vault server store its secrets?
If so, where does it store them on my Windows machine? I have exported VAULT_DEV_ROOT_TOKEN_ID to my environment variables with the value s.WC4LYVf6oOyllP6HjR0A3nvo
I tried restarting the server several times
C:\Users\user>vault server -dev
==> Vault server configuration:
Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: false, enabled: false
Storage: inmem
Version: Vault v1.2.2
Error initializing Dev mode: failed to create root token with ID "s.WC4LYVf6oOyllP6HjR0A3nvo": 1 error occurred:
* invalid request
The problem here is that whenever we start the Vault server in dev mode on Windows, it generates a new root access token. If we export VAULT_DEV_ROOT_TOKEN_ID to the environment variables, Vault tries to start the server with that token. But since the token was previously used and invalidated, the server is not allowed to start.
Because you are running in dev mode, you'll need to unset the token: whenever you restart the server, it generates a new token and invalidates the previous one. This is how to get it done on macOS, specifically Big Sur.
Do these:
Open the .zshrc file. Issue the command open ~/.zshrc in the terminal to open it.
In the previous setup, the Vault URL and token were added to .zshrc; comment these lines out. These are the lines:
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_DEV_ROOT_TOKEN_ID=s.a009oZnrl78a9h1vlRw1kGqL
Issue the startup command in the terminal again: vault server -dev
This time it will generate a new token. Copy it, then paste it in place of the previous token. This is the line to edit:
export VAULT_DEV_ROOT_TOKEN_ID=s.a009oZnrl78a9h1vlRw1kGqL
Once you've replaced it, uncomment those Vault lines in the .zshrc file.
Save the file and close it, then issue source ~/.zshrc in your terminal to reload the .zshrc file.
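The same fix in plain shell terms: dev mode mints a fresh root token on every start, so a stale exported value must be cleared before restarting. The token value below is a made-up stand-in, not a real Vault token.

```shell
# Simulate the stale export from the shell rc, then clear it so dev mode
# can mint its own token on the next start. (Token value is made up.)
export VAULT_DEV_ROOT_TOKEN_ID=s.staleMadeUpTokenValue
unset VAULT_DEV_ROOT_TOKEN_ID

if [ -z "${VAULT_DEV_ROOT_TOKEN_ID:-}" ]; then
  echo "stale token cleared; safe to run: vault server -dev"
fi
```

After the new server prints its root token, export that fresh value (or log in with it) rather than reusing the old one.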

RabbitMQ failed to start, TCP connection succeeded but Erlang distribution failed

I'm a newcomer just starting to learn and install RabbitMQ on Windows.
I installed the Erlang VM and RabbitMQ in a custom folder, not the default folder (both of them).
Then I restarted my computer.
By the way, my computer name is "NULL".
I cd to the RabbitMQ/sbin folder and run the command:
rabbitmqctl status
But the return message is:
Status of node rabbit@NULL ...
Error: unable to perform an operation on node 'rabbit@NULL'.
Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbit@NULL
DIAGNOSTICS
attempted to contact: [rabbit@NULL]
rabbit@NULL:
connected to epmd (port 4369) on NULL
epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
TCP connection succeeded but Erlang distribution failed
Authentication failed (rejected by the remote node), please check the Erlang cookie
Current node details:
node name: rabbitmqcli70@NULL
effective user's home directory: C:\Users\Jerry Song
Erlang cookie hash: 51gvGHZpn0gIK86cfiS7vp==
I have tried to restart RabbitMQ. What I get is:
ERROR: node with name "rabbit" already running on "NULL"
And I have enabled all ports in the firewall.
https://groups.google.com/forum/#!topic/rabbitmq-users/a6sqrAUX_Fg
describes the problem, where there is a cookie mismatch on a fresh installation of RabbitMQ. The easy solution on Windows is to synchronize the cookies.
Also described here: http://www.rabbitmq.com/clustering.html#erlang-cookie
Ensure the cookies are synchronized across locations 1, 2, and (optionally) 3 below:
%HOMEDRIVE%%HOMEPATH%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie for user %USERNAME%) if both the HOMEDRIVE and HOMEPATH environment variables are set
%USERPROFILE%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie) if HOMEDRIVE and HOMEPATH are not both set
For the RabbitMQ Windows service - %WINDIR%\system32\config\systemprofile\.erlang.cookie (usually C:\WINDOWS\system32\config\systemprofile\.erlang.cookie)
The cookie file used by the Windows service account and the one used by the user running CLI tools must be synchronized, e.g. by copying the one from the C:\WINDOWS\system32\config\systemprofile folder.
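The synchronization requirement above boils down to "the two cookie files must be byte-identical." Here is a sketch of the check-and-copy, with temp files standing in for the real Windows paths (%USERPROFILE%\.erlang.cookie and C:\Windows\system32\config\systemprofile\.erlang.cookie):

```shell
# Sketch: verify the user cookie matches the service cookie; copy if not.
# Temp files stand in for the two real cookie paths.
user_cookie=$(mktemp);    printf 'LOCALUSERCOOKIE'   > "$user_cookie"
service_cookie=$(mktemp); printf 'SERVICESIDECOOKIE' > "$service_cookie"

if cmp -s "$user_cookie" "$service_cookie"; then
  echo "cookies match"
else
  # Mismatch: the service-side cookie is the canonical one to copy over.
  cp "$service_cookie" "$user_cookie"
  echo "cookie mismatch fixed by copying"
fi
```

The direction of the copy matters: the service's cookie is the one the broker already runs with, so the CLI user's copy is the one to overwrite.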
If you are using dedicated drive/folder locations for your development tools in Windows 10 (not the Windows default locations), one way to synchronize the Erlang cookie, as described at https://www.rabbitmq.com/cli.html, is to copy the cookie as explained below.
Please note that in my case the HOMEDRIVE and HOMEPATH environment variables were both unset.
After copying "C:\Windows\system32\config\systemprofile\.erlang.cookie" to "C:\Users\%USERNAME%\.erlang.cookie",
the error "TCP connection succeeded but Erlang distribution failed" is resolved.
Now I am able to use the "rabbitmqctl.bat status" command successfully. So there is no need to install in the default location; synchronizing the cookie resolves the error.
In my case, a similar issue (authentication failed because of an Erlang cookie mismatch) was solved by copying the .erlang.cookie file from the Windows system dir, C:\Windows\system32\config\systemprofile\.erlang.cookie, to %HOMEDRIVE%%HOMEPATH%\.erlang.cookie (where %HOMEDRIVE% was set to H: and %HOMEPATH% to \).
Quick setup TODO for Windows, Erlang OTP 24 and RabbitMQ 3.8.19:
Download & Install Erlang [OTP 24] (needs Admin rights) from:
https://www.erlang.org/downloads
set ERLANG_HOME (should point to install dir)
Download & Install recent [3.8.19] RabbitMQ (needs Admin rights) from:
https://github.com/rabbitmq/rabbitmq-server/releases/
Follow: https://www.rabbitmq.com/install-windows.html and/or
https://www.rabbitmq.com/install-windows-manual.html
set RABBITMQ_SERVER (should point to install dir)
update %PATH% by adding: ;%RABBITMQ_SERVER%\sbin
Fix Erlang-cookie issue from above, follow: https://www.rabbitmq.com/cli.html#erlang-cookie
Enable Web UI by running command: %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management
From item 8 above I got an error because of a missing file, %USERPROFILE%/AppData/Roaming/RabbitMQ/enabled_plugins -> I had to create it and run %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management again!
A run/restart along the way might be required.
Finally, login to: http://localhost:15672/ (guest:guest)
, or check by cURL:
curl -i -u guest:guest http://localhost:15672/api/vhosts
should receive response like:
HTTP/1.1 200 OK
cache-control: no-cache
content-length: 186
content-security-policy: script-src 'self' 'unsafe-eval' 'unsafe-inline';
object-src 'self'
content-type: application/json
date: Tue, 13 Jul 2021 11:21:12 GMT
server: Cowboy
vary: accept, accept-encoding, origin
[{"cluster_state":{"rabbit@hostname":"running"},"description":"Default virtual host","metadata":{"description":"Default virtual host","tags":[]},"name":"/","tags":[],"tracing":false}]
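A quick scripted health check on that response. The body below is an abridged copy of the sample response above, standing in for a live curl call to /api/vhosts:

```shell
# Sketch: confirm the default vhost reports a running node. An abridged
# copy of the sample response stands in for `curl .../api/vhosts`.
body='[{"cluster_state":{"rabbit@hostname":"running"},"name":"/","tracing":false}]'

if printf '%s' "$body" | grep -q '"running"'; then
  health="node running"
else
  health="node NOT running"
fi
echo "$health"
```

A grep on the raw JSON is crude but serviceable for a smoke test; a real monitor would parse the JSON properly.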
P.S. Some useful RabbitMQ CLI commands (copy-paste):
%RABBITMQ_SERVER%/sbin/rabbitmqctl start_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl stop_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl status
P.P.S. UPDATE: great article for this subject: https://www.journaldev.com/11655/spring-rabbitmq
I have reinstalled RabbitMQ on my computer using the default setup folder.
Then I checked with the command:
rabbitmqctl status
It works now, so it was not a problem with the Erlang VM (meaning Erlang can be installed in another folder).
Installing RabbitMQ outside its default setup folder (C:\Program Files\RabbitMQ Server) can cause problems (like this one) whose cause I couldn't find.
If anyone figures it out, I hope you can tell me why and how to fix it.
How I resolved mine
It's mostly caused by a cookie mismatch on a fresh installation of RabbitMQ.
Follow these 2 steps:
1. Copy the .erlang.cookie file from C:\Windows\System32\config\systemprofile and paste it into your
C:\Users\<username> folder
2. Run rabbitmq-service.bat stop and then rabbitmq-service.bat start
Done. It should work now when you run 'rabbitmqctl start_app'. Good luck.
Note: if you have more than one user, put it in the correct user folder.
On CentOS:
add an IP/nodename pair to /etc/hosts on each node.
restart the rabbitmq-server service on each slave node.
This works for me.
I got an error like this; I just stopped my RabbitMQ by closing port 25672.
Here is the syntax for Linux:
kill -9 $(lsof -t -i:25672)
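A slightly safer variant of that one-liner, guarded so it is a no-op when nothing is listening on 25672 (and so kill is never called with an empty argument list):

```shell
# Sketch: kill whatever owns the inter-node/CLI port, but only if something
# actually listens there.
pids=$(lsof -t -i:25672 2>/dev/null || true)
if [ -n "$pids" ]; then
  kill -9 $pids            # unquoted on purpose: may expand to several PIDs
  msg="killed: $pids"
else
  msg="nothing listening on 25672"
fi
echo "$msg"
```

Note that kill -9 gives the node no chance to shut down cleanly; prefer rabbitmqctl stop when the node is responsive.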
Just adding my experience if it helps others down the line.
I wrote a PowerShell .ps1 script to install and configure RabbitMQ, used as one of the steps to provision a server with Packer.
I wrote the code on a fresh AWS Windows Server 2016 build. It worked fine when run on the box (as administrator, from an admin PS console), but when the same code was moved over to the Packer build server, it fell over on the rabbitmqctl.bat configuration steps, despite both (as far as I can tell) using Administrator to run the scripts.
So this worked on the coding box:
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" add_user Username Password}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_user_tags User administrator}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_permissions -p "/" User "^User-.*" ".*" ".*"}
Invoke-Command -ScriptBlock $pathvargs
Write-Host "Did RabbitMQ"
But I had to prelude this with...
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.17\sbin\.erlang.cookie"
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" $env:userprofile\.erlang.cookie -force
... On the Packer box.
I am guessing there is some context issue going on, but I'm using
"winrm_username": "Administrator",
in the Packer builders block, so I thought this would suffice.
TL;DR - Use the Cookie even though it works without it in some instances.
I encountered the same error after installing the Erlang VM and RabbitMQ in the default installation folders on Windows 10. I managed to start the management plugin and access it via HTTP, but status failed with this error.
The cookie was fine in all folders (%HOMEDRIVE%%HOMEPATH%, %USERPROFILE%, C:\WINDOWS\system32\config\systemprofile).
I had to restart Windows to make it work. After the restart, it set something up to run at startup and asked permission to make an exception in the firewall.
In my case, the file was at C:\Windows\.erlang.cookie; I just copied it to C:\Users\<USERNAME> and everything works. Thanks to everyone for the hints.
Another thing to check, after making sure the cookie file is in all the locations, is whether you installed 32-bit Erlang instead of 64-bit.
Happened to me. I removed 32-bit Erlang, installed 64-bit, and rabbitmqctl status returned the expected results.

Enable SSO on redhat Environment

I need to enable SSO in my Red Hat environment, and I need to know which RPMs need to be installed.
I believe it's a case of configuring AD to support single sign-on against the WebSEAL instance. I am installing WebSEAL 6.1 (Tivoli Access Manager WebSEAL 6.1).
I have no knowledge of this area. Can anyone brief me on how to proceed and what steps should be taken? What should the prerequisites be?
There is a good writeup on IBM's InfoCenter about how to do this:
TAM 6.0:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.itame.doc_6.0/rev/am60_webseal_admin211.htm?path=5_8_1_6_0_6_0_2_1_10_1_2#spnego-cfg-unix
TAM 6.1.1:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.itame.doc_6.1.1/am611_webseal_admin709.htm?path=5_8_1_3_1_11_1_2#spnego-cfg-unix
SAM 7.0:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.isam.doc_70/ameb_webseal_guide/concept/con_config_win_desktop_sso_unix.html
You have to:
Install IBM Kerberos client for WebSEAL
Create an entry in AD for the Linux server to auth against
Map the Kerberos principal to that AD user (the hardest part)
Enable SPNEGO on WebSEAL
Here are some of my notes that may help. However, I would strongly recommend walking through the instructions on the InfoCenter site, as they are almost spot on.
For step 1, in the linux_i386 directory, install the IBM Kerberos client using:
rpm -i IBMkrb5-client-1.4.0.2-1.i386.rpm
For step 2, the ktpass command you run on your AD controller should look something like:
ktpass -princ HTTP/WEBSEAL_SERVER_NAME_NOTFQDN@ad-domain.org -pass new_password -mapuser WEBSEAL_SERVER_NAME_NOTFQDN -out c:\WEBSEAL_SERVER_NAME_NOTFQD_HTTP.keytab -mapOp set
Transfer that keytab file to your Linux server.
Also make sure the keytab file on the Linux server is owned by ivmgr:ivmgr and chmod 600; otherwise the WebSEAL process won't be able to read it.
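A sketch of that permission step. A temp file stands in for the transferred keytab, and the chown needs root plus the ivmgr account, so it is left as a comment:

```shell
# Lock the keytab down so only its owner can read it.
keytab=$(mktemp)               # stand-in for the transferred keytab file
# chown ivmgr:ivmgr "$keytab"  # requires root and the ivmgr account
chmod 600 "$keytab"
ls -l "$keytab" | awk '{ print $1 }'   # mode column should read -rw-------
```

If the mode column shows anything wider than -rw-------, WebSEAL's startup will fail to read the keytab and SPNEGO will not come up.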
For step 3, you will need to edit /etc/krb5/krb5.conf and configure the KDC, AD realm, and local DNS name. You can use the config.krb5 utility to help with this:
config.krb5 -r AD-DOMAIN.ORG -c ad-domain.org -s ad-domain.org -d AD-DOMAIN
Edit krb5.conf and change:
[libdefaults]
default_tkt_enctypes = des-cbc-md5 des-cbc-crc
default_tgs_enctypes = des-cbc-md5 des-cbc-crc
From my notes, you can test the Kerberos configuration using (this is all documented in the InfoCenter article):
/usr/krb5/bin/kinit webseal@AD-DOMAIN.ORG
Enter the password for the WebSEAL user, then use klist to check things.
For step 4, just edit the WebSEAL config file and change:
[spnego]
spnego-auth = https
[authentication-mechanisms]
kerberosv5 = /opt/PolicyDirector/lib/libstliauthn.so
If your clients are configured correctly, then as long as their AD account name matches their TAM account name, it will work. You can also have WebSEAL append the @DOMAIN.ORG when mapping to a TAM user, which is handy if you are going to have multiple domains set up for SSO. However, you then have to have TAM accounts named user@domain.org within your directory to map to.
You can specify what auth level SPNEGO comes in at by modifying the [authentication-levels] section in the WebSEAL config file. That level would be level = kerberosv5.
Good luck and have patience. Getting the Kerberos client setup on the Linux box was the most difficult part. It's a bit tricky when it wants capital DNS domain name, lower case DNS domain name, or just the plain vanilla AD domain name.