Can Kerberos (klist) show two tickets from the same principal?

I'm attempting to write a script that checks whether my Kerberos tickets are valid or expiring soon. To do this, I run klist --json or plain klist to produce a list of currently active tickets (depending on the version of Kerberos installed), then I parse the results with JSON or regular expressions.
The end result is that I get a list of tickets that looks like this:
Issued                Expires               Principal
Aug 19 16:44:51 2020  Aug 22 14:16:55 2020  krbtgt/EXAMPLE.COM@EXAMPLE.COM
Aug 20 09:05:06 2020  Aug 20 19:05:06 2020  ldap/abc-dc101.example.com@EXAMPLE.COM
Aug 20 09:32:18 2020  Aug 20 19:32:18 2020  krbtgt/DEV.EXAMPLE.COM@EXAMPLE.COM
With a little bit of work, I can parse these results and verify them. However, I'm curious whether Kerberos can ever hold two tickets for the same principal. Reading the MIT page on Kerberos usage, it seems there is only ever one ticket that would be the "initial" ticket.
Can I rely on uniqueness by principal, or do I need to check for the possibility of multiple tickets for the same principal?
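For reference, the kind of parsing I have in mind looks roughly like this (a sketch that assumes the column layout shown above, not production code):

from datetime import datetime

# Sample rows in the layout shown above.
KLIST_OUTPUT = """\
Aug 19 16:44:51 2020  Aug 22 14:16:55 2020  krbtgt/EXAMPLE.COM@EXAMPLE.COM
Aug 20 09:05:06 2020  Aug 20 19:05:06 2020  ldap/abc-dc101.example.com@EXAMPLE.COM
"""

FMT = "%b %d %H:%M:%S %Y"  # e.g. "Aug 19 16:44:51 2020"

tickets = {}
for line in KLIST_OUTPUT.splitlines():
    parts = line.split()
    issued = datetime.strptime(" ".join(parts[0:4]), FMT)
    expires = datetime.strptime(" ".join(parts[4:8]), FMT)
    principal = parts[8]
    # Keying by principal is exactly the uniqueness assumption in question:
    # a second ticket for the same principal would silently overwrite the first.
    tickets[principal] = (issued, expires)

print(tickets)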

It's a bit more complicated than that.
TL;DR: your 2nd TGT seems related to cross-realm authentication; see the cross-realm item below.
klist shows the tickets that are present in the default system cache:
an error message if there is no such cache to query (e.g. a FILE cache that does not exist, the KEYRING kernel service not started, etc.)
possibly 1 TGT (Ticket Granting Ticket) that asserts your identity in your own realm
possibly N service tickets that assert you are entitled to contact service X on server Z (which may belong to another realm, see below)
in the case of cross-realm authentication, some intermediate tickets that allow you to convert your TGT in realm A.R into a TGT in realm R, which in turn lets you obtain a service ticket in realm B.R (that would be the default, hierarchical path used with e.g. Active Directory, but custom paths may be defined in /etc/krb5.conf under [capaths], depending on the trusts defined between realms; see the example just below)
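For illustration, a hypothetical [capaths] entry mirroring that hierarchical path (realm names are placeholders reusing A.R, R, and B.R from above) might look like:

[capaths]
    A.R = {
        B.R = R
    }

This reads as: a client in realm A.R reaching a service in realm B.R should go through the intermediate realm R.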
But note that not all service tickets are stored in the cache -- it is legit for an app to get the TGT from the cache, get a service ticket, and keep it private in memory. That's what Java does.
And it is legit for an app (or group of apps) to use a private cache, cf. the env variable KRB5CCNAME (pretty useful when you have multiple services running under the same Linux account and you don't want to mix up their SPNs), so you can't see their tickets with klist unless you tap that custom cache explicitly -- see the sketch below.
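As a minimal sketch of that pattern (the cache path, keytab, and principal below are hypothetical), a script can point its child processes at a private cache:

import os
import subprocess

# Hypothetical private cache; nothing else on the machine will look here.
env = dict(os.environ, KRB5CCNAME="FILE:/tmp/svc1.cc")

# Acquire a TGT into the private cache (principal and keytab are placeholders).
subprocess.run(["kinit", "-kt", "/etc/krb5/svc1.keytab", "svc1@EXAMPLE.COM"],
               env=env, check=True)

# klist only sees this cache when pointed at it via the same environment.
subprocess.run(["klist"], env=env, check=True)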
And it is legit for an app to not use the cache at all, and keep all its tickets private in memory. That's what Java does when provided with a custom JAAS config that mandates to authenticate with principal/keytab.

Related

Why getting SSLCertVerificationError ... self signed certificate in certificate chain - from one machine but not another?

I am trying to test an API on my site. The tests work just fine from one machine, but running the code from a different machine results in the SSLCertVerificationError - which is odd because the site has an SSL cert and is NOT self signed.
Here is the core of my code:
import asyncio
import aiohttp

async def device_connect(basename, start, end):
    url = SERVER_URL  # defined elsewhere
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        for x in range(start, end):
            myDevice = {'test': 'this'}
            post_tasks.append(do_post(session, url, myDevice))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, url, data):
    async with session.post(url, data=data) as response:
        x = await response.text()
I tried (just for testing) setting verify=False or trust_env=True, but I continue to get the same error. On the other computer, this code runs fine and no trust issue results.
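For what it's worth, aiohttp spells this differently from requests: verification is skipped per request with the ssl argument, not verify. A test-only variant of the do_post above (this only silences the check; it does not fix the underlying trust problem):

async def do_post(session, url, data):
    # ssl=False skips certificate verification in aiohttp (test-only).
    async with session.post(url, data=data, ssl=False) as response:
        return await response.text()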
That error text is somewhat misleading. OpenSSL, which Python uses, has dozens of error codes that indicate different ways certificate validation can fail, including:
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN -- the peer's cert can't be chained to a root cert in the local truststore; the chain received from the peer includes a root cert, which is self-signed (because root certs must be self-signed), but that root is not locally trusted
Note this is not talking about the peer/leaf cert; if that is self signed and not trusted, there is a different error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT which displays as just 'self signed certificate' without the part about 'in certificate chain'.
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (displays in text as 'unable to get local issuer certificate') -- the received chain does not contain a self-signed root and the peer's cert can't be chained to a locally trusted root
In both these cases the important info is the peer's cert doesn't chain to a trusted root; whether the received chain includes a self-signed root is less important. It's kind of like if you go to your doctor and after examination in one case s/he tells you "you have cancer, and the weather forecast for tomorrow is a bad storm" or in another case "you have cancer, but the weather forecast for tomorrow is sunny and pleasant". While these are in fact slightly different situations, and you might conceivably want to distinguish them, you need to focus on the part about "you have cancer", not tomorrow's weather.
So, why doesn't it chain to a trusted root? There are several possibilities:
the server is sending a cert chain with a root that SHOULD be trusted, but machine F is using a truststore that does not contain it. Depending on the situation, it might be appropriate to add that root cert to the default truststore (affecting at least all python apps unless specifically coded otherwise, and often other types of programs like C/C++ and Java also) or it might be better to customize the truststore for your application(s) only; or it might be that F is already customized wrongly and just needs to be fixed.
the server is sending a cert chain that actually uses a bad CA, but machine W's truststore has been wrongly configured (again either as a default or customized) to trust it.
machine F is not actually getting the real server's cert chain, because its connection is 'transparently' intercepted by something. This might be something authorized by an admin of the network (like an IDS/IPS/DLP or captive portal) or machine F (like antivirus or other 'endpoint security'), or it might be something very bad like malware or a thief or spy; or it might be in a gray area like some ISPs (try to) intercept connections and insert advertisements (at least in data likely to be displayed to a person like web pages and emails, but these can't always be distinguished).
the (legit) server is sending different cert chains to F (bad) and W (good). This could be intentional, e.g. because W is on a business' internal network while F is coming in from the public net; however you describe this as 'my site' and I assume you would know if it intended to make distinctions like this. OTOH it could be accidental; one fairly common cause is that many servers today use SNI (Server Name Indication) to select among several 'certs' (really cert chains and associated keys); if F is too old it might not be sending SNI, causing the server to send a bad cert chain. Or, some servers use different configurations for IPv4 vs IPv6; F could be connecting over one of these and W the other.
To distinguish these, and determine what (if anything) to fix, you need to look at what certs are actually being received by both machines.
If you have (or can get) OpenSSL on both, do openssl s_client -connect host:port -showcerts. For OpenSSL 1.1.1 up (now common) to omit SNI add -noservername; for older versions to include SNI add -servername host. Add -4 or -6 to control the IP version, if needed. This will show subject and issuer names (s: and i:) for each received cert; if any are different, and especially the last, look at #3 or #4. If the names are the same compare the whole base64 blobs to make sure they are entirely the same (it could be a well-camouflaged attacker). If they are the same, look at #1 or #2.
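If OpenSSL isn't available on one of the machines, a rough Python substitute for comparing at least the leaf certificate is sketched below (host and port are placeholders; note this grabs only the leaf, not the whole chain):

import hashlib
import socket
import ssl

host, port = "example.com", 443  # placeholders: your server here

# Verification is off because we want to inspect the cert, not trust it.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((host, port)) as sock:
    # server_hostname still sends SNI, like s_client's -servername.
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes

# Run on both machines and diff the fingerprints.
print(hashlib.sha256(der).hexdigest())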
Alternatively, if policy and permissions allow, get network-level traces with Wireshark or a more basic tool like tcpdump or snoop. In a development environment this is usually easy; if either or both machine(s) is production, or in a supplier, customer/client, or partner environment, maybe not. Check SNI in ClientHello, and in TLS1.2 (or lower, but nowadays lower is usually discouraged or prohibited) look at the Certificate message received; in Wireshark you can drill down to any desired level of detail. If both your client(s) and server are new enough to support TLS1.3 (and you can't configure it/them to downgrade) the Certificate message is encrypted and Wireshark won't be able to show you the contents unless you can get at least one of your endpoints to export the session secrets in SSLKEYLOGFILE format.

What does 'System.ConfigItem.ObjectStatusEnum.Active' represent in SCOM

I query the following SCOM endpoint: OperationsManager/data/objectInformation/<object id>
Among the response properties, I receive the following property:
<MonitoringObjectProperty>
  <name>Object Status</name>
  <value>System.ConfigItem.ObjectStatusEnum.Active</value>
</MonitoringObjectProperty>
I want to know what this property represents. I am looking for a way to query the API to figure out if a given server is running or not (crashed, network disconnected, etc.) and am wondering if this property represents that attribute.
It is not used in SCOM; it's a leftover from System Center Service Manager (SCSM). Back in 2012, when they built Service Manager, they used the code base from SCOM 2012. Then they merged the updated SCSM code back into SCOM (for some unknown reason), which created a bunch of useless properties and tables in the SCOM DB.
Many of these fields can still be updated manually with PowerShell but I would not recommend it.
Here is a link for more information: Using the Asset Status Property in SCOM
Here is how you can use the API to get server status: SCOM REST API to get Windows/Linux machine's availability (whether the server is running & reachable)?

Is it possible to have multiple Kerberos tickets on the same machine?

I have a use case where I need to connect to two different DBs using two different accounts, and I am using Kerberos for authentication.
Is it possible to create multiple Kerberos tickets on the same machine?
kinit account1@DOMAIN.COM (first ticket)
kinit account2@DOMAIN.COM (second ticket)
Whenever I do klist, I only see the most recently created ticket. It doesn't show all the tickets.
Next, I have a job that needs to first use the ticket for account1 (to connect to DB1) and then the ticket for account2 (for DB2).
Is that possible? How do I tell the DB connection which ticket to use?
I'm assuming MIT Kerberos and linking to those docs.
Try klist -A to show all tickets in the ticket cache. If there is only one, try switching your ccache type to DIR as described here:
DIR points to the storage location of the collection of the credential caches in FILE: format. It is most useful when dealing with multiple Kerberos realms and KDCs. For release 1.10 the directory must already exist. In post-1.10 releases the requirement is for parent directory to exist and the current process must have permissions to create the directory if it does not exist. See Collections of caches for details. New in release 1.10. The following residual forms are supported:
DIR:dirname
DIR::dirpath/filename - a single cache within the directory
Switching to a ccache of the latter type causes it to become the primary for the directory.
You do this by specifying the default ccache name as DIR:/path/to/cache in one of the ways described here (see the example after the list below).
The default credential cache name is determined by the following, in descending order of priority:
The KRB5CCNAME environment variable. For example, KRB5CCNAME=DIR:/mydir/.
The default_ccache_name profile variable in [libdefaults].
The hardcoded default, DEFCCNAME.
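Putting it together, a hypothetical session could look like this (the cache directory and principals are placeholders):

export KRB5CCNAME=DIR:/home/user/krbccs
kinit account1@DOMAIN.COM        # first cache in the collection
kinit account2@DOMAIN.COM        # second cache, now the primary
klist -A                         # shows both tickets
kswitch -p account1@DOMAIN.COM   # make account1 primary before connecting to DB1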

Understanding OPC-UA Security using Eclipse Milo

I am new to this OPC-UA world and Eclipse Milo.
I do not understand how the security works here.
Looking at the client-examples provided by eclipse-milo,
I see a few properties of security being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked to each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
ExampleServer is configured to run with Anonymous and UsernamePassword providers. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None where MessageSecurityMode is None too.
The problem is that with AnonymousProvider I can connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without client certificates provided).
But I cannot do the same with UsernameProvider: only the pair where both SecurityPolicy and MessageSecurityMode are None succeeds.
All other pairs throw a "security checks failed" exception (when a certificate is provided) or a "user access denied" error (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find anything except the example code, which is not documented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider supplies the credentials that identify the user of the session, if any.
When the ExampleServer starts up it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder rejected, where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.
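As a hypothetical sketch (the exact folder layout can differ between Milo versions, so check what actually exists under the logged security directory):

mv <security-temp-dir>/rejected/<client-cert>.der <security-temp-dir>/trusted/

After moving the certificate, reconnect the client and the handshake should pass the server's trust check.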

No write access to $HOME in tmux after logout and login

I am not able to write to files in $HOME (on an Andrew File System) in tmux after logging out and logging in again.
(.lobster)[earth] ~/lobster >touch test
touch: setting times of `test': Permission denied
My problem seems similar to the one described here except that for me, the permissions look fine:
(.lobster)[earth] ~/lobster >ls -ld
drwxr--r-- 7 awoodard campus 2048 Mar 28 15:55 .
I've tried checking KRB5CCNAME outside of tmux and updating it to the same value inside of tmux, to no avail.
Thanks!
AFS file system implementations such as OpenAFS and AuriStorFS use AFS tokens for authentication, not Kerberos tickets. AFS tokens can be obtained from Kerberos tickets via the aklog command. When executed without parameters, aklog will use the Kerberos ticket-granting ticket stored in the current Kerberos credential cache to acquire an AFS token for the default workstation cell. The workstation cell can be determined using the fs wscell command.
host# fs wscell
This workstation belongs to cell 'auristor.com'
To determine if you have an AFS token for a cell use the 'tokens' command.
host# tokens
Tokens held by the Cache Manager:
Rxgk Tokens for auristor.com [Expires Apr 03 12:43]
User's (AFS ID 103) rxkad tokens for auristor.com [Expires Apr 03 12:43]
If you wish to obtain AFS tokens for a cell other than the workstation cell:
host# aklog grand.central.org
Finally, you can obtain debugging output from aklog with the -d parameter.
I hope this helps.