Kerberos authentication fails, "Configuration file does not specify default realm"

I am attempting to set up Kerberos authentication with FreeRADIUS. When I run radtest, the authentication fails and I get the following error in my logs:
(0) Login incorrect (krb5: Failed parsing username as principal: Configuration file does not specify default realm): [user/Password123] (from client localhost port 1812)
In my krb5 module configuration I have specified a service principal, so I am unsure why I am getting this error. Here is a snippet for context (sensitive values modified):
krb5 {
    #
    #  The keytab file MUST be owned by the UID/GID used by the server.
    #  The keytab file MUST be writable by the server.
    #  The keytab file MUST NOT be readable by other users on the system.
    #  The keytab file MUST exist before the server is started.
    #
    keytab = /etc/raddb/mykeytab.keytab
    service_principal = http/princ@example.com
}
Is there anything wrong with this configuration? Or am I looking in the wrong place?

You need to either include the realm with the principal you're logging in as, or set a default realm in krb5.conf (should be in /etc/, but it might be distro-specific).
See here:
default_realm Identifies the default Kerberos realm for the client.
Set its value to your Kerberos realm. If this value is not set, then a
realm must be specified with every Kerberos principal when invoking
programs such as kinit.
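For example, setting the default realm takes two lines in /etc/krb5.conf (the realm name below is a placeholder; use your own):

```
[libdefaults]
    default_realm = EXAMPLE.COM
```

With this in place, a bare username such as user is parsed as user@EXAMPLE.COM.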


Redis NOAUTH authentication required error in Kubernetes

We are getting the error NOAUTH Authentication required when deploying to a Kubernetes cluster.
We need to remove the password that was set automatically, but we don't know where to change it, as there is no config file.

Setting up HTTPS/SSL for Keycloak 17+

Does Keycloak 17 and above, powered by the Quarkus distribution, have a standalone mode?
The Keycloak documentation says that I can still use it to set up HTTPS/SSL. The documentation describes a procedure for editing the standalone.xml file, which no longer exists in this new version of Keycloak.
Does standalone mode still exist? Or is there different documentation for this new, non-deprecated version that should be used? How do I set up HTTPS/SSL then?
See https://www.keycloak.org/server/all-config?q=https
Use these parameters to customize TLS configuration based on your needs:
https-certificate-file: The file path to a server certificate or certificate chain in PEM format.
https-certificate-key-file: The file path to a private key in PEM format.
https-cipher-suites: The cipher suites to use.
https-client-auth: Configures the server to require/request client authentication.
https-key-store-file: The key store which holds the certificate information instead of specifying separate files.
https-key-store-password: The password of the key store file.
https-key-store-type: The type of the key store file.
https-port: The HTTPS port to use.
https-protocols: The list of protocols to explicitly enable.
https-trust-store-file: The trust store which holds the certificate information of the certificates to trust.
https-trust-store-password: The password of the trust store file.
https-trust-store-type: The type of the trust store file.
Container deployments also support TLS; see Keycloak Docker HTTPS required.
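As an example, a minimal TLS setup might put the options in conf/keycloak.conf (the file paths below are placeholders):

```
# conf/keycloak.conf
https-certificate-file=/path/to/server.crt.pem
https-certificate-key-file=/path/to/server.key.pem
https-port=8443
```

and then start the server with bin/kc.sh start; the same options can also be passed on the command line, e.g. --https-certificate-file=...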

Use of keytabs and service principals

What I'm struggling with is the use of keytabs and service principals. I always thought keytabs were a combination of an encrypted password and a Kerberos principal. For services or hosts, however, there are no actual passwords, unless I'm missing something.
In my case, I'm trying to configure Apache Kafka to run with Kerberos against Active Directory. Specifically, I joined AD and created the SPN using the adcli command. I can create a keytab for the service principal using the ktutil command; however, it prompts me for a password. I've tried a service account password and the root password with no luck.
Any suggestions would be greatly appreciated.
What I'm struggling with is the use of keytabs and service principals. I always thought keytabs were a combination of an encrypted password and a Kerberos principal. For services or hosts, however, there are no actual passwords, unless I'm missing something.
There can be a password. Windows AD member hosts indeed have a "machine password" that is stored in the Windows LSA secrets storage – which is why they don't need a keytab. Similarly, "service" accounts[1] in AD are just user accounts that have a (hopefully!) randomized password.
But to Kerberos, the original password doesn't actually matter. Only the derived key is what gets used. This means the key doesn't have to be derived from a password, and indeed in "traditional" MIT/Heimdal Kerberos the whole key is randomly-generated for host or service principals.
(Note that I said "derived", not "encrypted". For password inputs, the keys are the result of a one-way hash. Kerberos uses PBKDF2 for AES keys, while RC4-HMAC keys are "NTLM hashes".)
[1] Oh, and don't try to re-use the "machine" account created via AD join for Kafka. Create a separate "user" account and assign it the SPN.
In my case, I'm trying to configure Apache Kafka to run with Kerberos against Active Directory. Specifically, I joined AD and created the SPN using the adcli command. I can create a keytab for the service principal using the ktutil command; however, it prompts me for a password. I've tried a service account password and the root password with no luck.
ktutil could work, but there are two big issues to note:
First, when deriving the AES keys (using PBKDF2), you must use the correct salt. In Kerberos it can often be derived from the principal name, but not always; e.g. a renamed account will retain its original pre-rename salt. And in AD, all SPNs assigned to a custom user account will use that user's main account name as the salt, as the KDC knows about this relationship – but ktutil doesn't!
So you should always use the -f option to addent, which actually asks the KDC for the correct salt (just like kinit also would).
(RC4-HMAC keys use an NTLM hash, which is not salted. One of the reasons RC4-HMAC should become obsolete as soon as possible, and why you should also manually check whether the service account is indeed flagged for AES support.)
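The salt sensitivity described above can be illustrated with the PBKDF2 stage alone. This is a simplified sketch: the real Kerberos string-to-key (RFC 3962) applies a further DK() step with the "kerberos" constant, omitted here, and the salt strings below are made-up examples.

```python
import hashlib

def pbkdf2_stage(password: str, salt: str, iterations: int = 4096) -> bytes:
    """First stage of the Kerberos AES string-to-key (RFC 3962):
    PBKDF2-HMAC-SHA1 over the password and salt. The real string-to-key
    then applies DK() with the 'kerberos' constant (omitted here)."""
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt.encode(),
                               iterations, dklen=32)

# Same password, different salts -> completely different keys.
# In AD, the salt for an SPN assigned to a user account is built from the
# user's main account name, not from the SPN itself - which is why ktutil
# guessing the salt from the principal name produces an unusable key.
k1 = pbkdf2_stage("Password123", "EXAMPLE.TLDkafkasvc")
k2 = pbkdf2_stage("Password123", "EXAMPLE.TLDkafkafoo.example.tld")
print(k1 != k2)  # True: wrong salt means wrong key, and authentication fails
```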
Second, you have to add keys for the same enctypes (algorithms) that the KDC thinks the service will support.
Fresh AD accounts are only considered to support arcfour-hmac (obsolete-ish), but I believe adcli sets the correct flags to indicate support for aes256 and aes128 as well. This means you need to repeat ktutil's addent 3 times, with different -e options.
ktutil: addent -password -p kafka/foo.example.tld@EXAMPLE.TLD -k 1 -e aes256-cts -f
ktutil: addent -password -p kafka/foo.example.tld@EXAMPLE.TLD -k 1 -e aes128-cts -f
ktutil: addent -password -p kafka/foo.example.tld@EXAMPLE.TLD -k 1 -e arcfour-hmac -f
ktutil: wkt kafka.keytab
It might be easier to find a Windows host with RSAT installed, which has ktpass.exe that'll create a keytab for you.
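If you go the ktpass route, the invocation might look roughly like this (the account, principal, and file names are assumptions; note that ktpass sets the mapped account's password to the value you supply):

```
ktpass /princ kafka/foo.example.tld@EXAMPLE.TLD /mapuser EXAMPLE\kafkasvc ^
       /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out kafka.keytab
```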
(And unfortunately, samba-tool domain exportkeytab is not an option, as it assumes it'll be run directly on a Samba-based AD DC and wants to access the raw database and extract the current keys, instead of doing the normal thing and going via LDAP.)

Keycloak Gatekeeper always fail to validate 'iss' claim value

Adding the match-claims to the configuration file doesn't seem to do anything. Actually, Gatekeeper is always throwing me the same error when opening a resource (with or without the property).
My Keycloak server is inside a docker container, accessible from an internal network as http://keycloak:8080 while accessible from the external network as http://localhost:8085.
I have Gatekeeper connecting to the Keycloak server in an internal network. The request comes from the external one, therefore, the discovery-url will not match the 'iss' token claim.
Gatekeeper is trying to use the discovery-url as 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak-Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the iss claim or bypass this validation (preferably the former)?
It is failing during discovery data validation - your setup violates OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code – it should be in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal/external network, content rewrite for internal network requests, ...) and you will be fine.
Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
This way both outside-world access and internal calls to Keycloak go through the same hostname, so the iss claim matches.
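If Gatekeeper itself runs in a container on Linux, host.docker.internal may not resolve out of the box; you can map it to the host gateway explicitly, e.g. in docker-compose (a sketch; the service and image names are assumptions, and extra_hosts with host-gateway requires Docker 20.10+):

```
services:
  gatekeeper:
    image: quay.io/keycloak/keycloak-gatekeeper
    extra_hosts:
      - "host.docker.internal:host-gateway"
```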

CQ Basic Authentication

I have a requirement to implement basic authentication at the dispatcher side.
I have the basic auth configuration below in my virtual host (www.abc.com) configuration file.
<Location /content/abc/jp-JP>
    AuthType Basic
    AuthName "private area"
    AuthBasicProvider file
    # the file below contains the usernames and passwords
    AuthUserFile /opt/cq/www/htdocs/password
    Require valid-user
</Location>
When I try to access www.abc.com/jp-JP, I get the basic auth prompt and am authenticated successfully against the password file located under /opt/cq/www/htdocs. After the first prompt is validated, however, a second prompt appears asking again for a username and password, saying "The server says (Sling development)". If I disable basic authentication in the Apache Sling Authentication Service of the publish instance, I am redirected to the page I expect, but then I am unable to publish content from author (requests get blocked in the replication agent queue), so I had to enable it again.
I know this is difficult to follow, but does anyone have an idea how to bypass the "The server says (Sling development)" prompt when using dispatcher-level basic auth? Any help would be appreciated!
Let me paraphrase your description: you have set up HTTP Basic Auth at the Apache level and it works fine, but the credentials entered in the browser are sent not only to Apache but also to CQ. CQ treats the credentials as its own username and password and returns an error. Disabling the HTTP Basic Authentication Handler is not an option, as it's used by the replication process.
To make the Apache HTTP Basic auth and the CQ publish instance coexist, you can remove the Authorization header (used in HTTP Basic Auth) at the Apache level, using the mod_headers module and its RequestHeader directive. Enable mod_headers and place the following line in your VirtualHost configuration:
RequestHeader unset Authorization
Apache will still use the header to authenticate the request, but it is removed afterwards, so CQ won't get it.
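Putting it together with the Location block from the question, the result might look like this (a sketch, assuming mod_headers is enabled, e.g. via a2enmod headers):

```
<Location /content/abc/jp-JP>
    AuthType Basic
    AuthName "private area"
    AuthBasicProvider file
    AuthUserFile /opt/cq/www/htdocs/password
    Require valid-user
    # Apache authenticates first, then strips the credentials
    # so they are never forwarded to CQ/Sling
    RequestHeader unset Authorization
</Location>
```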