Failed to start Kafka after enabling Kerberos: "SASL authentication failed" (apache-kafka)

Kafka version: kafka_2.1.1 (binary)
When I enable Kerberos, I follow the official documentation (https://kafka.apache.org/documentation/#security_sasl_kerberos) closely.
When I start Kafka, I get the following errors:
[2019-02-23 08:55:44,622] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
[2019-02-23 08:55:44,625] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-02-23 08:55:44,746] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I use an almost-default krb5.conf:
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
EXAMPLE.COM = {
# kdc = kerberos.example.com
# admin_server = kerberos.example.com
kdc = localhost
admin_server = localhost
}
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
The JAAS file I pass to Kafka is as below:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/localhost.keytab"
    principal="kafka/localhost@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/localhost.keytab"
    principal="kafka/localhost@EXAMPLE.COM";
};
I also set the JVM options via the environment as below:
"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dzookeeper.sasl.client.username=kafka"
I have googled a lot of posts but made no progress. I guess the problem may be the "localhost" I use when creating the entries in Kerberos, but I'm not quite sure how to work around it. My goal is to set up a local Kafka + Kerberos testing environment.
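To narrow this down, a minimal standalone JAAS login test can confirm whether the Client section and keytab are able to obtain a ticket at all, independently of Kafka and ZooKeeper. Below is a sketch (the class name is made up); it assumes it is launched with the same -Djava.security.krb5.conf and -Djava.security.auth.login.config flags as the broker:
// JaasLoginTest.java -- minimal sketch to verify the "Client" JAAS section can
// log in from the keytab, independent of Kafka/ZooKeeper.
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasLoginTest {
    public static void main(String[] args) throws LoginException {
        LoginContext lc = new LoginContext("Client"); // same section name as in the JAAS file
        lc.login();                                   // throws LoginException if keytab/principal is wrong
        System.out.println("Logged in as: " + lc.getSubject().getPrincipals());
        lc.logout();
    }
}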

In our case, the krb5 Kerberos config file wasn't being read properly. If you're passing the keytab through YAML, it needs to be removed first. This was with the IBM JDK, though, and we had to set the JAAS config location explicitly with System.setProperty("java.security.auth.login.config", JaasConfigFileLocation); before creating the client. Our KafkaClient JAAS section looked like this:
KafkaClient {
    com.ibm.security.auth.module.Krb5LoginModule required
    useDefaultKeytab=false
    credsType=both
    principal="xkafka@xxx.NET"
    useKeytab="/opt/apps/xxxr/my.keytab";
};
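For reference, a rough sketch of setting the property programmatically before the first Kafka client is created (the broker address, topic, and file paths below are placeholders, not values from our setup; on an Oracle/OpenJDK the JAAS file would use com.sun.security.auth.module.Krb5LoginModule instead):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KerberizedProducerSketch {
    public static void main(String[] args) {
        // Must be set before any Kafka client (and its JAAS lookup) is created
        System.setProperty("java.security.auth.login.config", "/path/to/kafka_client_jaas.conf");
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092");   // placeholder broker
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "hello")); // placeholder topic
            producer.flush();
        }
    }
}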

Related

TLS/SSL in pgbouncer - FATAL TLS setup failed: failed to load CA

I'm trying to set up pgbouncer to require a TLS/SSL connection from the applications connecting to it, but it throws the error "FATAL TLS setup failed: failed to load CA".
This is my pgbouncer.ini:
[databases]
* = host=${postgres_host} port=5432
[pgbouncer]
# Do not change these settings:
listen_addr = 0.0.0.0
auth_file = /etc/pgbouncer/userlist.txt
auth_type = trust
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
server_tls_sslmode = verify-ca
server_tls_ca_file = /etc/root.crt.pem
# These are defaults and can be configured
# please leave them as defaults if you are
# uncertain.
listen_port = 5432
unix_socket_dir =
user = postgres
pool_mode = transaction
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
admin_users = postgres
# Please add any additional settings below this line
But when I run it, it throws this error, which doesn't seem right since a root CA file shouldn't be needed:
FATAL TLS setup failed: failed to load CA: No such file or directory
P.S. It threw the error even before I had server_tls_sslmode = verify-ca.

Error while connecting Flume with Kafka - Could not find a 'KafkaClient' entry in the JAAS configuration

Currently we are using CDP 7.1.7 and the client wants to use Flume. Since CDP has removed Flume, we need to install it as a separate application. I have installed Flume on one of the data nodes.
Here are the config files:
flume-env.sh
export JAVA_OPTS="$JAVA_OPTS -Djava.security.krb5.conf=/etc/krb5.conf
-Djava.security.auth.login.config=/opt/cloudera/security/flafka_jaas.conf "
flafka_jaas.conf
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="flume.keytab"
    principal="flume/hostname@realm";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="flume.keytab"
    principal="flume/hostname@realm";
};
Flume.conf
KafkaAgent.sources = source_kafka
KafkaAgent.channels = MemChannel
KafkaAgent.sinks = LoggerSink
#Configuring Source
KafkaAgent.sources.source_kafka.type = org.apache.flume.source.kafka.KafkaSource
KafkaAgent.sources.source_kafka.kafka.bootstrap.servers = hostn1:9092,host2:9092,host3:9092
KafkaAgent.sources.source_kafka.kafka.topics = cim
KafkaAgent.sources.source_kafka.kafka.consumer.group.id = flume
KafkaAgent.sources.source_kafka.channels = MemChannel
#KafkaAgent.sources.source_kafka.kafka.consumer.timeout.ms = 100
KafkaAgent.sources.source_kafka.agent-principal=flume/hostname#realm
KafkaAgent.sources.source_kafka.agent-keytab=flume.keytab
KafkaAgent.sources.source_kafka.kafka.consumer.security.protocol = SASL_PLAINTEXT
KafkaAgent.sources.source_kafka.kafka.consumer.sasl.kerberos.service.name = kafka
KafkaAgent.sources.source_kafka.kafka.consumer.sasl.mechanism = GSSAPI
KafkaAgent.sources.source_kafka.kafka.consumer.security.protocol = SASL_PLAINTEXT
#Configuring Sink
KafkaAgent.sinks.LoggerSink.type = logger
#Configuring Channel
KafkaAgent.channels.MemChannel.type = memory
KafkaAgent.channels.MemChannel.capacity = 10000
KafkaAgent.channels.MemChannel.transactionCapacity = 1000
#bind source and sink to channel
KafkaAgent.sinks.LoggerSink.channel = MemChannel
After running this command:
`flume-ng agent -n KafkaAgent -c -conf /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/ -f /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/kafka-flume.conf -Dflume.root.logger=DEBUG,console`
I am getting the error below:
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:713)
Can someone tell me what I am missing as far as the configs are concerned?
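The stack trace says the 'java.security.auth.login.config' system property is not set, so a quick check like the following (just a sketch) can confirm whether the -D options from flume-env.sh actually reach the agent's JVM and whether the KafkaClient section resolves:
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasCheck {
    public static void main(String[] args) {
        // Prints what the JVM actually sees; null means the -D flag never made it
        System.out.println("java.security.auth.login.config = "
                + System.getProperty("java.security.auth.login.config"));

        AppConfigurationEntry[] entries =
                Configuration.getConfiguration().getAppConfigurationEntry("KafkaClient");
        System.out.println("KafkaClient entries found: " + (entries == null ? 0 : entries.length));
    }
}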

Why can't I login to Grafana with Keycloak integration?

I'm facing an issue with the Keycloak integration in Grafana.
With this grafana.ini:
instance_name = grafana
[log]
level = error
[server]
; domain = host.docker.internal
root_url = http://localhost:13000
enforce_domain = false
enable_gzip = true
[security]
admin_user = admin
admin_password = admin
[auth.generic_oauth]
name = OAuth
enabled = true
client_id = grafana
; client_secret = CLIENT_SECRET_FROM_KEYCLOAK
client_secret = <my client secret>
scopes = openid profile roles
; email_attribute_name = email:primary
auth_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/auth
token_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/token
api_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/userinfo
allow_sign_up = false
disable_login_form = true
oauth_auto_login = true
tls_skip_verify_insecure = true
; Roles from Client roles in Keycloak
role_attribute_path = contains(resource_access.grafana.roles[*], 'Admin') && 'Admin' || contains(resource_access.grafana.roles[*], 'Editor') && 'Editor' || 'Viewer'
I am redirected to the Keycloak login page, but after logging in, Grafana logs this error:
t=2021-10-15T11:48:58+0000 lvl=eror msg=login.OAuthLogin(NewTransportWithCode) logger=context userId=0 orgId=0 uname= error="oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"Code not valid\"}"
t=2021-10-15T11:48:58+0000 lvl=eror msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/login/generic_oauth status=500 remote_addr=172.18.0.1 time_ms=647 size=733 referer=
Keycloak configuration for grafana client:
What is happening? What am I missing from the configuration?
EDIT:
Grafana URL: http://localhost:13000
Keycloak logs:
16:38:09,650 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-4) Uncaught server error: java.lang.RuntimeException: cannot map type for token claim
...
...
16:38:09,942 WARN [org.keycloak.protocol.oidc.utils.OAuth2CodeParser] (default task-4) Code 'f72beb89-f814-4993-aa8f-e8debfea41ae' already used for userSession '6de1f56b-9c61-42ae-86bd-66d0ac7ad751' and client '36930d87-854f-414a-8177-c8237edf805c'.
16:38:09,944 WARN [org.keycloak.events] (default task-4) type=CODE_TO_TOKEN_ERROR, realmId=mcs, clientId=grafana, userId=null, ipAddress=172.16.1.1, error=invalid_code, grant_type=authorization_code, code_id=6de1f56b-9c61-42ae-86bd-66d0ac7ad751, client_auth_method=client-secret

Can't Connect MongoDB With SSL in Azure machine with another MVC application in Azure

I'm having a problem connecting to MongoDB, which is configured to use SSL. I have a MongoDB Enterprise server in an Azure virtual machine with the following configuration:
net:
  bindIp: 0.0.0.0
  port: 27017
  ssl:
    CAFile: 'C:\openssl-0.9.8h-1-bin\bin\rCA.pem'
    PEMKeyFile: 'C:\openssl-0.9.8h-1-bin\bin\rser.pem'
    allowConnectionsWithoutCertificates: false
    allowInvalidHostnames: true
    mode: requireSSL
storage:
  dbPath: 'C:\data\db'
I have a C# sample that connects to MongoDB with the certificate data passed as a byte array.
MongoClientSettings settings = new MongoClientSettings
{
    Server = new MongoServerAddress("mongo_azure_host", 27017),
    UseSsl = true,
    RetryWrites = true
};
settings.VerifySslCertificate = false;
var SslCertificateData = FilePathHelper.ReadFile(Server, mySslClientCertificate);
var certificate = new X509Certificate2(SslCertificateData, "pwd");
settings.SslSettings = new SslSettings()
{
    ClientCertificates = new[] { certificate }
};
MongoClient mongoClient = new MongoClient(settings);
mongoClient.GetServer().Connect();
This works fine when the sample runs in my local environment, but if I publish the same code to an Azure web app and try to connect, it throws the following exception:
system.componentmodel.win32exception: the credentials supplied to the package were not recognized

Configure Storm for kerberos

I'm trying to configure a single-node Storm cluster to run with Kerberos authentication.
Any time I try to access the UI with this curl command:
curl -i --negotiate -u:storm -b ~/cookiejar.txt -c ~/cookiejar.txt http://hadoop-machine1:8080/api/v1/cluster/summary
I get the following error:
HTTP ERROR: 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled).
Here is my storm configuration:
ui.header.buffer.bytes: 65536
storm.zookeeper.servers:
  - "192.168.1.3"
storm.zookeeper.port: 2181
nimbus.host: "192.168.1.3"
java.library.path: "/usr/local/lib"
storm.local.dir: "/tmp/storm-data"
storm.messaging.transport: backtype.storm.messaging.netty.Context
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
  - 6704
  - 6705
  - 6706
  - 6707
ui.filter: "org.apache.hadoop.security.authentication.server.AuthenticationFilter"
ui.filter.params:
  "type": "kerberos"
  "kerberos.principal": "HTTP/hadoop-machine1@HADOOP-MACHINE1"
  "kerberos.keytab": "/vagrant/keytabs/http.keytab"
  "kerberos.name.rules": "DEFAULT"
storm.thrift.transport: "backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin"
storm.principal.tolocal: "backtype.storm.security.auth.KerberosPrincipalToLocal"
storm.zookeeper.superACL: "sasl:stormc"
java.security.auth.login.config: "/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
nimbus.authorizer: "backtype.storm.security.auth.authorizer.SimpleACLAuthorizer"
nimbus.admins:
  - "stormc"
nimbus.supervisor.users:
  - "stormc"
nimbus.childopts: "-Xmx1024m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
ui.childopts: "-Xmx768m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
supervisor.childopts: "-Xmx256m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
Below is my Kerberos config, krb5.conf:
[libdefaults]
default_realm = HADOOP-MACHINE1
dns_lookup_realm = true
dns_lookup_kdc = true
[realms]
HADOOP-MACHINE1 = {
kdc = hadoop-machine1
admin_server = hadoop-machine1
master_key_type = aes256-cts-hmac-sha1-96
supported_enctypes = aes256-cts-hmac-sha1-96:normal aes128-cts-hmac-sha1-96:normal
}
[domain_realm]
.hadoop-machine1 = HADOOP-MACHINE1
hadoop-machine1 = HADOOP-MACHINE1
And below is the jaas.conf file:
StormServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/wouri/apache-storm-0.10.0/conf/storm.keytab"
    storeKey=true
    useTicketCache=false
    principal="stormc/hadoop-machine1@HADOOP-MACHINE1";
};
StormClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/wouri/apache-storm-0.10.0/conf/storm.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="stormc"
    principal="stormc/hadoop-machine1@HADOOP-MACHINE1";
};
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/local/zookeeper/conf/zookeeper.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="zookeeper/hadoop-machine1@HADOOP-MACHINE1";
};
Please, is there a config flag that I am missing?
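For reference, the "AES256 CTS mode with HMAC SHA1-96 is not supported/enabled" part of the error often points at the JVM's crypto policy rather than a Storm setting; a small check like the following (a sketch) shows whether the JRE running the UI allows 256-bit AES keys at all:
import javax.crypto.Cipher;

public class AesStrengthCheck {
    public static void main(String[] args) throws Exception {
        // 128 usually means the unlimited-strength JCE policy is not enabled;
        // Integer.MAX_VALUE means 256-bit AES is available to the JRE.
        System.out.println("Max allowed AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}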