Configure Storm for Kerberos

I'm trying to configure a single-node Storm cluster to run with Kerberos authentication.
Any time I try to access the UI with this curl command:
curl -i --negotiate -u:storm -b ~/cookiejar.txt -c ~/cookiejar.txt http://hadoop-machine1:8080/api/v1/cluster/summary
I get the following error:
HTTP ERROR: 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled).
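This "Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled" message usually means the JVM running the UI cannot use 256-bit AES keys (older Oracle JDKs require the unlimited-strength JCE policy files for that). A quick check, assuming a JDK with jrunscript on the PATH:

```shell
# Prints the largest AES key size the JVM will allow.
# 2147483647 means unrestricted; 128 means the unlimited-strength JCE
# policy files are missing, so aes256-cts Kerberos tickets cannot be decrypted.
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```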
Here is my storm configuration:
ui.header.buffer.bytes: 65536
storm.zookeeper.servers:
- "192.168.1.3"
storm.zookeeper.port: 2181
nimbus.host: "192.168.1.3"
java.library.path: "/usr/local/lib"
storm.local.dir: "/tmp/storm-data"
storm.messaging.transport: backtype.storm.messaging.netty.Context
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
- 6705
- 6706
- 6707
ui.filter: "org.apache.hadoop.security.authentication.server.AuthenticationFilter"
ui.filter.params:
  "type": "kerberos"
  "kerberos.principal": "HTTP/hadoop-machine1@HADOOP-MACHINE1"
  "kerberos.keytab": "/vagrant/keytabs/http.keytab"
  "kerberos.name.rules": "DEFAULT"
storm.thrift.transport : "backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin"
storm.principal.tolocal: "backtype.storm.security.auth.KerberosPrincipalToLocal"
storm.zookeeper.superACL: "sasl:stormc"
java.security.auth.login.config: "/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
nimbus.authorizer: "backtype.storm.security.auth.authorizer.SimpleACLAuthorizer"
nimbus.admins:
- "stormc"
nimbus.supervisor.users:
- "stormc"
nimbus.childopts: "-Xmx1024m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
ui.childopts: "-Xmx768m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
supervisor.childopts: "-Xmx256m -Djava.security.auth.login.config=/home/wouri/apache-storm-0.10.0/conf/jaas.conf"
Below is my Kerberos config, krb5.conf:
[libdefaults]
default_realm = HADOOP-MACHINE1
dns_lookup_realm = true
dns_lookup_kdc = true
[realms]
HADOOP-MACHINE1 = {
kdc = hadoop-machine1
admin_server = hadoop-machine1
master_key_type = aes256-cts-hmac-sha1-96
supported_enctypes = aes256-cts-hmac-sha1-96:normal aes128-cts-hmac-sha1-96:normal
}
[domain_realm]
.hadoop-machine1 = HADOOP-MACHINE1
hadoop-machine1 = HADOOP-MACHINE1
And below is my jaas.conf file:
StormServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/wouri/apache-storm-0.10.0/conf/storm.keytab"
  storeKey=true
  useTicketCache=false
  principal="stormc/hadoop-machine1@HADOOP-MACHINE1";
};
StormClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/wouri/apache-storm-0.10.0/conf/storm.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="stormc"
  principal="stormc/hadoop-machine1@HADOOP-MACHINE1";
};
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zookeeper.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="zookeeper"
  principal="zookeeper/hadoop-machine1@HADOOP-MACHINE1";
};
Is there a config flag that I am missing?
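One thing worth verifying (a diagnostic sketch, not a definitive fix) is that the keytabs actually contain AES-256 keys matching the encryption types the KDC issues. The paths below are taken from the configuration above:

```shell
# List the key version numbers and encryption types stored in each keytab.
klist -ket /vagrant/keytabs/http.keytab
klist -ket /home/wouri/apache-storm-0.10.0/conf/storm.keytab

# If the KDC issues aes256-cts tickets, each principal should show an
# aes256-cts-hmac-sha1-96 entry:
klist -ket /vagrant/keytabs/http.keytab | grep aes256-cts
```

If the keys are present but decryption still fails, the JVM-side AES-256 support is the next thing to check.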

Related

Error while connecting Flume with Kafka - Could not find a 'KafkaClient' entry in the JAAS configuration

We are currently using CDP 7.1.7 and the client wants to use Flume. Since CDP has removed Flume, we need to install it as a separate application. I have installed Flume on one of the data nodes.
Here are the config files:
flume-env.sh
export JAVA_OPTS="$JAVA_OPTS -Djava.security.krb5.conf=/etc/krb5.conf
-Djava.security.auth.login.config=/opt/cloudera/security/flafka_jaas.conf "
flafka_jaas.conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="flume.keytab"
  principal="flume/hostname@realm";
};
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  serviceName="kafka"
  keyTab="flume.keytab"
  principal="flume/hostname@realm";
};
Flume.conf
KafkaAgent.sources = source_kafka
KafkaAgent.channels = MemChannel
KafkaAgent.sinks = LoggerSink
#Configuring Source
KafkaAgent.sources.source_kafka.type = org.apache.flume.source.kafka.KafkaSource
KafkaAgent.sources.source_kafka.kafka.bootstrap.servers = hostn1:9092,host2:9092,host3:9092
KafkaAgent.sources.source_kafka.kafka.topics = cim
KafkaAgent.sources.source_kafka.kafka.consumer.group.id = flume
KafkaAgent.sources.source_kafka.channels = MemChannel
#KafkaAgent.sources.source_kafka.kafka.consumer.timeout.ms = 100
KafkaAgent.sources.source_kafka.agent-principal=flume/hostname@realm
KafkaAgent.sources.source_kafka.agent-keytab=flume.keytab
KafkaAgent.sources.source_kafka.kafka.consumer.security.protocol = SASL_PLAINTEXT
KafkaAgent.sources.source_kafka.kafka.consumer.sasl.kerberos.service.name = kafka
KafkaAgent.sources.source_kafka.kafka.consumer.sasl.mechanism = GSSAPI
KafkaAgent.sources.source_kafka.kafka.consumer.security.protocol = SASL_PLAINTEXT
#Configuring Sink
KafkaAgent.sinks.LoggerSink.type = logger
#Configuring Channel
KafkaAgent.channels.MemChannel.type = memory
KafkaAgent.channels.MemChannel.capacity = 10000
KafkaAgent.channels.MemChannel.transactionCapacity = 1000
#bind source and sink to channel
KafkaAgent.sinks.LoggerSink.channel = MemChannel
After running this command:
`flume-ng agent -n KafkaAgent -c -conf /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/ -f /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/kafka-flume.conf -Dflume.root.logger=DEBUG,console`
I am getting this below error:
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:713)
Can someone tell me what I am missing as far as configs are concerned?
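Since the stack trace says the java.security.auth.login.config system property was never set, one sketch worth trying is to pass the property directly on the command line (the flume-ng launcher forwards -D options to the JVM), so nothing depends on flume-env.sh being sourced. Paths are taken from the question:

```shell
# Hypothetical invocation: pass the JAAS and krb5 files straight to the JVM
# via -D instead of relying on JAVA_OPTS from flume-env.sh. Note that --conf
# takes the configuration directory directly.
flume-ng agent -n KafkaAgent \
  --conf /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/ \
  -f /opt/cdpdeployment/apache-flume-1.9.0-bin/conf/kafka-flume.conf \
  -Dflume.root.logger=DEBUG,console \
  -Djava.security.auth.login.config=/opt/cloudera/security/flafka_jaas.conf \
  -Djava.security.krb5.conf=/etc/krb5.conf
```

Two details in the quoted command also stand out: `-c -conf` looks like a typo for `--conf` (which can stop flume-env.sh from being picked up), and the JAAS file references keyTab="flume.keytab" with a relative path; an absolute path is safer since the agent's working directory may differ.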

Why can't I login to Grafana with Keycloak integration?

I'm facing an issue with the Keycloak integration in Grafana.
With this grafana.ini:
instance_name = grafana
[log]
level = error
[server]
; domain = host.docker.internal
root_url = http://localhost:13000
enforce_domain = false
enable_gzip = true
[security]
admin_user = admin
admin_password = admin
[auth.generic_oauth]
name = OAuth
enabled = true
client_id = grafana
; client_secret = CLIENT_SECRET_FROM_KEYCLOAK
client_secret = <my client secret>
scopes = openid profile roles
; email_attribute_name = email:primary
auth_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/auth
token_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/token
api_url = http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/userinfo
allow_sign_up = false
disable_login_form = true
oauth_auto_login = true
tls_skip_verify_insecure = true
; Roles from Client roles in Keycloak
role_attribute_path = contains(resource_access.grafana.roles[*], 'Admin') && 'Admin' || contains(resource_access.grafana.roles[*], 'Editor') && 'Editor' || 'Viewer'
I am redirected to the Keycloak login page, but after logging in Grafana shows this error:
t=2021-10-15T11:48:58+0000 lvl=eror msg=login.OAuthLogin(NewTransportWithCode) logger=context userId=0 orgId=0 uname= error="oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"Code not valid\"}"
t=2021-10-15T11:48:58+0000 lvl=eror msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/login/generic_oauth status=500 remote_addr=172.18.0.1 time_ms=647 size=733 referer=
Keycloak configuration for grafana client:
What is happening? What am I missing in the configuration?
EDIT:
Grafana URL: http://localhost:13000
Keycloak logs:
16:38:09,650 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-4) Uncaught server error: java.lang.RuntimeException: cannot map type for token claim
...
...
16:38:09,942 WARN [org.keycloak.protocol.oidc.utils.OAuth2CodeParser] (default task-4) Code 'f72beb89-f814-4993-aa8f-e8debfea41ae' already used for userSession '6de1f56b-9c61-42ae-86bd-66d0ac7ad751' and client '36930d87-854f-414a-8177-c8237edf805c'.
16:38:09,944 WARN [org.keycloak.events] (default task-4) type=CODE_TO_TOKEN_ERROR, realmId=mcs, clientId=grafana, userId=null, ipAddress=172.16.1.1, error=invalid_code, grant_type=authorization_code, code_id=6de1f56b-9c61-42ae-86bd-66d0ac7ad751, client_auth_method=client-secret
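The Keycloak warnings show the same authorization code being redeemed twice ("Code ... already used"). To observe the raw token-endpoint behavior outside Grafana, the exchange can be replayed with curl; the placeholders (<keycloak IP>, <my client secret>, <code>) are as in the question, and the redirect_uri is Grafana's generic OAuth callback path:

```shell
# Redeeming a fresh code once should return tokens; replaying the same code
# returns {"error":"invalid_grant","error_description":"Code not valid"},
# which matches the Grafana log above.
curl -s -X POST \
  -d 'grant_type=authorization_code' \
  -d 'client_id=grafana' \
  -d 'client_secret=<my client secret>' \
  -d 'code=<code>' \
  -d 'redirect_uri=http://localhost:13000/login/generic_oauth' \
  'http://<keycloak IP>/auth/realms/mcs/protocol/openid-connect/token'
```

A double redemption like this is typically caused by the login request being issued twice (browser prefetch, a reverse proxy retry, or two Grafana instances sharing one client), which is worth ruling out before changing the Keycloak client.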

Using the JHipster framework to configure MongoDB prompts "not authorized"

I used scaffolding to generate a new microservice, then made the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG
eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/
spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true
server:
  port: 8081
# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================
jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.
oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    #ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    #max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    #keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
... 19 common frames omitted
But in another simple Spring Boot project of mine, I used the same configuration and it runs successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
This is the user and role I created:
{
  "_id" : "fileService.admin",
  "userId" : UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
  "user" : "admin",
  "db" : "fileService",
  "roles" : [
    {
      "role" : "dbOwner",
      "db" : "fileService"
    },
    {
      "role" : "readWrite",
      "db" : "fileService"
    }
  ],
  "mechanisms" : [
    "SCRAM-SHA-1",
    "SCRAM-SHA-256"
  ]
}
I want to know what's wrong.
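To rule the application out, the same credentials and effective privileges can be checked directly with the legacy mongo shell (host and credentials taken from the question; a diagnostic sketch, not a fix):

```shell
# Authenticate against the fileService database and dump the roles and
# privileges the server actually grants this user.
mongo --host 42.193.124.204 --port 27017 \
  -u admin -p admin123 --authenticationDatabase fileService \
  --eval 'printjson(db.getSiblingDB("fileService").runCommand({connectionStatus: 1, showPrivileges: true}))'
```

If this succeeds but Mongobee still fails, note that the failing query targets the system.indexes collection, which recent MongoDB server versions may no longer let ordinary roles read; comparing the server versions used by the two projects would be a reasonable next step.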

Failed to start Kafka after enabling Kerberos "SASL authentication failed"

Kafka version: kafka_2.1.1 (binary)
When I enabled Kerberos, I followed the official documentation (https://kafka.apache.org/documentation/#security_sasl_kerberos) closely.
When I start Kafka, I get the following errors:
[2019-02-23 08:55:44,622] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
[2019-02-23 08:55:44,625] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-02-23 08:55:44,746] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I use an almost-default krb5.conf:
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
EXAMPLE.COM = {
# kdc = kerberos.example.com
# admin_server = kerberos.example.com
kdc = localhost
admin_server = localhost
}
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
The JAAS file I pass to Kafka is below:
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/localhost.keytab"
  principal="kafka/localhost@EXAMPLE.COM";
};
// Zookeeper client authentication
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/localhost.keytab"
  principal="kafka/localhost@EXAMPLE.COM";
};
I also set the ENV as below:
"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dzookeeper.sasl.client.username=kafka"
I have googled a lot of posts but made no progress. I guess the problem may be the "localhost" I used when creating the entries in Kerberos, but I'm not sure how to work around it. My goal is to set up a local Kafka+Kerberos testing environment.
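One way to narrow this down (a sketch assuming MIT Kerberos client tools and the paths from the question) is to confirm that the keytab really holds the kafka/localhost principal and that it can authenticate against the local KDC before involving Kafka at all:

```shell
# Show the principals and encryption types stored in the keytab.
klist -ket /etc/security/keytabs/localhost.keytab

# Obtain a ticket with the same principal Kafka's 'Client' section uses;
# if this fails, the problem is the Kerberos setup, not Kafka.
kinit -kt /etc/security/keytabs/localhost.keytab kafka/localhost@EXAMPLE.COM
klist
```

If this succeeds, the "saslToken is null" error can also mean the ZooKeeper server side has no SASL configuration at all (no Server section in its own JAAS file), which is worth checking separately.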
In our case, the krb5 Kerberos config file wasn't being read properly. If you're supplying the keytab through YAML, that needs to be removed first. This was with the IBM JDK, though, and we had to set the JAAS config programmatically with System.setProperty("java.security.auth.login.config", JaasConfigFileLocation);
KafkaClient {
  com.ibm.security.auth.module.Krb5LoginModule required
  useDefaultKeytab=false
  credsType=both
  principal="xkafka@xxx.NET"
  useKeytab="/opt/apps/xxxr/my.keytab";
};

Kerberos does not issue renewable tickets

I am trying to issue a renewable ticket for my principal using a keytab (MIT KDC, Red Hat 7.4):
su - newuser
kinit -r 7d -kt /etc/security/keytabs/newuser.service.keytab newuser/mask1.myhost.com@EXAMPLE.COM
Looking at the flags:
[newuser@mask1 ~]$ klist -f
Ticket cache: FILE:/tmp/krb5cc_2824
Default principal: newuser/mask1.myhost.com@EXAMPLE.COM
Valid starting       Expires              Service principal
09/27/2018 09:40:32  09/28/2018 09:40:32  krbtgt/EXAMPLE.COM@EXAMPLE.COM
Flags: FI
My /etc/krb5.conf has
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 24h
and my /var/kerberos/krb5kdc/kdc.conf
[realms]
EXAMPLE.COM = {
#master_key_type = aes256-cts
max_renewable_life = 7d 0h 0m 0s
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
default_principal_flags = +renewable
}
What am I missing to get a renewable ticket?
Update:
I was able to make my tickets renewable by doing
kadmin
modprinc -maxrenewlife 7d krbtgt/EXAMPLE.COM@EXAMPLE.COM
modprinc -maxrenewlife 7d +allow_renewable newuser/mask1.myhost.com@EXAMPLE.COM
but this means I would need to do it for every principal. How do I make it so that all tickets are generated as renewable by default?
You can set the default renew lifetime (renew_lifetime) in the [realms] section of the krb5.conf file.
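Putting the pieces from the update together, a one-time sketch on the KDC (principal names from the question) that avoids repeating the change for principals created afterwards:

```shell
# Allow the ticket-granting ticket itself to be renewed; without this no
# ticket in the realm can be renewable (as found in the question's update).
kadmin.local -q "modprinc -maxrenewlife 7d krbtgt/EXAMPLE.COM@EXAMPLE.COM"

# Principals created after this point inherit +renewable from
# default_principal_flags in kdc.conf; principals that already exist still
# need their own maxrenewlife set once, e.g.:
kadmin.local -q "modprinc -maxrenewlife 7d newuser/mask1.myhost.com@EXAMPLE.COM"
```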