I tried a simple code sample to test access to a "kerberized" Kafka from Quarkus 2.2.2 with smallrye-reactive-messaging-kafka:
package org.acme;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class MyTopicConsumer {

    @Incoming("in")
    public void consume(ConsumerRecord<String, String> record) {
        System.out.println("read from Kafka : " + record.value());
    }
}
Kafka is behind Kerberos, so I used an application.properties like this:
quarkus.ssl.native=true
quarkus.native.enable-all-security-services=true
mp.messaging.incoming.in.group.id=my-group
mp.messaging.incoming.in.auto.commit.interval.ms=1000
mp.messaging.incoming.in.security.protocol=SASL_SSL
mp.messaging.incoming.in.sasl.kerberos.service.name=kafka
mp.messaging.incoming.in.sasl.mechanism=GSSAPI
mp.messaging.incoming.in.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required doNotPrompt=true useKeyTab=true storeKey=true serviceName="kafka" keyTab="<keytab>" principal="<principal>" useTicketCache=false;
mp.messaging.incoming.in.ssl.truststore.location=<location>
mp.messaging.incoming.in.ssl.truststore.password=<password>
mp.messaging.incoming.in.connector=smallrye-kafka
mp.messaging.incoming.in.topic=<topic>
mp.messaging.incoming.in.auto.offset.reset=earliest
mp.messaging.incoming.in.enable.auto.commit=false
mp.messaging.incoming.in.bootstrap.servers=<list of servers>
It works nicely in JVM mode, but fails in native mode (graalvm-ce-java11-21.2.0) with this error:
ERROR [io.sma.rea.mes.provider] (main) SRMSG00230: Unable to create the publisher or subscriber during initialization: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:665)
at io.smallrye.reactive.messaging.kafka.impl.ReactiveKafkaConsumer.<init>(ReactiveKafkaConsumer.java:80)
at io.smallrye.reactive.messaging.kafka.impl.KafkaSource.<init>(KafkaSource.java:85)
at io.smallrye.reactive.messaging.kafka.KafkaConnector.getPublisherBuilder(KafkaConnector.java:182)
at io.smallrye.reactive.messaging.kafka.KafkaConnector_ClientProxy.getPublisherBuilder(KafkaConnector_ClientProxy.zig:159)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.createPublisherBuilder(ConfiguredChannelFactory.java:190)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.register(ConfiguredChannelFactory.java:153)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.initialize(ConfiguredChannelFactory.java:125)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory_ClientProxy.initialize(ConfiguredChannelFactory_ClientProxy.zig:189)
at java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at io.smallrye.reactive.messaging.extension.MediatorManager.start(MediatorManager.java:189)
at io.smallrye.reactive.messaging.extension.MediatorManager_ClientProxy.start(MediatorManager_ClientProxy.zig:220)
at io.quarkus.smallrye.reactivemessaging.runtime.SmallRyeReactiveMessagingLifecycle.onApplicationStart(SmallRyeReactiveMessagingLifecycle.java:41)
at io.quarkus.smallrye.reactivemessaging.runtime.SmallRyeReactiveMessagingLifecycle_Observer_onApplicationStart_4e8937813d9e8faff65c3c07f88fa96615b70e70.notify(SmallRyeReactiveMessagingLifecycle_Observer_onApplicationStart_4e8937813d9e8faff65c3c07f88fa96615b70e70.zig:111)
at io.quarkus.arc.impl.EventImpl$Notifier.notifyObservers(EventImpl.java:300)
at io.quarkus.arc.impl.EventImpl$Notifier.notify(EventImpl.java:282)
at io.quarkus.arc.impl.EventImpl.fire(EventImpl.java:70)
at io.quarkus.arc.runtime.ArcRecorder.fireLifecycleEvent(ArcRecorder.java:128)
at io.quarkus.arc.runtime.ArcRecorder.handleLifecycleEvents(ArcRecorder.java:97)
at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy_0(LifecycleEventsBuildStep$startupEvent1144526294.zig:87)
at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy(LifecycleEventsBuildStep$startupEvent1144526294.zig:40)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:623)
at io.quarkus.runtime.Application.start(Application.java:101)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:101)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:66)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:42)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:119)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.common.security.kerberos.KerberosLogin
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:737)
... 30 more
I tried a few changes suggested by some posts, but with no effect.
Can anyone suggest how to fix or work around this?
Thanks.
It seems that the native image doesn't contain the constructor for org.apache.kafka.common.security.kerberos.KerberosLogin.
Have you tried registering the class for reflection, as described in https://quarkus.io/guides/writing-native-applications-tips#registering-for-reflection ?
You may need to add this line to your configuration file:
quarkus.native.additional-build-args=-H:ReflectionConfigurationFiles=reflection-config.json
and add the class org.apache.kafka.common.security.kerberos.KerberosLogin to reflection-config.json, as described here: https://quarkus.io/guides/writing-native-applications-tips#using-a-configuration-file
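A minimal reflection-config.json along those lines might look like this (the attribute names follow the Quarkus guide; adjust the flags to what the class actually needs):

[
  {
    "name" : "org.apache.kafka.common.security.kerberos.KerberosLogin",
    "allDeclaredConstructors" : true,
    "allDeclaredFields" : true,
    "allDeclaredMethods" : true
  }
]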
My goal is to produce events on 2 different channels, each using distinct bootstrap servers and authenticating via SASL_SSL with a JAAS configuration, but I am not able to set up the channels so that they authenticate correctly against their bootstrap servers.
I've tried the following setup:
mp.messaging.outgoing.channel1.bootstrap.servers=${KAFKA1}
mp.messaging.outgoing.channel1.ssl.endpoint-identification-algorithm=https
mp.messaging.outgoing.channel1.security.protocol=SASL_SSL
mp.messaging.outgoing.channel1.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${KEY1}" password="${PWD1}";
mp.messaging.outgoing.channel1.sasl.mechanism=PLAIN
mp.messaging.outgoing.channel2.bootstrap.servers=${KAFKA2}
mp.messaging.outgoing.channel2.ssl.endpoint-identification-algorithm=https
mp.messaging.outgoing.channel2.security.protocol=SASL_SSL
mp.messaging.outgoing.channel2.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${KEY2}" password="${PWD2}";
mp.messaging.outgoing.channel2.sasl.mechanism=PLAIN
With this setup I receive errors during channel initialization:
2023-01-18 13:57:10.445 ERROR [Application] (main) Failed to start application (with profile prod): java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:82)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:167)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
The initial setup used the default bootstrap settings, and it worked fine until the second Kafka cluster was brought into the equation.
kafka.bootstrap.servers=${KAFKA1}
kafka.ssl.endpoint-identification-algorithm=https
kafka.security.protocol=SASL_SSL
kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${Key1}" password="${PWD1}";
kafka.sasl.mechanism=PLAIN
I've tried what was described in the issue, but I am not able to figure out how to configure the channels to authenticate against 2 different bootstrap servers.
As the error says, you need a JAAS configuration set in your JVM system properties:
-Djava.security.auth.login.config=/path/to/kafka-jaas.conf
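For reference, a minimal kafka-jaas.conf for SASL/PLAIN would look something like this (credentials are placeholders; the entry must be named KafkaClient, as the error message indicates):

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="<username>"
    password="<password>";
};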
After reading the documentation (https://quarkus.io/guides/kafka), I found the kafka-configuration attribute, which allows a channel to use a custom configuration map exposed as a CDI bean.
Therefore the solution is to implement a provider bean:
import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import io.smallrye.common.annotation.Identifier;
import lombok.extern.slf4j.Slf4j;

@ApplicationScoped
@Slf4j
public class KafkaConfigBean {

    @Produces
    @Identifier("kafka1")
    public Map<String, Object> kafkaConfig() {
        Map<String, Object> config = new HashMap<>();
        config.put("security.protocol", "SASL_SSL");
        config.put("sasl.mechanism", "PLAIN");
        String saslConfig = String.format(
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"%s\" password=\"%s\";",
                System.getenv("KEY1"), System.getenv("PWD1"));
        config.put("sasl.jaas.config", saslConfig);
        log.info("Initialized Kafka 1 config");
        return config;
    }

    @Produces
    @Identifier("kafka2")
    public Map<String, Object> kafkaConfigPTT() {
        Map<String, Object> config = new HashMap<>();
        config.put("security.protocol", "SASL_SSL");
        config.put("sasl.mechanism", "PLAIN");
        String saslConfig = String.format(
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"%s\" password=\"%s\";",
                System.getenv("KEY2"), System.getenv("PWD2"));
        config.put("sasl.jaas.config", saslConfig);
        log.info("Initialized Kafka 2 config");
        return config;
    }
}
This results in the following configuration file:
mp.messaging.outgoing.channel1.bootstrap.servers=${KAFKA1}
mp.messaging.outgoing.channel1.kafka-configuration=kafka1
mp.messaging.outgoing.channel2.bootstrap.servers=${KAFKA2}
mp.messaging.outgoing.channel2.kafka-configuration=kafka2
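With that in place, a minimal sketch of a producer using both channels might look like this (the bean and payload names are made up for illustration; @Channel and Emitter come from MicroProfile Reactive Messaging):

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class EventProducer {

    // channel1 is backed by the "kafka1" config map, channel2 by "kafka2"
    @Channel("channel1")
    Emitter<String> emitter1;

    @Channel("channel2")
    Emitter<String> emitter2;

    public void publish(String payload) {
        emitter1.send(payload);
        emitter2.send(payload);
    }
}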
I am trying to publish an Avro message to a Kafka topic using JMeter.
I get the below error message:
Caused by: javax.script.ScriptException: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema"string"
I used the following code in a JSR223 Sampler.
KAFKA_BROKERS, KAFKA_TOPIC and MESSAGE are passed in as User Defined Variables.
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

String brokers = vars.get("KAFKA_BROKERS");
String topic = vars.get("KAFKA_TOPIC");
String user = String.valueOf(ctx.getThreadNum() + 1);
Object msg = vars.get("MESSAGE");

Properties kafkaProps = new Properties();
kafkaProps.put("bootstrap.servers", brokers);
kafkaProps.put("schema.registry.url", "https://<schema-registry-url>");
kafkaProps.put("auto.register.schemas", "false");
kafkaProps.put("basic.auth.credentials.source", "USER_INFO");
kafkaProps.put("basic.auth.user.info", "<credentials>");
kafkaProps.put("security.protocol", "SASL_SSL");
kafkaProps.put("sasl.mechanism", "PLAIN");
kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
kafkaProps.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
kafkaProps.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username='<username>' password='<password>';");

Producer<String, Object> producer = new KafkaProducer<>(kafkaProps);
try {
    producer.send(new ProducerRecord<String, Object>(topic, user, msg)).get();
} finally {
    producer.close();
}
You need to configure your kafkaProps object exactly the same way as it's done in the upstream system you're trying to mimic.
As a workaround, you could enable automatic schema registration:
kafkaProps.put("auto.register.schemas","true");
More information: How to Do Kafka Testing With JMeter
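The error itself ("Error retrieving Avro schema "string"") happens because the value handed to the KafkaAvroSerializer is a plain String, so the serializer looks up (or tries to register) a bare "string" schema. If auto-registration has to stay off, here is a sketch of sending a GenericRecord that matches the registered schema instead (the AVRO_SCHEMA variable and the "message" field name are assumptions for illustration):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Parse the registered schema, stored here in a JMeter variable (an assumption)
Schema schema = new Schema.Parser().parse(vars.get("AVRO_SCHEMA"));

// Build a record matching the schema instead of sending a raw String
GenericRecord avroMsg = new GenericData.Record(schema);
avroMsg.put("message", vars.get("MESSAGE")); // hypothetical field name

producer.send(new ProducerRecord<String, Object>(topic, user, avroMsg)).get();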
I'm trying to connect to Kafka with JMS. I followed this guide to use the Payara Kafka Connector. This worked on WildFly, but I can't get it to work on Open Liberty.
The server.xml:
<resourceAdapter id="kafkajmsra" location="${shared.resource.dir}kafka-rar-0.5.0.rar"/>
<jmsTopicConnectionFactory jndiName="JMSTopicFactory">
<properties.kafkajmsra
bootstrapServerConfig="kafka:9092"/>
</jmsTopicConnectionFactory>
<jmsTopic id="kafkaTopic" jndiName="JmsTopic">
<properties.kafkajmsra topicName="demoTopic" />
</jmsTopic>
With this configuration I get a NullPointerException when I try to inject those components. The JNDI names can be found, but not with these parameters.
@Resource(lookup = "JMSTopicFactory")
private TopicConnectionFactory jmsTopicFactory;

@Resource(lookup = "JMSTopic")
private Topic jmsTopic;
Am I missing something in the server.xml?
I tried using the default JMS connector. It does connect to Kafka, but the connection gets refused, and on the Kafka side it tells me this:
[2020-05-31 20:05:27,134] WARN [SocketServer brokerId=1] Unexpected error from /172.20.0.4; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -1091633152)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:103)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at kafka.network.Processor.poll(SocketServer.scala:893)
at kafka.network.Processor.run(SocketServer.scala:792)
at java.lang.Thread.run(Thread.java:748)
EDIT:
I changed the server.xml to look like this now:
<resourceAdapter id="kafkajmsra" location="${shared.resource.dir}/kafka-rar-0.4.0.rar"/>
<connectionFactory jndi="java:app/KafkaConnectionFactory"
interfaceName="fish.payara.cloud.connectors.kafka.api.KafkaConnectionFactory"
resourceAdapter="liberty/wlp/usr/shared/resources/kafka-rar-0.4.0.rar">
</connectionFactory>
and the java code looks like this:
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import fish.payara.cloud.connectors.kafka.api.KafkaConnection;
import fish.payara.cloud.connectors.kafka.api.KafkaConnectionFactory;

@ApplicationScoped
public class TopicProducer {

    private static final Logger LOG = LoggerFactory.getLogger(TopicProducer.class);

    public TopicProducer() throws Exception {
        LOG.info("Starting TopicProducer");
    }

    @Resource(lookup = "java:app/KafkaConnectionFactory")
    KafkaConnectionFactory kafkaConnectionFactory;

    public void send(final String msg) {
        try (KafkaConnection connection = kafkaConnectionFactory.createConnection()) {
            LOG.info("Send message: {}", msg);
            connection.send(new ProducerRecord("demoTopic", msg));
        } catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
    }
}
But now I get a NullPointerException on the @Resource. My guess is that the resource adapter cannot be found.
I'm trying to do a PoC of the "exactly-once delivery" concept with Apache Kafka using Spring Cloud Stream + the Kafka binder.
I installed Apache Kafka "kafka_2.11-1.0.0" and defined "transactionIdPrefix" in the producer, which I understand is the only thing I need to do to enable transactions in Spring Kafka. But when I do that and run simple Source & Sink bindings within the same application, I see that some messages are received and printed in the consumer and some fail with an error.
For example, message #6 received:
[49] Received message [Payload String content=FromSource1 6][Headers={kafka_offset=1957, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#6695c9a9, kafka_timestampType=CREATE_TIME, my-transaction-id=my-id-6, id=302cf3ef-a154-fd42-6b43-983778e275dc, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTopic=test10, kafka_receivedTimestamp=1514384106395, timestamp=1514384106419}]
but message #7 had an error "Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION":
2017-12-27 16:15:07.405 ERROR 7731 --- [ask-scheduler-4] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#7d3bbc0b]; nested exception is org.apache.kafka.common.KafkaException: TransactionalId my-transaction-3: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION, failedMessage=GenericMessage [payload=byte[13], headers={my-transaction-id=my-id-7, id=d31656af-3286-99b0-c736-d53aa57a5e65, contentType=application/json, timestamp=1514384107399}]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:153)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler.handleMessageInternal(AbstractMessageChannelBinder.java:575)
What does this error mean?
Is something missing in my configuration?
Do I need to implement the Source or the Sink differently when transactions are enabled?
UPDATE:
I opened an issue on the project's GitHub; please refer to the discussion there.
I couldn't find an example of how to use Spring Cloud Stream with the Kafka binder + transactions enabled.
To reproduce, create a simple Maven project with Spring Boot version "2.0.0.M5" and "spring-cloud-stream-dependencies" version "Elmhurst.M3", and create a simple application with this configuration:
server:
  port: 8082

spring:
  kafka:
    producer:
      retries: 5555
      acks: "all"
  cloud:
    stream:
      kafka:
        binder:
          autoAddPartitions: true
          transaction:
            transactionIdPrefix: my-transaction-
      bindings:
        output1:
          destination: test10
          group: test111
          binder: kafka
        input1:
          destination: test10
          group: test111
          binder: kafka
          consumer:
            partitioned: true
I also created simple Source and Sink classes:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.messaging.SubscribableChannel;

@EnableBinding(SampleSink.MultiInputSink.class)
public class SampleSink {

    @StreamListener(MultiInputSink.INPUT1)
    public synchronized void receive1(Message<?> message) {
        System.out.println("[" + Thread.currentThread().getId() + "] Received message " + message);
    }

    public interface MultiInputSink {
        String INPUT1 = "input1";

        @Input(INPUT1)
        SubscribableChannel input1();
    }
}
and:
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.GenericMessage;

@EnableBinding(SampleSource.MultiOutputSource.class)
public class SampleSource {

    AtomicInteger atomicInteger = new AtomicInteger(1);

    @Bean
    @InboundChannelAdapter(value = MultiOutputSource.OUTPUT1, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
    public synchronized MessageSource<String> messageSource1() {
        return new MessageSource<String>() {
            public Message<String> receive() {
                String message = "FromSource1 " + atomicInteger.getAndIncrement();
                // carry a custom transaction id header with each message
                Map<String, Object> m = new HashMap<>();
                m.put("my-transaction-id", "my-id-" + UUID.randomUUID());
                return new GenericMessage(message, new MessageHeaders(m));
            }
        };
    }

    public interface MultiOutputSource {
        String OUTPUT1 = "output1";

        @Output(OUTPUT1)
        MessageChannel output1();
    }
}
I opened a ticket about this on the project's GitHub. Please refer to the answers and discussion there:
https://github.com/spring-cloud/spring-cloud-stream/issues/1166
but the first answer there was:
The binder doesn't currently support producer-initiated transactions. Transactions are supported for processors (where the consumer starts the transaction and the producer participates in that transaction). You should be able to use spring-kafka directly to initiate a transaction on the producer side when there is no consumer.
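Following that suggestion, a minimal sketch of a producer-initiated transaction with spring-kafka directly might look like this (assuming a KafkaTemplate whose producer factory is configured with a transaction id prefix):

// Sends inside a producer-initiated transaction: committed when the
// callback returns, rolled back if it throws.
kafkaTemplate.executeInTransaction(template -> {
    template.send("test10", "FromSource1 " + atomicInteger.getAndIncrement());
    return true;
});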
When I start the RMI server implementation class, it displays this error message:
Remote exception: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: RMIServerImpl_Stub
The commands I ran:
start rmiregistry
start java -Djava.security.policy=policyfile RMIServerImpl
What can I do to resolve this? Please help.
This is my RMI server code:
import java.rmi.*;
import java.rmi.registry.*;
import java.rmi.server.*;

public class RMIServerImpl extends UnicastRemoteObject implements RMIServer {

    RMIServerImpl() throws RemoteException {
        super();
    }

    public static void main(String[] args) {
        try {
            System.setSecurityManager(new RMISecurityManager());
            RMIServerImpl server = new RMIServerImpl();
            Naming.rebind("SAMPLE-SERVER", server);
            System.out.println("Server waiting.....");
        } catch (java.net.MalformedURLException mue) {
            System.out.println("Malformed URL: " + mue.toString());
        } catch (RemoteException re) {
            System.out.println("Remote exception: " + re.toString());
        }
    }
}
Sounds like you didn't run the rmic compiler to generate stubs and skeletons.
It's been so long since I've done raw RMI by hand that I don't know if that step is still required, but it was the last time I did RMI.
If you did run rmic, then I'd guess that you didn't package the stub and skeleton properly with the server and client sides. If you can find those .class files, check your packaging and deployment.
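For reference, generating the stub for the example above would look something like this (run against the compiled class; note that the missing class in the stack trace, RMIServerImpl_Stub, is exactly what this step produces):

rmic RMIServerImpl

The generated RMIServerImpl_Stub.class then has to be on the classpath of the rmiregistry and of any client.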