Unable to configure SSL for Kafka Connect REST API

I'm trying to configure SSL for Kafka Connect REST API (2.11-2.1.0).
The problem
I tried two configurations (worker config):
with listeners.https. prefix
listeners=https://localhost:9000
listeners.https.ssl.keystore.location=/mypath/keystore.jks
listeners.https.ssl.keystore.password=mypassword
listeners.https.ssl.key.password=mypassword
and without listeners.https. prefix
listeners=https://localhost:9000
ssl.keystore.location=/mypath/keystore.jks
ssl.keystore.password=mypassword
ssl.key.password=mypassword
Both configurations start OK, but show the following exception when trying to connect to https://localhost:9000:
javax.net.ssl.SSLHandshakeException: no cipher suites in common
In the log, I see that the SslContextFactory was created without any keystore, but with ciphers:
210824 ssl.SslContextFactory:350 DEBUG: Selected Protocols [TLSv1.2, TLSv1.1, TLSv1] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
210824 ssl.SslContextFactory:351 DEBUG: Selected Ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, ...]
210824 component.AbstractLifeCycle:177 DEBUG: STARTED #10431ms SslContextFactory#42f8285e[provider=null,keyStore=null,trustStore=null]
What I did
Since I know the keystore password is definitely correct, I dug into the source code and started to debug.
Eventually I found out that neither the plain ssl.* nor the prefixed listeners.https.ssl.* configurations are taken into account, and it seems there is currently no way to configure SSL for the Kafka Connect REST API.
Call sequence is:
RestServer.createConnector
SSLUtils.createSslContextFactory
AbstractConfig.valuesWithPrefixAllOrNothing
The last method is the source of the trouble.
If we use listeners.https.-prefixed properties, they cannot be returned, because they are filtered out at line 254 (WorkerConfig defines no properties with that prefix).
Otherwise, if we use unprefixed ssl.* properties, they are not returned either, because the values field contains only the known properties from the same WorkerConfig (values is the result of ConfigDef.parse).
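For reference, here is a small standalone sketch that reproduces the behaviour described above, assuming kafka-clients 2.1.0 on the classpath and that my reading of the code is correct (the ConfigDef and property values are illustrative, not the real WorkerConfig):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

// Illustrative demo: like WorkerConfig, this ConfigDef declares no ssl.* keys,
// so both property styles are lost before SSLUtils.createSslContextFactory could see them.
public class PrefixAllOrNothingDemo {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define("listeners", ConfigDef.Type.STRING, "", ConfigDef.Importance.LOW, "listeners");

        Map<String, String> props = new HashMap<>();
        props.put("listeners", "https://localhost:9000");
        props.put("listeners.https.ssl.keystore.location", "/mypath/keystore.jks");
        props.put("ssl.keystore.location", "/mypath/keystore.jks");

        AbstractConfig config = new AbstractConfig(def, props);

        // Prefixed keys are dropped because the ConfigDef does not define them;
        // unprefixed ssl.* keys never make it into values() for the same reason.
        System.out.println(config.valuesWithPrefixAllOrNothing("listeners.https."));
        System.out.println(config.values());
    }
}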
Am I missing something? Has anyone successfully configured SSL for the Kafka Connect REST API?

Try export KAFKA_OPTS=-Djava.security.auth.login.config=/apps/kafka/conf/kafka/kf_jaas.conf, where kf_jaas.conf contains the ZooKeeper client authentication (an illustrative example follows).
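For illustration only, such a kf_jaas.conf typically contains a ZooKeeper Client section along these lines (the login module shown is the common digest one; the credentials are placeholders, not taken from the original setup):
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zk-user"
  password="zk-password";
};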

I haven't tested the Connect REST API, but KafkaTemplate sends and receives messages over SSL (a minimal sender sketch is shown after the configuration below).
From your configuration I can see two possible problems:
you did not specify the truststore (needed for the certificate chain check)
you used an absolute path, but Spring interprets keystore-location as relative to /webapp
I tried a test application based on these examples:
https://memorynotfound.com/spring-kafka-and-spring-boot-configuration-example/
and
https://gist.github.com/itzg/e3ebfd7aec220bf0522e23a65b1296c8
Tested with Spring Boot 2.0.4.RELEASE, using the Kafka library
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
and this is my application.properties content:
spring.application.name=my-stream-app
spring.kafka.bootstrap-servers=localhost:9093
spring.kafka.ssl.truststore-location=kafka.server.truststore.jks
spring.kafka.ssl.truststore-password=123456
spring.kafka.ssl.keystore-location=kafka.server.keystore.jks
spring.kafka.ssl.keystore-password=123456
spring.kafka.ssl.key-password=123456
spring.kafka.properties.security.protocol=SSL
spring.kafka.consumer.group-id=test-consumer-group
app.topic.foo=test
fragment of kafka server configuration:
listeners=SSL://localhost:9093
ssl.truststore.location=/home/legioner/kafka.server.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/home/legioner/kafka.server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
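With those properties in place, sending needs no SSL-specific code; a minimal sender sketch modeled on the linked examples (the class name and injected template are assumptions, and the topic is the app.topic.foo value from above):
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Minimal producer sketch: all SSL settings come from application.properties,
// so the code only deals with sending to the topic.
@Service
public class FooProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public FooProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String message) {
        kafkaTemplate.send("test", message);
    }
}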

Related

Kafka Connect 5.5.0 - Unable to reset max.request.size

In confluent-5.5.0, I am unable to change max.request.size, which always defaults to max.request.size = 1048576 in the ProducerConfig.
The following are the parameters I have already tried, with no luck:
confluent-5.5.0/etc/kafka/producer.properties
max.request.size=15728640
producer.max.request.size=15728640
confluent-5.5.0/etc/kafka/server.properties
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
max.request.size=15728640
fetch.message.max.bytes=15728640
/data/confluent-5.5.0/etc/kafka/consumer.properties
max.partition.fetch.bytes=15728640
confluent-5.5.0/etc/kafka-rest/kafka-rest.properties
max.request.size=15728640
NOTE: none of these values is getting updated in the connect.log.
I have stopped/started confluent-5.5.0, and even destroyed the previous images and restarted.
Am I missing something?
I have also tried the following after the information from the comments:
/data/confluent-5.5.0/etc/kafka/connect-standalone.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
/data/confluent-5.5.0/etc/kafka/connect-distributed.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
Still, max.request.size has not changed.
(Solved) Based on the inputs: I added the above configuration in the connector configuration, and also changed the override policy from None to All, which applied the configuration changes properly.
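For reference, a hedged sketch of that setup (connector.client.config.override.policy is the standard Connect worker property for allowing overrides; the split between the two files below is an assumption about where each line goes):
# connect-distributed.properties (worker)
connector.client.config.override.policy=All
# individual connector configuration (e.g. the JSON body posted to the Connect REST API)
producer.override.max.request.size=15728640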
Those files are not used by Connect.
server.properties is for the Apache Kafka broker only
consumer.properties|producer.properties are for the kafka-console-* utilities
kafka-rest.properties is for the Confluent REST Proxy only
You need to use connect-distributed.properties or connect-standalone.properties, and note that you additionally need to set the property with the correct prefix.
The solution is to set the configuration in the Kafka Connect properties file:
add the following to the distributed or standalone Connect properties file
producer.max.request.size=157286400
consumer.max.request.size=157286400
max.request.size=157286400
and it will work!

How to access a confluent schema registry server secured with a password using Spring cloud stream?

I'm using spring cloud stream alongside Aiven's schema registry which uses confluent's schema registry. Aiven's schema registry is secured with a password. Based on these instructions, these two config parameters need to be set to successfully access the schema registry server.
props.put("basic.auth.credentials.source", "USER_INFO");
props.put("basic.auth.user.info", "avnadmin:schema-reg-password");
Everything is fine when I only use the vanilla Java Kafka clients, but with Spring Cloud Stream I don't know how to inject these two parameters. At the moment, I'm putting "basic.auth.user.info" and "basic.auth.credentials.source" under "spring.cloud.stream.kafka.binder.configuration" in the application.yml file.
Doing this, I'm getting "401 Unauthorized" on the line where the schema wants to get registered.
Update 1:
Based on 'Ali n's suggestion, I updated the way SchemaRegistryClient's bean was configured so that it becomes aware of the SSL context.
@Bean
public SchemaRegistryClient schemaRegistryClient(
        @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") String endpoint) {
    try {
        final KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream(new File("path/to/client.keystore.p12")),
                "secret".toCharArray());
        final KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(new FileInputStream(new File("path/to/client.truststore.jks")),
                "secret".toCharArray());
        TrustStrategy acceptingTrustStrategy = (X509Certificate[] chain, String authType) -> true;
        SSLContext sslContext = SSLContextBuilder
                .create()
                .loadKeyMaterial(keyStore, "secret".toCharArray())
                .loadTrustMaterial(trustStore, acceptingTrustStrategy)
                .build();
        HttpClient httpClient = HttpClients.custom().setSSLContext(sslContext).build();
        ClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory(httpClient);
        ConfluentSchemaRegistryClient schemaRegistryClient = new ConfluentSchemaRegistryClient(
                new RestTemplate(requestFactory));
        schemaRegistryClient.setEndpoint(endpoint);
        return schemaRegistryClient;
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
This helped get rid of the error on the app's startup and registered the schema. However, whenever the app wanted to push a message to Kafka, a new error was thrown. That was finally fixed by mmelsen's answer.
I ran into the same problem: my situation was connecting to a secured schema registry hosted by Aiven and protected by basic auth. To make it work, I had to configure the following properties:
spring.kafka.properties.schema.registry.url=https://***.aiven***.com:port
spring.kafka.properties.basic.auth.credentials.source=USER_INFO
spring.kafka.properties.basic.auth.user.info=username:password
the other properties for my binder are:
spring.cloud.stream.binders.input.type=kafka
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.brokers=https://***.aiven***.com:port <-- different from the before mentioned port
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.security.protocol=SSL
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=truststore.jks
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.truststore.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.type=PKCS12
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.location=clientkeystore.p12
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.key.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.streams.binder.autoCreateTopics=false
What actually happens is that Spring Cloud Stream adds the spring.kafka.properties.basic.* entries to the DefaultKafkaConsumerFactory, and that adds the config to the KafkaConsumer. At some point during the initialization of Spring Kafka, a CachedSchemaRegistryClient is created that is provisioned with these properties. This client contains a method called configureRestService that checks whether a map of properties contains "basic.auth.credentials.source". Since we provide this through spring.kafka.properties, it finds the property and takes care of creating the appropriate headers when accessing the schema registry's endpoint.
Hope this works out for you as well.
I'm using Spring Cloud version Greenwich.SR1, spring-boot-starter 2.1.4.RELEASE, Avro version 1.8.2 and confluent.version 5.2.1.
The binder configuration only handles well-known consumer and producer properties.
You can set arbitrary properties at the binding level.
spring.cloud.stream.kafka.binding.<binding>.consumer.configuration.basic.auth...
Since Aiven uses SSL as the Kafka security protocol, certificates are required for authentication.
You can follow this page to understand how it works. In a nutshell, you need to run the following commands to generate the certificates and import them:
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
Then you can use the following properties to make use of the certificates:
spring.cloud.stream.kafka.streams.binder:
  configuration:
    security.protocol: SSL
    ssl.truststore.location: client.truststore.jks
    ssl.truststore.password: secret
    ssl.keystore.type: PKCS12
    ssl.keystore.location: client.keystore.p12
    ssl.keystore.password: secret
    ssl.key.password: secret
    key.serializer: org.apache.kafka.common.serialization.StringSerializer
    value.serializer: org.apache.kafka.common.serialization.StringSerializer

Kafka Server - Could not find a 'KafkaServer' in JAAS

I have a standalone kafka broker that I'm trying to configure SASL for. Configurations are below. I'm trying to set up SASL_PLAIN authentication on the broker.
My understanding is that with the listener.name... configuration in the server.properties, I shouldn't need the jaas file. But I've experimented with one to see if that might be a better approach.
I have experimented with each of these commands, but both result in the same exception.
sudo bin/kafka-server-start etc/kafka/server.properties
sudo -Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf bin/kafka-server-start etc/kafka/server.properties
the exception displayed is:
Fatal error during KafkaServer startup. Prepare to shutdown... Could
not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the
JAAS configuration. System property 'java.security.auth.login.config'
is not set
server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT
listener.name.SASL_PLAINTEXT.plain.sasl.jaas.config:
org.apache.kafka.common.security.plain.PlainLoginModule required /
username="username" /
password="Password" /
user_username="Password";
advertised.listeners=SASL_PLAINTEXT://[ipaddress]:9092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
secutiy.inter.broker.protocol=SASL_PLAINTEXT
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="username"
password="Password"
user_username="Password";
};
I've spent a day looking at this already - has anyone else had experience with this problem?
You need to export the variable, not pass the config in-line to kafka-server-start (or sudo).
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start /path/to/server.properties
Ref. Confluent's sections on Kafka security
Putting my mistakes here for posterity:
Don't run your startup commands directly from the CLI; put them in a .sh file and run it from there.
For example, something like this:
zkstart
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
kafkastart
export KAFKA_OPTS=-Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf
bin/kafka-server-start etc/kafka/server.properties
If you still encounter an error related to the configs, check your _jaas files to ensure all the configuration sections named in the error messages are present. If they are, it's likely the format isn't quite correct - check for the two semi-colons in each section, and if that fails, try recreating the file entirely from scratch (or copy-pasting it from the documentation).
Edit:
So, the final solution for me was to add the export... lines to the beginning of the corresponding kafka-server-start and zookeeper-server-start scripts. It took me a while before 'everything is a file' finally clicked and I realized the script files were the actual basis for the services.
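To illustrate (the paths are placeholders, and this assumes the stock Confluent wrapper scripts), the added line sits near the top of each script:
# top of bin/kafka-server-start - added line; the rest of the stock script follows unchanged
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"

# top of bin/zookeeper-server-start - same idea, pointing at the ZooKeeper JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"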

Restcomm - Solving SMSC GW 7.2 configuration failures

We configured the latest version (7.2) of the SMSC-GW to work on our server with its environment (Cassandra and so on). However, after setting everything up, some failures are appearing (which did not appear in previous versions).
Firstly, when connecting the simulators and the gateway using the default settings (JSS7 <-> SMSC-GW <-> SMPP):
JSS7 is connected and sending, but no response is received.
SMPP is connected to the SMSC-GW and the ESME is bound. SMPP tries to send to SS7 but receives a response PDU packet failure from the SMSC-GW.
I tried configuring DB routing rules, but that did not work.
Also, the log in the SMSC-GW server is frequently displaying the following message:
16:00:28,504 INFO [SchedulerResourceAdaptor] (pool-56-thread-1) Not all SBB are running now: ServicesDownList=[smscTxSmppServerServiceState, smscRxSmppServerServiceState, smscTxSipServerServiceState, smscRxSipServerServiceState, smscTxHttpServerServiceState, moServiceState, homeRoutingServiceState, mtServiceState, alertServiceState, chargingServiceState, ]
And the JSS7 management console GUI is displaying this (which looks wrong):
So are these the source of the SMSC-GW failures?
UPDATE: I found this error in the server.log
2017-02-02 10:57:42,005 WARN [org.mobicents.slee.container.deployment.jboss.SleeContainerDeployerImpl] (SLEE-InternalDeployer-thread-1) SLEE DUs not deployed, due to missing dependencies: file:/home/coreteam/kitchensink/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/simulator/deploy/smsc-services-du-7.2.109.jar/
Followed by:
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_MT,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=PersistenceResourceAdaptorType,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=SchedulerResourceAdaptorType,vendor=org.mobicents,version=1.0]
SipRA
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_RSDS,vendor=org.mobicents,version=1.0]
SchedulerResourceAdaptor
PersistenceResourceAdaptor
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SMPP_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SIP_SM,vendor=org.mobicents,version=1.0]
2017-02-02 14:41:17,450 WARN [org.mobicents.slee.container.deployment.jboss.DeploymentManager] (main) Unable to INSTALL smsc-services-du-7.3.0-SNAPSHOT.jar right now. Waiting for dependencies to be resolved.
I solved it quite a while ago, but thought I would share. I simply installed the missing SipRA dependency by adding the following to the deploy-config.xml file:
<ra-entity
    resource-adaptor-id="ResourceAdaptorID[name=JainSipResourceAdaptor,vendor=net.java.slee.sip,version=1.2]"
    entity-name="SipRA">
  <properties>
    <property name="javax.sip.PORT" type="java.lang.Integer" value="5060" />
  </properties>
  <ra-link name="SipRA" />
</ra-entity>
In the $JBOSS_HOME/server/profile_name/deploy/restcomm-slee directory.
I set the port to some other value since that number was already taken by some other service.
The smsc-services-du-7.2.109.jar then installed automatically the next time I ran the SMSC-GW.

Spring Config Server and Client embedded in the same application

I understand that the config server needs to be up to bootstrap any of its clients.
Is there any way we can embed both the Spring Config Server and the client in the same application, so that each application can protect its sensitive information easily? If it is possible, I would use the server part for reading properties from Git and the client part for the decryption.
This flow works for me:
Step 1: Use only Config Server dependency:
pom.xml
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
Step 2: Enable the Config Server via the @EnableConfigServer annotation.
Step 3: In bootstrap.yml file (not application.yml), we will configure:
spring:
  cloud:
    config:
      server:
        # This flag indicates that the server should configure itself from its own remote repository
        bootstrap: true
        git:
          uri: https://github.com/your-git-account/your-config-repository
          username: user
          password: secret
          searchPaths: foo,bar*
          timeout: 10
        # prefix string to avoid conflicting with context/server path of application
        prefix: config
Step 4: Remove all configurations related to Config Server in application.yml file.
If I understand you correctly, all you'd need to do is add the @EnableConfigServer annotation and set spring.cloud.config.server.bootstrap=true. For details, see Embedding the Config Server and spring-cloud-config issue 100.
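For illustration, a minimal sketch of such an application class (the package layout and class name are assumptions, not taken from the question):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Embedded Config Server: serves configuration from the Git backend and, with
// spring.cloud.config.server.bootstrap=true in bootstrap.yml, also configures itself from it.
@SpringBootApplication
@EnableConfigServer
public class EmbeddedConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmbeddedConfigServerApplication.class, args);
    }
}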