Passing JAAS config to bitnami/kafka helm chart for SASL/PLAIN - kubernetes

Goal:
I want to use the bitnami/kafka helm chart with SASL enabled, using the PLAIN mechanism, for the external client only. (The internal client-broker, broker-broker, and broker-zookeeper connections can stay on PLAINTEXT.)
What I have Done:
I've configured these parameters in the values.yaml file:
superUsers: User:adminuser
auth:
  externalClientProtocol: sasl
  sasl:
    jaas:
      clientUsers:
        - adminuser
        - otheruser
      clientPasswords:
        - adminuserpass
        - otheruserpass
      interBrokerUser: adminuser
I left the other parameters as they are, but that doesn't seem to be enough: the broker container goes into a BackOff state when I try to install the chart.
Question#1: Aren't these configuration parameters enough for setting up what I'm trying to achieve? Won't these create a JAAS config file for me?
According to the Kafka documentation (Kafka_SASL), I have to pass a JAAS config to the broker. It can be done with the sasl.jaas.config configuration parameter; for me it should look something like this:
listener.name.EXTERNAL.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="adminuser" \
password="adminuserpass" \
user_adminuser="adminuserpass" \
user_otheruser="otheruserpass";
But there doesn't seem to be any sasl.jaas.config parameter available in bitnami/kafka's values.yaml.
Question#2: How can I pass these JAAS config values if the answer to question #1 is no? Should I use config or extraEnvVars for this?
Thanks!
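(For reference, one hedged option for question #2, stated as an assumption rather than something from the chart's documented values: Bitnami Kafka images generally translate KAFKA_CFG_* environment variables into server.properties keys, with underscores becoming dots, so the explicit JAAS line could in principle be injected through the chart's extraEnvVars. Verify the variable name against your chart/image version:)
extraEnvVars:
  # assumption: the image maps this onto listener.name.external.plain.sasl.jaas.config
  - name: KAFKA_CFG_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG
    value: >-
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="adminuser" password="adminuserpass"
      user_adminuser="adminuserpass" user_otheruser="otheruserpass";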

This works for me:
...
authorizerClassName: "kafka.security.authorizer.AclAuthorizer"
auth:
  clientProtocol: sasl
  externalClientProtocol: sasl
  interBrokerProtocol: plaintext
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - yourusername
      clientPasswords:
        - yourpassword
....
I don't know why, but I have to set clientProtocol: sasl; otherwise I get java.io.IOException: /opt/bitnami/kafka/conf/kafka_jaas.conf (No such file or directory).
Also, I didn't see the error until I set image.debug: true.
Note: as you can probably see, this also forces authentication for the clients inside the cluster.
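To sanity-check the SASL/PLAIN listener once the chart comes up, a minimal client-side sketch (the bootstrap host/port, topic name, and file path are assumptions; with SASL_SSL you would additionally need truststore settings):
# client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="yourusername" \
  password="yourpassword";
# then:
kafka-console-producer.sh --bootstrap-server <external-host>:9094 \
  --producer.config client.properties --topic test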

I cannot log in the Chainlink GUI

I am using this helm chart
https://artifacthub.io/packages/helm/vulcanlink/chainlink
I managed to launch a Chainlink node and connect it to Postgres with these values:
config:
  # Login Info
  ROOT: /chainlink
  API_LOGIN: |
    API_EMAIL=admin@admin.com
    API_LOGIN=admin
  WALLET_PASSWORD: "9xMR9PN7CTk6Axs" # a random test password based on chainlink's demands
  # HTTP Security
  ALLOW_ORIGINS: "*"
  SECURE_COOKIES: "false"
  CHAINLINK_PORT: "6688"
  CHAINLINK_TLS_PORT: "0"
  # Database
  DATABASE_TIMEOUT: "0"
  DATABASE_URL: postgresql://chainlink:chainlink@pgdb-postgresql:5432/chainlink?sslmode=disable
  # Ethereum
  ETH_URL: wss://rinkeby.infura.io/ws/v3/somerandomnumber # ws://geth:8546
  ETH_CHAIN_ID: "4"
  LINK_CONTRACT_ADDRESS: 0x514910771af9ca656af840dff83e8264ecf986ca # this was here ...
I port-forward the k8s service and I can see the Chainlink UI.
But what combination of the above values should I use to log in?
I have tried them all.
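For reference, the port-forward step can look like this (the service name chainlink and port 6688 are assumptions based on CHAINLINK_PORT above; check kubectl get svc for the real name):
kubectl port-forward svc/chainlink 6688:6688
# then open http://localhost:6688 in the browser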
EDIT
In order to change the env vars, I ended up destroying the whole minikube env. Insane, and I have no idea why...
Now I get this in the logs
There are no accounts, creating a new account with the specified password
There are no P2P keys; creating a new key encrypted with given password
There are no OCR keys; creating a new key encrypted with given password
2022-09-02T10:22:50Z [INFO] API exposed for user API_EMAIL=admin@admin.com cmd/local_client.go:122
2022-09-02T10:23:32Z [INFO] POST /sessions web/router.go:433 body={"email":"admin@admin.com","password":"*REDACTED*"} clientIP=127.0.0.1 errors=Error #01: Invalid email
latency=4.918708ms method=POST path=/sessions servedAt=2022-09-02 10:23:32 status=401
... so I still cannot log in to the GUI. It is frustrating.
EDIT
This is what happens when the instructions are not clear...
The username was API_EMAIL=admin@admin.com and the password was API_LOGIN=admin (the literal strings, prefixes included).
Now I can log in... but I will surely change them...
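If you want to check credentials without the browser, the /sessions endpoint seen in the logs above can be hit directly (localhost:6688 assumes the port-forward is running; substitute whatever username/password combination actually works):
curl -i -X POST http://localhost:6688/sessions \
  -H 'Content-Type: application/json' \
  -d '{"email":"<username>","password":"<password>"}'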

Kafka Connect 5.5.0 - Unable to reset max.request.size

In confluent-5.5.0 I am unable to change max.request.size, which always defaults to max.request.size = 1048576 in the ProducerConfig.
The following are the parameters I have already tried, with no luck:
confluent-5.5.0/etc/kafka/producer.properties
max.request.size=15728640
producer.max.request.size=15728640
confluent-5.5.0/etc/kafka/server.properties
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
max.request.size=15728640
fetch.message.max.bytes=15728640
/data/confluent-5.5.0/etc/kafka/consumer.properties
max.partition.fetch.bytes=15728640
confluent-5.5.0/etc/kafka-rest/kafka-rest.properties
max.request.size=15728640
NOTE: None of these values gets picked up in connect.log.
I have stopped/started confluent-5.5.0, and even destroyed the previous images and restarted.
Am I missing something?
I have also tried the following, after the information from the comments:
/data/confluent-5.5.0/etc/kafka/connect-standalone.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
/data/confluent-5.5.0/etc/kafka/connect-distributed.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
Still, max.request.size has not changed.
(Solved) Based on the inputs:
I added the above configuration to the connector configuration, and also changed the connector client config override policy from None to All, which applied the configuration changes properly.
Those files are not used by Connect:
- server.properties is for the Apache Kafka broker only
- producer.properties and consumer.properties are for the kafka-console-* utilities
- kafka-rest.properties is for the Confluent REST Proxy only
You need to use connect-distributed.properties or connect-standalone.properties, and note that you additionally need to set the properties correctly using the producer. / consumer. prefixes.
The solution is to set the configuration in the Kafka Connect properties file. Add the following to the distributed or standalone Connect properties file:
producer.max.request.size=157286400
consumer.max.request.size=157286400
max.request.size=157286400
and it will work!
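As a hedged recap (property names as of Kafka Connect 2.3+; double-check against your version): the worker has to permit client overrides before per-connector settings are honoured, so the combination looks roughly like this:
# connect-distributed.properties (worker level)
producer.max.request.size=157286400
connector.client.config.override.policy=All
# and, optionally, in an individual connector's JSON config submitted to the REST API:
#   "producer.override.max.request.size": "157286400"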

Unable to configure SSL for Kafka Connect REST API

I'm trying to configure SSL for Kafka Connect REST API (2.11-2.1.0).
The problem
I tried two configurations (worker config):
with listeners.https. prefix
listeners=https://localhost:9000
listeners.https.ssl.keystore.location=/mypath/keystore.jks
listeners.https.ssl.keystore.password=mypassword
listeners.https.ssl.key.password=mypassword
and without listeners.https. prefix
listeners=https://localhost:9000
ssl.keystore.location=/mypath/keystore.jks
ssl.keystore.password=mypassword
ssl.key.password=mypassword
Both configurations start OK, and show the following exception when I try to connect to https://localhost:9000:
javax.net.ssl.SSLHandshakeException: no cipher suites in common
In the log, I see that the SslContextFactory was created without any keystore, but with ciphers:
210824 ssl.SslContextFactory:350 DEBUG: Selected Protocols [TLSv1.2, TLSv1.1, TLSv1] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
210824 ssl.SslContextFactory:351 DEBUG: Selected Ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, ...]
210824 component.AbstractLifeCycle:177 DEBUG: STARTED #10431ms SslContextFactory#42f8285e[provider=null,keyStore=null,trustStore=null]
What I did
As I know the keystore password is absolutely correct, I dug into the source code and started to debug.
Finally, I found out that neither the plain ssl.* nor the prefixed listeners.https.ssl.* configurations are taken into account, and it turns out there is currently no way to configure SSL for the Kafka Connect REST API.
The call sequence is:
RestServer.createConnector
SSLUtils.createSslContextFactory
AbstractConfig.valuesWithPrefixAllOrNothing
The last method is the cause of the trouble.
If we have listeners.https.-prefixed properties, they cannot be returned, because they are filtered out at line 254 (since WorkerConfig contains no properties with that prefix).
Otherwise, if we have unprefixed ssl. properties, they are also not returned, because the values field contains only the known properties from the same WorkerConfig (values is the result of ConfigDef.parse).
Am I missing something, and has anyone successfully configured SSL for the Kafka Connect REST API?
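(For reference, once HTTPS on the REST listener is actually working, it can be probed with the standard /connectors endpoint; the CA file path here is illustrative:)
curl --cacert /mypath/ca-cert https://localhost:9000/connectors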
Try export KAFKA_OPTS=-Djava.security.auth.login.config=/apps/kafka/conf/kafka/kf_jaas.conf, where kf_jaas.conf contains the ZooKeeper client authentication.
I haven't tested the Connect REST API, but KafkaTemplate sends and receives messages with SSL.
From your configuration I can assume two problems:
- you did not specify the truststore (for the certificate chain check); see the keytool sketch below
- you used an absolute path, but Spring interprets keystore-location as relative to /webapp
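A hedged sketch of creating that client truststore, following the standard Kafka SSL setup (paths, alias, and password are illustrative):
keytool -keystore kafka.client.truststore.jks -alias CARoot \
  -import -file ca-cert -storepass 123456 -noprompt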
I tried the test applications from these examples:
https://memorynotfound.com/spring-kafka-and-spring-boot-configuration-example/
and
https://gist.github.com/itzg/e3ebfd7aec220bf0522e23a65b1296c8
Tested with Spring Boot 2.0.4.RELEASE, using this Kafka library dependency:
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
and this is my application.properties content:
spring.application.name=my-stream-app
spring.kafka.bootstrap-servers=localhost:9093
spring.kafka.ssl.truststore-location=kafka.server.truststore.jks
spring.kafka.ssl.truststore-password=123456
spring.kafka.ssl.keystore-location=kafka.server.keystore.jks
spring.kafka.ssl.keystore-password=123456
spring.kafka.ssl.key-password=123456
spring.kafka.properties.security.protocol=SSL
spring.kafka.consumer.group-id=test-consumer-group
app.topic.foo=test
Fragment of the Kafka server configuration:
listeners=SSL://localhost:9093
ssl.truststore.location=/home/legioner/kafka.server.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/home/legioner/kafka.server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456

Kafka Server - Could not find a 'KafkaServer' in JAAS

I have a standalone kafka broker that I'm trying to configure SASL for. Configurations are below. I'm trying to set up SASL_PLAIN authentication on the broker.
My understanding is that with the listener.name... configuration in the server.properties, I shouldn't need the jaas file. But I've experimented with one to see if that might be a better approach.
I have experimented with each of these commands, but both result in the same exception.
sudo bin/kafka-server-start etc/kafka/server.properties
sudo -Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf bin/kafka-server-start etc/kafka/server.properties
The exception displayed is:
Fatal error during KafkaServer startup. Prepare to shutdown...
Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT
listener.name.SASL_PLAINTEXT.plain.sasl.jaas.config: \
  org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="username" \
  password="Password" \
  user_username="Password";
advertised.listeners=SASL_PLAINTEXT://[ipaddress]:9092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="username"
password="Password"
user_username="Password";
};
I've spent a day looking at this already - has anyone else had experience with this problem?
You need to export a variable, not in-line the config to kafka-server-start (or sudo).
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start /path/to/server.properties
Ref. Confluent's sections on Kafka security
Putting my mistakes here for posterity:
Don't run your startup commands from the CLI; put them in a .sh file and run them from there. For example, something like this:
zkstart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
kafkastart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start etc/kafka/server.properties
If you still encounter an error related to the configs, check your *_jaas.conf files to ensure all the configuration sections named in the error messages are present. If they are, it's likely the format isn't quite correct: check for the two semicolons in each section, and if that fails, try recreating the file entirely from scratch (or from a copy-and-paste from the documentation).
EDIT
So, the final solution for me was to add the export ... lines to the beginning of the corresponding kafka-server-start and zookeeper-server-start files. It took me a while before 'everything is a file' finally clicked and I realized the script files were the actual basis for the services.
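In other words, the top of the modified start script ends up looking roughly like this (the JAAS file path is illustrative; the rest of the stock script stays unchanged):
#!/bin/bash
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# ... original kafka-server-start contents follow ...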

Spinnaker "Create Application" menu doesn't load

I'm quite new to Spinnaker and have to ask for some help, I guess. Does anyone know why I can't create any Application and just keep seeing this screen?
My installation is through Halyard 1.5.0 on Ubuntu 14.04.
We don't use any cloud provider, but I did configure the Docker and Kubernetes parts.
And here is the error I see in the /var/log/spinnaker/echo/echo.log:
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : java.net.SocketTimeoutException: timeout
at okio.Okio$3.newTimeoutException(Okio.java:207)
at okio.AsyncTimeout.exit(AsyncTimeout.java:261)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:215)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:306)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:300)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:196)
at com.squareup.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:186)
at com.squareup.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127)
at com.squareup.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:739)
at com.squareup.okhttp.internal.http.HttpEngine.access$200(HttpEngine.java:87)
at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:724)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:578)
at com.squareup.okhttp.Call.getResponse(Call.java:287)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at retrofit.client.OkClient.execute(OkClient.java:53)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:326)
at retrofit.RestAdapter$RestHandler.access$100(RestAdapter.java:220)
at retrofit.RestAdapter$RestHandler$1.invoke(RestAdapter.java:265)
at retrofit.RxSupport$2.run(RxSupport.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at retrofit.Platform$Base$2$1.run(Platform.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:139)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
... 24 more
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : ---- END ERROR
@grizzthedj, thanks again for the recommendations. It doesn't seem to have solved the issue, however. I wonder if it has something to do with my Docker registry or Kubernetes.
Here is what I have in my .hal/config:
dockerRegistry:
  enabled: true
  accounts:
    - name: <hidden-name>
      requiredGroupMembership: []
      address: https://docker-registry.<hidden-name>.net/
      cacheIntervalSeconds: 30
      repositories:
        - hellopod
        - demoapp
  primaryAccount: <hidden-name>
kubernetes:
  enabled: true
  accounts:
    - name: <username>
      requiredGroupMembership: []
      dockerRegistries:
        - accountName: <hidden-name>
          namespaces: []
      context: sre-os1-dev
      namespaces:
        - spinnaker
      omitNamespaces: []
      kubeconfigFile: /home/<username>/.kube/config
I suspect you may be using Redis as the persistent storage type (I ran into the same issue).
If this is the case: persistent storage using Redis doesn't seem to work properly out of the box, and it is not supported. I would try using an S3 target, if available.
More info here on support for Redis.
To configure S3 using Halyard, use the following commands:
echo <SECRET_ACCESS_KEY> | hal config storage s3 edit --access-key-id <ACCESS_KEY_ID> --endpoint <S3_ENDPOINT> --bucket <BUCKET_NAME> --root-folder spinnaker --secret-access-key
hal config storage edit --type s3
hal deploy apply
@grizzthedj,
here is what I've found inside front50.log (I wiped out the IDs, of course, for security reasons).
You may be right.
2017-11-20 12:40:29.151 INFO 682 --- [0.0-8080-exec-1] com.amazonaws.latency : ServiceName=[Amazon S3], AWSErrorCode=[NoSuchKey], StatusCode=[404], ServiceEndpoint=[https://s3-us-west-2.amazonaws.com], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: ...; S3 Extended Request ID: ...), S3 Extended Request ID: ...], RequestType=[GetObjectRequest], AWSRequestID=[...], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[39.634], HttpClientSendRequestTime=[0.072], HttpRequestTime=[39.213], RequestSigningTime=[0.067], CredentialsRequestTime=[0.001, 0.0], HttpClientReceiveResponseTime=[39.059],
I had a similar issue on Kubernetes/AWS: when I opened up the Chrome dev console I was getting lots of 404 errors trying to connect to localhost:8084, so I had to reconfigure the Deck and Gate base URLs. This is what I did using Halyard:
hal config security ui edit --override-base-url http://<deck-loadbalancer-dns-entry>:9000
hal config security api edit --override-base-url http://<gate-loadbalancer-dns-entry>:8084
I did hal deploy apply, and when it came back I noticed the developer console was throwing CORS errors, so I had to do the following:
echo "host: 0.0.0.0" | tee \
  ~/.hal/default/service-settings/gate.yml \
  ~/.hal/default/service-settings/deck.yml
You may note the lack of TLS and CORS config; this is a test system, so make better choices in production :)
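As a quick smoke test after hal deploy apply, Gate can be hit directly (assuming it exposes the standard Spring Boot health endpoint at /health, which Spinnaker releases of that era did):
curl http://<gate-loadbalancer-dns-entry>:8084/health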