I have recently updated my microservices project from Spring Boot 2.0.5 to 2.7.4.
I also updated the Spring Cloud dependencies to 2021.0.4.
And I migrated from Netflix Zuul to Spring Cloud Gateway.
My Eureka server is running on port 8000, which I can access in the browser and see the Eureka dashboard.
The gateway is running on port 8001; it registers successfully with Eureka, and I can see its status "UP" in the dashboard.
But no other services are registered with Eureka. The application.yml files of all the other services are almost identical to the gateway-service's. Below are some code snippets from one of the services:
Problem 1 - Application name UNKNOWN
DiscoveryClient_UNKNOWN/192.168.1.3: registering service...
Problem 2
Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=java.net.ConnectException: Connection refused (Connection refused) stacktrace=com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)
Eureka-server config
spring:
  application:
    name: eureka-service
  cloud:
    config:
      uri: ${eureka.instance.hostname}:${server.port}
server:
  port: 8000
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka
  server:
    enableSelfPreservation: false
logging:
  level:
    org.springframework: ERROR
    com.debugfactory.trypz: ERROR
  file:
    name: logs/eureka-service.log
Gateway-service application.yml - this one registers with Eureka under the proper name
spring:
  main:
    web-application-type: reactive
    allow-bean-definition-overriding: true
  application:
    name: zuul-service
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: sms-service
          uri: http://sms-service/
          predicates:
            - Path=/sms/**
server:
  port: 8001
  max-thread: 100
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
  instance:
    preferIpAddress: true
logging:
  file:
    name: logs/zuul-service.log
  pattern:
    file: "%d %-5level [%thread] %logger{0} : %msg%n"
  level:
    org.springframework: INFO
    com.bayview: DEBUG
    org.netflix: INFO
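As a side note on the route definition above: with the discovery locator enabled, service-backed routes are usually written with the lb:// scheme so the gateway resolves sms-service through the registry and load-balances, along these lines (a sketch, not required for the registration problem itself):

spring:
  cloud:
    gateway:
      routes:
        - id: sms-service
          # lb:// resolves the serviceId via the discovery client
          uri: lb://sms-service
          predicates:
            - Path=/sms/**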
SMS-Service <- This one looks for Eureka on port 8761 and registers with the name UNKNOWN
Application.yml
spring:
  application:
    name: sms-service
  data:
    mongodb:
      authentication-database: admin
      database: sms
      host: localhost
      port: 27017
  main:
    allow-bean-definition-overriding: true
  config:
    activate:
      on-profile: local
server:
  port: 8022
  servlet:
    context-path: /sms
  tomcat:
    max-threads: 10
logging:
  level:
    org.springframework.web: ERROR
    org.hibernate: ERROR
    com.cryptified.exchange: DEBUG
    org.springframework.data: ERROR
    org.springframework.security: ERROR
    org.springframework.web.client: DEBUG
  file:
    name: logs/sms-service.log
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
  instance:
    preferIpAddress: true
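One detail worth checking, since it would explain both the UNKNOWN name and the 8761 default: the whole document above is gated by spring.config.activate.on-profile: local, so if the service starts without the local profile active, none of these properties apply, spring.application.name stays unset, and the Eureka client falls back to its built-in default of http://localhost:8761/eureka/. A minimal sketch of splitting the file so the name and Eureka settings always apply:

# applies to every profile
spring:
  application:
    name: sms-service

eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
  instance:
    preferIpAddress: true

---
# applies only when the 'local' profile is active
spring:
  config:
    activate:
      on-profile: local
  data:
    mongodb:
      host: localhost
      port: 27017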
build.gradle
plugins {
    id 'org.springframework.boot' version '2.7.5'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    id 'java'
}

repositories {
    mavenCentral()
}

ext {
    set('springCloudVersion', "2021.0.4")
    set('testcontainersVersion', "1.17.4")
    mapStructVersion = '1.5.3.Final'
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.flywaydb:flyway-core'
    implementation 'org.springframework.cloud:spring-cloud-starter-config'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
    implementation 'org.springframework.boot:spring-boot-starter-aop'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-validation'
    implementation "org.mapstruct:mapstruct:${mapStructVersion}"
    annotationProcessor "org.mapstruct:mapstruct-processor:${mapStructVersion}"
    compileOnly 'org.projectlombok:lombok'
    runtimeOnly 'org.postgresql:postgresql'
    annotationProcessor 'org.springframework.boot:spring-boot-configuration-processor'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.testcontainers:junit-jupiter'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
.....
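To rule out a dependency-resolution issue after the BOM change, you can check which Eureka client version Gradle actually resolved (assuming the Gradle wrapper is present in the project):

./gradlew dependencyInsight --dependency spring-cloud-starter-netflix-eureka-client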
application.yml in your project
URI_TO_SERVER_CONFIG: http://localhost:8888

spring:
  application:
    name: name_project_your
  cloud:
    config:
      uri: ${URI_TO_SERVER_CONFIG}
  config:
    import: configserver:${URI_TO_SERVER_CONFIG}
  profiles:
    active: microservice, discoveries, name_project_your
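If the config server might not be reachable at startup, Spring Boot 2.4+ also accepts an optional: prefix on the import, so the application can still start without it; a small variant of the line above:

spring:
  config:
    import: optional:configserver:${URI_TO_SERVER_CONFIG}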
Application configs in the Git repository:
application-discoveries.yml
SERVER_PORT_SERVICE_DISCOVERY: 8761
IP_ADDRESS_SERVICE_DISCOVERY: http://localhost

eureka:
  instance:
    instance-id: ${spring.application.name}:${random.uuid}
    # https://docs.spring.io/spring-cloud-netflix/docs/current/reference/html/#changing-the-eureka-instance-id
    lease-renewal-interval-in-seconds: 5
  client:
    service-url:
      defaultZone: ${IP_ADDRESS_SERVICE_DISCOVERY}:${SERVER_PORT_SERVICE_DISCOVERY}/eureka
name-project-your.yml
DB_IP: 127.0.0.1
DB_PORT: 5432
DB_DATABASE_NAME: name_db
DB_URL: jdbc:postgresql://${DB_IP}:${DB_PORT}/${DB_DATABASE_NAME}
DB_USERNAME: postgres
DB_PASSWORD: postgres
DB_SCHEMA: your_schema

spring:
  application:
    name: name-project-your
  cloud:
    config:
      enabled: true
    discovery:
      enabled: true
  datasource:
    url: ${DB_URL}
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
  flyway:
    baseline-on-migrate: true
    locations: classpath:db/migration
    create-schemas: true
    default-schema: ${DB_SCHEMA}
application-microservice.yml
eureka:
  client:
    fetch-registry: true
    register-with-eureka: true

logging:
  level:
    com:
      netflix:
        eureka: debug
        discovery: debug

server:
  port: 0
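For context, the profile names in spring.profiles.active map to the files the config server reads from the Git repository, so the layout assumed by this setup looks like:

config-repo/
    application-microservice.yml
    application-discoveries.yml
    name-project-your.yml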
I'm trying to use the Schema Registry Transfer SMT plugin, but I get the following error:
PUT /connectors/kafka-connector-sb-16-sb-16/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 2 error(s):
Invalid value cricket.jmoore.kafka.connect.transforms.SchemaRegistryTransfer for configuration transforms.AvroSchemaTransfer.type: Class cricket.jmoore.kafka.connect.transforms.SchemaRegistryTransfer could not be found.
Invalid value null for configuration transforms.AvroSchemaTransfer.type: Not a Transformation
You can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`
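For what it's worth, that validate endpoint can be called directly against the Connect REST API (a sketch; the service host and port 8083 are assumptions for a Strimzi-managed Connect service):

curl -s -X PUT -H "Content-Type: application/json" \
  http://my-connect-cluster-connect-api:8083/connector-plugins/org.apache.kafka.connect.mirror.MirrorSourceConnector/config/validate \
  -d '{
        "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
        "transforms": "AvroSchemaTransfer",
        "transforms.AvroSchemaTransfer.type": "cricket.jmoore.kafka.connect.transforms.SchemaRegistryTransfer"
      }'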
I use the Strimzi operator to manage my Kafka objects in Kubernetes.
Kafka Connect config:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: {{.Release.Name}}-{{.Values.ENVNAME}}
  annotations:
    strimzi.io/use-connector-resources: "true"
  labels:
    app: {{.Release.Name}}
    # owner: {{ .Values.owner | default "NotSet" }}
    # branch: {{ .Values.branch | default "NotSet" | trunc 56 | trimSuffix "-" }}
spec:
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "DEBUG"
  template:
    pod:
      imagePullSecrets:
        - name: wecapacrdev001-azurecr-io-pull-secret
      metadata:
        annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: "9404"
          prometheus.io/scrape: "true"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
  authentication:
    type: scram-sha-512
    username: {{.Values.ENVNAME}}
    passwordSecret:
      secretName: {{.Values.ENVNAME}}
      password: password
  bootstrapServers: {{.Values.ENVNAME}}-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: {{.Values.ENVNAME}}-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: {{.Values.ENVNAME}}-connect-cluster
    offset.storage.topic: {{.Values.ENVNAME}}-connect-cluster-offsets
    config.storage.topic: {{.Values.ENVNAME}}-connect-cluster-configs
    status.storage.topic: {{.Values.ENVNAME}}-connect-cluster-status
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
    key.converter: org.apache.kafka.connect.converters.ByteArrayConverter
    value.converter: org.apache.kafka.connect.converters.ByteArrayConverter
  image: capcr.azurecr.io/devops/kafka:0.27.1-kafka-2.8.0-arm64-v2.7
  jvmOptions:
    -Xms: 3g
    -Xmx: 3g
  livenessProbe:
    initialDelaySeconds: 30
    timeoutSeconds: 15
  metricsConfig:
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        key: metrics-config.yml
        name: {{.Release.Name}}-{{.Values.ENVNAME}}
  readinessProbe:
    initialDelaySeconds: 300
    timeoutSeconds: 15
  replicas: 3
  resources:
    limits:
      cpu: 1000m
      memory: 4Gi
    requests:
      cpu: 100m
      memory: 3Gi
Connector config:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: {{.Release.Name}}-{{.Values.ENVNAME}}
  labels:
    strimzi.io/cluster: kafka-connect-cluster-{{.Values.ENVNAME}}
    app: {{.Release.Name}}
    # owner: {{ .Values.owner | default "NotSet" }}
    # branch: {{ .Values.branch | default "NotSet" | trunc 56 | trimSuffix "-" }}
spec:
  class: org.apache.kafka.connect.mirror.MirrorSourceConnector
  config:
    offset-syncs.topic.replication.factor: "3"
    refresh.topics.interval.seconds: "60"
    replication.factor: "3"
    key.converter: org.apache.kafka.connect.converters.ByteArrayConverter
    value.converter: org.apache.kafka.connect.converters.ByteArrayConverter
    source.cluster.alias: SPLITTED
    source.cluster.auto.commit.interval.ms: "3000"
    source.cluster.auto.offset.reset: earliest
    source.cluster.bootstrap.servers: {{.Values.ENVNAME}}-kafka-bootstrap:9093
    source.cluster.fetch.max.bytes: "60502835"
    source.cluster.group.id: {{.Values.ENVNAME}}-mirroring
    source.cluster.max.poll.records: "100"
    source.cluster.max.request.size: "60502835"
    source.cluster.offset.storage.topic: {{.Values.ENVNAME}}-connect-cluster-offsets
    source.cluster.config.storage.topic: {{.Values.ENVNAME}}-connect-cluster-configs
    source.cluster.status.storage.topic: {{.Values.ENVNAME}}-connect-cluster-status
    source.cluster.producer.compression.type: gzip
    source.cluster.replica.fetch.max.bytes: "60502835"
    source.cluster.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule
      required username="{{.Values.ENVNAME}}" password="{{.Values.kafka.password}}";
    source.cluster.sasl.mechanism: SCRAM-SHA-512
    source.cluster.security.protocol: SASL_SSL
    source.cluster.ssl.keystore.location: /opt/kafka/keystore/kafka_keystore.p12
    source.cluster.ssl.keystore.password: password
    source.cluster.ssl.keystore.type: PKCS12
    source.cluster.ssl.truststore.location: /opt/kafka/keystore/kafka_keystore.p12
    source.cluster.ssl.truststore.password: password
    source.cluster.ssl.truststore.type: PKCS12
    sync.topic.acls.enabled: "false"
    target.cluster.alias: AGGREGATED
    target.cluster.auto.commit.interval.ms: "3000"
    target.cluster.auto.offset.reset: earliest
    target.cluster.bootstrap.servers: {{.Values.ENVNAME}}-kafka-bootstrap:9093
    target.cluster.fetch.max.bytes: "60502835"
    target.cluster.group.id: {{.Values.ENVNAME}}-test
    target.cluster.max.poll.records: "100"
    target.cluster.max.request.size: "60502835"
    target.cluster.offset.storage.topic: {{.Values.ENVNAME}}-connect-cluster-offsets
    target.cluster.config.storage.topic: {{.Values.ENVNAME}}-connect-cluster-configs
    target.cluster.status.storage.topic: {{.Values.ENVNAME}}-connect-cluster-status
    target.cluster.producer.compression.type: gzip
    target.cluster.replica.fetch.max.bytes: "60502835"
    target.cluster.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule
      required username="{{.Values.ENVNAME}}" password="{{.Values.kafka.password}}";
    target.cluster.sasl.mechanism: SCRAM-SHA-512
    target.cluster.security.protocol: SASL_SSL
    target.cluster.ssl.keystore.location: /opt/kafka/keystore/kafka_keystore.p12
    target.cluster.ssl.keystore.password: password
    target.cluster.ssl.keystore.type: PKCS12
    target.cluster.ssl.truststore.location: /opt/kafka/keystore/kafka_keystore.p12
    target.cluster.ssl.truststore.password: password
    target.cluster.ssl.truststore.type: PKCS12
    tasks.max: "4"
    topics: ^topic-to-be-replicated$
    topics.blacklist: ""
    transforms: AvroSchemaTransfer
    transforms.AvroSchemaTransfer.dest.schema.registry.url: http://schemaregistry-{{.Values.ENVNAME}}-cp-schema-registry:8081
    transforms.AvroSchemaTransfer.src.schema.registry.url: http://schemaregistry-{{.Values.ENVNAME}}-cp-schema-registry:8081
    transforms.AvroSchemaTransfer.transfer.message.keys: "false"
    transforms.AvroSchemaTransfer.type: cricket.jmoore.kafka.connect.transforms.SchemaRegistryTransfer
    transforms.TopicRename.regex: (.*)
    transforms.TopicRename.replacement: replica.$1
    transforms.TopicRename.type: org.apache.kafka.connect.transforms.RegexRouter
  tasksMax: 4
I built the plugin and the Docker image as described here:
ARG STRIMZI_BASE_IMAGE=0.29.0-kafka-3.1.1-arm64
FROM quay.io/strimzi/kafka:$STRIMZI_BASE_IMAGE
USER root:root
COPY schema-registry-transfer-smt/target/schema-registry-transfer-smt-0.2.1.jar /opt/kafka/plugins/
USER 1001
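One thing that sometimes matters for Connect plugin discovery: each plugin generally lives in its own subdirectory under the plugin path, so that the plugin jar and its dependencies are isolated and picked up together. A hedged variant of the Dockerfile above:

ARG STRIMZI_BASE_IMAGE=0.29.0-kafka-3.1.1-arm64
FROM quay.io/strimzi/kafka:$STRIMZI_BASE_IMAGE
USER root:root
# Give the SMT its own directory under the plugin path; a fat/uber jar
# (bundling the schema-registry client) would also go here.
COPY schema-registry-transfer-smt/target/schema-registry-transfer-smt-0.2.1.jar \
     /opt/kafka/plugins/schema-registry-transfer-smt/
USER 1001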
I can see in the logs that the plugin is loaded:
INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/kafka/plugins/schema-registry-transfer-smt-0.2.1.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
INFO Loading plugin from: /opt/kafka/plugins/schema-registry-transfer-smt-0.2.1.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
DEBUG Loading plugin urls: [file:/opt/kafka/plugins/schema-registry-transfer-smt-0.2.1.jar] (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
However, when I execute kubectl describe KafkaConnect ..., I get the following status, which probably means the plugin is not loaded.
Status:
  Conditions:
    Last Transition Time:  2022-10-31T07:22:43.749290Z
    Status:                True
    Type:                  Ready
  Connector Plugins:
    Class:    org.apache.kafka.connect.file.FileStreamSinkConnector
    Type:     sink
    Version:  2.8.0
    Class:    org.apache.kafka.connect.file.FileStreamSourceConnector
    Type:     source
    Version:  2.8.0
    Class:    org.apache.kafka.connect.mirror.MirrorCheckpointConnector
    Type:     source
    Version:  1
    Class:    org.apache.kafka.connect.mirror.MirrorHeartbeatConnector
    Type:     source
    Version:  1
    Class:    org.apache.kafka.connect.mirror.MirrorSourceConnector
    Type:     source
    Version:  1
The log prints the classpath (jvm.classpath = ...), and it does not include schema-registry-transfer-smt-0.2.1.jar.
This looks like a configuration error. I have tried creating and using a fat jar, compiling with Java 11, changing the scope of the provided dependencies, and more. The repository's last commit was in 2019. Does it make sense to bump all the dependency versions? The error message doesn't suggest that.
I am trying to configure the Integration Response's Output passthrough as No. I have configured the parameter PassthroughBehavior as NEVER; however, it doesn't change the output as desired.
Here is my method resource (in YAML):
apiresourcemsactualizartrxtransaccionesv1actualizarTransaccionesoptions:
  Type: 'AWS::ApiGateway::Method'
  Properties:
    RestApiId: !Ref apigateway
    ResourceId: !Ref apiresourcemsactualizartrxtransaccionesv1actualizarTransacciones
    HttpMethod: OPTIONS
    AuthorizationType: NONE
    RequestParameters:
      method.request.path.proxy: true
    Integration:
      PassthroughBehavior: NEVER
      Type: MOCK
      IntegrationHttpMethod: OPTIONS
      IntegrationResponses:
        - StatusCode: 200
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'OPTIONS,PATCH'"
            method.response.header.Access-Control-Allow-Origin: "'*'"
    MethodResponses:
      - StatusCode: 200
        ResponseModels:
          application/json: Empty
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: true
          method.response.header.Access-Control-Allow-Methods: true
          method.response.header.Access-Control-Allow-Origin: true
  DependsOn: apiresourcemsactualizartrxtransaccionesv1actualizarTransacciones
Thank you very much in advance for the help.
My understanding is that PassthroughBehavior refers to the request, not the output.
I think your Output passthrough is showing as Yes because your response has no mapping template. If you set up a model/transform, that should change Output passthrough to No.
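For example (a sketch based on your snippet; the empty JSON object template is an assumption about the desired response body), adding ResponseTemplates to the integration response defines a transform and should flip Output passthrough to No:

Integration:
  PassthroughBehavior: NEVER
  Type: MOCK
  IntegrationResponses:
    - StatusCode: 200
      # An explicit mapping template, so the response is no longer passed through
      ResponseTemplates:
        application/json: "{}"
      ResponseParameters:
        method.response.header.Access-Control-Allow-Origin: "'*'"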
I'm getting this error:
{"level":"fatal","msg":"failed to load config","error":"failed to unmarshal YAML config into config struct: 1 error(s) decoding:\n\n* '' has invalid keys: connect"}
with the following YAML:
kafka:
  brokers:
    - 192.168.12.12:9092
  schemaRegistry:
    enabled: true
    urls:
      - "http://192.168.12.12:8081"

connect:
  enabled: true
  clusters:
    - name: xy
      url: "http://192.168.12.12:8091"
      tls:
        enabled: false
      username: 1
      password: 1
    - name: xya
      url: http://192.168.12.12:8092
Try downgrading your image back to v1.5.0.
It seems a mistake was recently introduced on master.
You can find all the images here.
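If you pin the tag explicitly, the compose entry would look roughly like this (the quay.io/cloudhut/kowl image path and config mount are assumptions based on where those releases are published):

kowl:
  image: quay.io/cloudhut/kowl:v1.5.0   # pinned instead of latest/master
  volumes:
    - ./config.yaml:/etc/kowl/config.yaml
  entrypoint: ./kowl --config.filepath=/etc/kowl/config.yaml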
I have set up a single-broker instance of Kafka along with ZooKeeper, Kafka tools, Schema Registry, and Control Center. The setup is done using Docker Compose with the Confluent-provided images. Here is what the docker-compose looks like:
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  ports:
    - 2181:2181

broker:
  image: confluentinc/cp-server:latest
  depends_on:
    - zookeeper
  ports:
    - 9092:9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
    CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
    CONFLUENT_METRICS_ENABLE: "true"
    CONFLUENT_SUPPORT_CUSTOMER_ID: "anonymous"

kafka-tools:
  image: confluentinc/cp-kafka:latest
  hostname: kafka-tools
  container_name: kafka-tools
  command: ["tail", "-f", "/dev/null"]
  network_mode: "host"

schema-registry:
  image: confluentinc/cp-schema-registry:5.5.0
  hostname: schema-registry
  container_name: schema-registry
  depends_on:
    - zookeeper
    - broker
  ports:
    - "8081:8081"
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"

control-center:
  image: confluentinc/cp-enterprise-control-center:latest
  hostname: control-center
  container_name: control-center
  depends_on:
    - zookeeper
    - broker
    - schema-registry
  ports:
    - "9021:9021"
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
    CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
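One aside on the schema-registry service (a hedged note that may be relevant to replication-factor errors on a single broker): Schema Registry creates its internal _schemas topic with a default replication factor of 3, and SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL is the older ZooKeeper-based setting. A sketch of a single-broker-friendly variant:

schema-registry:
  image: confluentinc/cp-schema-registry:5.5.0
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    # ZooKeeper-less alternative to KAFKASTORE_CONNECTION_URL
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "broker:29092"
    # The internal _schemas topic defaults to replication factor 3
    SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR: 1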
I'm trying to use the confluent-kafka Python client to create the Producer and Consumer applications. The following is the code of my Kafka producer:
from datetime import datetime
import os
import json
from uuid import uuid4

from confluent_kafka import SerializingProducer
from confluent_kafka.serialization import StringSerializer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.json_schema import JSONSerializer


class BotExecutionProducer(object):
    """
    Class representing the bot execution stats.
    """

    def __init__(self, ticketId, accountId, executionTime, status):
        self.ticketId = ticketId
        self.accountId = accountId
        self.executionTime = executionTime
        self.timestamp = str(datetime.now())
        self.status = status

    def botexecution_to_dict(self, botexecution, ctx):
        """
        Returns a dict representation of the KafkaBotExecution instance.
        botexecution : KafkaBotExecution instance
        ctx : SerializationContext
        """
        return dict(ticketId=self.ticketId,
                    accountId=self.accountId,
                    executionTime=self.executionTime,
                    timestamp=self.timestamp,
                    status=self.status)

    def delivery_report(self, err, msg):
        """
        Reports the failure or success of a message delivery.
        """
        if err is not None:
            print("Delivery failed for User record {}: {}".format(msg.key(), err))
            return
        print('User record {} successfully produced to {} [{}] at offset {}'.format(
            msg.key(), msg.topic(), msg.partition(), msg.offset()))

    def send(self):
        """
        Connects to the Kafka broker, validates and sends the message.
        """
        topic = "bots.execution"
        schema_str = """
        {
          "$schema": "http://json-schema.org/draft-07/schema#",
          "title": "BotExecutions",
          "description": "BotExecution Stats",
          "type": "object",
          "properties": {
            "ticketId": {
              "description": "Ticket ID",
              "type": "string"
            },
            "accountId": {
              "description": "Customer's AccountID",
              "type": "string"
            },
            "executionTime": {
              "description": "Bot Execution time in seconds",
              "type": "number"
            },
            "timestamp": {
              "description": "Timestamp",
              "type": "string"
            },
            "status": {
              "description": "Execution Status",
              "type": "string"
            }
          },
          "required": ["ticketId", "accountId", "executionTime", "timestamp", "status"]
        }
        """
        schema_registry_conf = {'url': 'http://localhost:8081'}
        schema_registry_client = SchemaRegistryClient(schema_registry_conf)
        json_serializer = JSONSerializer(schema_str, schema_registry_client,
                                         self.botexecution_to_dict)
        producer_conf = {
            'bootstrap.servers': "localhost:9092",
            'key.serializer': StringSerializer('utf_8'),
            'value.serializer': json_serializer,
            'acks': 0,
        }
        producer = SerializingProducer(producer_conf)
        print(f'Producing records to topic {topic}')
        producer.poll(0.0)
        try:
            print(self)
            # Note: partition=1 requires the topic to have at least two partitions.
            producer.produce(topic=topic, key=str(uuid4()), partition=1,
                             value=self, on_delivery=self.delivery_report)
        except ValueError:
            print("Invalid input, discarding record...")
        # Flush so buffered messages are actually delivered before returning.
        producer.flush()
Now when I execute the code, it should create a Kafka topic and push the JSON data to that topic, but that does not seem to work: it keeps showing an error that a replication factor of 3 has been specified when there is only one broker. Is there a way to define the replication factor in the above code? The Kafka broker works perfectly when I do the same using the Kafka CLI.
Is there something that I am missing?
I'm not sure of the full difference between the cp-server and cp-kafka images, but you can add a variable for the default replication factor of automatically created topics:
KAFKA_DEFAULT_REPLICATION_FACTOR: 1
If that doesn't work, import and use AdminClient.
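A minimal sketch of that AdminClient route (the topic name and broker address come from your snippets; the two partitions are an assumption so that the producer's partition=1 is valid):

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Create the topic explicitly, with a replication factor a single broker can
# satisfy and two partitions so that producing to partition=1 works.
futures = admin.create_topics(
    [NewTopic("bots.execution", num_partitions=2, replication_factor=1)]
)

for topic, future in futures.items():
    try:
        future.result()  # raises on failure
        print(f"Topic {topic} created")
    except Exception as e:
        print(f"Failed to create topic {topic}: {e}")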