Connecting REST Proxy to Confluent Cloud trying to connect to zookeeper - apache-kafka

I'm trying to connect the Kafka REST Proxy to Confluent Cloud. I start it with:
kafka-rest-start ccloud-kafka-rest.properties
Here is my properties file, ccloud-kafka-rest.properties:
client.ssl.endpoint.identification.algorithm=https
client.sasl.mechanism=PLAIN
consumer.request.timeout.ms=20000
bootstrap.servers=***-****.us-east-1.aws.confluent.cloud:9092
consumer.retry.backoff.ms=500
client.security.protocol=SASL_SSL
id=kafka-rest-with-ccloud
producer.acks=1
admin.request.timeout.ms=50000
client.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="***" password="***";
After I run kafka-rest-start, it tries to connect to ZooKeeper (zookeeper.connect = localhost:2181).
ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server 'localhost:2181' with timeout of 30000 ms
I also tried setting the ZooKeeper host to an empty value, but that didn't help.
How can I turn off the connection to ZooKeeper?

The default value for the zookeeper.connect property is localhost:2181. See here.
Note that Confluent Cloud does not expose a ZooKeeper endpoint, so you cannot point zookeeper.connect at the cloud cluster; you need a REST Proxy setup that talks to the brokers directly.
See the instructions for connecting a REST Proxy to Confluent Cloud here.
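For reference, a minimal sketch of such a properties file, with placeholder values for the cluster address and the API key/secret (the property names follow the question's file; verify them against the linked docs for your REST Proxy version):

```properties
# ccloud-kafka-rest.properties (sketch; all values are placeholders)
id=kafka-rest-with-ccloud
bootstrap.servers=<CLUSTER>.confluent.cloud:9092

# Connection settings for the embedded clients (note the client. prefix)
client.security.protocol=SASL_SSL
client.sasl.mechanism=PLAIN
client.ssl.endpoint.identification.algorithm=https
client.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
```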

Related

Apache Kafka server start binds at path /brokers/ids/0 with address deploy.static.akamaitechnologies.com:9092 instead of localhost

I am trying to run Apache Kafka locally. ZooKeeper is running fine and binds to port 2181 on localhost. When I start the Kafka server, it fails with the following error. What could be the reason?
WARN [Controller id=0, targetBrokerId=0] Connection to node 0
(a23-202-231-169.deploy.static.akamaitechnologies.com/23.202.231.169:9092)
could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
By default in the Kafka properties, listeners=PLAINTEXT://:9092, which will use the broker hostname. Change this to listeners=PLAINTEXT://127.0.0.1:9092 if you only want local connections
For client connections, you'll also need to configure advertised.listeners, which controls the IP/hostname returned to clients
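Under that assumption, a minimal server.properties sketch for local-only use (the values are illustrative):

```properties
# server.properties (sketch): bind and advertise on localhost only
listeners=PLAINTEXT://127.0.0.1:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092
```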

Kafka server won't run: "will not attempt to authenticate using SASL (unknown error)"

I have a problem with my Kafka server.
I am running Apache Druid on port 2080 and Kafka's ZooKeeper on port 2181, to avoid a ZooKeeper clash between Druid and Kafka.
Druid runs correctly, and so does Kafka's ZooKeeper.
But when I try to start the Kafka server with:
./bin/kafka-server-start.sh config/server.properties
with this configuration:
zookeeper.connect=localhost:9092
I get an error saying it will not attempt to authenticate using SASL (unknown error), as in the screenshot below.
[screenshot: error when trying to connect to the Kafka server]
It used to work, but I don't know why it no longer does. Can anyone help me solve this issue?
I assume you are referring to the zookeeper.connect property in the server.properties Kafka configuration file.
zookeeper.connect should point to the ZooKeeper connection string, or quorum. From your example, I guess the zookeeper.connect property is pointing at the Kafka server port, 9092, itself. Since there is no ZooKeeper server running at the given address, localhost:9092, the ZooKeeper client fails to connect and throws the error below:
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:9092. Will not attempt to authenticate using SASL (unknown error)
The ZooKeeper server port, which is configured via the clientPort setting, can be found in the zookeeper.properties configuration file.
Please try the following setting:
zookeeper.connect=localhost:<clientPort, 2181 by default>
The ZooKeeper connection string is a comma-separated list of host:port pairs, each corresponding to a ZooKeeper server.
Examples:
zookeeper.connect=localhost:2181
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
zookeeper.connect=127.0.0.1:2181
You can also append an optional chroot string to the connection string to specify the root directory for all Kafka znodes, e.g. zookeeper.connect=localhost:2181/kafka.

unable to connect to kafka broker (via zookeeper) using Conduktor client

I am able to connect successfully with Conduktor to a local Kafka broker/cluster running locally (dockerized), but when trying to connect to a Kafka cluster running on a Unix VM, I get the error below.
Error:
"The broker [...] is reachable but Kafka can't connect. Ensure you have access to the advertised listeners of the brokers and the proper authorization"
Appreciate any assistance.
running locally (dockerized)
When running in Docker, you need to ensure that the ports are accessible from outside of your container. To verify this, try telnet <ip> <port> and check whether you are able to connect.
Since the error message says the broker is reachable, I suppose you are able to telnet to the broker successfully.
Next, check the broker config called advertised.listeners. There you need to specify the IP:port combination that you will use in your client program, i.e. Conduktor.
An example for that would be
advertised.listeners=PLAINTEXT://1.2.3.4:9092
and then restart your broker and reconnect. If you are using SSL, you need to provide some extra configuration. See Configuring Kafka brokers for more.
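The telnet reachability check can also be scripted; here is a minimal Python sketch (the host and port you pass in are placeholders for your broker's advertised address):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False
```

Note that a True result only shows the port is reachable, just like telnet; it does not prove the Kafka handshake will succeed, which is exactly the situation described in the error message.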
Alternatively, try adding the Kafka server to /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts (Windows), in the form kafka_server_ip kafka_server_name_in_dns (e.g. 10.10.0.1 kafka).

connect self managed ksqlDB to Confluent Cloud | managed ksql server

This is a question about how to connect a self-managed ksqlDB / KSQL server to Confluent Cloud.
I have a Confluent Basic cluster running at https://confluent.cloud/ in GCP (asia-south).
I want to connect a self-managed ksqlDB to this cluster and to the Confluent Cloud Control Center.
Here are the configurations, which I copied from Confluent Cloud and put into the self-managed ksqlDB.
This self-managed ksqlDB runs on a single-machine GCP compute unit.
The same configuration is present in the following properties file:
/home/confluent/confluent-5.5.1/etc/ksqldb/ksql-server.properties
and the KSQL server was started using the following command:
nohup /home/confluent/confluent/confluent-5.5.1/bin/ksql-server-start /home/confluent/confluent/confluent-5.5.1/etc/ksqldb/ksql-server.properties &
Command line:
/home/confluent/confluent-5.5.1/bin/ksql
A couple of things were noted in the ksql terminal:
A STREAM was created successfully in the terminal, but it is not visible in the cloud.
The command "show streams;" does list the specific STREAM.
"print {STREAM};" does not show any data, even while data is being pushed to the STREAM.
I have not set any host entries.
On "show connectors;" the following exception is generated in the ksql terminal:
ksql> show connectors;
io.confluent.ksql.util.KsqlServerException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8083 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
localhost:8083 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed:
Connection refused (Connection refused)
Caused by: Could not connect to the server.
Caused by: Could not connect to the server.
I expected my ksqlDB to show up in Confluent Cloud, but I am unable to see it.
I don't know what additional configuration is required so that my self-managed KSQL server works and shows up in Confluent Cloud.
It seems that you are confusing some terminology here: self-managed != managed.
Managed ksqlDB is the service available in your Confluent Cloud console (last image). There you add applications, which spin up a ksqlDB cluster for your queries.
A self-managed ksqlDB instance running in GCP can be connected to Confluent Cloud, but it is not going to appear in the list of ksqlDB applications, as you have to operate it yourself.
Docs:
Self-managed KSQLDB with Confluent Cloud: https://docs.confluent.io/current/cloud/cp-component/ksql-cloud-config.html
Confluent Cloud (managed) KSQLDB: https://docs.confluent.io/current/quickstart/cloud-quickstart/ksql.html#cloud-ksql-create-application
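For the self-managed case, the first doc above describes pointing the server at the cloud cluster; as a sketch, the relevant ksql-server.properties settings look roughly like the following (the placeholder values and the replica settings are assumptions to verify against that doc):

```properties
# ksql-server.properties (sketch): connect a self-managed ksqlDB server
# to a Confluent Cloud cluster. <CLUSTER>, <API_KEY>, <API_SECRET> are placeholders.
bootstrap.servers=<CLUSTER>.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";

# Internal/command topics should match the cloud cluster's replication.
ksql.internal.topic.replicas=3
ksql.streams.replication.factor=3
```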

Having Kafka connected with ip as well as service name - Openshift

In our OpenShift ecosystem, we have a Kafka instance sourced from wurstmeister/kafka. As of now, Kafka is accessible within the OpenShift system using the parameters below:
KAFKA_LISTENERS=PLAINTEXT://:9092
KAFKA_ADVERTISED_HOST_NAME=kafka_service_name
And of course, the parameters for the port and ZooKeeper are there.
I am able to access Kafka from the pods within the OpenShift system, but I am unable to access the Kafka service from the host machine, even though I can reach the Kafka pod by its IP and can telnet to it with telnet Pod_IP 9092.
When I try to connect with the Kafka producer from the host machine, I get the error below:
2017-08-07 07:45:13,925] WARN Error while fetching metadata with
correlation id 2 : {tls21=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
And when I try to connect with a Kafka consumer from the host machine using the IP, the output is blank.
Note: as of now, it is a single OpenShift server, and the use case is dev testing.
Maybe you want to take a look at this POC for running Kafka on OpenShift:
https://github.com/EnMasseProject/barnabas
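To serve both in-cluster and host clients, one common pattern with the wurstmeister image is to define two listeners, one advertised with the service name and one with an externally reachable address, replacing KAFKA_ADVERTISED_HOST_NAME. A sketch (listener names, the external port, and <host_reachable_ip> are assumptions):

```properties
# Container environment (sketch): one internal and one external listener
KAFKA_LISTENERS=INTERNAL://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka_service_name:9092,EXTERNAL://<host_reachable_ip>:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
```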