Remote access to Kafka - confluent-platform

I'm not able to access Kafka, which is on a remote server, from my local machine.
I tried setting the parameter advertised.listeners=PLAINTEXT://ip:port in Kafka's server.properties, where ip is the public IP of the remote machine running Kafka, but I still can't connect. What should I do next? I'm using Confluent Kafka version 3.3.0.
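A minimal sketch of the two settings involved, assuming a hypothetical public IP of 203.0.113.10: listeners controls where the broker binds, while advertised.listeners is the address it hands back to clients in metadata responses.

# server.properties (sketch; 203.0.113.10 is a placeholder)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://203.0.113.10:9092

Even when the initial TCP connection succeeds, clients will fail if the advertised address is not reachable from their network, which is why both settings matter here.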

Related

unable to connect to kafka broker (via zookeeper) using Conduktor client

Able to connect successfully to a local Kafka broker/cluster running locally (dockerized) using Conduktor, but when trying to connect to a Kafka cluster running on a Unix VM, I get the error below.
Error:
"The broker [...] is reachable but Kafka can't connect. Ensure you have access to the advertised listeners of the brokers and the proper authorization"
Appreciate any assistance.
running locally (dockerized)
When running in Docker, you need to ensure that the ports are accessible from outside of your container. To verify this, try doing a telnet <ip> <port> and check whether you are able to connect.
Since the error message says the broker is reachable, I suppose you are already able to telnet to it successfully.
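For example, a successful check would look roughly like this (1.2.3.4 is a placeholder):

$ telnet 1.2.3.4 9092
Trying 1.2.3.4...
Connected to 1.2.3.4.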
Next, check the broker config called advertised.listeners. Here you need to specify the IP:port combination, where the IP is the one you will be giving to your client program, i.e. Conduktor.
An example for that would be
advertised.listeners=PLAINTEXT://1.2.3.4:9092
and then restart your broker and reconnect. If you are using SSL, then you need to provide some extra configuration. See Configuring Kafka brokers for more.
Try adding the Kafka server to /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts (Windows) in the form kafka_server_ip kafka_server_name_in_dns (e.g. 10.10.0.1 kafka).
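In other words, using the example values, the hosts file gains a line like:

# /etc/hosts
10.10.0.1   kafka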

Confluent Platform: Connect ignores properties file

I'm using Confluent Platform 5.4.0.
I've modified the server.properties file to set a static IP address for the broker, so I can access it with producers/consumers from remote machines.
When launching the platform with confluent local start, the Connect service starts but then times out because it is unable to find a valid broker; it looks for a broker on localhost/127.0.0.1 even though I've also updated the connect-distributed.properties file with the same broker IP address.
When I launch Connect manually, pointing to the same file, everything works fine.
It seems that the confluent launcher loads the server.properties file but not connect-distributed.properties.
For now I'm launching the Connect service manually, but I'm trying to figure out if there's a way to launch everything with the Confluent CLI.
Thanks!
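For reference, the setting Connect appears to ignore is the broker list in connect-distributed.properties; a sketch with a placeholder address:

# connect-distributed.properties (sketch; 192.168.1.50 is a placeholder broker IP)
bootstrap.servers=192.168.1.50:9092

As a stopgap, Connect can be started directly against that file (roughly connect-distributed /path/to/connect-distributed.properties) while the rest of the stack runs under confluent local start, which is the manual workaround described above.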

How to reach Cloudera Kafka Broker on private network from outside?

I have a cluster inside a VPN which contains a server with a private IP. I'm trying to set up Kafka communication between an external server and my private server. My approach is to set up an iptables rule where a public IP points to my private IP. I also opened ports 9092 and 9093 to make them reachable from outside. Now I am able to connect successfully to my server via the public IP from the external server:
telnet <public_ip> 9092
Connected to <public_ip>
My Kafka broker is part of a Cloudera cluster, and I created it with Cloudera Manager. The configuration is the following:
kafka.properties:
listeners=PLAINTEXT://<private_ip>:9092,SSL://<private_ip>:9093
advertised.listeners=PLAINTEXT://<private_ip>:9092,SSL://<private_ip>:9093
advertised.host.name=<public_ip>
Using this broker configuration, communication works perfectly inside the cluster using either the public_ip or the private_ip of the Kafka broker host.
What I see now is that I have a working broker that can be used via the public_ip, and an external server that is able to reach the public_ip and its required ports. But when I try to connect to the broker from the external server, I get the following error:
NO BROKERS AVAILABLE
There's no more information about the error. On my external server I have the kafka-python package, where I configure the producer as:
"bootstrap_servers": ["<public_ip>:9092"]
on an existing topic of my Kafka broker.
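A minimal, self-contained version of that producer with kafka-python, using placeholder values:

from kafka import KafkaProducer

# Sketch only: <public_ip> and "my-topic" are placeholders.
# kafka-python raises NoBrokersAvailable while constructing the producer
# if the bootstrap connection or the initial metadata fetch fails.
producer = KafkaProducer(bootstrap_servers=["<public_ip>:9092"])
producer.send("my-topic", b"hello from outside the VPN")
producer.flush()

Note that a successful telnet only proves the bootstrap address accepts TCP connections; the addresses the broker advertises in its metadata must also be reachable from the client, which is what the answer below addresses.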
Specifications:
Private host:
Cloudera: CDH 5.12.0
Kafka: 2.2.0-1.2.2.0
ZooKeeper: 3.4.5
External host:
kafka Python package: kafka-python==1.4.2
The problem is very similar to this post, but in that case a forwarded port with a public IP is used. Is there any possibility of doing it with iptables? Has anyone managed to do it on a Cloudera cluster?
Thank you in advance.
The question isn't specific to Cloudera or Python, and I don't think Cloudera Manager has a setting that will set this up for you.
advertised.listeners will have to be a publicly resolvable address that clients can use to reach each broker individually (e.g. two brokers cannot share the same listener setting and both be reached through one port forward from the public address to the internal address).
Your setup is very similar to Kafka running in Docker or on cloud providers such as AWS, in that you're interacting over two networks, so refer to this blog for more information.
Also, unless you set up other firewall rules to prevent random access, don't expose brokers over the PLAINTEXT protocol.
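A sketch of the two-listener pattern that blog describes, adapted to this question (assumes a broker version with named-listener support, 0.10.2+; the EXTERNAL listener uses a new port, 9094, which the firewall would also need to forward):

# kafka.properties (sketch; <private_ip>/<public_ip> as in the question)
listeners=INTERNAL://<private_ip>:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://<private_ip>:9092,EXTERNAL://<public_ip>:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

Cluster-internal clients keep using <private_ip>:9092, while external clients bootstrap against <public_ip>:9094 and receive metadata pointing at an address they can actually reach.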

Having Kafka connected with ip as well as service name - Openshift

In our OpenShift ecosystem, we have a Kafka instance sourced from wurstmeister/kafka. As of now I am able to have Kafka accessible within the OpenShift system using the below parameters:
KAFKA_LISTENERS=PLAINTEXT://:9092
KAFKA_ADVERTISED_HOST_NAME=kafka_service_name
And of course, the parameters for the port and ZooKeeper are there.
I am able to access Kafka from the pods within the OpenShift system, but I am unable to access the Kafka service from the host machine, even though I can reach the Kafka pod by its IP and telnet to it using telnet Pod_IP 9092.
When I try to connect using the Kafka producer from the host machine, I get the below error:
[2017-08-07 07:45:13,925] WARN Error while fetching metadata with correlation id 2 : {tls21=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
And when I try to connect with a Kafka consumer from the host machine using the IP, the output is blank.
Note: as of now, it's a single OpenShift server, and the use case is dev testing.
Maybe you want to take a look at this POC for running Kafka on OpenShift?
https://github.com/EnMasseProject/barnabas
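Alternatively, the wurstmeister image maps KAFKA_* environment variables onto broker properties, so the usual two-listener pattern can be sketched like this (assumes a Kafka version with named-listener support and a hypothetical host-reachable address <node_ip> exposed on port 9094, e.g. via a NodePort):

KAFKA_LISTENERS=INSIDE://:9092,OUTSIDE://:9094
KAFKA_ADVERTISED_LISTENERS=INSIDE://kafka_service_name:9092,OUTSIDE://<node_ip>:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE

Pods keep resolving kafka_service_name on 9092, while the host machine connects via <node_ip>:9094.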

How to use Kafka connect to transmit data to Kafka broker in another machine?

I'm trying to use Kafka Connect in Confluent Platform 3.2.1 and everything works fine in my local environment. Then I encountered this problem when I tried to use a Kafka source connector to send data to another machine.
I deploy a Kafka JDBC source connector on machine A, trying to capture database A. Then I deploy a Kafka broker B (along with ZooKeeper and the Schema Registry) on machine B. The source connector cannot send data to broker B and throws the following exception:
[2017-05-19 16:37:22,709] ERROR Failed to commit offsets for WorkerSourceTask{id=test-multi-0} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
[2017-05-19 16:38:27,711] ERROR Failed to flush WorkerSourceTask{id=test-multi-0}, timed out while waiting for producer to flush outstanding 3 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:304)
I tried configuring the server.properties of broker B like this:
listeners=PLAINTEXT://:9092
and left the advertised.listeners setting commented out.
Then I use
bootstrap.servers=192.168.19.234:9092
in my source connector, where 192.168.19.234 is the IP of machine B. Machines A and B are in the same subnet.
I suspect this has something to do with my server.properties.
How should I configure it to get things done? Thanks in advance.
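A sketch of the likely fix, reusing the machine B address from the question: with advertised.listeners commented out and no host in listeners, the broker advertises its canonical hostname, which machine A may not be able to resolve, so spell the address out explicitly.

# server.properties on broker B (sketch)
listeners=PLAINTEXT://:9092
# Advertise the address machine A actually uses instead of the default
# canonical hostname of machine B:
advertised.listeners=PLAINTEXT://192.168.19.234:9092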