Kafka - increase partition count of existing topic through Confluent REST API

I need to add partitions to an existing Kafka topic.
I'm aware that it is possible to use the /bin/kafka-topics.sh script to achieve this, but I would prefer to do it through the Confluent REST API.
As far as I can see there is no documented endpoint in the API reference, but I wonder if someone else here has been able to make this work.
Edit: As it does seem to be impossible to use the REST API here, I wonder what the best practice is for adding partitions to an existing topic in a containerized setup, e.g. if there is a custom partitioning scheme that maps customer IDs to specific partitions. In this case the app container would need to adjust the partition count of the Kafka container.

The Confluent REST Proxy has no such endpoint for topic update administration.
You would need to use the shell script or the corresponding AdminClient class that the shell script uses.
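For reference, a minimal Java sketch of the AdminClient route; the broker address, topic name, and target partition count are illustrative placeholders:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Raise the total partition count of "my-topic" to 6;
            // the count can only grow, never shrink
            admin.createPartitions(Map.of("my-topic", NewPartitions.increaseTo(6))).all().get();
        }
    }
}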

This is the solution I ended up with:
Create a small HTTP service that is deployed within the Kafka Docker image
The HTTP service accepts requests to increase the partition count and forwards them to the Kafka admin script (bin/kafka-topics.sh)
Something similar could have been achieved by using the AdminClient NewPartitions API in the Java Kafka library. That solution has the advantage that the Kafka Docker image does not have to be changed, because the AdminClient can connect over the network from another container.
For a production setup the AdminClient is preferable; I went with the integrated script approach because of the chosen language (Rust).
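For illustration, the shell-out approach looks roughly like this in Java (the actual service is written in Rust; the script path, broker address, topic, and count are placeholders):

import java.io.IOException;

public class PartitionResizer {
    // Invokes the Kafka admin script that ships inside the Kafka image
    static void increasePartitions(String topic, int count) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "/opt/kafka/bin/kafka-topics.sh",       // placeholder script path
                "--bootstrap-server", "localhost:9092", // placeholder broker
                "--alter", "--topic", topic,
                "--partitions", String.valueOf(count))
            .inheritIO()
            .start();
        if (p.waitFor() != 0) {
            throw new IOException("kafka-topics.sh failed for topic " + topic);
        }
    }
}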

Related

Connecting to topics using REST proxy

I am new to Kafka. I have implemented my consumer as a normal Java Spring Boot application. I need to connect to a topic deployed on a remote broker using the Kafka REST proxy.
I am not able to understand how it will function differently if I use the Kafka REST proxy. Where should I change my code to include the REST proxy? Do I need to structure my code completely differently, as I didn't think about the REST proxy during creation?
I may be wrong with the terminologies.
Any help or guidance would be of great help.
The REST proxy would be used with any HTTP client, not a Kafka consumer (so create a WebClient bean rather than a ConsumerFactory, etc.).
You can refer to its documentation for how to consume records over HTTP but, simply put, the code will be completely different up until you parse the data.
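As a hedged sketch of that difference, here is the REST Proxy v2 consumer flow using Java's built-in HttpClient; the proxy address, group, instance, and topic names are assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyConsumer {
    static final String PROXY = "http://localhost:8082"; // assumed REST Proxy address
    static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // 1) Create a consumer instance in group "my-group"
        post("/consumers/my-group",
             "{\"name\":\"inst-1\",\"format\":\"json\",\"auto.offset.reset\":\"earliest\"}");
        // 2) Subscribe the instance to a topic
        post("/consumers/my-group/instances/inst-1/subscription", "{\"topics\":[\"my-topic\"]}");
        // 3) Poll records over plain HTTP; the proxy returns a JSON array
        HttpRequest poll = HttpRequest.newBuilder(
                URI.create(PROXY + "/consumers/my-group/instances/inst-1/records"))
            .header("Accept", "application/vnd.kafka.json.v2+json")
            .GET().build();
        System.out.println(HTTP.send(poll, HttpResponse.BodyHandlers.ofString()).body());
    }

    static void post(String path, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(PROXY + path))
            .header("Content-Type", "application/vnd.kafka.v2+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HTTP.send(req, HttpResponse.BodyHandlers.ofString());
    }
}

Note there is no ConsumerFactory anywhere: polling becomes an HTTP GET, and deserialization happens when you parse the returned JSON.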

Alternative of Confluent REST Proxy

We have some applications which want to communicate with Kafka using REST API calls, both to consume and to produce messages. If we do not want to use the Confluent REST Proxy, what are the options?
One possible alternative is the Strimzi Kafka Bridge (https://github.com/strimzi/strimzi-kafka-bridge).
It's part of the broader Strimzi project for running Kafka on Kubernetes, but it works even when run standalone (when your Kafka cluster is on bare metal).
Of course it's open source and Apache 2.0 licensed.
the reason [not to use it] is monetary
You can use the Confluent REST Proxy with no software/licensing costs.
We are thinking of not buying any additional hardware for this new request and using the existing configuration to meet the requirement. I am mostly interested to know if a consumer/producer can be created to meet this requirement.
You don't need extra hardware.
Pick an existing server with at least 2 GB of available memory, run kafka-rest-start, and see how well it works.
if we can create REST API calls which will be used by other applications to consume data from Kafka and push data to Kafka
That's the main purpose of REST Proxy, yes.
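To make that concrete, here is a minimal sketch of producing a JSON record through the proxy with Java's built-in HttpClient; the proxy address, topic, and payload are assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProducer {
    public static void main(String[] args) throws Exception {
        // One JSON record wrapped in the proxy's envelope format
        String body = "{\"records\":[{\"value\":{\"customerId\":42,\"action\":\"signup\"}}]}";
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:8082/topics/my-topic")) // assumed address and topic
            .header("Content-Type", "application/vnd.kafka.json.v2+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
            .send(req, HttpResponse.BodyHandlers.ofString());
        // The proxy replies with per-record partition and offset assignments
        System.out.println(resp.body());
    }
}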

Ingest Streaming Data to Kafka via HTTP

I am very new to Kafka and streaming data in general. What I am trying to do is ingest data, which is to be sent via HTTP, into Kafka. My research has brought me to the Confluent REST proxy, but I can't get it to work.
What I currently have is Kafka running with a single node and single broker, with kafkamanager, in Docker containers.
Unfortunately I can't run the full Confluent Platform with Docker since I don't have enough memory available on my machine.
In essence my question is: how do I set up a development environment where data is ingested into Kafka through HTTP?
Any help is highly appreciated!
You don't need the "full Confluent Platform" (KSQL, Control Center, and so on included).
Zookeeper, Kafka, the REST proxy, and optionally the Schema Registry should all take at most 4 GB of RAM in total. If you don't even have that, then you'll need to go buy more RAM.
Note that Zookeeper and Kafka do not need to be running on the same machines as the Schema Registry or REST proxy, so if you have multiple machines, then you can save some resources that way as well.
To run one Kafka broker, Zookeeper, and the Schema Registry, 1 GB is usually enough (in dev).
If for some reason you do not want to use the Confluent REST proxy, you can write your own. It's quite straightforward: on each request, parse the incoming JSON, validate the data, construct your message (in Avro?), and produce it to Kafka.
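A minimal sketch of that do-it-yourself idea, using the JDK's built-in HTTP server and a plain String producer; the broker address, port, and topic are placeholders, and real code would validate the JSON before producing:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HttpIngest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ingest", exchange -> {
            // Read the raw request body; parse and validate it here in real code
            String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            producer.send(new ProducerRecord<>("my-topic", body)); // placeholder topic
            exchange.sendResponseHeaders(204, -1); // 204 No Content, empty response
            exchange.close();
        });
        server.start();
    }
}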
In this article, you'll find some configuration for capping the Kafka and Zookeeper heap memory: https://medium.com/@saabeilin/kafka-hands-on-part-i-development-environment-fc1b70955152
Here you can read how to produce/consume messages with Python:
https://medium.com/@saabeilin/kafka-hands-on-part-ii-producing-and-consuming-messages-in-python-44d5416f582e
Hope these help!

Listen to a topic continuously, fetch data, perform some basic cleansing

I need to build a Java-based Kafka streaming application that will listen to a topic X continuously, fetch data, perform some basic cleansing, and write to an Oracle database. The Kafka cluster is outside my domain and I have no ability to deploy any code or configurations in it.
What is the best way to design such a solution? I came across Kafka Streams but was confused as to whether it can be used for 'Topic > Process > Topic' scenarios.
I came across Kafka Streams but was confused as to whether it can be used for 'Topic > Process > Topic' scenarios.
Absolutely.
For example, excluding the "process" step, it's two lines outside of the configuration setup.
final StreamsBuilder builder = new StreamsBuilder();
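// Pipe every record from the input topic straight to the output topic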
builder.stream("streams-plaintext-input").to("streams-pipe-output");
This code is straight from the documentation
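Filling in the "process" step, a hedged end-to-end sketch; the application id, broker address, topic names, and the trivial trim() cleansing are all assumptions:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class CleansePipe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cleanse-pipe");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic"); // placeholder topics
        source.mapValues(v -> v.trim())  // the "process" step: some basic cleansing
              .to("cleansed-topic");
        new KafkaStreams(builder.build(), props).start();
    }
}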
If you want to write to any database, you should first check if there is a Kafka Connect plugin to do that for you. Kafka Streams shouldn't really be used to read/write from/to external systems outside of Kafka, as it is latency-sensitive.
In your case, the JDBC Sink Connector would work well.
The Kafka cluster is outside my domain and I have no ability to deploy any code or configurations in it.
Using either solution above, you don't need to, but you will need some machine with Java installed to run a continuous Kafka Streams application and/or a Kafka Connect worker.

Publish message to Kafka via HTTP

I'm new to Kafka and I'm trying to publish data from an external application via HTTP, but I cannot find the way to do this.
I already created a topic in Kafka and tested it by producing and consuming messages, but I don't know how to insert/publish messages via HTTP. I tried to invoke the following URL to retrieve the topics but it does not return any data: http://servername:2181/topics/
I'm using Cloudera 5.12.1.
You can access your topics, if they were already created, using the client APIs; that is the easy way (see the client list).
Or see the Kafka Connect configuration for managing connectors over REST (the rest.host.name and rest.port parameters), but that covers connectors only.
To consume or produce messages in a topic over HTTP, use a middleware; that is more feasible.
Check out the open source Kafka REST Proxy from Confluent. It does exactly what you want.
You can get it standalone, or as part of Confluent Platform.