What kind of data is stored in ksqlDB? Only metadata about KStreams and KTables?
From what I understand, I can create KStreams and KTables and define operations between them using either the Java API or the KSQL language. But what is the ksqlDB server actually used for, besides storing metadata about these objects when I create them with the KSQL client?
I also assume that ksqlDB actually runs the stream processors, so it is also an execution engine. Does it scale automatically? Can there be multiple instances of the ksqlDB server component that communicate with each other? Is it intended for scenarios that need massive scaling, or is it just syntactic sugar for people who don't like to write Java code?
It is explained in this video: https://www.youtube.com/embed/f3wV8W_zjwE
It does not scale automatically, but you can manually deploy multiple instances of ksqlDB server and make them join the same ksqlDB cluster, identified by a shared ksql.service.id.
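For illustration, a minimal ksql-server.properties sketch for servers that should form one cluster might look like the following (broker addresses, listener, and the service id are placeholders, not taken from any particular setup):

```properties
# Same ksql.service.id on every server => they form one ksqlDB cluster
# and share the work of the persistent queries.
bootstrap.servers=broker1:9092,broker2:9092
ksql.service.id=my_ksql_cluster_
listeners=http://0.0.0.0:8088
```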
What kind of data is stored in ksqlDB?
Nothing is really stored "in" it. The metadata about the query history and the running processors is stored in Kafka topics, and the materialized state lives in on-disk state stores (RocksDB) that are backed by changelog topics in Kafka.
I also assume that ksqlDB actually runs the stream processors, so it is also an execution engine. Does it scale automatically? Can there be multiple instances of the ksqlDB server component that communicate with each other? Is it intended for scenarios that need massive scaling, or is it just syntactic sugar for people who don't like to write Java code?
Yes to all of the above.
Is there a way to automatically tell Kafka to send all events of a specific topic to a specific table of a database?
The goal is to avoid creating a new consumer that has to read from that topic and perform the copy explicitly.
You have two options here:
Kafka Connect - this is the standard way to connect your Kafka to a database. There are a lot of connectors. In order to choose one:
The best bet is to use the specific connector for your database that is maintained by Confluent.
If you don't have a specific one, the second best option is to use the JDBC connector.
Direct ingestion from the database, if your database supports it (for instance, ClickHouse and MemSQL are able to load data coming from a Kafka topic). The difference between this and Kafka Connect is that this way it is fully supported and tested by the database vendor, and you have fewer pieces of infrastructure to maintain.
Which one is better? It depends on:
your data volume
how much you can (and need to!) parallelize the load
how much downtime or latency you can tolerate.
Direct ingestion from the database usually means a single node acting as the consumer of the Kafka topic.
It is good for low-to-mid volume data traffic. If it fails (or throttles), you might have latency issues.
Kafka Connect allows you to insert data into the database in parallel using several workers. If one of the workers fails, the load is redistributed among the others. If you have a lot of data, this is probably the best way to load it into the database, but you'll need to take care of the Kafka Connect infrastructure yourself unless you're using a managed cloud offering.
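As a sketch of the Kafka Connect option, a JDBC sink connector in the standalone .properties form might look like the following; the topic, connection details, and key field are placeholders, not something specific to your setup:

```properties
# Hypothetical JDBC sink connector: copies every record of the "events" topic
# into a table of the target database.
name=jdbc-sink-events
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=events
connection.url=jdbc:postgresql://db-host:5432/mydb
connection.user=user
connection.password=secret
insert.mode=upsert
pk.mode=record_key
pk.fields=id
auto.create=true
# tasks.max is what gives you the parallel load mentioned above.
tasks.max=3
```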
I have a bit of confusion and I would like some clarification on something I'm working on. I want to have one Kafka Streams topology that will have five separate KStreams reading from their own respective topics and dumping that data into a large monolithic topic. Next, I'll have a GlobalKTable that will read from that monolithic topic and materialize a global store, let's say called lookupStore. I want this materialized global store to be basically a "lookup" table for other Kafka Streams applications. I've done some reading on exposing this with an RPC layer via the application.server configuration, which takes the form of some unique host:port.
Now I want to have some number of separate microservices, each a Kafka Streams application, that process events from a KStream and then do a lookup on lookupStore via an interactive query, for instance a .filter() operation based on whether the lookup on lookupStore returned a value or not. So here's my confusion: let's assume I hardcode that exposed RPC layer's host:port; how do I actually query lookupStore? If this were in the same topology/local instance you could just do something like lookupStore.get("key")... but how do you do this against a remote Kafka Streams instance?
Or does connecting to that RPC layer expose that state store to the remote application so that it "knows" of it and you can query lookupStore as if it were a local instance? Is this feasible, or am I going down the wrong path?
If your microservices (which are Streams applications) share the same Kafka cluster as the main streaming app (the one that generates the GlobalKTable), then they can read the topic backing that table themselves and do a KTable join or lookupStore.get("key") locally. Remote API calls from within a stream application to do lookups are not recommended because of the latency. If the two Kafka clusters are different, then you could explore replicating the topics (the GlobalKTable topic and the state store changelog topics) using something like MirrorMaker.
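As a rough sketch of that local-lookup approach (topic name, store name, serdes, and the application id are assumptions for illustration, not taken from your code), each microservice can materialize its own GlobalKTable from the shared topic and query the store locally:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import java.util.Properties;

public class LookupMicroservice {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lookup-microservice"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder

        StreamsBuilder builder = new StreamsBuilder();

        // Every instance of this service builds its own full local copy of the
        // monolithic topic as a GlobalKTable ("lookup-topic" / "lookupStore" are placeholders).
        builder.globalTable(
                "lookup-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("lookupStore"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // In real code, wait for the instance to reach RUNNING before querying,
        // otherwise streams.store(...) can throw InvalidStateStoreException.
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType("lookupStore", QueryableStoreTypes.keyValueStore()));

        String value = store.get("some-key"); // plain local read, no RPC to another instance
        System.out.println(value);
    }
}
```

Because a GlobalKTable holds the entire topic on every instance, the lookup stays a local read; the application.server/RPC machinery is only needed for partitioned (non-global) state stores.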
I'm still new to Spark and I want to learn more about it. I want to build a data pipeline architecture with Kafka and Spark. Here is my proposed architecture, where PostgreSQL provides the data for Kafka. The PostgreSQL database is not empty, and I want to capture any CDC change in it. At the end, I want to grab the Kafka messages and process them as a stream with Spark, so I can get analysis of what is happening at the same time the CDC events happen.
However, when I try to run a simple stream, it seems Spark receives the data as a stream but processes it in batches, which is not my goal. I have seen some articles where the data source for this kind of case is an API that we want to monitor, and there are few examples of database-to-database stream processing. I have done this before from Kafka to another database, but now I need to transform and aggregate the data (I'm not using Confluent; I rely on the generic Kafka + Debezium + JDBC connectors).
For my case, can Spark and Kafka meet the requirement? Thank you.
I have designed such pipelines, and if you use Structured Streaming with Kafka, in continuous or non-continuous mode, you will always get micro-batches. You can still process the individual records, so I'm not sure what the issue is.
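As a minimal sketch in Java (broker address and topic name are placeholders, and it assumes the spark-sql-kafka package is on the classpath), you can read the Kafka topic with Structured Streaming and still touch every record inside each micro-batch via foreachBatch:

```java
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class CdcStreamJob {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("cdc-stream")
                .getOrCreate();

        // Subscribe to the CDC topic; broker and topic names are placeholders.
        Dataset<Row> cdc = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "cdc-topic")
                .load()
                .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value");

        // Structured Streaming delivers micro-batches, but inside foreachBatch you can
        // still iterate record by record (or aggregate the batch) before writing results out.
        StreamingQuery query = cdc.writeStream()
                .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batch, batchId) -> {
                    for (Row row : batch.collectAsList()) {
                        System.out.println("cdc record: " + row.getString(1));
                    }
                })
                .start();

        query.awaitTermination();
    }
}
```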
If you want to process per record, then use Spring Boot's Kafka support for consuming Kafka messages; it can work in various ways and fulfill your need. Spring Boot offers various modes of consumption.
Of course Spark Structured Streaming can be done using Scala and has a lot of support obviating extra work elsewhere.
This article discusses the single-message processing approach: https://medium.com/@contactsunny/simple-apache-kafka-producer-and-consumer-using-spring-boot-41be672f4e2b
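For the per-record route, a minimal Spring Boot listener sketch could look like this (topic and group id are placeholders, and a String deserializer is assumed to be configured):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class CdcRecordListener {

    // Invoked once per Kafka record; topic and group id are placeholders.
    @KafkaListener(topics = "cdc-topic", groupId = "cdc-analytics")
    public void onRecord(String value) {
        // per-record processing goes here
        System.out.println("received: " + value);
    }
}
```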
I have a use case where I need to send the data changes in a relational database to a Kafka topic.
I'm able to write a simple JDBC program which executes a set of queries for the changes in a certain time period and writes the data to the Kafka topic using KafkaTemplate (a wrapper provided by the Spring framework).
If I do the same using Kafka Connect, i.e. write a source connector, what benefits or overheads (if any) will I get?
The first thing is that you have "... to write a simple JDBC program ..." and take care of the logic of writing to both the database and the Kafka topic.
Kafka Connect does that for you, and your business application has to write to the database only. With Kafka Connect you get more than that, such as fail-over handling, parallelism, and scaling; it's all out of the box for you, while in the do-it-yourself approach you have to handle cases like writing to the database successfully but then failing to write to the Kafka topic, and so on.
Today you want to ingest from one database to a Kafka topic using a set of queries, and you write some bespoke code to do that.
Tomorrow you want to use a second database, or you want to change the serialisation format of your data in Kafka, or you want to scale out your ingest, or you want high availability. Or you want to add the ability to stream data from Kafka to another target, or to ingest data from other places too. And manage it all centrally using a standardised configuration pattern expressed just in JSON. Oh, and you want it to be easily maintainable by someone else who doesn't have to read through code but can just use a common API of Apache Kafka (which is what Kafka Connect is).
If you manage to do all of this yourself—you've just reinvented Kafka Connect :)
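For a sense of scale, the bespoke polling program roughly collapses into a single connector configuration like this hypothetical JDBC source example (table, column, and connection details are placeholders):

```properties
# Hypothetical JDBC source connector (standalone .properties form): polls the
# "orders" table for new/updated rows and publishes them to the topic jdbc-orders.
name=jdbc-source-orders
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://db-host:5432/mydb
connection.user=user
connection.password=secret
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
table.whitelist=orders
topic.prefix=jdbc-
tasks.max=1
```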
I talk extensively about this in my Kafka Summit session: "From Zero to Hero with Kafka Connect" which you can find online here
So I have...
1st topic has general application logs (log4j). It stores things like HTTP API requests/responses and warnings, exceptions, etc. There can be multiple logs associated with one logical business request. (These logs happen within seconds of each other.)
2nd topic contains commands from the above business request which other services take action on. (The commands also happen within seconds of each other, but maybe a couple of minutes after the original request.)
3rd topic contains events generated from the actions of those other services. (Most events complete within seconds, but some can take up to 3-5 days to be received.)
So a single logical business request can have multiple logs, commands and events associated with it by a uuid which the microservices pass to each other.
So what are some of the technologies/patterns that can be used to read the 3 topics and join them all together as a single JSON document and then dump them to, let's say, Elasticsearch?
Streaming?
You can use Kafka Streams, or KSQL, to achieve this. Which one depends on your preference/experience with Java, and also the specifics of the joins you want to do.
KSQL is the SQL streaming engine for Apache Kafka, and with SQL alone you can declare stream processing applications against Kafka topics. You can filter, enrich, and aggregate topics. Currently only stream-table joins are supported. You can see an example in this article here
The Kafka Streams API is part of Apache Kafka, and a Java library that you can use to do stream processing of data in Apache Kafka. It is actually what KSQL is built on, and supports greater flexibility of processing, including stream-stream joins.
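As a rough Kafka Streams sketch (topic names, serdes, value format, and window sizes are assumptions; it presumes all three topics are keyed by the business uuid and a reasonably recent Streams version), the two windowed stream-stream joins could look like this:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

import java.time.Duration;

public class BusinessRequestJoiner {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Placeholder topic names; each stream is assumed to be keyed by the business uuid
        // and to carry JSON strings as values.
        KStream<String, String> logs = builder.stream("app-logs", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> commands = builder.stream("commands", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> events = builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

        // Logs and commands arrive within minutes of each other, so a small join window is enough.
        KStream<String, String> logsAndCommands = logs.join(
                commands,
                (log, command) -> "{\"log\":" + log + ",\"command\":" + command + "}", // naive combiner
                JoinWindows.of(Duration.ofMinutes(10)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        // Events can lag by days, so this window (and the underlying state retention) must be much larger.
        KStream<String, String> combined = logsAndCommands.join(
                events,
                (lc, event) -> "{\"requestAndCommand\":" + lc + ",\"event\":" + event + "}",
                JoinWindows.of(Duration.ofDays(5)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        // The joined documents go to a topic that, for example, Kafka Connect's
        // Elasticsearch sink connector can then write out.
        combined.to("combined-requests", Produced.with(Serdes.String(), Serdes.String()));

        // Wrap builder.build() in a KafkaStreams instance and start it as usual.
        System.out.println(builder.build().describe());
    }
}
```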
You can use KSQL to join the streams.
There are two constructs in KSQL: Table and Stream.
Currently, joins are supported between a Stream and a Table, so you need to identify which is a good fit for which of your topics.
You don't need windowing for joins.
Benefits of using KSQL.
KSQL is easy to set up.
KSQL is a SQL-like language which helps you query your data quickly.
Drawbacks.
It's not production-ready yet, but the production release is coming up in April 2018.
It's a little buggy right now but will certainly improve over the next few months.
Please have a look.
https://github.com/confluentinc/ksql
Same as the question Is it possible to use multiple left join in Confluent KSQL query? I tried to join a stream with more than one table; if that's not possible, what's the solution?
And it seems like you cannot have multiple join keys within the same query.