For testing purposes, I need to simulate a client that generates 100,000 messages per second and sends them to a Kafka topic. Is there any tool or way that can help me generate these random messages?
There's a built-in tool for generating dummy load, located in bin/kafka-producer-perf-test.sh (https://github.com/apache/kafka/blob/trunk/bin/kafka-producer-perf-test.sh). You may refer to https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java#L106 to figure out how to use it.
One usage example would look like this:
bin/kafka-producer-perf-test.sh --broker-list localhost:9092 --messages 10000000 --topic test --threads 10 --message-size 100 --batch-size 10000 --throughput 100000
The key here is the --throughput 100000 flag, which throttles the producer to a maximum of approximately 100,000 messages per second.
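Two caveats: the exact flag names differ between Kafka versions (newer builds of this tool use e.g. --num-records, --record-size and --producer-props bootstrap.servers=..., so check its --help output for your release), and the tool only sends opaque byte payloads. If you need to shape the payloads yourself, a hand-rolled producer is easy to throttle to a target rate. Here is a minimal sketch using the plain Java producer API; the topic name, rate and payload are placeholders:

import java.util.Properties;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LoadGenerator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker address
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        int targetPerSecond = 100_000;   // desired message rate
        int messageSize = 100;           // bytes per message

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = new byte[messageSize];
            long sent = 0;
            long start = System.currentTimeMillis();
            while (true) {
                ThreadLocalRandom.current().nextBytes(payload);     // random-ish payload
                producer.send(new ProducerRecord<>("test", payload.clone()));
                sent++;
                // Crude throttle: sleep whenever we are ahead of the target rate.
                long expectedElapsedMs = sent * 1000 / targetPerSecond;
                long actualElapsedMs = System.currentTimeMillis() - start;
                if (expectedElapsedMs > actualElapsedMs) {
                    Thread.sleep(expectedElapsedMs - actualElapsedMs);
                }
            }
        }
    }
}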
The existing answers (e.g., kafka-producer-perf-test.sh) are useful for performance testing, but much less so when you need more than a single stream of raw bytes: for example, simulating realistic data with nested structures, or generating related data across multiple topics. If you need more than a bunch of raw bytes, I'd look at the alternatives below.
Update Dec 2020: As of today, I recommend the use of https://github.com/MichaelDrogalis/voluble. Some background info: The author is the product manager at Confluent for Kafka Streams and ksqlDB, and the author/developer of http://www.onyxplatform.org/.
From the Voluble README:
Creating realistic data by integrating with Java Faker.
Cross-topic relationships
Populating both keys and values of records
Making both primitive and complex/nested values
Bounded or unbounded streams of data
Tombstoning
Voluble ships as a Kafka connector to make it easy to scale and change serialization formats. You can use Kafka Connect through its REST API or integrated with ksqlDB. In this guide, I demonstrate using the latter, but the configuration is the same for both. I leave out Connect specific configuration like serializers and tasks that need to be configured for any connector.
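For the Connect REST API route, creating the connector is just an HTTP POST of a JSON config to the Connect worker. A rough sketch using Java's built-in HTTP client; the connector class and the genkp/genv generator keys are reproduced from my reading of the Voluble README and may have changed, so verify them against the README before relying on them:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateVolubleConnector {
    public static void main(String[] args) throws Exception {
        // Connector config keys as documented in the Voluble README (may differ by version).
        String body = "{"
                + "\"name\": \"voluble-demo\","
                + "\"config\": {"
                + "  \"connector.class\": \"io.mdrogalis.voluble.VolubleSourceConnector\","
                + "  \"genkp.owners.with\": \"#{Internet.uuid}\","
                + "  \"genv.owners.name.with\": \"#{Name.full_name}\""
                + "}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))   // default Connect REST port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}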
Old answer (2016): I'd suggest taking a look at https://github.com/josephadler/eventsim, which will produce more "realistic" synthetic data (yeah, I am aware of the irony of what I just said :-P):
Eventsim is a program that generates event data for testing and demos.
It's written in Scala, because we are big data hipsters (at least
sometimes). It's designed to replicate page requests for a fake music
web site (picture something like Spotify); the results look like real
use data, but are totally fake. You can configure the program to
create as much data as you want: data for just a few users for a few
hours, or data for a huge number of users over many years.
You can write the data to files, or pipe it out to Apache Kafka.
You can use the fake data for product development, correctness
testing, demos, performance testing, training, or in any other place
where a stream of real looking data is useful. You probably shouldn't
use this data to research machine learning algorithms, and definitely
shouldn't use it to understand how real people behave.
You can make use of Kafka Connect to generate random test data. Check out this custom source connector: https://github.com/xushiyan/kafka-connect-datagen
It allows you to define settings like a message template and randomizable fields to generate test data. Also check out this post for a detailed demonstration.
Partly for testing and debugging, but also to work around an issue we are seeing in a topic where we are unable to change the producer, I would like to store the value as a string in a CLOB column in a database table.
I have this working as a Java-based consumer, but I am looking at whether this could be achieved using Kafka Connect.
Everything I have read says you need a schema, the reasoning being that otherwise the connector wouldn't know how to map the data into columns (which makes sense). But I don't want to do any processing of the data (which could be JSON, but might just be text); I just want to treat the whole value as a string and load it into one column.
Is there any way this can be done within the Connect config, or am I looking at adding extra processing to update the message (in which case the Java client is probably going to end up being simpler)?
No, the JDBC Sink connector requires a schema to work. You could modify the source code to add in this behaviour.
I would personally try to stick with Kafka Connect for streaming data to a database, since it does all the difficult stuff (scale out, restarts, etc.) very well. Depending on the processing that you're talking about, it could well be that Single Message Transforms would be very applicable, since they fit into the Kafka Connect pipeline. Or, for more complex processing, Kafka Streams or ksqlDB.
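If the SMT route fits, one option is a small custom transform that wraps the raw value in a single-field struct, so the JDBC sink has a schema mapping to exactly one column. A rough sketch; the class and field names are mine, not from an existing library, and it assumes the value is already readable as a string (e.g. via the StringConverter):

import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.Transformation;

/** Wraps whatever the record value is into a struct with one string field. */
public class WrapAsString implements Transformation<SinkRecord> {

    private static final Schema VALUE_SCHEMA = SchemaBuilder.struct()
            .name("RawValue")
            .field("value", Schema.OPTIONAL_STRING_SCHEMA)
            .build();

    @Override
    public SinkRecord apply(SinkRecord record) {
        Struct wrapped = new Struct(VALUE_SCHEMA)
                .put("value", record.value() == null ? null : record.value().toString());
        // Keep key, topic and partition; replace only the value and its schema.
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(),
                VALUE_SCHEMA, wrapped, record.timestamp());
    }

    @Override
    public ConfigDef config() { return new ConfigDef(); }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}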
I am going through the documentation, and there seem to be a lot of moving parts with respect to message processing, like exactly-once processing and at-least-once processing, and the settings are scattered here and there. There doesn't seem to be a single place that documents which properties need to be configured, roughly, for exactly-once and at-least-once processing.
I know there are many moving parts involved and it always depends. However, as I mentioned before, what are the settings that need to be configured, at a minimum, to provide exactly-once, at-most-once, and at-least-once processing?
You might be interested in the first part of the Kafka FAQ, which describes some approaches for avoiding duplication during data production (i.e. on the producer side):
Exactly once semantics has two parts: avoiding duplication during data
production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
1. Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded
2. Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be
duplicate-free. However, reading without duplicates depends on some
co-operation from the consumer too. If the consumer is periodically
checkpointing its position then if it fails and restarts it will
restart from the checkpointed position. Thus if the data output and
the checkpoint are not written atomically it will be possible to get
duplicates here as well. This problem is particular to your storage
system. For example, if you are using a database you could commit
these together in a transaction. The HDFS loader Camus that LinkedIn
wrote does something like this for Hadoop loads. The other alternative
that doesn't require a transaction is to store the offset with the
data loaded and deduplicate using the topic/partition/offset
combination.
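It's worth adding that since that FAQ entry was written, Kafka has gained an idempotent and transactional producer, so much of the producer-side story is now configuration. A rough sketch of the properties typically involved; the broker address, group id and transactional id are placeholders, and the right combination still depends on your application (Kafka Streams wraps all of this behind a single processing.guarantee setting):

import java.util.Properties;

public class DeliverySemanticsConfig {

    // At-least-once: retry on transient errors; duplicates are possible after retries.
    static Properties atLeastOnceProducer() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("acks", "all");                         // wait for all in-sync replicas
        p.put("retries", Integer.MAX_VALUE);          // keep retrying transient failures
        return p;
    }

    // Exactly-once within Kafka: idempotent, transactional producer...
    static Properties exactlyOnceProducer() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("enable.idempotence", "true");          // broker dedups producer retries
        p.put("transactional.id", "my-app-tx-1");     // placeholder; stable per producer instance
        return p;
    }

    // ...paired with consumers that read only committed data and commit offsets manually,
    // ideally atomically with the processed output, as the FAQ above describes.
    static Properties exactlyOnceConsumer() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("group.id", "my-app");
        p.put("isolation.level", "read_committed");   // skip aborted transactions
        p.put("enable.auto.commit", "false");
        return p;
    }
}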
I want to use Lagom to build a data processing pipeline. The first step in this pipeline is a service using a Twitter client to subscribe to a stream of Twitter messages. For each new message, I want to persist the message in Cassandra.
What I don't understand is: if I model my aggregate root as a List of TwitterMessages, for example, then after running for some time this aggregate root will be several gigabytes in size. There is no need to store all the TwitterMessages in memory, since the goal of this one service is just to persist each incoming message and then publish it out to Kafka for the next service to process.
How would I model my aggregate root as a PersistentEntity for a stream of messages without it consuming unlimited resources? Is there any example code showing this usage of Lagom?
Event sourcing is a good default go to, but not the right solution for everything. In your case it may not be the right approach. Firstly, do you need the Tweets persisted, or is it ok to publish them directly to Kafka?
Assuming you need them persisted, aggregates should store in memory whatever they need to validate incoming commands and generate new events. From what you've described, your aggregate doesn't need any data to do that, so your aggregate would not be a list of Twitter messages; rather, it could just be NotUsed. Each time it gets a command, it emits a new event for that Tweet. The thing here is that it's not really an aggregate, because you're not aggregating any state: you're just emitting events in response to commands, with no invariants or anything. So you're not really using the Lagom persistent entity API for what it was made for. Nevertheless, it may make sense to use it this way anyway; it's a high-level API that comes with a few useful things, including the streaming functionality. But there are also some gotchas you should be aware of: if you put all your Tweets in one entity, you limit your throughput to what one core on one node can do sequentially. So maybe you could expect to handle 20 tweets a second; if you expect it to ever be more than that, then you're using the wrong approach, and you'll need to, at a minimum, distribute your tweets across multiple entities.
The other approach would be to simply store the messages directly in Cassandra yourself, and then publish directly to Kafka after doing that. This would be a lot simpler, a lot less mechanics involved, and it should scale very nicely, just make sure you choose your partition key columns in Cassandra wisely - I'd probably partition by user id.
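A minimal sketch of that simpler approach, using the DataStax Java driver plus the plain Kafka producer; the keyspace, table and topic names are made up for illustration:

import java.util.Properties;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TweetIngester {
    private final CqlSession session;
    private final PreparedStatement insert;
    private final KafkaProducer<String, String> producer;

    public TweetIngester() {
        // Connects to the default local contact point; keyspace/table are placeholders.
        session = CqlSession.builder().withKeyspace("tweets_ks").build();
        insert = session.prepare(
                "INSERT INTO tweets (user_id, tweet_id, body) VALUES (?, ?, ?)");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        producer = new KafkaProducer<>(props);
    }

    /** Persist the tweet first, then hand it to the next service via Kafka. */
    public void onTweet(String userId, String tweetId, String body) {
        session.execute(insert.bind(userId, tweetId, body));
        // Key by user id so one user's tweets stay in order on the topic.
        producer.send(new ProducerRecord<>("tweets", userId, body));
    }
}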
I'm working on a project where we need to stream real-time updates from Oracle to a bunch of systems (Cassandra, Hadoop, real-time processing, etc.). We are planning to use GoldenGate to capture the changes from Oracle, write them to Kafka, and then let the different target systems read the events from Kafka.
There are quite a few design decisions that need to be made:
What data to write into Kafka on updates?
GoldenGate emits updates in the form of a record ID and the updated fields. These changes can be written into Kafka in one of three ways:
Full rows: For every field change, emit the full row. This gives a full representation of the 'object', but probably requires making a query to get the full row.
Only updated fields: The easiest, but it's kind of weird to work with, as you never have a full representation of an object easily accessible. How would one write this to Hadoop?
Events: Probably the cleanest format (and the best fit for Kafka), but it requires a lot of work to translate DB field updates into events.
Where to perform data transformation and cleanup?
The schema in the Oracle DB is generated by a 3rd party CRM tool, and is hence not very easy to consume - there are weird field names, translation tables, etc. This data can be cleaned in one of (a) source system, (b) Kafka using stream processing, (c) each target system.
How to ensure in-order processing for parallel consumers?
Kafka allows each consumer to read a different partition, where each partition is guaranteed to be in order. Topics and partitions need to be picked in a way that guarantees that messages in each partition are completely independent. If we pick a topic per table and hash records to partitions based on record_id, this should work most of the time. However, what happens when a new child object is added? We need to make sure it gets processed before the parent uses its foreign_id.
One solution I have implemented is to publish only the record ID into Kafka and, in the consumer, use a lookup to the origin DB to get the complete record. I would think that in a scenario like the one described in the question, you may want to use the CRM tool's API to look up that particular record rather than reverse-engineer the record lookup in your code.
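A rough sketch of that consumer-side pattern; the topic name is made up and lookupFullRecord is a placeholder for whatever CRM API or DB call you actually use:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RecordIdConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "crm-sync");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("crm-changes"));     // topic carries record ids only
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    String recordId = record.value();
                    Object fullRecord = lookupFullRecord(recordId);   // placeholder lookup
                    // ... write fullRecord to Cassandra, Hadoop, etc.
                }
            }
        }
    }

    static Object lookupFullRecord(String recordId) {
        throw new UnsupportedOperationException("call the CRM API or origin DB here");
    }
}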
How did you end up implementing the solution?
Does Storm support dynamic topologies? The functionality I want is to dynamically change the topology according to user requirements while the Storm topology is running. For example, when the user wants to know the top-10 words of a stream, I use the top-10 bolt to process it; when the user wants to know something else, I use another bolt to process the stream and 'unplug' the top-10 bolt.
I know it could be done by partitioning the stream, or by duplicating the stream, always running every functionality and only showing the data we want, or by shutting down and redeploying an updated topology, but is there a 'hot plug-in' way to do that?
You can't dynamically change a Storm topology's structure, i.e. modify its spout-and-bolt wiring. A Storm topology's wiring is always static.
However, you could implement the needed functionality in the other ways you already described. IMHO, the best, most logical way would be to run multiple topologies, in case the data processing differs greatly. But if most of the processing is similar in both cases, just duplicate the source stream and process the data in different branches of the same topology.
This was added in STORM-561, on 03/Jun/15:
https://issues.apache.org/jira/browse/STORM-561
There is no built-in way to do this (switch out one bolt for another), but what you can do is write a bolt that executes arbitrary code based on the input it receives. As long as your input and output have the same structure in Storm (the same tuples emitted), you could theoretically execute whatever you wanted at run time in your bolt. This is especially easy if you build your bolt in Clojure, but it's possible in essentially every language you can use with Storm.
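A rough sketch of what such a dispatching bolt could look like; the "mode" field and the handful of handlers are made up for illustration:

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

/** Routes each tuple to a different computation based on a "mode" field in the tuple. */
public class DispatchingBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String mode = input.getStringByField("mode");   // which computation the user wants
        String word = input.getStringByField("word");
        String result;
        switch (mode) {
            case "upper":   result = word.toUpperCase(); break;
            case "reverse": result = new StringBuilder(word).reverse().toString(); break;
            default:        result = word;              // unknown mode: pass through unchanged
        }
        collector.emit(new Values(result));             // same output shape regardless of mode
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("result"));
    }
}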
However, this probably doesn't make a lot of sense, as most computations you'll want to do involve more than one bolt and lend themselves to passing differently structured tuples around. As schiavuzzi already said in their answer, you're probably better off running multiple topologies if there are multiple, independent computations you'd like to do on a stream.
For hot deployment there is a newer streaming platform from eBay, Jetstream: https://github.com/pulsarIO/jetstream.
It has a built-in config management tool, and your config sits in MongoDB. When a user modifies the config bean, the tool publishes a notification to ZooKeeper, and the corresponding JetStream applications get notified and change their config dynamically.