If I use an HBase cluster, does every slave hold the same data, or can the data be partitioned?
What are the best practices?
HBase rows are ordered by key and automatically split into small partitions called regions; each region server handles a subset of these regions (see this question for more details).
You can let HBase control the splitting, or pre-split the table yourself to control the load on the cluster.
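The effect of pre-splitting can be pictured as a sorted list of split points that maps each row key to a region by key range. A minimal sketch in plain Python (the split points are made up for illustration; this is not the HBase API):

```python
import bisect

# Hypothetical pre-split points. They define 4 key ranges (regions):
# [-inf, "g"), ["g", "n"), ["n", "t"), ["t", +inf)
split_points = ["g", "n", "t"]

def region_for(row_key):
    # bisect_right finds which key range the sorted row key falls into
    return bisect.bisect_right(split_points, row_key)

assert region_for("apple") == 0   # before "g"
assert region_for("kiwi") == 1    # between "g" and "n"
assert region_for("zebra") == 3   # after "t"
```

Because rows are sorted by key, choosing split points that spread your real key distribution evenly is what prevents hot regions.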
Related
This is more of an architectural question. I'm learning about Event-Driven Architecture and Streaming Systems with Apache Kafka. I've learned about Event Sourcing and CQRS and have some basic questions regarding implementation.
For example, consider a streaming application where we are monitoring vehicular events of drivers registered in our system. These events will be coming in as a KStream. The drivers registered in the system will be in a KTable, and we need to join the events and drivers to derive some output.
Assume that we insert a new driver in the system by a microservice, which pushes the data in a Cassandra table and then to the KTable topic by Change Data Capture.
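Conceptually, the KStream-KTable join described above is a per-record lookup into the table's latest state. A rough Python sketch of the semantics (the field names and data are illustrative, not a Kafka API):

```python
# Latest state of the drivers KTable, keyed by driver_id (illustrative data)
drivers = {
    "d1": {"name": "Alice", "vehicle": "truck"},
    "d2": {"name": "Bob", "vehicle": "van"},
}

# Incoming vehicular events (the KStream)
events = [
    {"driver_id": "d1", "speed": 72},
    {"driver_id": "d2", "speed": 55},
]

def join(event, table):
    """Enrich a stream record with the current table row for its key."""
    row = table.get(event["driver_id"])
    return {**event, **row} if row else None  # inner join: drop unmatched keys

joined = [j for e in events if (j := join(e, drivers))]
assert joined[0]["name"] == "Alice" and joined[0]["speed"] == 72
```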
Since Kafka topics have a TTL associated with them, how do we make sure that the driver records are not dropped?
I understand that Kafka Streams has a persistent state store that can maintain the required state, but can I depend on it the way I would on a Cassandra table? Is there a size consideration?
If the whole application, and all Kafka brokers and consumer nodes, are terminated, can the application be restarted without losing the driver records in the KTable?
If the streaming application is Kubernetes based, how would I maintain the persistent disk volumes of each container and correctly attach them as containers come and go?
Would it be preferable to join the event stream with the driver table in Cassandra using Spark Streaming or Flink? Can Spark and Flink still maintain data locality, given that their streaming consumers will be distributed by Kafka partition, while the Cassandra data is distributed by something else entirely?
EDIT: I realized Spark and Flink would pull data from Cassandra onto the respective nodes depending on which keys they hold. Kafka Streams has the advantage that the KStream and KTable to be joined are already data-local.
KTables don't have a TTL since they are built from compacted topics (infinite retention).
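Log compaction keeps only the newest record per key, which is why a KTable's backing topic effectively retains every key indefinitely. A rough Python sketch of the compaction semantics (simplified; real compaction works on log segments):

```python
def compact(changelog):
    """Keep only the newest value per key; a None value (tombstone) deletes the key."""
    latest = {}
    for key, value in changelog:
        if value is None:
            latest.pop(key, None)  # tombstone: the key is removed entirely
        else:
            latest[key] = value    # newer value replaces the older one
    return latest

changelog = [("d1", "Alice"), ("d2", "Bob"), ("d1", "Alicia"), ("d2", None)]
assert compact(changelog) == {"d1": "Alicia"}
```

Updates overwrite, tombstones delete, and every live key survives forever, which is exactly the table-like contract a KTable needs.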
Yes, you need to maintain the storage directories for persistent Kafka Streams state stores. Since those stores are on disk, no records should be dropped from them across broker/app restarts unless you actively clear the state directories on the application instance hosts.
Spark and Flink do not integrate with Kafka Streams state stores, and they have their own locality considerations. Flink does offer RocksDB state, and both can broadcast data for remote joins; otherwise, joining on Kafka record keys requires that both topics have matching partition counts, so that partitions are assigned to the same instances/executors, similar to Kafka Streams joins.
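The co-partitioning requirement boils down to both topics computing hash(key) % num_partitions with the same hash and the same partition count. A toy sketch (crc32 is a deterministic stand-in; Kafka's default partitioner actually uses murmur2):

```python
import zlib

def partition(key, num_partitions):
    # Deterministic stand-in for Kafka's murmur2-based default partitioner
    return zlib.crc32(key.encode()) % num_partitions

STREAM_PARTITIONS = 6
TABLE_PARTITIONS = 6  # must match the stream topic for a local join

# With matching partition counts, a key lands on the same partition index
# in both topics, so the joining instance already holds both sides.
for key in ["d1", "d2", "d3"]:
    assert partition(key, STREAM_PARTITIONS) == partition(key, TABLE_PARTITIONS)
```

If the counts differed, the same key could map to different partition indices on each side, and the join would need a network shuffle (or a repartition step) to line the records up again.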
According to the schema, data flows into Kafka, then into a stream, and then into MapR-DB.
After the data is stored in the DB, users can display it on the map.
My question is: why do we use a DB to display data on the map if Kafka is already a kind of database?
It seems slower to me to get real-time data from MapR-DB than from Kafka.
Why do you think this example uses this approach?
The core abstraction Kafka provides for a stream of records is the topic. You can think of topics as the tables in a database: a database (Kafka) can have multiple tables (topics), and, as in a database, a topic can hold any kind of records depending on the use case. But note that Kafka is not a database.
Also note that in most cases you will have to configure a retention policy. This means that at some point messages will be deleted based on a configurable time- or size-based rule. Therefore, you need to store the data in a persistent storage system, and in this case that is your database.
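Time-based retention can be pictured as pruning records older than the configured window. A toy sketch (real Kafka deletes whole log segments via `log.retention.ms`, not individual records):

```python
RETENTION_MS = 7 * 24 * 3600 * 1000  # e.g. a 7-day retention window

def prune(log, now_ms):
    """Drop records whose timestamp falls outside the retention window."""
    cutoff = now_ms - RETENTION_MS
    return [rec for rec in log if rec["ts"] >= cutoff]

now = 10_000_000_000_000
log = [
    {"ts": now - RETENTION_MS - 1, "value": "old"},    # past the window: deleted
    {"ts": now - 1000, "value": "fresh"},              # inside the window: kept
]
assert [r["value"] for r in prune(log, now)] == ["fresh"]
```

Anything a map UI needs to query after that window closes has to live somewhere durable, which is why the pipeline lands the data in MapR-DB.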
You can read more about how Kafka works in this blog post.
My organisation has MongoDB, which stores application-based time-series data. We are now trying to create a data pipeline for analytics and visualisation. Because the data is time-series, we plan to use Druid as intermediate storage, where we can do the required transformations, and then use Apache Superset to visualise it. Is there any way to migrate the required data (not only updates) from MongoDB to Druid?
I was thinking about Apache Kafka, but from what I have read, I understand it works best for streaming the changes happening in topics (topics associated with tables) that already exist in MongoDB and Druid. But what if there is a table of at least 100,000 records that exists only in MongoDB, and I first want to push the whole table to Druid? Will Kafka work in this scenario?
I'm loading streams from Kafka using the Druid Kafka indexing service.
But the data I upload keeps changing, so I need to load it again while avoiding duplicates and collisions for data that was already ingested.
I researched the docs about Updating Existing Data in Druid.
But all the information there is about Hadoop batch ingestion and lookups.
Is it possible to update existing Druid data during Kafka streams?
In other words, I need to overwrite the old values with new ones using the Kafka indexing service (streams from Kafka).
Maybe there is some kind of setting to overwrite duplicates?
Druid is, in a way, a time-series database in which the data gets "finalised" and written to a log at every time interval. It aggregates the data and optimises columns for storage and fast queries when it "finalises" them.
By "finalising", I mean that Druid assumes the data for the specified interval is already complete and that it can safely run its computations on top of it. In effect this means there is no support for updating the data (as you would in a database). Any data you write is treated as new data, and it keeps adding it to its computations.
But Druid is different in the sense that it provides a way to upload historical data for a time period for which real-time indexing has already taken place. This batch upload will overwrite the existing segments with new ones, and subsequent queries will reflect the latest uploaded batch data.
So I am afraid the only option is batch ingestion. You could still send the data to Kafka, but have a Spark/Gobblin job do the de-duplication and write to Hadoop, then have a simple cron job re-index these as a batch into Druid.
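The de-duplication step before the batch re-index could be as simple as keeping the newest record per key within each time interval. An illustrative Python sketch (field names like `interval`, `key`, and `ts` are assumptions about your record shape):

```python
def dedupe(records):
    """Keep the record with the highest timestamp for each (interval, key)."""
    latest = {}
    for rec in records:
        slot = (rec["interval"], rec["key"])
        if slot not in latest or rec["ts"] > latest[slot]["ts"]:
            latest[slot] = rec
    return sorted(latest.values(), key=lambda r: r["ts"])

records = [
    {"interval": "2019-01-01", "key": "a", "ts": 1, "value": 10},
    {"interval": "2019-01-01", "key": "a", "ts": 3, "value": 30},  # newer wins
    {"interval": "2019-01-01", "key": "b", "ts": 2, "value": 20},
]
assert [r["value"] for r in dedupe(records)] == [20, 30]
```

The re-indexed segments for that interval then contain exactly one row per key, so the batch overwrite cleanly replaces whatever the Kafka indexing service had written.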
I was reading through the documentation (http://snappydatainc.github.io/snappydata/streamingWithSQL/) and had a question about this item:
"Reduced shuffling through co-partitioning: With SnappyData, the partitioning key used by the input queue (e.g., for Kafka sources), the stream processor and the underlying store can all be the same. This dramatically reduces the need to shuffle records."
If we are using Kafka and partition our data in a topic using a single key, is it possible to map this single Kafka key to the multiple partitioning keys defined on the SnappyData table?
Is there a hash of some sort to turn multiple keys into a single key?
The benefit of reduced shuffling seems significant and trying to understand the best practice here.
Thanks!
With a DirectKafka stream, each partition pulls data from its own designated topic partition. If no partitioning is specified for the storage table, then each DirectKafka partition will write only to local storage buckets, and everything lines up without requiring anything extra. The only thing to take care of is having enough topics (and thus partitions) for good concurrency -- ideally at least as many as the total number of processor cores in the cluster, so all cores stay busy.
When partitioning storage tables explicitly, SnappyData's store has been adjusted to use the same hashing as Spark's HashPartitioning (for the "PARTITION_BY" option of both column and row tables), since that is what the Catalyst SQL execution layer uses. So execution and storage are always collocated.
However, aligning that with ingestion from DirectKafka partitions requires some manual work (align the Kafka topic partitioning with HashPartitioning, then have the preferred locations of each DirectKafka partition match the storage). This will be simplified in coming releases.
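The alignment described above amounts to making the producer side and the store side agree on hash(key) % num_partitions for every key. A conceptual Python sketch (crc32 is only a stand-in; a real deployment must use the exact hash the execution layer uses, i.e. Catalyst's HashPartitioning, and the same partition count on both sides):

```python
import zlib

NUM_PARTITIONS = 8  # must be identical for the Kafka topic and the store

def hash_partition(key, num_partitions):
    # Stand-in hash function; replace with the store's actual hashing scheme
    return zlib.crc32(key.encode()) % num_partitions

def kafka_partition_for(record_key):
    """Producer side: pick the Kafka partition using the store's hash."""
    return hash_partition(record_key, NUM_PARTITIONS)

def store_bucket_for(record_key):
    """Store side: same hash, same count -> same bucket index."""
    return hash_partition(record_key, NUM_PARTITIONS)

# Ingestion is shuffle-free only if both sides agree for every key:
for key in ["k1", "k2", "k3"]:
    assert kafka_partition_for(key) == store_bucket_for(key)
```

If either the hash function or the partition count diverges between the topic and the table, records arrive on the "wrong" host and must be shuffled, which is exactly the cost the co-partitioning design avoids.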