I am wondering about the best way to implement an application that processes events coming from Kafka. I have two architecture patterns in mind:
an application developed using the Apache Storm or Apache Flink framework that would process events consumed from Kafka
a Java application (or Python, C#, ...), deployed X times (scaled depending on traffic), which would process events coming from Kafka
I find it difficult to see which of these scenarios is the more attractive.
Could someone help me with this topic?
It's hard to give definitive advice with so little information available, so I'll keep my response general until you provide more specific information:
Choosing a processing framework over a native implementation gives you the following advantages:
Parallel processing with (in theory) infinite scalability: If you ever expect that you cannot process all events in a single thread in a timely manner, you first need to scale up (more threads) and eventually scale out (more machines). A framework takes care of all synchronization between threads and machines, so you just need to write sequential code glued together with some high-level primitives (similar to LINQ in C#); see the sketch after this list.
Fault tolerance: What happens when your code screws up (some edge case not implemented)? When you run out of resources? When the network (to Kafka or other machines) temporarily breaks? A framework takes care of all these nasty little details.
Exactly-once processing: In case of failure, when you restart the application, most frameworks give you some form of exactly-once processing. How do you avoid losing data? How do you avoid duplicates when reprocessing old data?
Managed state: If your application needs to remember things for a certain time (calculating sums/averages or joining data), how do you ensure that the state is kept in sync with the data in case of failure?
Advanced features: time triggers, complex event processing (= pattern matching on events), writing to different sinks (Kafka for low latency, S3 for batch processing).
Flexibility of storage: if you want to try out a different storage system, it's much easier to change the source/sink in an application written with a framework.
Integration with deployment platforms: If you want to scale to several machines, it's usually much easier on a platform that already offers the related integration (at the time of writing, that is mostly Kubernetes). But all frameworks also support simple local setups where you just scale up on one (bigger) machine.
Low-level optimizations: When using newer engines with higher abstractions, the framework may generate code that is much more efficient than what you could implement yourself (with specific memory layouts or serialized data processing).
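To make the "high-level primitives" point concrete, here is a minimal sketch of a Flink job that counts events per key coming from Kafka. The broker address, topic name, group id and the comma-separated event format are all assumptions for illustration; the point is that the code reads sequentially while the framework handles the parallelism, checkpointing and state recovery discussed above.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")             // assumed broker
                .setTopics("events")                               // assumed topic
                .setGroupId("event-counter")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Sequential-looking code: count events per key in 10s windows.
        // The framework parallelizes this and keeps the per-key window
        // state consistent across failures.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           .map(line -> Tuple2.of(line.split(",")[0], 1L))         // key assumed in first CSV column
           .returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(t -> t.f0)
           .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
           .sum(1)
           .print();

        env.execute("event-count");
    }
}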
The big downsides are usually:
Complexity of the framework: you need to understand how the framework works from a user's perspective. However, you usually save time by not going into the details of writing a custom consumer/producer, so it's not as bad as it initially seems.
Flexibility in code: you cannot write arbitrary code anymore. Since the framework handles parallelism for you, you need to think in terms of chunks of data and adjust your algorithms accordingly. Standard SQL operations are usually directly supported though in one form or another.
Less control over resource usage: since the platform schedules tasks across machines, you may end up with unfortunate assignments, and the platform may give you too few options to fix them. Note, though, that in most applications bad resource utilization comes from data skew and suboptimal algorithms rather than from scheduling.
I am new to Alpakka and am considering using it for system integration. What would be the ideal way to maintain the state of the Akka Streams sources across application restarts?
For example, let's assume I'm using something as follows to continuously read some input data and dump it somewhere. What if it runs for, say, 4 hours and then the whole JVM crashes and restarts (e.g. k8s restarts my pod):
someSource
  .via(someTransformation)
  .via(someOtherTransformation)
  .to(someSink)
  .run()
I understand that if someSource is a Kafka source or Kinesis source or some other stateful source, they can keep track of their offset or checkpoint and restart more or less where they left off.
However, many other sources have no such concept, e.g. the Cassandra source, the File source or the RDBMS source. For example, if I shut down and restart the code provided in the RDBMS example, it will restart from the top each time.
Am I understanding correctly that there is no mechanism to address this out of the box, so that we have to handle it manually? I would have imagined that this feature would be so commonly desired that it would be handled somehow. If not, how do people typically address it? Do you use Akka Persistence to store cursors in a few actors? Or do you store the origin offset together with the output data and re-read it on startup?
Or am I looking at all this the wrong way?
It is a feature that is extremely commonly desired, for the reason you suggest.
However, the only generic, reliable way to implement this would be using Akka Persistence, which is probably the single heaviest dependency in the Akka ecosystem (e.g. it requires choosing a database). Beyond that, it's going to be somewhat source-specific. Some sources (e.g. Kafka, Kinesis) have a means of doing this that will fit the bill in nearly every scenario, but for the others, the details of how to store the state of consumption are something on which there will be a lot of differences of opinion. Akka and Alpakka in general tend to shy away from being opinionated.
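As an illustration of the last option from the question (storing the origin offset together with the output data), here is a minimal JDBC sketch. The output and source_cursor tables and the one-row-per-source cursor scheme are assumptions; the essential idea is that the output write and the cursor update commit in the same transaction, so a crash can never separate them.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CursorStore {
    private final Connection conn;

    public CursorStore(String jdbcUrl) throws Exception {
        conn = DriverManager.getConnection(jdbcUrl);
        conn.setAutoCommit(false);
    }

    // On startup: resume from the last committed cursor (0 if none).
    public long loadCursor() throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT last_id FROM source_cursor WHERE source = ?")) {
            ps.setString(1, "rdbms-source");
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }

    // Write one output row and advance the cursor atomically.
    public void write(String payload, long sourceId) throws Exception {
        try (PreparedStatement out = conn.prepareStatement(
                 "INSERT INTO output (payload) VALUES (?)");
             PreparedStatement cur = conn.prepareStatement(
                 "UPDATE source_cursor SET last_id = ? WHERE source = ?")) {
            out.setString(1, payload);
            out.executeUpdate();
            cur.setLong(1, sourceId);
            cur.setString(2, "rdbms-source");
            cur.executeUpdate();
            conn.commit(); // both changes land, or neither does
        }
    }
}

On restart, the source would then be built with a query along the lines of WHERE id > ? using the loaded cursor, so the stream resumes roughly where it left off.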
We are building an event sourced system at my company, relying on Kafka.
In order to be GDPR compliant, we need to be able to update the events.
Our idea is to use the compaction and tombstone capabilities.
This means that we cannot use the default partitioning strategy, as we want each message to have a unique key (in order to overwrite a specific message), but we still want events occurring on the same aggregate to end up on the same partition.
Which brings us to the creation of a custom partitioner (basically copying the "hash modulo" logic of the default partitioner, but using a different value than the message key to compute the hash); a sketch follows below.
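Here is a minimal sketch of such a partitioner in Java. The key format "<aggregateId>:<eventId>" is a hypothetical convention (the full key stays unique per message for compaction); the sketch reuses Kafka's own murmur2 utility so the placement matches what the default partitioner would do for a key equal to the aggregate id.

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class AggregatePartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Partition on the aggregate id only, not on the full unique key.
        String aggregateId = ((String) key).split(":", 2)[0];
        byte[] bytes = aggregateId.getBytes(StandardCharsets.UTF_8);
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Same "hash modulo" scheme as the default partitioner.
        return Utils.toPositive(Utils.murmur2(bytes)) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

The Java producers would register it via the partitioner.class config; the PHP and Python producers would need a port of the same murmur2-and-modulo logic.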
The issue is that we're evolving in a polyglot environment (we have PHP, Python and Java/Kotlin services publishing and consuming events).
We want to ensure that all these services will produce messages to the same partition given a specific partition key (in case different services publish events to the same topic).
Our main idea was to use a common hashing algorithm, but it is hard to find one with both strong distribution guarantees and good stability (not just part of an experimental lib).
PHP natively supports a wide range of hashing algorithms, but the same support is hard to find in the other languages.
As Kafka's default partitioner relies on murmur2, we started looking in that direction as well. Unfortunately, it is not natively supported by PHP (although some implementations exist). Furthermore, this algorithm uses a seed (Kafka hard-codes its own), which means that we would need to use the exact same seed in all our publisher services, which is starting to make the approach look quite complex.
However, we could be looking at the design from the wrong angle. Sharing event-store write capabilities across polyglot services might not be a good idea, and each service could have its own partitioning logic as long as it ensures the "one partition per aggregate" requirement. The thing is that we have to think about this ahead of time, because no technical safeguard will prevent some future service from publishing on a "shared" event stream (and not using the exact same partitioning logic will have a huge impact when that happens).
Does anyone have experience with building an event store on Kafka in a polyglot environment who could enlighten us on this specific topic, please?
http://www.adaptivecomputing.com/products/open-source/torque/
https://research.cs.wisc.edu/htcondor/
I am looking for a program to perform distributed computing (no parallel computing needed though) which has:
a scheduler
queue management (FIFO, or preferably something more advanced)
a good statistics report
ability to run on a heterogeneous cluster (a set of machines with different characteristics such as cpu and memory)
and, very important: good responsiveness (a few seconds maximum between the triggering of a task and the actual start of its execution; I have heard that this may be tricky to achieve with HTCondor and TORQUE? What about Apache Mesos?)
There is a quite large Wikipedia page with comparisons, but you will hardly find large differences. My guess would be that most things could theoretically be done in either framework. The things you list all depend on perspective (people commonly write their own sophisticated statistics from HTCondor logs, for example). Regarding responsiveness: HTCondor works fine for scheduling interactive notebooks if there are enough resources for the workers to pick up the job. A few seconds is often no problem, but there are hardly any guarantees. These are high-throughput systems, not low-latency systems. You should preallocate workers and scale them up and down if you care about latency (here, support for other frameworks on top helps much more than native latency).
I'll try my best to highlight the main focus areas of each project from my perspective, as they matter for a practical decision:
Target audience
Mesos:
PaaS/IaaS targeted at running other schedulers (you can run TORQUE on top of Mesos)
particularly interop with big data frameworks such as Spark & Kafka
vs.
Both HTCondor & TORQUE:
fair-share batch processing particularly in scientific clusters (High Throughput Computing)
Ecosystem
Mesos:
Apache open source project with community
vs.
HTCondor:
Open source, maintained by UW-Madison, with a classical user mailing list
vs.
TORQUE:
Proprietary, with commercial support
Ease of use
(this is partly about statistics, but more about dashboard style)
Mesos & TORQUE:
Web UI
integrations with other frameworks are commonly available (for TORQUE, look for PBS)
HTCondor:
new, still-developing REST and Python interfaces, but no common GUI
lagging behind a tiny bit in framework support (R batchtools; lately it has gained Dask support)
I am new to message brokers like RabbitMQ, which can be used to create task/message queues for a scheduling system like Celery.
Now, here is the question:
I can create a table in PostgreSQL that new tasks are appended to and that a consumer program like Celery reads from.
Why on earth would I want to set up a whole new piece of technology for this, like RabbitMQ?
Now, I believe scaling cannot be the answer, since a database like PostgreSQL can work in a distributed environment.
I googled for the problems a database poses for this particular use case, and I found:
polling keeps the database busy and low performing
locking of the table -> again low performing
millions of rows of tasks -> again, polling is low performing
Now, how does RabbitMQ or any other message broker like that solve these problems?
Also, I found out that it follows the AMQP protocol. What's so great about that?
Can Redis also be used as a message broker? I find it more analogous to Memcached than RabbitMQ.
Please shed some light on this!
Rabbit's queues reside in memory and will therefore be much faster than implementing this in a database. A (good) dedicated message queue should also provide essential queuing-related features such as throttling/flow control and the ability to choose different routing algorithms, to name a couple (Rabbit provides these and more). Depending on the size of your project, you may also want the message-passing component separate from your database, so that if one component experiences heavy load, it need not hinder the other's operation.
As for the problems you mentioned:
polling keeping the database busy and low performing: Using RabbitMQ, producers can push updates to consumers, which is far more performant than polling. Data is simply sent to the consumer when it needs to be, eliminating wasteful checks (see the sketch after this list).
locking of the table -> again low performing: There is no table to lock :P
millions of rows of tasks -> again, polling is low performing: As mentioned above, RabbitMQ operates faster because it resides in RAM, and it provides flow control. If needed, it can also use the disk to temporarily store messages when it runs out of RAM. Since 2.0, Rabbit has significantly improved its RAM usage. Clustering options are also available.
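To illustrate the push model, here is a minimal consumer sketch with the RabbitMQ Java client (host, queue name and prefetch value are assumptions): the broker invokes the callback as messages arrive, so there is no polling loop, and basicQos provides the flow control mentioned above.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class TaskConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare("tasks", true, false, false, null); // durable queue
        channel.basicQos(10); // flow control: at most 10 unacked messages

        // Push model: the broker calls this when a message arrives.
        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            System.out.println("processing: " + body);
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("tasks", false, onMessage, consumerTag -> {});
    }
}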
In regards to AMQP, I would say a really cool feature is the "exchange" and the ability for it to route to other exchanges. This gives you more flexibility and enables you to create a wide array of elaborate routing topologies, which can come in very handy when scaling. For a good example, see:
http://blog.springsource.org/2011/04/01/routing-topologies-for-performance-and-scalability-with-rabbitmq/
Finally, in regards to Redis: yes, it can be used as a message broker, and can do well. However, RabbitMQ has more message-queuing features than Redis, as RabbitMQ was built from the ground up to be a full-featured, enterprise-level, dedicated message queue. Redis, on the other hand, was primarily created to be an in-memory key-value store (though it does much more than that now; it's even referred to as a Swiss Army knife). Still, I've read/heard of many people achieving good results with Redis for smaller projects, but haven't heard much about it in larger applications.
Here is an example of Redis being used in a long-polling chat implementation: http://eflorenzano.com/blog/2011/02/16/technology-behind-convore/
PostgreSQL 9.5
PostgreSQL 9.5 incorporates SELECT ... FOR UPDATE SKIP LOCKED. This makes implementing working queuing systems a lot simpler and easier. You may no longer require an external queueing system, since it's now simple to fetch n rows that no other session has locked and keep them locked until you commit confirmation that the work is done (see the sketch below). It even works with two-phase transactions for when external coordination is required.
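A minimal worker sketch over JDBC, assuming a hypothetical tasks(id, payload) table: each worker grabs up to ten rows nobody else has locked, concurrent workers running the same query skip those rows instead of blocking, and the locks are released on commit.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueueWorker {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/queue")) { // assumed DSN
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM tasks " +
                    "ORDER BY id LIMIT 10 FOR UPDATE SKIP LOCKED")) {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    process(rs.getLong("id"), rs.getString("payload"));
                }
            }
            // Delete or mark the processed rows here, then:
            conn.commit(); // confirms the work and releases the locks
        }
    }

    static void process(long id, String payload) {
        System.out.println("task " + id + ": " + payload);
    }
}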
External queueing systems remain useful, providing canned functionality, proven performance, integration with other systems, options for horizontal scaling and federation, etc. Nonetheless, for simple cases you don't really need them anymore.
Older versions
You don't need such tools, but using one may make life easier. Doing queueing in the database looks easy, but you'll discover in practice that high-performance, reliable concurrent queuing is really hard to do right in a relational database.
That's why tools like PGQ exist.
You can get rid of polling in PostgreSQL by using LISTEN and NOTIFY (see the sketch below), but that won't solve the problem of reliably handing out entries off the top of the queue to exactly one consumer while preserving highly concurrent operation and not blocking inserts. All the simple and obvious solutions you think will solve that problem actually don't in the real world, and tend to degenerate into less efficient versions of single-worker queue fetching.
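For the polling part alone, here is a minimal sketch with the PostgreSQL JDBC driver (the new_task channel name and connection string are assumptions): producers issue NOTIFY new_task in the same transaction that inserts a task, and the worker blocks on the notification instead of re-querying the table. As said above, you still need a separate, reliable hand-out mechanism on top of this.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class QueueListener {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/queue"); // assumed DSN
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN new_task"); // assumed channel name
        }
        PGConnection pg = conn.unwrap(PGConnection.class);
        while (true) {
            // Block for up to 10s waiting for a NOTIFY; no table polling.
            PGNotification[] notes = pg.getNotifications(10_000);
            if (notes != null) {
                // Tasks were announced: fetch and claim them here.
                System.out.println(notes.length + " new task(s) signalled");
            }
        }
    }
}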
If you don't need highly concurrent multi-worker queue fetches then using a single queue table in PostgreSQL is entirely reasonable.
Can a shared ready queue limit the scalability of a multiprocessor system?
Simply put: most definitely. Read on for some discussion.
Tuning a service is an art form, or at least requires benchmarking (and the space of things you would need to benchmark is huge). I believe that it depends on factors such as the following (this list is not exhaustive):
how much time an item picked up from the ready queue takes to process,
how many worker threads there are,
how many producers there are, and how often they produce,
what type of waiting you use: spin-locks or kernel waits (the latter being slower).
So, if items are produced often, the number of threads is large, and the processing time is low, the data structure could be locked for large windows, causing thrashing.
Other factors include the data structure used and how long it is locked for. E.g., if you use a linked list to manage such a queue, the add and remove operations take constant time; a priority queue (heap) takes a few more operations on average when items are added. A sketch of the shared-queue setup follows below.
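A minimal sketch of the shared-ready-queue setup under discussion (thread count and workload are arbitrary choices): every producer and consumer passes through the single lock inside the queue, so with many workers and short tasks, that lock becomes the bottleneck.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedReadyQueue {
    public static void main(String[] args) throws InterruptedException {
        // One bounded queue shared by all workers; ArrayBlockingQueue
        // guards both ends with a single lock, so every put and take
        // contends on it.
        BlockingQueue<Runnable> readyQueue = new ArrayBlockingQueue<>(65536);

        int workers = Runtime.getRuntime().availableProcessors();
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        readyQueue.take().run(); // contended dequeue
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + i);
            t.setDaemon(true);
            t.start();
        }

        // Producer: with very short tasks, workers spend a growing share
        // of their time acquiring the queue lock rather than doing work.
        for (int i = 0; i < 1_000_000; i++) {
            final int n = i;
            readyQueue.put(() -> Math.sqrt(n));
        }
    }
}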
If your system is for business processing, you could take this question out of the picture by using either:
a process-based architecture, spawning multiple producer/consumer processes and using the file system for communication, or
a non-preemptive, cooperative threading language such as Stackless Python, Lua or Erlang.
Also note: synchronization primitives cause inter-processor cache-coherence traffic, which is not good, and should therefore be used sparingly.
The discussion could go on to fill a Ph.D. dissertation :D
A per-CPU ready queue is a natural choice for the data structure. This is because most operating systems will try to keep a process on the same CPU, for many reasons you can google for. What does that imply? If a thread is ready and another CPU is idling, the OS will not quickly migrate the thread to that CPU; load balancing only kicks in over the long run.
Had the situation been different, that is, had keeping thread-CPU affinity not been a design goal and had thread migration been frequent, then keeping separate per-CPU run queues would be costly.
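As a user-space analogue of per-CPU run queues, the JDK's ForkJoinPool gives each worker thread its own deque and lets idle workers steal from busy ones, which is roughly the "load balancing kicks in over the long run" behaviour described above. A minimal sketch (task count and workload are arbitrary):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class PerWorkerQueues {
    public static void main(String[] args) throws InterruptedException {
        // Each worker owns its own deque; idle workers steal from others
        // instead of all contending on one shared lock.
        ForkJoinPool pool = new ForkJoinPool(
                Runtime.getRuntime().availableProcessors());

        for (int i = 0; i < 1_000_000; i++) {
            final int n = i;
            pool.execute(() -> Math.sqrt(n)); // cheap task
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}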