I have an application with some producer threads that need an input queue and a buffer. The buffer has an observer that empties it once it reaches a certain limit and puts batches on another queue, Queue2. Queue2 is then read by a consumer, which writes the batches.
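To make the setup concrete, here is a minimal sketch of the shape I mean, using plain java.util.concurrent queues (all names here are made up for illustration):

import java.util.concurrent.LinkedBlockingQueue

// Sketch only; all names are made up for illustration.
object Pipeline {
  val inputQueue = new LinkedBlockingQueue[String]()       // filled by the producer threads
  val queue2     = new LinkedBlockingQueue[List[String]]() // batches for the consumer

  // "Buffer with observer": flush a batch to queue2 once the limit is reached.
  class Buffer(limit: Int) {
    private var items = List.empty[String]
    def add(item: String): Unit = synchronized {
      items = item :: items
      if (items.size >= limit) { queue2.put(items.reverse); items = Nil }
    }
  }

  // The part I'm asking about: how do Runnables like this one get hold of
  // the queues and the buffer, without constructor or setter wiring?
  class Worker(buffer: Buffer) extends Runnable {
    def run(): Unit = while (true) buffer.add(inputQueue.take())
  }
}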
What my program architecture currently lacks is a neat way to get these queues and the buffer into the Runnables without resorting to constructor magic or setters.
I cannot use @Inject, since the CDI context is not propagated to the subthreads. One could implement a custom context that does things similar to the application-scoped context, but currently I'm not willing to do that (if someone has a good implementation example, please let me know).
Since the JNDI tree is propagated to the subthreads, I thought about using it, but I fear serialization, which is definitely not what I want for the queues.
So my question is: are objects bound into the JNDI tree in WildFly 10 stored as references, or are they serialized?
I am new to Vert.x. I was reading about WriteStream in the documentation at https://vertx.io/docs/vertx-core/java/, which says the following:
setWriteQueueMaxSize: set the number of objects at which the write queue is
considered full, and the method writeQueueFull returns true. Note that,
when the write queue is considered full, if write is called the data will
still be accepted and queued. The actual number depends on the stream
implementation, for Buffer the size represents the actual number of bytes
written and not the number of buffers.
I'm especially interested in this statement: "Note that, when the write queue is considered full, if write is called the data will still be accepted and queued." It raises a few questions:
(1) Is there any size limitation when writing to a stream? For example, take the event bus: how many messages can be written to it? Does that depend on memory? Suppose I keep writing messages to the event bus and they are never consumed; will that cause an OutOfMemoryError in Java?
(2) If there is some limit on writing, where and how can I check the default queue size? For example, if I want to know the default queue size of the Vert.x KafkaProducer, where can I find it?
Any ideas are appreciated.
There are actually three separate questions there, but I'll give it a shot anyway:
Is there any size limitation when writing to a stream?
That depends on the stream implementation. But I've yet to see an implementation that actually uses writeQueueMaxSize.
How many messages can be written to the event bus? Does it depend on memory?
The EventBus is a special case. If nobody is consuming from it, it will simply drop messages, so in that trivial case an infinite number could be written. But if there are consumers and they're slow, then yes, eventually you'll run out of memory. The EventBus implementation currently doesn't do anything with writeQueueMaxSize.
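For streams that do honor the write queue, the usual pattern is to pause the source when writeQueueFull() returns true and resume it from drainHandler. A minimal sketch; source and sink are placeholders for any ReadStream/WriteStream pair:

import io.vertx.core.streams.{ReadStream, WriteStream}

// Generic backpressure sketch; `source` and `sink` stand for any stream pair.
def pump[T](source: ReadStream[T], sink: WriteStream[T]): Unit = {
  sink.setWriteQueueMaxSize(1000) // the "full" threshold; its unit depends on the implementation
  source.handler { (item: T) =>
    sink.write(item) // still accepted and queued, even when the queue is "full"
    if (sink.writeQueueFull()) {
      source.pause() // stop reading until the sink drains
      sink.drainHandler((_: Void) => source.resume())
    }
  }
}

Vert.x ships this pattern as io.vertx.core.streams.Pump (pipeTo in Vert.x 4), so in practice you'd use that rather than rolling your own.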
If there is some limitation for writing, where and how can I check default queue size? Such as, I want to know the default queue size of Vert.x KafkaProducer, where can I check it?
All Vert.x projects are open source, so you'll usually find them on GitHub (or another open-source host, but I can't actually remember any that are not on GitHub).
Particularly for Vert.x KafkaProducer, you can see the code here:
https://github.com/vert-x3/vertx-kafka-client/blob/1720d5a6792f70509fd7249a47ab50b930cee7a7/src/main/java/io/vertx/kafka/client/producer/impl/KafkaProducerImpl.java#L217
I've written a custom listener (deriving from SparkListener, etc.), and it works fine: I can intercept the metrics.
The question is about using DataFrames within the listener itself, since that assumes using the same SparkContext; however, as of 2.1.x, only one context is allowed per JVM.
Suppose I want to write some metrics to disk as JSON. Doing it at ApplicationEnd is not possible, only at the last jobEnd (if you have several jobs, the last one).
Is that possible/feasible?
I'm trying to measure the performance of jobs/stages/tasks, record that, and then analyze it programmatically. Maybe that is not the best way? The web UI is good, but I need to make things presentable.
I can force the creation of DataFrames upon the jobEnd event; however, a few errors are thrown (basically about not being able to propagate events to the listener), and in general I would like to avoid unnecessary manipulations. I want a clean set of measurements that I can record and write to disk.
SparkListeners should be as fast as possible, as a slow SparkListener blocks the others from receiving events. You could use separate threads to release the main event dispatcher thread, but you're still bound by the limitation of having a single SparkContext per JVM.
That limitation is, however, easy to overcome, since you can ask for the current SparkContext using SparkContext.getOrCreate.
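For example, a minimal sketch (the output path and the JSON shape are made up, and note the caveat below):

import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}

// Hypothetical: dump a small JSON record per job at jobEnd.
class MetricsListener extends SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    val sc = SparkContext.getOrCreate() // reuses the one-and-only context
    val json = s"""{"jobId": ${jobEnd.jobId}, "time": ${jobEnd.time}}"""
    // Keep this cheap: a slow listener delays every other listener.
    sc.parallelize(Seq(json), 1).saveAsTextFile(s"/tmp/metrics/job-${jobEnd.jobId}")
  }
}

// Registered once, e.g.: sc.addSparkListener(new MetricsListener)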
I would, however, not recommend this architecture. It puts too much pressure on the driver's JVM, which should rather "focus" on the application processing (not on collecting events, something it probably already does for the web UI and/or the Spark History Server).
I'd rather use Kafka or Cassandra or some other persistent storage to store the events, and have some other processing application consume them (just as the Spark History Server works).
I've been learning Play, and I'm getting most of the major concepts, but I'm struggling with what magic the platform is doing to enable all of these things.
In particular, let's say I have a controller that does something time-intensive. Now, I understand how, using Futures and asynchronous processing, I can make these things appear not to block; but if it's something resource-intensive, of course it must block somewhere in the end. Per the documentation:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.
This is the bit I'm not understanding: if some task that I'm running via a Future is handled in a separate thread pool, what magic is Scala/Play doing in the framework to coordinate these threads, such that whichever thread is listening on the HTTP socket waits long enough for all of the complex processing (DB loads, serialization to JSON, etc.) to happen on separate threads, and yet somehow gets back to the original blocking thread that has to send something back to the client for that request?
Disclaimer: this is a simplified answer to the general problem; I don't want to make it even more complex by going into Play and Akka internals.
One approach is to have a thread listening on the socket but not writing to it; let's call it A. A spawns a Future that contains, in itself, all the data needed for the computation. It is important not to confuse the thread that does the processing with the data being processed, as the data (memory) is shared by all threads (and sometimes needs explicit synchronization). The Future will (eventually) be processed by a thread B.
Now, does A need to block until B is done? It could (and in many cases that might be the right solution), but here we hardly want to stop listening on our socket. So no, it doesn't: A forgets everything about the message and carries on with its life.
When B is done, the Future might be mapped, or have a listener, that sends the proper response. B itself can send it, given the information it has about the original message! You just need to be careful synchronizing access to the socket, to avoid colliding with a possible thread C that might be processing a previous or later message in parallel.
Things can obviously get more complex, with threads spawning even more threads, queues where some threads write data and others read it, etc. (Play, being based on Akka, certainly includes a lot of message queues). But I hope to have convinced you that while this statement is correct:
You can’t magically turn synchronous IO into asynchronous by wrapping
it in a Future. If you can’t change the application’s architecture to
avoid blocking operations
Such a change in the application's architecture is certainly possible in many (most?) cases, and it has certainly been done inside Play.
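Concretely, in a Play (Scala) controller the pattern looks roughly like this; blockingWork() and the pool size are made-up placeholders:

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import play.api.mvc._

class JobsController(cc: ControllerComponents) extends AbstractController(cc) {
  // Dedicated pool for blocking work, sized for the expected concurrency.
  private val blockingEc =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))

  // Stand-in for a blocking operation (DB call, file IO, ...).
  private def blockingWork(): String = { Thread.sleep(1000); "done" }

  // Thread "A" returns the Future immediately; a thread "B" from blockingEc
  // completes it, and Play writes the response to the socket when it's done.
  def run = Action.async { _ =>
    Future(blockingWork())(blockingEc).map(Ok(_))(cc.executionContext)
  }
}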
I have an actor that, in its very essence, maintains a list of objects. It has three basic operations: add, update, and remove (sometimes remove is called from the add method, but that aside), and it works with a single collection. Obviously, that backing list is accessed concurrently, with add and remove calls constantly interleaving.
My first version used a ListBuffer, but I read somewhere that it's not meant for concurrent access. I haven't gotten concurrent-access exceptions, but I did notice that finding and removing objects from it does not always work, possibly due to concurrency.
I was halfway through rewriting it to use a var List, but removing items from Scala's default immutable List is a bit of a pain, and I doubt it's suitable for concurrent access.
So, basic question: What collection type should I use in a concurrent access situation, and how is it used?
(Perhaps secondary: Is an Actor actually a multithreaded entity, or is that just my wrong conception and does it process messages one at a time in a single thread?)
(Tertiary: In Scala, what collection type is best for inserts and random access (delete / update)?)
Edit: To the kind responders: Excuse my late reply, I'm making a nasty habit out of dumping a question on SO or mailing lists, then moving on to the next problem, forgetting the original one for the moment.
Take a look at the scala.collection.mutable.Synchronized* traits/classes.
The idea is that you mix the Synchronized traits into regular mutable collections to get synchronized versions of them.
For example:
import scala.collection.mutable._
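// mixing in a Synchronized* trait wraps the collection's operations in synchronized blocks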
val syncSet = new HashSet[Int] with SynchronizedSet[Int]
val syncArray = new ArrayBuffer[Int] with SynchronizedBuffer[Int]
You don't need to synchronize the actor's state. The whole aim of actors is to avoid tricky, error-prone, and hard-to-debug concurrent programming.
The actor model ensures that the actor consumes messages one by one and that you will never have two threads consuming messages for the same actor.
Scala's immutable collections are suitable for concurrent usage.
As for actors, a couple of things are guaranteed, as explained in the Akka documentation:
The actor send rule: the send of a message to an actor happens before the receive of that message by the same actor.
The actor subsequent processing rule: processing of one message happens before processing of the next message by the same actor.
You are not guaranteed that the same thread processes the next message, but you are guaranteed that the current message will finish processing before the next one starts, and also that at any given time, only one thread is executing the receive method.
So that takes care of a given Actor's persistent state. With regard to shared data, the best approach as I understand it is to use immutable data structures and lean on the Actor model as much as possible. That is, "do not communicate by sharing memory; share memory by communicating."
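Given those guarantees, the actor from the question can keep a plain var holding an immutable List and mutate it only from receive, with no synchronization at all. A minimal Akka sketch (the message names are made up):

import akka.actor.Actor

// Hypothetical messages for the add/remove operations from the question.
case class Add(item: String)
case class Remove(item: String)

class RegistryActor extends Actor {
  // Safe without locks: messages are processed one at a time, and this
  // state is never touched from outside the actor.
  private var items = List.empty[String]

  def receive = {
    case Add(item)    => items = item :: items
    case Remove(item) => items = items.filterNot(_ == item)
  }
}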
What collection type should I use in a concurrent access situation, and how is it used?
See @hbatista's answer.
Is an Actor actually a multithreaded entity, or is that just my wrong conception, and does it process messages one at a time in a single thread?
The second (though the thread on which messages are processed may change, so don't store anything in thread-local data). That's how the actor can maintain invariants on its state.
I'm coming from Java, where I'd submit Runnables to an ExecutorService backed by a thread pool. It's very clear in Java how to set limits on the size of the thread pool.
I'm interested in using Scala actors, but I'm unclear on how to limit concurrency.
Let's just say, hypothetically, that I'm creating a web service which accepts "jobs". A job is submitted via a POST request, and I want my service to enqueue the job and then immediately return 202 Accepted, i.e. the jobs are handled asynchronously.
If I'm using actors to process the jobs in the queue, how can I limit the number of simultaneous jobs that are processed?
I can think of a few different ways to approach this; I'm wondering if there's a community best practice, or at least, some clearly established approaches that are somewhat standard in the Scala world.
One approach I've thought of is having a single coordinator actor which manages the job queue and the job-processing actors; I suppose it could use a simple int field to track how many jobs are currently being processed. I'm sure there'd be some gotchas with that approach, however, such as making sure to decrement the count when an error occurs. That's why I'm wondering whether Scala already provides a simpler or more encapsulated approach to this.
BTW I tried to ask this question a while ago but I asked it badly.
Thanks!
I'd really encourage you to have a look at Akka, an alternative Actor implementation for Scala.
http://www.akkasource.org
Akka already has a JAX-RS[1] integration, and you could use that in concert with a LoadBalancer[2] to throttle how many actions can be done in parallel:
[1] http://doc.akkasource.org/rest
[2] http://github.com/jboner/akka/blob/master/akka-patterns/src/main/scala/Patterns.scala
You can override the system properties actors.maxPoolSize and actors.corePoolSize, which limit the size of the actor thread pool, and then throw as many jobs at the pool as your actors can handle. Why do you think you need to throttle your reactions?
You really have two problems here.
The first is keeping the thread pool used by actors under control. That can be done by setting the system property actors.maxPoolSize.
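For example (the scheduler reads these properties when it starts, so set them before the first actor runs; the values here are arbitrary):

// On the JVM command line:
//   scala -Dactors.corePoolSize=4 -Dactors.maxPoolSize=8 MyApp
// or programmatically, early in main():
System.setProperty("actors.corePoolSize", "4")
System.setProperty("actors.maxPoolSize", "8")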
The second is runaway growth in the number of tasks that have been submitted to the pool. You may or may not be concerned with this one; however, it is entirely possible to trigger failure conditions such as out-of-memory errors, and in some cases potentially subtler problems, by generating too many tasks too fast.
Each worker thread maintains a deque of tasks. The deque is implemented as an array that the worker thread will dynamically enlarge up to some maximum size. In 2.7.x the queue can grow quite large, and I've seen that trigger out-of-memory errors when combined with lots of concurrent threads. The maximum deque size is smaller in 2.8. The deque can also fill up.
Addressing this problem requires controlling how many tasks you generate, which probably means some sort of coordinator, as you've outlined. I've encountered this problem when the actors that initiate a kind of data-processing pipeline are much faster than the ones later in the pipeline. To control the process, I usually have the actors later in the chain ping back the actors earlier in the chain every X messages, and have the earlier ones stop after X messages and wait for the ping-back (a sketch follows below). You could also do it with a more centralized coordinator.
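A minimal sketch of that ping-back scheme, written here with Akka actors (the scala.actors version is analogous; the message names and the window size X are made up):

import akka.actor.{Actor, ActorRef, Props}

case class Work(n: Int) // a unit of pipeline work
case object Ack         // the "ping back", sent every X messages

// The slow, later stage: acknowledges every X processed messages.
class Slow(upstream: ActorRef, x: Int) extends Actor {
  private var seen = 0
  def receive = {
    case Work(_) =>
      // ... expensive processing here ...
      seen += 1
      if (seen % x == 0) upstream ! Ack
  }
}

// The fast, earlier stage: sends at most X unacknowledged messages.
class Fast(x: Int) extends Actor {
  private val slow = context.actorOf(Props(new Slow(self, x)), "slow")
  private var credit = x // messages we may still send before waiting
  private var next = 0
  def receive = {
    case "start" => pump()
    case Ack     => credit += x; pump()
  }
  private def pump(): Unit =
    while (credit > 0) { slow ! Work(next); next += 1; credit -= 1 }
}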