Are Stacks and Queues considered Linked Lists?

The time complexity of indexing into a linked list is O(n). So am I right to assume that's the same for stacks and queues?

Stacks and Queues are logical data structures that describe the operations you can perform on them.
Both a Stack and a Queue can be implemented by an underlying linked list or a contiguous memory array (or, TBH, any number of other implementations you could think of). The time complexity of any operation on a queue or a stack therefore depends on the underlying implementation.
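As a minimal illustration (all names here are hypothetical, not taken from any library), here is the same logical Stack interface in Scala backed by two different implementations:

    import scala.collection.mutable.ArrayBuffer

    trait Stack[A] {
      def push(a: A): Unit
      def pop(): Option[A]
    }

    // Linked-list-backed: push/pop are O(1), but there is no O(1) indexing.
    class ListStack[A] extends Stack[A] {
      private var items: List[A] = Nil
      def push(a: A): Unit = items = a :: items
      def pop(): Option[A] = items match {
        case head :: tail => items = tail; Some(head)
        case Nil          => None
      }
    }

    // Array-backed: push/pop are amortised O(1), and the underlying buffer
    // happens to support O(1) indexing, though the Stack interface never
    // promises that.
    class ArrayStack[A] extends Stack[A] {
      private val items = ArrayBuffer.empty[A]
      def push(a: A): Unit = items += a
      def pop(): Option[A] =
        if (items.isEmpty) None else Some(items.remove(items.length - 1))
    }

Both classes satisfy the same logical contract; only the complexity profile differs, which is exactly why "what is the time complexity of a stack?" has no single answer.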

Related

How heavy are Akka actors?

I am aware this is a very imprecise question and might be deemed inappropriate for Stack Overflow. Unfortunately, smaller applications (in terms of the number of actors) and 'tutorial-like' ones don't help me develop an intuition for the overhead of message dispatch, or for the sweet spot of granularity between a 'Scala object' and a 'CORBA object'.
While keeping the state of a conversation with a client, for example, almost certainly deserves an actor, in most real use cases it would involve conditional/parallel/alternative interactions modeled by many classes. This leaves a choice between treating actors as facades to quite complex services, similar to the justly retired EJB, or as akin to Smalltalk objects, firing messages at each other willy-nilly whenever communication can possibly be implemented asynchronously.
Apart from the overhead of message passing itself, there will also be overhead involved in lifecycle management, and I am wary of potential problems caused by chained restarts of whole subtrees of actors due to exceptions or other errors at their root.
For the sake of this question, we may assume that the vast majority of the communication happens within a single machine and that crossing the network is insignificant.
I am not sure what you mean by an "overhead of message passing itself".
When network/serialisation is not involved, the overhead is negligible: one side pushes a message onto a queue, the other reads it from that queue.
Akka claims that it can go as fast as 50 million messages per second on a single machine. This means that you wouldn't use actors just as façades for complex subsystems. You would rather model them as much smaller "working units". They can be more complex than Smalltalk objects when convenient. You could have, say, a KafkaConsumerActor which internally utilises other "normal" classes such as Connection, Configuration, etc.; these don't have to be Akka actors. But it is still small enough to be a simple working unit doing one simple thing (consuming a message and sending it somewhere).
50 million a second is really a lot.
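A hypothetical sketch of such a working unit (assuming classic Akka actors; Connection, Configuration and poll() are stand-ins, not a real Kafka API):

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    case class Configuration(host: String, topic: String)

    // A plain class, not an actor; the actor delegates the mechanics to it.
    class Connection(config: Configuration) {
      def poll(): Option[String] = Some(s"record-from-${config.topic}") // stand-in
    }

    object KafkaConsumerActor { case object Poll }

    class KafkaConsumerActor(config: Configuration, downstream: ActorRef) extends Actor {
      private val connection = new Connection(config) // owned by this actor alone

      def receive: Receive = {
        case KafkaConsumerActor.Poll =>
          connection.poll().foreach(downstream ! _) // one simple job: consume and forward
      }
    }

    object ConsumerDemo extends App {
      val system = ActorSystem("consumer-demo")
      val printer = system.actorOf(Props(new Actor {
        def receive: Receive = { case msg => println(msg) }
      }), "printer")
      val consumer = system.actorOf(
        Props(new KafkaConsumerActor(Configuration("localhost:9092", "events"), printer)),
        "consumer")
      (1 to 3).foreach(_ => consumer ! KafkaConsumerActor.Poll)
    }

The actor stays a small working unit: it consumes and forwards, and everything else lives in ordinary classes.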
The memory footprint is also extremely small. Akka itself claims that you can have ~2.5 million actors per 1 GB of heap. Compared to what a typical system does, that is, indeed, nothing.
As for lifecycle, creating an actor is not much heavier than creating a class instance and a mailbox, so I don't really expect it to be significant.
That said, you typically don't have many actors in your system that handle one message and die. Normally you spawn actors which live much longer. For example, an actor that calculates your mortgage repayments based on parameters you provide doesn't have any reason to die at all.
Also, Akka makes it very simple to use actor pools (of different kinds).
So performance here is very tweakable.
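For instance, a minimal round-robin pool (using Akka's routing module; Worker is a hypothetical actor):

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.routing.RoundRobinPool

    class Worker extends Actor {
      def receive: Receive = {
        case msg => println(s"${self.path.name} handling $msg")
      }
    }

    object PoolExample extends App {
      val system = ActorSystem("pool-demo")
      // Five Worker instances behind a single router reference.
      val router = system.actorOf(RoundRobinPool(5).props(Props[Worker]), "workers")
      (1 to 20).foreach(router ! _) // messages are dealt out round-robin
    }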
The last point is that you should consider Akka's overhead in context. For example, if your system is doing database queries, serving/performing HTTP requests, or doing significant IO of some sort, then the overhead of those activities probably makes the overhead of Akka so insignificant that you wouldn't even bother thinking about it. At ~50 million messages per second, a single 50 ms round trip to the DB is equivalent to the overhead of ~2.5 million Akka messages. Does it matter?
So can you find an edge-case scenario where Akka would force you to pay performance penalties? Probably. Akka is not a golden hammer (nothing is).
But with all of the above in mind, you should ask whether Akka really is the performance bottleneck in your specific context, or whether you are wasting time on micro-optimisation.

Concurrency overview

As Scala provides a great suite of tools to deal with concurrency (Akka, parallel collections, futures and so on), it also leaves me a bit puzzled. Is there some kind of guideline for when to use what? Some kind of best practices?
First of all, concurrency != parallelism. The latter can be employed for problems which you reason about in an essentially sequential manner, but which can be efficiently partitioned into chunks that can be processed independently (before being put together again at the end). For example, mapping and filtering a collection would be a scenario for parallel collections.
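A small sketch of that scenario (note: on Scala 2.13+ the .par conversion lives in the separate scala-parallel-collections module; on 2.12 and earlier it is built in):

    import scala.collection.parallel.CollectionConverters._

    object ParDemo extends App {
      val xs = (1 to 100000).toVector
      val result = xs.par
        .filter(_ % 2 == 0)     // each chunk is filtered independently...
        .map(x => x.toLong * x) // ...and mapped independently...
        .sum                    // ...then the partial results are combined
      println(result)
    }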
Some others have reasoned about actors versus futures. In short, actors are more OO in the sense that each actor can encapsulate its own internal state; they are more like black boxes. Also, actor concurrency is nondeterministic, whereas dataflow and futures are deterministic. Actors are a natural choice when you want to distribute tasks across multiple computers. Actors can accept multiple types of messages, whereas futures allow function composition over one specific type. (This is simplified, as Akka now has typed channels, which I guess makes it more composable.) Actors are suitable for services which wait for requests, whereas futures can be thought of as lazy answers.
If you have multiple concurrent threads, software transactional memory (STM) is also a useful abstraction. STM doesn't manage thread pools or concurrent tasks by itself, but when combined with them, it handles mutable state in a safe manner.
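A minimal sketch using the ScalaSTM library (the account names are illustrative): two Refs are updated in one atomic block, so no thread ever observes a half-finished transfer.

    import scala.concurrent.stm._

    object Account {
      val checking = Ref(100)
      val savings  = Ref(0)

      // Either both updates commit or neither does; conflicting transactions
      // are retried automatically by the STM runtime.
      def transfer(amount: Int): Unit = atomic { implicit txn =>
        checking() = checking() - amount
        savings()  = savings()  + amount
      }
    }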

Reactive services and scalability without Akka

I've read the Reactive Manifesto a couple of times and tried to wrap my head around all this reactive, async, non-blocking stuff. It's clear how to make scalable systems on top of actors, but will I get the same effect, in terms of scalability and asynchronous execution, if I actively use Scala's Future all over my code, so that every method either accepts or returns a Future? Would such a service be scalable and responsive? Let's say that in this question I'm not much interested in the event-driven and resilient parts of the service.
This answer reflects my experience using Akka in Scala and its Actor and Future types. I don't consider myself an expert yet, but I've built a few systems using these libraries and feel I am beginning to develop a sense for how to use them.
The choice of using an Actor vs. a Future is about the nature of the concurrency you require. Futures compose monadically and into DAG-structured graphs, making certain computational structures very elegant to express. But they have severe limitations. Basically, the computation performed concurrently within a Future must be self-contained (or at most reference only immutable external state), or you have not solved the problem of inter-thread interference, with all the attendant risks such as deadlock, race conditions, unpredictable behavior and data structure corruption.
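As a sketch of that monadic, DAG-shaped composition (fetchUser, fetchOrders and fetchDiscount are hypothetical self-contained computations):

    import scala.concurrent.{ExecutionContext, Future}
    import ExecutionContext.Implicits.global

    object FutureDag {
      // Self-contained computations: no shared mutable state.
      def fetchUser(id: Long): Future[String]         = Future(s"user-$id")
      def fetchOrders(user: String): Future[Int]      = Future(user.length)
      def fetchDiscount(user: String): Future[Double] = Future(0.1)

      def summary(id: Long): Future[String] =
        fetchUser(id).flatMap { user =>
          val orders   = fetchOrders(user)   // these two start concurrently and
          val discount = fetchDiscount(user) // depend on user, not on each other
          for (o <- orders; d <- discount)
            yield s"$user: $o orders, discount $d"
        }
    }

Each step is self-contained, so the whole graph composes without shared mutable state, which is exactly the regime where Futures shine.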
When your computation involves long-lived, mutable state, an Actor encapsulates that securely, preventing corruption and race conditions. On the other hand, actors are not composable, but they do provide a lot of flexibility in how you construct networks of interacting computations. This is true only provided you don't limit actor responses to always and only going back to the actor from which the request was received. If you only send responses to the requesting actor, you're limited to a tree-structured pattern of inter-actor interaction.
In any real, non-trivial system, you're very likely to use both formalisms, Future and Actor.

Process work in parallel with a non-thread-safe function in Scala

I have a lot of work (thousands of jobs) for a Scala application to process. Each piece of work is the file name of a 100 MB file. To process each file, I need to use an extractor object that is not thread safe (I can have multiple copies, but copies are expensive, and I should not make one per job). What is the best way to complete this work in parallel in Scala?
You can wrap your extractor in an actor and send each file name to the actor as a message. Since an actor instance processes only one message at a time, thread safety won't be an issue. If you want to use multiple extractors, just start multiple instances of the actor and balance between them (you could write another actor to act as a load balancer).
The extractor actor(s) can then send extracted files to other actors to do the rest of the processing in parallel.
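A sketch of this approach (assuming Akka; the Extractor class is a stand-in for the real non-thread-safe object):

    import akka.actor.{Actor, ActorSystem, Props}

    class Extractor { // expensive and not thread-safe; stand-in implementation
      def extract(fileName: String): String = s"extracted:$fileName"
    }

    class ExtractorActor extends Actor {
      private val extractor = new Extractor // one private copy per actor

      def receive: Receive = {
        case fileName: String =>
          val result = extractor.extract(fileName) // safe: only this actor touches it
          // In a real system you would forward `result` to the next actor in
          // the pipeline; here we just reply to whoever sent the file name.
          sender() ! result
      }
    }

    object ExtractDemo extends App {
      val system = ActorSystem("extract")
      val extractorActor = system.actorOf(Props[ExtractorActor], "extractor")
      Seq("a.dat", "b.dat", "c.dat").foreach(extractorActor ! _)
    }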
Don't make 1000 jobs; make 4x250 jobs (targeting 4 threads) and give one extractor to each batch. Within each batch, work sequentially. This might not be optimal parallelism-wise, since one batch might finish earlier than the others, but it is very easy to implement.
Probably the correct (but more complicated) solution would be to make a pool of extractors, from which jobs take an extractor and to which they return it after finishing.
I would make a thread pool where each thread has its own instance of the extractor class, and instantiate just as many of these threads as it takes to saturate the system (based on CPU usage, IO bandwidth, memory bandwidth, network bandwidth, contention for other shared resources, etc.). Then use a thread-safe work queue from which these threads can pull tasks, process them, and iterate until the queue is empty.
Mind you, there should be one or several libraries in just about any modern language that implement exactly this. In C++, it would be Intel's Threading Building Blocks. In Objective-C, it would be Grand Central Dispatch.
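On the JVM, a sketch of this per-thread-instance approach (Extractor is again a hypothetical stand-in):

    import java.util.concurrent.{Executors, TimeUnit}

    object ThreadPoolDemo extends App {
      class Extractor { def extract(fileName: String): String = s"extracted:$fileName" }

      // Size the pool to the machine; tune against CPU/IO saturation in practice.
      val threads = Runtime.getRuntime.availableProcessors
      val pool = Executors.newFixedThreadPool(threads)
      // One extractor per worker thread, so instances are never shared.
      val localExtractor = ThreadLocal.withInitial(() => new Extractor)

      (1 to 1000).map(i => s"file-$i.dat").foreach { fileName =>
        pool.submit(new Runnable {
          def run(): Unit = {
            val result = localExtractor.get().extract(fileName) // this thread's copy
            // ... further processing of `result` ...
          }
        })
      }
      pool.shutdown()
      pool.awaitTermination(1, TimeUnit.MINUTES)
    }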
It depends: what is the relative amount of CPU consumed by the extractor for each job?
If it is very small, you have a classic single-producer/multiple-consumer problem, for which you can find lots of solutions in different languages. For Scala, if you are reluctant to start using actors, you can still use the Java API (Runnable, Executors and BlockingQueue are quite good); see the sketch after the next paragraph.
If it is a substantial amount (more than 10%), your app will never scale with a multithreaded model (see Amdahl's law). You may prefer to run several processes (several JVMs) to obtain thread safety, and thus eliminate the non-sequential part.
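For the small-CPU case, here is a sketch of the producer/consumer shape with the Java API mentioned above (the poison-pill string and the Extractor class are illustrative):

    import java.util.concurrent.ArrayBlockingQueue

    object QueueDemo extends App {
      class Extractor { def extract(fileName: String): String = s"extracted:$fileName" }

      val Poison = "<done>" // tells a consumer to stop
      val queue = new ArrayBlockingQueue[String](100) // bounded: producer blocks when full
      val consumerCount = 4

      val consumers = (1 to consumerCount).map { _ =>
        val t = new Thread(() => {
          val extractor = new Extractor // one private copy per consumer thread
          var fileName = queue.take()
          while (fileName != Poison) {
            extractor.extract(fileName)
            fileName = queue.take()
          }
        })
        t.start()
        t
      }

      (1 to 1000).foreach(i => queue.put(s"file-$i.dat")) // single producer
      (1 to consumerCount).foreach(_ => queue.put(Poison)) // one pill per consumer
      consumers.foreach(_.join())
    }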
First question: how quickly does the work need to be completed?
Second question: would this work be isolated to a single physical box, or what are your upper bounds on computational resources?
Third question: does the work that needs doing on each individual "job" require blocking, and is it serialised, or could it be partitioned into parallel packets of work?
Maybe think about a distributed model whereby you scale by designing, from the first instance, with a mind to pushing out across multiple nodes (actors, remote refs, all that crap). Try to keep your logic simple and easy, so serialised. Don't just think in terms of a single box.
Most answers here seem to dwell on the intricacies of spawning thread pools and executors and all that stuff, which is fine, but be sure you have a handle on the real problem first, before you start complicating your life with lots of thinking about how you manage the synchronisation logic.
If a problem can be decomposed, then decompose it; don't overcomplicate it for the sake of doing so. Proper decomposition leads to better-engineered code and fewer sleepless nights.

How to limit concurrency when using actors in Scala?

I'm coming from Java, where I'd submit Runnables to an ExecutorService backed by a thread pool. It's very clear in Java how to set limits on the size of the thread pool.
I'm interested in using Scala actors, but I'm unclear on how to limit concurrency.
Let's just say, hypothetically, that I'm creating a web service which accepts "jobs". A job is submitted with a POST request, and I want my service to enqueue the job and then immediately return 202 Accepted, i.e. the jobs are handled asynchronously.
If I'm using actors to process the jobs in the queue, how can I limit the number of simultaneous jobs that are processed?
I can think of a few different ways to approach this; I'm wondering if there's a community best practice, or at least, some clearly established approaches that are somewhat standard in the Scala world.
One approach I've thought of is having a single coordinator actor which would manage the job queue and the job-processing actors; I suppose it could use a simple int field to track how many jobs are currently being processed. I'm sure there'd be some gotchas with that approach, however, such as making sure to decrement the count when an error occurs. That's why I'm wondering whether Scala already provides a simpler or more encapsulated approach to this.
BTW I tried to ask this question a while ago but I asked it badly.
Thanks!
I'd really encourage you to have a look at Akka, an alternative Actor implementation for Scala.
http://www.akkasource.org
Akka already has JAX-RS [1] integration, and you could use that in concert with a LoadBalancer [2] to throttle how many actions can be performed in parallel:
[1] http://doc.akkasource.org/rest
[2] http://github.com/jboner/akka/blob/master/akka-patterns/src/main/scala/Patterns.scala
You can override the system properties actors.maxPoolSize and actors.corePoolSize, which limit the size of the actor thread pool, and then throw as many jobs at the pool as your actors can handle. Why do you think you need to throttle your reactions?
You really have two problems here.
The first is keeping the thread pool used by actors under control. That can be done by setting the system property actors.maxPoolSize.
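For example (a sketch assuming the old scala.actors library this question is about; the properties must be set before the first actor starts):

    object PoolSizeDemo extends App {
      // Cap the actor runtime's thread pool before any actor is created.
      System.setProperty("actors.corePoolSize", "4")
      System.setProperty("actors.maxPoolSize", "8")

      import scala.actors.Actor._
      val worker = actor {
        loop {
          react {
            case job: String => println(s"processing $job")
          }
        }
      }
      worker ! "job-1"
    }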
The second is runaway growth in the number of tasks that have been submitted to the pool. You may or may not be concerned with this one; however, by generating too many tasks too fast it is fully possible to trigger failure conditions such as out-of-memory errors, and in some cases potentially more subtle problems.
Each worker thread maintains a deque of tasks. The deque is implemented as an array that the worker thread will dynamically enlarge up to some maximum size. In 2.7.x the deque can grow quite large, and I've seen that trigger out-of-memory errors when combined with lots of concurrent threads. The maximum deque size is smaller in 2.8. The deque can also fill up.
Addressing this problem requires that you control how many tasks you generate, which probably means some sort of coordinator, as you've outlined. I've encountered this problem when the actors that initiate a kind of data-processing pipeline are much faster than the ones later in the pipeline. To control the process, I usually have the actors later in the chain ping back the actors earlier in the chain every X messages, and have the ones earlier in the chain stop after X messages and wait for the ping-back. You could also do it with a more centralized coordinator.
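A sketch of that ping-back pattern (written here with Akka for brevity; Window, Ack and the actor names are all illustrative):

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    case object Ack
    object Flow { val Window = 100 }

    class Downstream extends Actor {
      private var seen = 0
      def receive: Receive = {
        case msg =>
          // ... process msg (the slow end of the pipeline) ...
          seen += 1
          if (seen % Flow.Window == 0) sender() ! Ack // ping the producer back
      }
    }

    class Upstream(jobs: Iterator[String], downstream: ActorRef) extends Actor {
      override def preStart(): Unit = sendBatch()
      private def sendBatch(): Unit =
        (1 to Flow.Window).foreach(_ => if (jobs.hasNext) downstream ! jobs.next())
      def receive: Receive = {
        case Ack => sendBatch() // downstream caught up; release the next batch
      }
    }

    object FlowDemo extends App {
      val system = ActorSystem("flow")
      val down = system.actorOf(Props[Downstream], "down")
      val jobs = (1 to 10000).iterator.map(i => s"job-$i")
      system.actorOf(Props(new Upstream(jobs, down)), "up")
    }

The number of messages in flight never exceeds Window, so the fast producer can't flood the worker deques no matter how far ahead of the pipeline it gets.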