Event-driven or poll-based embedded MCU system architecture?

I have prior experience in writing both event-driven and poll-based embedded systems (for tiny MCUs with no preemptive OS).
In an event-driven system, tasks usually receive events (messages) on a queue and handle them in turn.
In a poll-based system, tasks poll status at a certain interval and respond to changes.
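Roughly, the difference looks like this in C. This is only a sketch with made-up queue_/sensor_ helpers, not any particular framework:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for application code. */
typedef struct { uint8_t sig; uint8_t data; } event_t;
bool queue_get(event_t *out);        /* pop the next posted event, if any */
void handle_event(const event_t *e); /* run-to-completion handler         */
bool sensor_changed(void);
void handle_sensor(void);

/* Event-driven: the task does nothing until something is posted to it. */
void event_main(void)
{
    for (;;) {
        event_t e;
        if (queue_get(&e))
            handle_event(&e);
    }
}

/* Polling: every pass re-checks status and reacts to change. */
void polled_main(void)
{
    for (;;) {
        if (sensor_changed())
            handle_sensor();
        /* ...poll other peripherals here... */
    }
}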
Which architecture do you prefer? Can both co-exist?
UPDATE: POINTS MADE
POLL BASED
- Tight coupling related to timing aspects (#Lundin)
* Can co-exist alongside an event system using queues (#embedded.kyle)
* Fine for smaller programs (#Lundin)
EVENT BASED
+ More flexible system in the long run (#embedded.kyle)
- Adding an RTOS adds complexity (#Lundin)
* Small programs = state-machine controlled (#Lundin)
* Can be implemented using queues and a "super-loop" (inside controller/main) (#embedded.kyle)
* The only true "events" are hardware interrupts (#Lundin)
RELATED QUESTIONS
* Looking for a comparison of different scheduling algorithms for a Finite State Machine (#embedded.kyle)
RELATED INFO
* "Prefer Using Active Objects Instead of Naked Threads" (#Miro)
http://www.drdobbs.com/parallel/prefer-using-active-objects-instead-of-n/225700095
* "Use Threads Correctly = Isolation + Asynchronous Messages" (#Miro)
http://www.drdobbs.com/parallel/use-threads-correctly-isolation-asynch/215900465

There is really no such thing as "event-driven" on a bare-bones MCU platform, despite what the buzzword-spitters are trying to tell you. The only kind of true events you can receive are hardware interrupts.
Depending on the nature of the application and its real time requirements, interrupts may or may not be suitable. Generally, it is far easier to achieve deterministic real time with a polling system. However, systems relying solely on polling are very hard to maintain, because you get tight coupling between all timing aspects.
Suppose you try to start up an LCD, which is slow. Instead of polling some timer repeatedly while burning CPU cycles in an empty loop, you would perhaps decide to receive some data over a bus in the meantime. And then you want to print the received data on the LCD. Such a design has created a tight coupling between the LCD startup time and the serial bus, and another tight coupling between the serial bus and the printing of data. From an object-oriented point of view these things are not related to each other at all. If you were to speed up the serial bus at some point in the future, then suddenly you could encounter LCD printing bugs, because the LCD has not finished starting up when you try to print on it.
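A minimal sketch of that coupling, with hypothetical lcd_/uart_ helpers (not a vendor API):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical lcd_/uart_ helpers, not a vendor API. */
void    lcd_start(void);                       /* begins the slow power-up */
void    lcd_print(const uint8_t *buf, size_t len);
bool    uart_rx_ready(void);
uint8_t uart_read_byte(void);

void startup_and_print(void)
{
    lcd_start();                               /* LCD power-up begins here */

    uint8_t buf[32];
    size_t  n = 0;

    /* "Meanwhile", receive data over the bus. The hidden assumption is that
       filling the buffer takes at least as long as the LCD start-up. */
    while (n < sizeof buf) {
        if (uart_rx_ready())
            buf[n++] = uart_read_byte();
    }

    lcd_print(buf, n);   /* breaks if the bus ever gets faster than the LCD */
}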
In a small program, it is perfectly fine to use polling like in the above example. But if the program has potential of growing, polling will make it very complex and the tight coupling will ultimately lead to many strange and fatal bugs.
On the other hand, multi-threading and RTOS adds quite a lot of extra complexity which in turn can lead to bugs as well. Where to draw the line isn't simple to determine.
From personal experience I'd say that any program smaller than 20-30k LOC will not benefit from scheduling and multitasking beyond simple state machines. If the program gets larger than that, I'd consider a multitasking RTOS.
Also, low-end MCUs (8- and 16-bitters) are far from suitable for running an OS. If you find that you need an OS to handle complexity on an 8- or 16-bit platform, you probably picked the wrong MCU to begin with. I'd be sceptical of any attempt to introduce an OS on anything smaller than a 32-bitter.

Actually, event-driven programming and threads can be combined and the resulting pattern is widely known as "active objects" or "actors".
Active objects (actors) are encapsulated, event-driven state machines, which communicate with one another asynchronously by posting events to each other. Active objects process all events in their own thread of execution (at least conceptually, if a cooperative scheduler is used), so they avoid by design most concurrency hazards.
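A compressed sketch of the idea in plain C follows; the names are invented and this is not the QP API, just the shape of an active object: a private queue plus a run-to-completion dispatch function.

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t sig; uint8_t payload; } evt_t;

typedef struct ao ao_t;
struct ao {
    evt_t   queue[8];                               /* private event queue */
    volatile uint8_t head, tail;
    void  (*dispatch)(ao_t *self, const evt_t *e);  /* the state machine   */
};

/* Post an event asynchronously (e.g. from an ISR or another active object).
   A real port would guard this against concurrent producers. */
void ao_post(ao_t *ao, evt_t e)
{
    ao->queue[ao->head++ % 8u] = e;   /* wraps cleanly since 8 divides 256 */
}

/* Cooperative "scheduler": each object processes its events to completion,
   so handlers never block and objects never share state directly. */
void ao_run(ao_t *objs[], size_t n)
{
    for (;;) {
        for (size_t i = 0; i < n; ++i) {
            ao_t *ao = objs[i];
            while (ao->tail != ao->head) {
                evt_t e = ao->queue[ao->tail++ % 8u];
                ao->dispatch(ao, &e);
            }
        }
    }
}

The point of the shape: each object owns its data, and the only way in is ao_post(), which is what avoids most concurrency hazards by design.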
Actors and active objects are all the rage (again) in general-purpose computing (you can search for Erlang, Scala, Akka). Herb Sutter has written a couple of good articles that explain the "active object" pattern: "Prefer Using Active Objects Instead of Naked Threads" (http://www.drdobbs.com/parallel/prefer-using-active-objects-instead-of-n/225700095) and "Use Threads Correctly = Isolation + Asynchronous Messages" (http://www.drdobbs.com/parallel/use-threads-correctly-isolation-asynch/215900465)
Here is what Herb says in the first of these articles:
"Using raw threads directly is trouble for a number of reasons ...
Active objects dramatically improve our ability to reason about our thread's code and operation by giving us higher-level abstractions and idioms that raise the semantic level of our program and let us express our intent more directly. As with all good patterns, we also get better vocabulary to talk about our design. Note that active objects aren't a novelty: UML and various libraries have provided support for active classes"
So, all this is really not new. But what's perhaps less known, especially in the embedded systems community, is that active objects are not only fully applicable to embedded systems, they are actually a perfect match for embedded, and they are lighter than a traditional RTOS.
I've been using the event-driven active objects for over a decade now and have created the QP family of active object frameworks for embedded systems (see http://www.state-machine.com/). I would never go back to the polling "superloop" or the raw RTOS.

I prefer whichever architecture is best suited to the application at hand.
Both can co-exist in a multilevel queue architecture. One queue works on a poll basis, running in the main loop, while another, most likely tasked with higher-priority events, works by using interrupt-based preemption.
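One way to read that in code, with hypothetical queue helpers and no particular RTOS: events queued from ISRs are always drained before the polled, lower-priority work.

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t sig; uint8_t data; } event_t;

/* Hypothetical helpers: hi_q is filled from ISRs, lo_q from polling. */
bool hi_q_get(event_t *e);
bool lo_q_get(event_t *e);
void handle(const event_t *e);
void poll_peripherals(void);          /* posts into lo_q when status changes */

void main_loop(void)
{
    for (;;) {
        event_t e;

        /* Interrupt-originated events first; truly time-critical handling
           can also run directly in the ISR itself. */
        while (hi_q_get(&e))
            handle(&e);

        /* Then one round of polled, lower-priority work. */
        poll_peripherals();
        if (lo_q_get(&e))
            handle(&e);
    }
}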
See my answer to this SO question for a more detailed explanation and comparison of the different scheduling algorithms.

Related

The reason(s)/benefit(s) to use realtime operating system instead of while-loop on MCU

I'm working on a wheeled-robot platform. My team is implementing some algorithms on an MCU to:
- keep getting sensor readings (sonar array, IR array, motor encoders, IMU)
- receive user commands (via a serial port connected to a tablet)
- control actuators (motors) to execute user commands
- keep sending sensor readings to the tablet for more complicated algorithms
We currently implement everything inside a global while-loop, while I know that most other use cases do these very same things with a real-time operating system.
Please tell me the benefits of and reasons for using a real-time OS instead of a simple while-loop.
Thanks.
The RTOS will provide priority-based preemption. If your code is off parsing a serial command and executing it, it can't respond to anything else until it returns to your beastly loop. An RTOS will provide the abstractions needed for an instant context switch based on an interrupt event. Otherwise the worst-case latency of an event response is going to be that of the longest possible excursion out of the main loop, and sometimes you really do need long-running processes, for example to update an LCD panel or respond to USB device enumeration. Preemption permits you to go off and do these things safe in the knowledge that a 16-bit timer running at the CPU clock isn't going to roll over several times before it's done. While a loop is sufficient for simple control jobs, the problem with starting that way is that when you get into something like USB device enumeration it's no longer practical and will need a full rewrite. By starting with a preemptive framework like an RTOS provides, you have a lot more future flexibility. But there's definitely a bit more up-front work, and definitely a learning curve.
"Real Time" OS ensures your task periodicity. If you want to read sensors data precisely at every 100msec, simple while loop will not guarantee that. On other hand, RTOS can easily take care of that.
RTOS gives you predictibility. An operation will be executed at given time and it will not be missed.
RTOS gives you Semaphores/Mutex so that your memory will not be corrupted or multiple sources will not access buffers.
RTOS provides message queues which can be useful for communication between tasks.
Yes, you can implement all these features in While loop, but then that's the advantage! You get everything ready and tested.
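If the RTOS were, say, FreeRTOS, the periodicity point looks roughly like this; the task and sensor functions are invented for the robot example:

#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical functions for the robot example. */
void read_all_sensors(void);
void send_readings_to_tablet(void);

/* Runs every 100 ms regardless of what lower-priority tasks are doing,
   because the scheduler pre-empts them when this task becomes ready. */
void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    TickType_t last_wake = xTaskGetTickCount();
    for (;;) {
        read_all_sensors();
        send_readings_to_tablet();
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(100));
    }
}

The command parser and the motor control would then live in their own tasks, created with xTaskCreate() at different priorities, instead of sharing one loop.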
If your while loop works (i.e. it fulfills the real-time requirements of your system), and it's robust, maintainable, and somewhat extensible, then there probably is no benefit to using a real-time operating system.
But if your while-loop can't fulfill the real-time requirements or is overly complex or over-extended (i.e., any change requires further tuning and tweaking to restore real-time performance), then you should consider another architecture.
An RTOS architecture is one popular next step beyond the super-loop. An RTOS is basically a tool for managing software complexity. It allows you to divide a complex set of software requirements into multiple threads of execution. When done properly, each individual thread has a relatively simple set of requirements and becomes easier to implement. And thread prioritization can make it easier to fulfill the real-time requirements of the application. These are basically the benefits of employing an RTOS.
An RTOS is not a panacea, however. An RTOS increases the overall system complexity and opens you up to new types of bugs (such as deadlocks). It requires knowledge and experience to design and implement an RTOS based program effectively. Consider alternatives such as Multi-Rate Main Loop Tasking or an event-based state machine architecture (such as QP). These alternatives might be simpler, easier to understand, or more compatible with your way of designing software.
There are a couple of huge advantages that an RTOS multitasker has:
Very good I/O performance: an RTOS can set a waiting thread ready as soon as an I/O action that it requested has completed, so the latency in handling reads/writes/whatever can be low. A looped, polled design cannot respond to I/O completions until it gets round to checking the I/O status (either directly or by polling some volatile flag set by an interrupt handler).
Independent functionality: the ease of implementing isolated subsystems for serial comms, actuators etc. may well be a win for you, as suggested in the other answers. It's very reassuring to know that any extra delay in, say, some serial exchange will not adversely affect timing elsewhere. You need to wait a second? No problem: sleep(1000) and you're done - no effect on the other subsystems:) It's the difference between 'no, I cannot add a network server, it would change all the timing and I would have to retest everything' and 'sure, there's plenty of CPU free, I already have the code from another job and I just need another thread to run it'.
There are other advantages that help offset the added annoyance of having to program a preemptive multitasker with its critical sections, semaphores and condvars.
Obviously, if your hardware has multiple cores, the RTOS helps - it is designed to share out available CPU execution cycles just like any other resource, and adding cores means more cycles.
In the end, though, it's the I/O performance and functional isolation that's the big win.
Some of the suggestions in other answers may help, either instead of, or together with, an RTOS. When controlling multiple pieces of I/O hardware, e.g. sensors and actuators, an event-driven state machine is a very good idea indeed. I often implement that by queueing all events into one thread (a producer-consumer queue) that has sole access to the state data and implements the state machine, thus serializing actions.
Whether the advantages are worth the candle is between you and your requirements:)
An RTOS is not a replacement for the while loop - it is the while loop plus tools which organize your tasks. How do they organize your tasks? They assign priorities to them, and decide how much time each one has for a job and/or at what time it should start/end. An RTOS also layers your software, i.e. hardware-related stuff, application tasks, etc. Aside from that, it gives you data structures, containers, and ready-to-use interfaces for common chores so you don't have to implement your own, e.g. it allocates some memory for you, locks access to some resources, and so on.

Micro scheduler for real-time kernel in embedded C applications?

I am working with time-critical applications where every microsecond counts. I am interested in a more convenient way to develop my applications using a non-bare-metal approach (some kind of framework or base foundation common to all my projects).
Operating systems considered real-time, such as RTX, Xenomai, Micrium or VxWorks, are not really real-time under my terms (or under the terms of electronics engineers). So I prefer to talk about soft-real-time and hard-real-time applications. A hard-real-time application has an acceptable jitter of less than 100 ns and a heartbeat of 100..500 microseconds (tick timer).
After lots of reading about operating systems I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed each tick. Therefore tasks usually take much more than one tick to complete, and this is the case with most available operating systems or microkernels.
For my applications a typical task has a duration of 10..100 microseconds, with a few exceptions that can last for more than one tick. So no available real-time operating system can fulfill my requirements. That is the reason why other engineers still do not consider operating systems, micro- or nano-kernels, because the way they work is too far from their needs. I still want to struggle a bit, and in my case I now realize I have to consider a new category of operating system that I have never heard about (and that may not exist yet). Let's call this category nano-kernel or sub-tick scheduler.
In such dreamed kernels I would find:
2 types of tasks:
Preemptive tasks (that run in their own memory space)
Non-preemptive tasks (that run in the kernel space and must complete in less than one tick)
Deterministic kernel scheduler (fixed duration after the ISR to reach the theoretical zero second jitter)
Ability to run multiple tasks per tick
For a better understanding of what I am looking for, I made the figure below, which represents the two types of kernels. The first representation is the traditional kernel: a task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt. Task 1 was summoned with a maximum execution time value, so it needs 2 ticks to complete. Task 2 is set with low priority, so it consumes the remaining time of each tick until completion. Task 3 is non-preemptive, so it operates in the kernel space, which saves some precious context-switch time.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard-real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds. By analyzing my needs I land on a different topology for my scheduler, where multiple tasks can be executed under the same tick interrupt. Obviously I am not the only one with this kind of need, and my problem might simply be a kind of X-Y problem; put differently, what I am asking for may not be what I actually need.
After this (pretty) long introduction I can formulate my question:
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
Sorry, but first of all, let me say that your entire post is completely wrong and shows a complete lack of understanding of how a preemptive RTOS works.
After lots of reading about operating systems I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed each tick.
This is completely wrong.
In reality, the tick frequency in an RTOS determines only two things:
resolution of timeouts, sleeps and so on,
context switch due to round-robin scheduling (where two or more threads with the same priority are "runnable" at the same time for a long period of time).
During a single tick - which typically lasts 1-10 ms, but you can usually configure that to be whatever you like - the scheduler can do hundreds or thousands of context switches. Or none. When an event arrives and wakes up a thread with sufficiently high priority, the context switch will happen immediately, not at the next tick. An event can originate from a thread (posting a semaphore, sending a message to another thread, ...), from an interrupt (posting a semaphore, sending a message to a queue, ...) or from the scheduler (an expired timeout or things like that).
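For illustration, in a preemptive RTOS such as FreeRTOS the "event wakes the thread now, not at the next tick" mechanism looks roughly like this (the ISR name and handler body are placeholders):

#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t rx_sem;      /* created with xSemaphoreCreateBinary() */

/* Hardware ISR: signal the waiting thread. If that thread has a higher
   priority than whatever was interrupted, the switch happens on ISR exit. */
void UART_IRQHandler(void)
{
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(rx_sem, &woken);
    portYIELD_FROM_ISR(woken);
}

void uart_thread(void *arg)
{
    (void)arg;
    for (;;) {
        /* Blocks without burning CPU until the ISR gives the semaphore. */
        if (xSemaphoreTake(rx_sem, portMAX_DELAY) == pdTRUE) {
            /* handle the received data here */
        }
    }
}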
There are also RTOSes with no system ticks - these are called "tickless". There you can have resolution of timeouts in the range of nanoseconds.
That is the reason why other engineers still do not consider operating systems, micro- or nano-kernels, because the way they work is too far from their needs.
Actually, this is a reason why these "engineers" should read something instead of pretending to know everything and seeking "innovative" solutions to non-existent problems. This is completely wrong.
The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
This is not a feature of an RTOS, but of the way you wrote your application - if a high-priority task is constantly doing something, then lower-priority tasks will NOT get any chance to run. But this is just because you assigned the wrong priorities.
Unless you use a cooperative RTOS, but if you have such high requirements, why would you do that?
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt.
This is exactly how EVERY preemptive RTOS works.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard-real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds.
Completely wrong. In every known RTOS it is not a problem to get the response time down to single microseconds (1-3 µs) with a chip that has a clock in the range of 100 MHz. So you actually can run "jobs" which are as short as 10 µs without too much overhead. You can even have "jobs" as short as 10 ns, but then the overhead will be pretty high...
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
This pattern is called a preemptive RTOS. Do note that threads in an RTOS are NOT executed in the "tick interrupt". They are executed in a standard "thread" context, and the tick interrupt is only used to switch the context from one thread to another.
What you described in your post is a "cooperative" RTOS, which does NOT preempt threads. You use that in systems with extremely limited resources and with low timing requirements. In every other case you use a preemptive RTOS, which is capable of handling events immediately.

Is Scala's actors similar to Go's coroutines?

If I wanted to port a Go library that uses Goroutines, would Scala be a good choice because its inbox/akka framework is similar in nature to coroutines?
Nope, they're not. Goroutines are based on the theory of Communicating Sequential Processes, as specified by Tony Hoare in 1978. The idea is that there can be two processes or threads that act independently of one another but share a "channel," which one process/thread puts data into and the other process/thread consumes. The most prominent implementations you'll find are Go's channels and Clojure's core.async, but at this time they are limited to the current runtime and cannot be distributed, even between two runtimes on the same physical box.
CSP evolved to include a static, formal process algebra for proving the existence of deadlocks in code. This is a really nice feature, but neither Goroutines nor core.async currently support it. If and when they do, it will be extremely nice to know before running your code whether or not a deadlock is possible. However, CSP does not support fault tolerance in a meaningful way, so you as the developer have to figure out how to handle failure that can occur on both sides of channels, and such logic ends up getting strewn about all over the application.
Actors, as specified by Carl Hewitt in 1973, involve entities that have their own mailbox. They are asynchronous by nature, and have location transparency that spans runtimes and machines - if you have a reference (Akka) or PID (Erlang) of an actor, you can message it. This is also where some people find fault in Actor-based implementations, in that you have to have a reference to the other actor in order to send it a message, thus coupling the sender and receiver directly. In the CSP model, the channel is shared, and can be shared by multiple producers and consumers. In my experience, this has not been much of an issue. I like the idea of proxy references that mean my code is not littered with implementation details of how to send the message - I just send one, and wherever the actor is located, it receives it. If that node goes down and the actor is reincarnated elsewhere, it's theoretically transparent to me.
Actors have another very nice feature - fault tolerance. By organizing actors into a supervision hierarchy per the OTP specification devised in Erlang, you can build a domain of failure into your application. Just like value classes/DTOs/whatever you want to call them, you can model failure, how it should be handled and at what level of the hierarchy. This is very powerful, as you have very little failure handling capabilities inside of CSP.
Actors are also a concurrency paradigm, where the actor can have mutable state inside of it and a guarantee of no multithreaded access to the state, unless the developer building an actor-based system accidentally introduces it, for example by registering the Actor as a listener for a callback, or going asynchronous inside the actor via Futures.
Shameless plug - I'm writing a new book with the head of the Akka team, Roland Kuhn, called Reactive Design Patterns where we discuss all of this and more. Green threads, CSP, event loops, Iteratees, Reactive Extensions, Actors, Futures/Promises, etc. Expect to see a MEAP on Manning by early next month.
There are two questions here:
Is Scala a good choice to port goroutines?
This is an easy question, since Scala is a general-purpose language which is no worse or better than many others you could choose to "port goroutines" to.
There are of course many opinions on why Scala is better or worse as a language (e.g. here is mine), but these are just opinions, and don't let them stop you.
Since Scala is general purpose, it "pretty much" comes down to: everything you can do in language X, you can do in Scala. If it sounds too broad.. how about continuations in Java :)
Are Scala actors similar to goroutines?
The only similarity (aside from the nitpicking) is that they both have to do with concurrency and message passing. But that is where the similarity ends.
Since Jamie's answer gave a good overview of Scala actors, I'll focus more on Goroutines/core.async, but with some actor model intro.
Actors help things to be "worry free distributed"
Where a "worry free" piece is usually associated with terms such as: fault tolerance, resiliency, availability, etc..
Without going into grave details how actors work, in two simple terms actors have to do with:
Locality: each actor has an address/reference that other actors can use to send messages to
Behavior: a function that gets applied/called when the message arrives to an actor
Think "talking processes" where each process has a reference and a function that gets called when a message arrives.
There is much more to it of course (e.g. check out Erlang OTP, or akka docs), but the above two is a good start.
Where it gets interesting with actors is.. implementation. Two big ones, at the moment, are Erlang OTP and Scala AKKA. While they both aim to solve the same thing, there are some differences. Let's look at a couple:
I intentionally do not use lingo such as "referential transparency", "idempotence", etc., since they do no good besides causing confusion, so let's just talk about immutability (a "can't change that" concept). Erlang as a language is opinionated and leans towards strong immutability, while in Scala it is too easy to make actors that change/mutate their state when a message is received. It is not recommended, but mutability in Scala is right there in front of you, and people do use it.
Another interesting point that Joe Armstrong talks about is the fact that Scala/AKKA is limited by the JVM, which just wasn't really designed with "being distributed" in mind, while the Erlang VM was. It has to do with many things such as: process isolation, per-process vs. whole-VM garbage collection, class loading, process scheduling and others.
The point of the above is not to say that one is better than the other, but it's to show that purity of the actor model as a concept depends on its implementation.
Now to goroutines..
Goroutines help to reason about concurrency sequentially
As other answers already mentioned, goroutines have their roots in Communicating Sequential Processes, which is a "formal language for describing patterns of interaction in concurrent systems", which by definition can mean pretty much anything :)
I am going to give examples based on core.async, since I know internals of it better than Goroutines. But core.async was built after the Goroutines/CSP model, so there should not be too many differences conceptually.
The main concurrency primitive in core.async/Goroutine is a channel. Think about a channel as a "queue on rocks". This channel is used to "pass" messages. Any process that would like to "participate in a game" creates or gets a reference to a channel and puts/takes (e.g. sends/receives) messages to/from it.
Free 24 hour Parking
Most of work that is done on channels usually happens inside a "Goroutine" or "go block", which "takes its body and examines it for any channel operations. It will turn the body into a state machine. Upon reaching any blocking operation, the state machine will be 'parked' and the actual thread of control will be released. This approach is similar to that used in C# async. When the blocking operation completes, the code will be resumed (on a thread-pool thread, or the sole thread in a JS VM)" (source).
It is a lot easier to convey with a visual. Here is what a blocking IO execution looks like:
You can see that threads mostly spend time waiting for work. Here is the same work but done via "Goroutine"/"go block" approach:
Here 2 threads did all the work, that 4 threads did in a blocking approach, while taking the same amount of time.
The kicker in the above description is: "threads are parked" when they have no work, which means their state gets "offloaded" to a state machine, and the actual live JVM thread is free to do other work (source for a great visual).
Note: in core.async, a channel can be used outside of "go block"s, in which case it will be backed by a JVM thread without parking ability: e.g. if it blocks, it blocks the real thread.
Power of a Go Channel
Another huge thing in "Goroutines"/"go blocks" is operations that can be performed on a channel. For example, a timeout channel can be created, which will close in X milliseconds. Or select/alt! function that, when used in conjunction with many channels, works like a "are you ready" polling mechanism across different channels. Think about it as a socket selector in non blocking IO. Here is an example of using timeout channel and alt! together:
(defn race [q]
  (searching [:.yahoo :.google :.bing])
  (let [t     (timeout timeout-ms)
        start (now)]
    (go
      (alt!
        (GET (str "/yahoo?q=" q))  ([v] (winner :.yahoo v (took start)))
        (GET (str "/bing?q=" q))   ([v] (winner :.bing v (took start)))
        (GET (str "/google?q=" q)) ([v] (winner :.google v (took start)))
        t ([v] (show-timeout timeout-ms))))))
This code snippet is taken from wracer, where it sends the same request to all three: Yahoo, Bing and Google, and returns a result from the fastest one, or times out (returns a timeout message) if none returned within a given time. Clojure may not be your first language, but you can't disagree on how sequential this implementation of concurrency looks and feels.
You can also merge/fan-in/fan-out data from/to many channels, map/reduce/filter/... channels data and more. Channels are also first class citizens: you can pass a channel to a channel..
Go UI Go!
Since core.async "go blocks" has this ability to "park" execution state, and have a very sequential "look and feel" when dealing with concurrency, how about JavaScript? There is no concurrency in JavaScript, since there is only one thread, right? And the way concurrency is mimicked is via 1024 callbacks.
But it does not have to be this way. The above example from wracer is in fact written in ClojureScript that compiles down to JavaScript. Yes, it will work on the server with many threads and/or in a browser: the code can stay the same.
Goroutines vs. core.async
Again, a couple of implementation differences [there are more] to underline the fact that theoretical concept is not exactly one to one in practice:
In Go, a channel is typed, in core.async it is not: e.g. in core.async you can put messages of any type on the same channel.
In Go, you can put mutable things on a channel. It is not recommended, but you can. In core.async, by Clojure design, all data structures are immutable, hence data inside channels feels a lot safer for its wellbeing.
So what's the verdict?
I hope the above shed some light on differences between the actor model and CSP.
Not to cause a flame war, but to give you yet another perspective of let's say Rich Hickey:
"I remain unenthusiastic about actors. They still couple the producer with the consumer. Yes, one can emulate or implement certain kinds of queues with actors (and, notably, people often do), but since any actor mechanism already incorporates a queue, it seems evident that queues are more primitive. It should be noted that Clojure's mechanisms for concurrent use of state remain viable, and channels are oriented towards the flow aspects of a system."(source)
However, in practice, Whatsapp is based on Erlang OTP, and it seemed to sell pretty well.
Another interesting quote is from Rob Pike:
"Buffered sends are not confirmed to the sender and can take arbitrarily long. Buffered channels and goroutines are very close to the actor model.
The real difference between the actor model and Go is that channels are first-class citizens. Also important: they are indirect, like file descriptors rather than file names, permitting styles of concurrency that are not as easily expressed in the actor model. There are also cases in which the reverse is true; I am not making a value judgement. In theory the models are equivalent."(source)
Moving some of my comments to an answer. It was getting too long :D (Not to take away from jamie and tolitius's posts; they're both very useful answers.)
It isn't quite true that you could do the exact same things that you do with goroutines in Akka. Go channels are often used as synchronization points. You cannot reproduce that directly in Akka. In Akka, post-sync processing has to be moved into a separate handler ("strewn" in jamie's words :D). I'd say the design patterns are different. You can kick off a goroutine with a chan, do some stuff, and then <- to wait for it to finish before moving on. Akka has a less-powerful form of this with ask, but ask isn't really the Akka way IMO.
Chans are also typed, while mailboxes are not. That's a big deal IMO, and it's pretty shocking for a Scala-based system. I understand that become is hard to implement with typed messages, but maybe that indicates that become isn't very Scala-like. I could say that about Akka generally. It often feels like its own thing that happens to run on Scala. Goroutines are a key reason Go exists.
Don't get me wrong; I like the actor model a lot, and I generally like Akka and find it pleasant to work in. I also generally like Go (I find Scala beautiful, while I find Go merely useful; but it is quite useful).
But fault tolerance is really the point of Akka IMO. You happen to get concurrency with that. Concurrency is the heart of goroutines. Fault-tolerance is a separate thing in Go, delegated to defer and recover, which can be used to implement quite a bit of fault tolerance. Akka's fault tolerance is more formal and feature-rich, but it can also be a bit more complicated.
All said, despite having some passing similarities, Akka is not a superset of Go, and they have significant divergence in features. Akka and Go are quite different in how they encourage you to approach problems, and things that are easy in one, are awkward, impractical, or at least non-idiomatic in the other. And that's the key differentiators in any system.
So bringing it back to your actual question: I would strongly recommend rethinking the Go interface before bringing it to Scala or Akka (which are also quite different things IMO). Make sure you're doing it the way your target environment means to do things. A straight port of a complicated Go library is likely to not fit in well with either environment.
These are all great and thorough answers. But for a simple way to look at it, here is my view. Goroutines are a simple abstraction of Actors. Actors are just a more specific use-case of Goroutines.
You could implement Actors using Goroutines by creating the Goroutine aside a Channel. By deciding that the channel is 'owned' by that Goroutine you're saying that only that Goroutine will consume from it. Your Goroutine simply runs an inbox-message-matching loop on that Channel. You can then simply pass the Channel around as the 'address' of your "Actor" (Goroutine).
But as Goroutines are an abstraction, a more general design than actors, Goroutines can be used for far more tasks and designs than Actors.
A trade-off though, is that since Actors are a more specific case, implementations of actors like Erlang can optimize them better (tail recursion on the inbox loop) and can provide other built-in features more easily (multi-process and multi-machine actors).
Can we say that in the Actor Model, the addressable entity is the Actor, the recipient of the message, whereas with Go channels, the addressable entity is the channel, the pipe in which the message flows?
With a Go channel, you send the message to the channel; any number of recipients can be listening, and one of them will receive the message.
With Actors, only the one actor to whose actor-ref you send the message will receive it.

What is Event Driven Concurrency?

I am starting to learn Scala and functional programming. I was reading the book "Programming Scala: Tackle Multi-Core Complexity on the Java Virtual Machine". In the first chapter I came across the terms event-driven concurrency and actor model. Before I continue reading this book I want to have an idea about event-driven concurrency and the actor model.
What is Event-Driven concurrency, and how is it related to Actor Model?
An Event Driven programming model involves registering code to be run when a given event fires. An example is, instead of calling a method that returns some data from a database:
val user = db.getUser(1)
println(user.name)
You could instead register a callback to be run when the data is ready:
db.getUser(1, u => println(u.name))
In the first example, no concurrency was happening; the current thread would block until db.getUser(1) returned data from the database. In the second example db.getUser would return immediately and carry on executing the next code in the program. In parallel to this, the callback u => println(u.name) will be executed at some point in the future.
Some people prefer the second approach as it doesn't mean memory-hungry threads are needlessly sat around waiting for slow I/O to return.
The Actor Model is an example of how Event-Driven concepts can be used to help the programmer easily write concurrent programs.
From a super high level, Actors are objects that define a series of event-driven message handlers that get fired when the Actor receives messages. In Akka, each instance of an Actor is single-threaded; however, when many of these Actors are put together they create a system with concurrency.
For example, Actor A could send messages to Actor B and C in parallel. Actor B and C could fire messages back to Actor A. Actor A would have message handlers to receive these messages and behave as desired.
To learn more about the Actor model I would recommend reading the Akka documentation. It is really well written: http://doc.akka.io/docs/akka/2.1.4/
There is also lots of good documentation around the web about event-driven concurrency that is much more detailed than what I've written here. http://berb.github.io/diploma-thesis/original/055_events.html
Theon's answer provides a good modern overview. I'd like to add some historical perspective.
Tony Hoare and Robin Milner both developed mathematical algebras for analysing concurrent systems (Communicating Sequential Processes, CSP, and the Calculus of Communicating Systems, CCS). Both of these look like heavy mathematics to most of us, but the practical application is relatively straightforward. CSP led directly to the Occam programming language amongst others, with Go being the newest example. CCS led to the pi-calculus and the mobility of communicating channel ends, a feature that is part of Go and was added to Occam in the last decade or so.
CSP models concurrency purely by considering autonomous entities ('processes', very lightweight things like green threads) interacting simply by event exchange. The medium for passing events is the channel. Processes may have to deal with several inputs or outputs, and they do this by selecting whichever event is ready first. The events usually carry data from the sender to the receiver.
A principal feature of the CSP model is that a pair of processes engage in communication only when both are ready - in practical terms this leads to what is usually called 'synchronous' communication. However, the actual implementations (Go, Occam, Akka) allow channels to be buffered (the normal state in Akka) so that the lock-step exchange of events is often actually decoupled instead.
So in summary, an event-driven CSP-based system is really a data-flow network of processes connected by channels.
Besides the CSP interpretation of event-driven, there have been others. An important example is the 'event-wheel' approach, once popular for modelling concurrent systems whilst actually having a single processing thread. Such systems handle events by putting them into a processing queue and dealing with them in due course, usually via a callback. Java Swing's event processing engine is a good example. There were others, e.g. for time-based simulation engines. One might think of the JavaScript/NodeJS model as fitting into this category as well.
So in summary, an event-wheel was a way to express concurrency but without parallelism.
The irony of this is that the two approaches I've described above are both described as event driven but what they mean by event driven is different in each case. In one case, hardware-like entities are wired together; in the other, almost all actions are executed by callbacks. The CSP approach claims to be scalable because it's fully composable; it's naturally adept at parallel execution also. If there are any reasons to favour one over the other, these are probably it.
To understand the answer to this you have to look at event concurrency from the OS layer up. First you start with threads which are the smallest section of code that can be run by the OS and eventually deal with I/O, timing and other kinds of events.
The OS groups threads into a process in which they share the same memory, protection and security permissions. Above that layer you have user programs which typically make I/O requests that are handled by user libraries.
The I/O libraries handle these requests in one of two ways. Unix-like systems use a "reactor" model in which the library registers I/O handlers for all the different types of I/O and events in the system. These handlers are activated when I/O is ready on a specific device. Windows-like systems use an I/O completion model in which I/O requests are made and a callback is triggered when the request is complete.
Both of these models require a significant amount of overhead to manage overall program state if you were to use them directly. However some programming tasks (web apps / services) lend themselves to a seemingly more direct implementation if you use an event model directly, but you still need to manage all of that program state. In order to track program logic across dispatches of several related events you have to manually track state and pass it around to the callbacks. This tracking structure is usually called a state context or baton. As you might imagine passing batons around all over the place to numerous seemingly unrelated handlers makes for some extremely hard to read and spaghetti-like code. It's also a pain to write and debug -- especially when you're trying to handle the synchronization of various concurrent paths of execution. You start getting into Futures and then the code becomes really difficult to read.
One well-known event processing library is called libuv. It is a portable event loop that integrates Unix's reactor model with Windows' completion model into a single model usually called a "proactor". It is the event loop that drives NodeJS.
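To give a feel for the style, here is about the smallest useful libuv loop: one repeating timer whose state travels in the handle's data pointer, i.e. the "baton" described above (libuv 1.x API assumed; error handling omitted).

#include <stdio.h>
#include <uv.h>

/* Called by the loop each time the timer fires; any state needed across
   callbacks has to travel in handle->data. */
static void on_tick(uv_timer_t *handle)
{
    int *count = handle->data;
    printf("tick %d\n", (*count)++);
    if (*count >= 5)
        uv_timer_stop(handle);
}

int main(void)
{
    uv_loop_t *loop = uv_default_loop();
    uv_timer_t timer;
    int count = 0;

    uv_timer_init(loop, &timer);
    timer.data = &count;                      /* manual state ("baton") passing */
    uv_timer_start(&timer, on_tick, 0, 100);  /* fire now, then every 100 ms    */

    return uv_run(loop, UV_RUN_DEFAULT);      /* runs until no active handles   */
}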
Which brings us to communicating sequential processes.
https://en.wikipedia.org/wiki/Communicating_sequential_processes
Rather than writing asynchronous I/O dispatch and synchronization code using one or more concurrency models (and their often competing conventions), we flip the problem on its head. We use a "coroutine" which looks like normal sequential code.
A simple example is a coroutine that receives a single byte over an event channel from another coroutine that sends a single byte. This effectively synchronizes I/O producer and consumer because the writer/sender has to wait for a reader/receiver and vice-versa. While either process is waiting they explicitly yield execution to other processes. When a coroutine yields, its scoped program state is saved on a stack frame thus saving you from the confusion of managing multi-layered baton state in an event loop.
Using applications built on these event channels we can construct arbitrary, reusable, concurrent logic and the algorithms no longer look like spaghetti code. In pure CSP systems if you write to a channel and there is no reader, you will be blocked. The channel endpoints are known via handles internally to the program.
Actor systems are different in a couple of ways. First, the endpoints are the actor threads, and they are named and known external to the mainline program. The second difference is that sends and receives on these channels are buffered. In other words, if you send a message to an actor and it isn't listening or it's busy, you aren't blocked until it reads from its input channel. Other differences exist, like the fact that one actor can publish to two different actors concurrently.
As you might guess Actor systems can easily be built from CSP systems. There are other details like waiting for specific event patterns and selecting from them, but that's the basics.
I hope that clarifies things a bit.
Other constructs can be built from these ideas. Various programming systems (Go, Erlang, etc) include CSP implementations within them. Operating systems like Inferno and Node9 use CSPs and Channels as the basis of their distributed computing model.
Go: https://en.wikipedia.org/wiki/Go_(programming_language)
Erlang: https://en.wikipedia.org/wiki/Erlang_(programming_language)
Inferno: https://en.wikipedia.org/wiki/Inferno_(operating_system)
Node9: https://github.com/jvburnes/node9

How do Real Time Operating Systems work?

I mean how and why are realtime OSes able to meet deadlines without ever missing them? Or is this just a myth (that they do not miss deadlines)? How are they different from any regular OS and what prevents a regular OS from being an RTOS?
Meeting deadlines is a function of the application you write. The RTOS simply provides facilities that help you with meeting deadlines. You could also program on "bare metal" (without an RTOS) in a big main loop and meet your deadlines.
Also keep in mind that unlike a more general purpose OS, an RTOS has a very limited set of tasks and processes running.
Some of the facilities an RTOS provide:
Priority-based Scheduler
System Clock interrupt routine
Deterministic behavior
Priority-based Scheduler
Most RTOSes have between 32 and 256 possible priorities for individual tasks/processes. The scheduler will run the task with the highest priority. When a running task gives up the CPU, the next highest priority task runs, and so on...
The highest priority task in the system will have the CPU until:
it runs to completion (i.e. it voluntarily gives up the CPU)
a higher priority task is made ready, in which case the original task is pre-empted by the new (higher priority) task.
As a developer, it is your job to assign the task priorities such that your deadlines will be met.
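As a toy illustration of that rule (not how a real kernel stores its ready list), the scheduler's core decision is just "the highest-priority ready task wins"; real RTOSes do this in O(1) with per-priority queues and a bitmap:

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned priority;     /* lower number = more urgent, in this sketch */
    bool     ready;
    void   (*run)(void);
} task_t;

task_t *pick_next(task_t tasks[], size_t n)
{
    task_t *best = NULL;
    for (size_t i = 0; i < n; ++i) {
        if (tasks[i].ready &&
            (best == NULL || tasks[i].priority < best->priority))
            best = &tasks[i];
    }
    return best;           /* NULL: nothing ready, run the idle task */
}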
System Clock Interrupt routines
The RTOS will typically provide some sort of system clock (anywhere from 500 µs to 100 ms) that allows you to perform time-sensitive operations.
If you have a 1ms system clock, and you need to do a task every 50ms, there is usually an API that allows you to say "In 50ms, wake me up". At that point, the task would be sleeping until the RTOS wakes it up.
Note that just being woken up does not ensure you will run exactly at that time. It depends on the priority. If a task with a higher priority is currently running, you could be delayed.
Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100 tasks, it does not take any longer to switch context, determine what the next highest priority task is, etc...
In general, the RTOS operation tries to be O(1).
One of the prime areas for deterministic behavior in an RTOS is the interrupt handling. When an interrupt line is signaled, the RTOS immediately switches to the correct Interrupt Service Routine and handles the interrupt without delay (regardless of the priority of any task currently running).
Note that most hardware-specific ISRs would be written by the developers on the project. The RTOS might already provide ISRs for serial ports, the system clock, maybe networking hardware, but anything specialized (pacemaker signals, actuators, etc...) would not be part of the RTOS.
This is a gross generalization and as with everything else, there is a large variety of RTOS implementations. Some RTOS do things differently, but the description above should be applicable to a large portion of existing RTOSes.
In an RTOS the most critical parameters are low latency and time determinism, which it achieves by following certain policies and tricks.
In a GPOS, on the other hand, the critical parameter is high throughput along with acceptable latencies; you cannot count on a GPOS for time determinism.
An RTOS has tasks which are much lighter than processes/threads in a GPOS.
It is not that they are able to meet deadlines; it is rather that they have fixed deadlines, whereas in a regular OS there is no such deadline.
In a regular OS the task scheduler is not really strict: the processor will execute so many instructions per second, but it may occasionally not do so. For example, a task might be pre-empted to allow a higher-priority one to execute (and possibly for a longer time). In an RTOS the processor will always execute the same number of tasks.
Additionally, there is usually a time limit for tasks to complete, after which a failure is reported. This does not happen in a regular OS.
Obviously there is lot more detail to explain, but the above are two of the important design aspects that are used in RTOS.
Your RTOS is designed in such a way that it can guarantee timings for important events, like hardware interrupt handling and waking up sleeping processes exactly when they need to be.
This exact timing allows the programmer to be sure that his (say) pacemaker is going to output a pulse exactly when it needs to, not a few tens of milliseconds later because the OS was busy with another inefficient task.
It's usually a much simpler OS than a fully-fledged Linux or Windows, simply because it's easier to analyse and predict the behaviour of simple code. There is nothing stopping a fully-fledged OS like Linux from being used in an RTOS environment, and it has RTOS extensions. Because of the complexity of the code base, it will not be able to guarantee its timings down to as small a scale as a smaller OS can.
The RTOS scheduler is also stricter than a general-purpose scheduler. It's important to know the scheduler isn't going to change your task's priority because you've been running a long time and don't have any interactive users. Most OSes would internally reduce the priority of this type of process to favour short-term interactive programs whose interface should not be seen to lag.
You might find it helpful to read the source of a typical RTOS. There are several open-source examples out there, and the following yielded links in a little bit of quick searching:
FreeRTOS
eCos
A commercial RTOS that is well documented, available in source code form, and easy to work with is µC/OS-II. It has a very permissive license for educational use, and (a mildly out of date version of) its source can be had bound into a book describing its theory of operation using the actual implementation as example code. The book is MicroC OS II: The Real Time Kernel by Jean Labrosse.
I have used µC/OS-II in several projects over the years, and can recommend it.
"Basically, you have to code each "task" in the RTOS such that they will terminate in a finite time."
This is actually correct. The RTOS will have a system tick defined by the architecture, say 10 millisec., with all tasks (threads) both designed and measured to complete within specific times. For example in processing real time audio data, where the audio sample rate is 48kHz, there is a known amount of time (in milliseconds) at which the prebuffer will become empty for any downstream task which is processing the data. Therefore using the RTOS requires correct sizing of the buffers, estimating and measuring how long this takes, and measuring the latencies between all software layers in the system. Then the deadlines can be met. Otherwise the applications will miss the deadlines. This requires analysis of the worst-case data processing throughout the entire stack, and once the worst-case is known, the system can be designed for, say, 95% processing time with 5% idle time (this processing may not ever occur in any real usage, because worst-case data processing may not be an allowed state within all layers at any single moment in time).
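As a back-of-the-envelope version of that sizing exercise (the buffer size here is invented; the article does not give one):

#define SAMPLE_RATE_HZ     48000u
#define PREBUFFER_SAMPLES    480u      /* 10 ms of audio at 48 kHz */
#define DEADLINE_US (1000000u * PREBUFFER_SAMPLES / SAMPLE_RATE_HZ)   /* 10000 us */
/* Every stage between capture and playback must finish its worst-case work
   comfortably inside DEADLINE_US, or the downstream buffer runs dry. */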
Example timing diagrams for the design of a real time operating system network app are in this article at EE Times,
PRODUCT HOW-TO: Improving real-time voice quality in a VoIP-based telephony design
http://www.eetimes.com/design/embedded/4007619/PRODUCT-HOW-TO-Improving-real-time-voice-quality-in-a-VoIP-based-telephony-design
I haven't used an RTOS, but I think this is how they work.
There's a difference between "hard real time" and "soft real time". You can write real-time applications on a non-RTOS like Windows, but they're 'soft' real-time:
As an application, I might have a thread or timer which I ask the O/S to run 10 times per second ... and maybe the O/S will do that, most of the time, but there's no guarantee that it will always be able to ... this lack of guarantee is why it's called 'soft'. The reason why the O/S might not be able to is that a different thread might be keeping the system busy doing something else. As an application, I can boost my thread priority to, for example, HIGH_PRIORITY_CLASS, but even if I do this the O/S still has no API which I can use to request a guarantee that I'll be run at certain times.
A 'hard' real-time O/S does (I imagine) have APIs which let me request guaranteed execution slices. The reason why the RTOS can make such guarantees is that it's willing to abend threads which take more time than expected / than they're allowed.
What is important is real-time applications, not a real-time OS. Usually real-time applications are predictable: many tests, inspections, WCET analyses, proofs, ... have been performed which show that deadlines are met in all specified situations.
It happens that RTOSes help doing this work (building the application and verifying its RT constraints). But I've seen realtime applications running on standard Linux, relying more on hardware horsepower than on OS design.
... well ...
A real-time operating system tries to be deterministic and meet deadlines, but it all depends on the way you write your application. You can make an RTOS very non-real-time if you don't know how to write "proper" code.
Even if you know how to write proper code:
It's more about trying to be deterministic than being fast.
When we talk about determinism it's
1) event determinism
For each set of inputs the next states and outputs of a system are known
2) temporal determinism
… also the response time for each set of outputs is known
This means that if you have asynchronous events like interrupts, your system is, strictly speaking, no longer temporally deterministic. (And most systems use interrupts.)
If you really want to be deterministic poll everything.
... but maybe it's not necessary to be 100% deterministic
The textbook/interview answer is "deterministic pre-emption". The system is guaranteed to transfer control within a bounded period of time if a higher priority process is ready to run (in the ready queue) or an interrupt is asserted (typically input external to the CPU/MCU).
They actually don't guarantee meeting deadlines; what makes them truly an RTOS is that they provide the means to recognize and deal with deadline overruns. 'Hard' RT systems generally are those where missing a deadline is disastrous and some kind of shutdown is required, whereas a 'soft' RT system is one where continuing with degraded functionality makes sense. Either way an RTOS permits you to define responses to such overruns. Non-RT OSes don't even detect overruns.
Basically, you have to code each "task" in the RTOS such that they will terminate in a finite time.
Additionally, your kernel would allocate specific amounts of time to each task, in an attempt to guarantee that certain things happen at certain times.
Note that this is not an easy task to do however. Imagine things like virtual function calls, in OO it's very difficult to determine these things. Also an RTOS must be carefully coded with regard to priority, it may require that a high priority task is given the CPU within x milliseconds, which may be difficult to do depending on how your scheduler works.