I am trying to implement functionality for RSU message forwarding to neighboring RSUs upon receiving a message from a car.
When looking at the RSU node, there is only a veinsradioIn gate, which forwards into the NIC radioIn gate. On the RSU application side (DemoBaseApplLayer), I see that the handleLowerMsg function casts the received message to one of the message types (bsm, wsa, wsm). I do not see anything for sending a message, and I also do not see a veinsradioOut gate in the RSU module.
How can I send messages from the RSU? And to take it a step further, how can I distinguish a message that is sent from an RSU compared to a node (car)?
DemoBaseApplLayer extends another class; in that base class (and its parents) you will find the functions to send a message down and to send it down with a delay (sendDown() and sendDelayedDown()).
If I want to send a message and handle the response somewhere in the code, what is the API? What components do I need? How do I construct them or get handles to them? What methods do I call? How do I add new message types?
Here is a sequence diagram I made for the messages exchanged as part of downloading candidate transaction sets referenced by proposals:
To send a message, a component needs a PeerImp object (generally held via shared_ptr<Peer>) on which it calls the send(shared_ptr<Message>) method. There is only one generic implementation of send, and it handles every protocol buffer message type. This call returns void (i.e. no request object).
When a message is received from a peer, the onMessage(MessageType) method for that message type is called. There is a different overload of onMessage for each message type.
Consider when you write code for HTTP. A popular idiom in JavaScript looks like this:
const response = await http.get(url, params)
There are some important differences between this pattern and the one for RTXP (the official name of our message protocol):
HTTP has an association between request and response. RTXP generally does not have this association, but in one notable example it does. TMGetLedger is a request message type, and TMLedgerData is its response message type. They both have a requestCookie field to associate a response with its request. The request generates a (random?) identifier for its “request cookie”, and the response copies that cookie.
With HTTP, the code that sends the request passes a handler expecting the response. Generally, the response handler is different for each place in the code that sends a request. Not so with RTXP. Instead, each request message type typically corresponds to exactly one response message type, and each message of a given type has the same exact handler. That means each place in the code that sends a request of the same message type uses the same response handler. I suspect that:
most message types are sent from exactly one place in the code
when one message type is sent from multiple places, then that message has an enumeration field to distinguish them (with “type” in its name)
each message type was designed for exactly the place in the code that needed to send it, which makes them hard to reuse
Most request message types are different from their response message types. The one notable exception is TMGetObjectByHash which has a Boolean query field that distinguishes a request (true) from a response (false).
There is some room for uniform handling of each message type:
A message is generally expected to be independently verifiable. If a response says it has the header for ledger ABCD, then the handler expects it can hash the header to get the digest ABCD.
A response is generally expected to correspond to a request.
If these expectations are violated, the peer that sent the message is penalized. Our server tracks a “fee” for each peer that measures its reliability. Bad messages are “charged” various fees based on the kind of violation. These fees only exist on the server. They have no bearing on the ledger.
PeerImp objects are obtained from the Overlay object. There is exactly one Overlay per Application, obtained by calling its overlay() method.
This is a general question about developing with the Akka actor system.
I know it sacrifices static type checking for greater flexibility; that is not the problem. Java does much the same thing anyway.
But I'd like at least to be able to check the compatibility of ActorRefs dynamically. I searched for some method like actorRef.asInstanceOf[ActorType]. Such a method would provide validation for an actorRef passed through messages, and it would allow safer application development. But I've found no method to do any kind of type check. It's even impossible to check whether an actorRef corresponds to a given Props.
How is this task typically solved in Akka applications? Are there any third-party tools for dynamic checks?
The purpose of ActorRef is to completely abstract the recipient. Sending a message to it provides absolutely no guarantees about a response or even suitability of the message being sent. The recipient could drop the message, route it, stash it or handle it. Any contract about handling and causing possible response messages to be emitted is entirely an informal agreement.
Now, that sounds like a lot to give up in a statically typed environment, but it provides a programming model with its own slew of advantages, which by design require that you send and receive messages on the assumption that they will be handled, but without any knowledge of where or when that will happen.
As for how this task is typically solved in Akka applications: by configuration and/or discovery. The contract of acceptable Messages is usually placed into a Protocol object, while the valid recipient for those Messages is either injected into the calling Actor at creation or is queryable via some DiscoveryProtocol (itself hidden behind an ActorRef).
Let's say you have a UserRepository you want to query; you would create a protocol like this:
case class User(id: Int, ... )

object UserRepositoryProtocol {
  case class GetUser(userId: Int)
}
Further, let's assume that the ActorRef of UserRepository was not injected but, because it is just one of many services your actor might use, has to be discovered via a general Discovery service:

object DiscoveryProtocol {
  // note: "type" is a reserved word in Scala, so the field is named serviceType here
  case class Discover(serviceType: String)
  case class Discovered(serviceType: String, ref: ActorRef)
}
Now you can fetch a user like this:
import akka.pattern.ask
// assumes an implicit akka.util.Timeout and an ExecutionContext are in scope
(discoveryRef ? Discover("UserRepository")).flatMap {
  case Discovered("UserRepository", repository) =>
    (repository ? GetUser(id)).map {
      case user: User => // do something with the user
    }
}
The above condenses discovery and the call into a chain of ask operations. Chances are you would want to cache the discovered ref and/or hand off the retrieved user to some other Actor that does the work, breaking each '?' into a ! and a matching receive in the same or a different actor (a rough sketch of this follows below).
This last point illustrates the power of the actor model. In a traditional request => response model, the requestor and recipient of the response would have to be the same just by virtue of function signatures, but in the actor model, you can send from one actor, spawn a worker that will handle the response, etc.
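For illustration, here is one possible shape of that, as an untested sketch: it assumes the two protocol objects above and classic (untyped) actors, and the names UserFetcher and FetchUser are invented for this example.

import akka.actor.{Actor, ActorRef}

case class FetchUser(id: Int)   // hypothetical trigger message for this sketch

class UserFetcher(discoveryRef: ActorRef) extends Actor {
  import DiscoveryProtocol._
  import UserRepositoryProtocol._

  // discover the repository once, with a plain tell
  override def preStart(): Unit = discoveryRef ! Discover("UserRepository")

  def receive: Receive = waitingForRepository

  def waitingForRepository: Receive = {
    case Discovered("UserRepository", repository) =>
      context.become(ready(repository))            // cache the discovered ref
  }

  def ready(repository: ActorRef): Receive = {
    case FetchUser(id) => repository ! GetUser(id)  // fire-and-forget request
    case user: User    => // handle the reply here, or hand it to a worker actor
  }
}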
Assume that an actor is not only encapsulated behind an actor ref, but that the physical location of an actor is also unknown. An actor can be running on another physical server or VM. You can't call instanceOf on an object in another VM - how can you expect to get the class of an actor then?
Now, when building, I would recommend you consider that all actors are remote via Akka's location transparency. (http://doc.akka.io/docs/akka/snapshot/general/remoting.html) If you assume all actors are remote, suddenly you'll think about your designs a little differently. Think of Akka as a Distribution Toolkit - that is its primary benefit!
If you're trying to reflect on actors during runtime, then there is probably something wrong with your design. Instead, you need to know what messages actors can accept and respond to.
If you really want to know what an actor can and can't do, you could think of modelling some sort of "Accepts" message, where an actor would reply with the current version of the API it implements, for example. In this way your client and server can talk back and forth about what capabilities are supported, dynamically at runtime.
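A minimal sketch of such a capability handshake might look like the following; everything here (CapabilityProtocol, DescribeApi, GreetingService) is invented for the example, and nothing like it ships with Akka.

import akka.actor.Actor

object CapabilityProtocol {
  case object DescribeApi                         // "what do you accept?"
  case class ApiDescription(version: Int, accepts: Set[String])
}

class GreetingService extends Actor {
  import CapabilityProtocol._

  def receive: Receive = {
    case DescribeApi =>
      // reply with the API version and the message types this actor handles
      sender() ! ApiDescription(version = 1, accepts = Set("Greet"))
    case ("Greet", name: String) =>
      sender() ! s"Hello, $name"
  }
}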
I hope that contributes something to the discussion - just remember to always think of an actor as something that's running somewhere else and you'll design appropriately. The benefit of doing so is that you'll be able to scale out your applications with very minimal effort if your user base unexpectedly explodes!
I'm using the Play Framework with scala. I'm new to scala, akka, and play.
This is my actor system. I'm not sure I'm doing this right, but I have two routers: one for Actor A and one for Actor B:
val system = ActorSystem("ActionSystem")
val actorARouter = system.actorOf(Props[ActionParser].withRouter(
  SmallestMailboxRouter(Runtime.getRuntime.availableProcessors())), name = "actorARouter")
val actorBRouter = system.actorOf(Props[ActionDispatcher].withRouter(
  SmallestMailboxRouter(Runtime.getRuntime.availableProcessors())), name = "actorBRouter")
This is the current setup that I have:
The Play framework provides a Controller for me that receives an HTTP REST call with some JSON. Whenever the Controller receives a REST call, I do an ask that sends the JSON to the router for Actor A. Here is what that looks like:
(actorARouter ? request.body.asJson.get).map {
  case m: controllers.HttpMessages.OK => Ok(m.body)
  case m: controllers.HttpMessages.HttpResponse => Status(m.status)(m.body)
}
Actor A then parses the json into a Seq of objects and then sends them via an ask to Actor B. Actor B is supposed to eventually process those by sending them to other actors, but for now is just returning generic responses.
The generic responses are being received back by ActorA via the future, then being parsed to JSON and then returned to the Controller via an OK response... or at least that's what is supposed to happen.
What's happening:
So what's happening is: the Controller sends to ActorA, ActorA sends to ActorB, and ActorB sends generic responses back to ActorA. ActorA parses the generic responses into JSON and tries to do sender ! OK(json), but I get a message in the console saying it wasn't delivered because it's a "dead letter". When I debug into it and look at sender, it is a reference to the actor akka://ActionSystem/deadLetters.
My questions:
Obviously I'm doing something wrong. Maybe I shouldn't be chaining these actors' responses together like this. Again, as I mentioned, I only had plans to take this further by having ActorB send out requests to other actors.
When I do an ask in an actor, that doesn't hog the thread and stop it from processing other messages while it's waiting for a response, does it?
EDIT:
I found out I can save a reference to the sender for later use and then send to that, which seems to fix the dead-letter problem. But I'm still very uncertain whether this is the right way to do things. It feels like every layer of actors I add puts tens of milliseconds onto my response time. Perhaps that's due to other factors, though.
Without looking at your code I cannot really comment on what caused the dead letter; from your edit I guess you closed over sender() instead of assigning it to a variable and closing over that.
To respond to your questions:
It is much easier to construct message flows with actors if you only use fire-and-forget messages. The ask pattern is useful in some cases, but most of the time you should try to avoid it. What you can do instead is pass the original sender along through your actors by using forward instead of tell. This way a response can be generated by the last actor in your message flow. The first actor only needs the code to handle the response, and does not need to care about generating the response. Nice separation of concerns right there. If you need to aggregate several responses before sending out a single response, you can also use a temporary actor that all other actors send their responses to, and that knows the original sender. Temporary actors need to be stopped after doing their work.
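As an untested sketch of both ideas (forwarding to preserve the original sender, and a temporary aggregator actor), with all names, Request, PartialReply, FinalReply, FrontActor, Aggregator, invented for the example:

import akka.actor.{Actor, ActorRef}

case class Request(payload: String)                 // hypothetical messages
case class PartialReply(payload: String)
case class FinalReply(parts: Vector[PartialReply])

class FrontActor(worker: ActorRef) extends Actor {
  def receive: Receive = {
    case req: Request =>
      // forward keeps the original sender(), so whoever ends the flow
      // can reply directly to the actor that contacted FrontActor
      worker.forward(req)
  }
}

// Temporary actor that aggregates a known number of partial replies,
// answers the original sender, and then stops itself.
class Aggregator(originalSender: ActorRef, expected: Int) extends Actor {
  private var received = Vector.empty[PartialReply]

  def receive: Receive = {
    case p: PartialReply =>
      received :+= p
      if (received.size == expected) {
        originalSender ! FinalReply(received)
        context.stop(self)
      }
  }
}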
As far as I know, the ask pattern is asynchronous and uses temporary actors internally. However, if you wait for the result of the future inside your actor, that will block the actor, and it will not be able to process further messages. A nice way to use the ask pattern is in combination with the pipeTo pattern, which you can use to send the result of an ask to an actor (usually self).
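For example, a minimal sketch of ask plus pipeTo, assuming classic actors; the Asker actor and its messages are invented for the illustration:

import akka.actor.{Actor, ActorRef, Status}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.duration._

class Asker(other: ActorRef) extends Actor {
  import context.dispatcher                  // ExecutionContext for the future
  implicit val timeout: Timeout = 5.seconds

  def receive: Receive = {
    case "start" =>
      // ask returns a Future; pipe its eventual result back to ourselves
      (other ? "give me data").pipeTo(self)
    case Status.Failure(cause) =>
      // the ask failed or timed out; handle the error as a normal message
    case result =>
      // the successful reply arrives here as an ordinary message
  }
}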
I have worked with network programming before. But this is my first foray into netlink sockets.
I have chosen to study the 'connector' type of netlink sockets. As with any kernel component, it has a user counterpart as well. The linux kernel has a sample program called ucon.c which can be used to build userspace programs based on the aforementioned connector netlink sockets.
So here I wish to pinpoint the parts of the program where I want to confirm my understanding, and the parts whose logic I do not follow. Enough talking. Here we go. Please correct me wherever I go astray.
As far as I have understood, netlink sockets are an IPC method used to connect processes on the same machine, and hence the process ID is used as an identifier. And since netlink messages can ideally be multicast, another identifier needed by the netlink socket is the message group. All components that are connected to the same message group are in fact related. So while in the case of IPv4 we use a sockaddr_in in place of the sockaddr, here we use a sockaddr_nl, which contains the above-mentioned identifiers.
Now, since we are not going to use the TCP/IP stack of the kernel, netlink packets can be considered raw (please correct me here if I am wrong). Hence the only encapsulation that a netlink packet goes through is the netlink message header, defined as nlmsghdr.
Now coming to our program ucon: main() first creates a NETLINK-family socket with the connector protocol. Then it fills up the aforementioned netlink socket address structure with the relevant information. In order to be a little experimental here, I have added an entry in the connector.h file. Now here comes my first question.
A connector message has a certain type defined in connector.h. Now, this connector message structure is something that is completely opaque to netlink, right? As in, as far as netlink is concerned, it is nothing but payload. Right?
Moving on, what exactly does the nl-group field mean within the netlink message header structure? The definition does not really contain an element of this name. So are we using overlay techniques to fill certain fields of the netlink message header? And if so, what exactly is the correspondence? I cannot seem to find it anywhere.
So after binding the socket address to the socket, it is sending 10,000 unique pieces of connector based data, which as far as netlink is concerned, is pure payload. But what is strange as far as these messages are concerned is, that all of them seem to have the same sequence number.
Moving on, we find ourselves in the netlink_send subroutine, which sends these packets via the socket we bound above. This subroutine uses a variety of netlink helper macros to manipulate the data to send. As we said above, the main() function sends 10,000 pieces of data, each of which is zero-length and requires no acknowledgement, since the ack field is 0 (please correct me if I am wrong here). So each 'packet' is nothing but a connector message header without anything in it. Right?
Now what is surprising is that the netlink_send function uses the same sequence number as main(), since it is a global variable. However, after the post-increment in main(), it is now '1'. So basically our netlink talk is starting with a sequence number of '1'. Is that fine?
Looking into some of the helper macros defined in linux/netlink.h, I will try to summarize my understanding of the ones that are directly or indirectly being used in this program.
#define NLMSG_LENGTH(len) ((len)+NLMSG_ALIGN(NLMSG_HDRLEN))
So this macro will first align the netlink message header length and then add the payload length to it. For our case the netlink payload is a connector header without any payload of its own.
In our case, this macro is used like so:
nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));
Here, what I do not understand is the actual payload of the netlink message. In the above case, it is the size of the connector message header (since the connector message itself contains no payload of its own) minus the pointer (which points to the first byte of the netlink message and thereby the netlink message header). And this pointer is (like any other pointer variable) equal to the machine word size, which in my case is 4 bytes. Why are we subtracting this from the connector message header?
After that, we send the message over this netlink socket just like any other IPv4 socket. I hope to hear from you folks out there with regard to the above-mentioned questions. Including a few sentences before the actual question you are answering would help, as my post is rather long. But I hope it will be useful to more people than just myself.
Regards.
In an effort to learn both scala and akka I'm writing a Battleship game. I've not started actually writing any code yet, I'm merely thinking about how things would work.
I have agents for ships and player fleets, and messages such as "shot fired", "hit", "miss", and "all ships killed". My first stumbling block is that when player 1 shoots, it creates a burst of events, and player 2 must wait until everything has settled before he can play his turn. How can I make sure of that? I thought maybe I'd always send a reply no matter what, and then check that the sender receives exactly as many answers as messages it sent. Maybe Battleship isn't the best candidate application for agents.
This also raises the question of distinguishing between receiving no answer because the message has not been processed yet, because the agent didn't reply anything, or because the agent died. But I'll save that one for later.
There are a number of things you might want to do here:
Have the first actor receive replies to each of its messages and then send a "your turn" message to the second actor
Send a message to the second actor indicating how many events in a given turn must be received
These are achievable as follows:
import akka.pattern._
// assumes an implicit akka.util.Timeout and an ExecutionContext are in scope
(d1 ? m1) zip (d2 ? m2) pipeTo that
In the above example, d1 and d2 are the destination actors and m1 and m2 are the messages to be sent. The replies from these actors are zipped together (into a Tuple2) and forwarded on to the second actor (that in the example).
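On the receiving side, that only needs to match on the zipped pair. A tiny, hypothetical sketch (the TurnGate name is invented for this example):

import akka.actor.Actor

class TurnGate extends Actor {
  def receive: Receive = {
    case (replyFromD1, replyFromD2) =>
      // both replies have arrived, so the first player's turn has settled
  }
}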
The second mechanism is a bit more involved; I've written similar things using SyncPoints. So, something like this:
import java.util.UUID
import akka.actor.ActorRef

case class SyncPoint(id: UUID, participants: ActorRef*)
object SyncPoint {
  def newFor(participants: ActorRef*) = SyncPoint(UUID.randomUUID(), participants: _*)
}
Then the creator of a message first sends out a SyncPoint to the ultimate observer:
val sync = SyncPoint.newFor(d1, d2)
that ! sync
Now the ultimate receiver knows it is expecting a message on this SyncPoint for each participant.
d1 ! SyncPart(m1, sync)
d2 ! SyncPart(m2, sync)
Where
case class SyncPart(msg: Any, sync: SyncPoint)
The actors d1 and d2 will forward on to that when they have processed their part in the message.
case class SyncPartial(sync: SyncPoint, participant: ActorRef)
In this way, that knows it is expecting messages from a number of participants and can then track when these participants have performed their processing.
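To make the last step concrete, here is a rough sketch of what that (the ultimate observer) might do with those messages; it assumes the SyncPoint and SyncPartial classes above and classic actors, and TurnObserver and TurnFinished are invented names for this example:

import java.util.UUID
import akka.actor.{Actor, ActorRef}

case class TurnFinished(id: UUID)            // invented "all settled" message

class TurnObserver(nextPlayer: ActorRef) extends Actor {
  // participants still expected to report, keyed by SyncPoint id
  private var pending = Map.empty[UUID, Set[ActorRef]]

  def receive: Receive = {
    case sync: SyncPoint =>
      pending += sync.id -> sync.participants.toSet

    case SyncPartial(sync, participant) =>
      val remaining = pending.getOrElse(sync.id, Set.empty[ActorRef]) - participant
      if (remaining.isEmpty) {
        pending -= sync.id
        nextPlayer ! TurnFinished(sync.id)   // everything from this turn has settled
      } else {
        pending += sync.id -> remaining
      }
  }
}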