When two clients are connected and communicating with each other, a client doesn't receive anything until it sends something itself. It looks like this:
A: Sends message to B "hi"
B: does another command
B: receives "hi"
Related
My client side cannot recv the two messages if the sender sends them too quickly.
sender.py
import pickle
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', int(port)))
sock.listen(1)
conn, addr = sock.accept()
#conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# sends message 1 and message 2
conn.send(pickle.dumps(message1))
#time.sleep(1)
conn.send(pickle.dumps(message2))
where message1 and message2 are both picklable objects.
client.py
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip, int(port)))
message1 = pickle.loads(sock.recv(1024))
print(message1)
message2 = pickle.loads(sock.recv(1024))
When I run this code as it is, I am able to print out message1, but I am unable to receive message2 from the sender. The socket blocks at message2.
Also, if I uncomment time.sleep(1) in my sender-side code, I am able to receive both messages just fine. I am not sure what the problem is. I tried to flush my TCP buffer every time by setting TCP_NODELAY, but that didn't work. What is actually happening, and how would I ensure that I receive the two messages?
Your code assumes that each send on the server side will match a recv on the client side. But TCP is a byte stream, not a message-based protocol. This means it is likely that your first recv already contains data from the second send, which may simply be discarded by pickle.loads as junk after the pickled data. The second recv will then only receive the remaining data (or just block, since all the data was already received), so pickle.loads will fail.
The common way to deal with this situation is to construct a message protocol on top of the TCP byte stream. This can, for example, be done by prefixing each message with a fixed-length size field (e.g. a 4-byte unsigned int written with struct.pack('!I', ...)) when sending; when reading, first read the fixed-length size value and then read the message of the given size.
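Here is a minimal sketch of such a framing layer for the code above; the helper names send_msg, recv_exactly and recv_msg are illustrative, not part of any library:

import pickle
import struct

def send_msg(conn, obj):
    # Serialize the object and prefix it with its length as a
    # fixed 4-byte big-endian unsigned int.
    payload = pickle.dumps(obj)
    conn.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exactly(conn, n):
    # recv() may return fewer bytes than requested, so loop until
    # exactly n bytes have arrived.
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(conn):
    # Read the 4-byte length header, then exactly that many payload bytes.
    (length,) = struct.unpack('!I', recv_exactly(conn, 4))
    return pickle.loads(recv_exactly(conn, length))

With this in place, the sender calls send_msg(conn, message1) and send_msg(conn, message2), and the client calls recv_msg(sock) twice; it no longer matters how TCP splits or coalesces the bytes on the wire.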
I have created a WebSocket that receives a single message, does some processing, and returns a response message to the client. I have created the WebSocket using the Play Framework. The code snippet is given below.
Code snippet:
def multi_request = WebSocket.tryAccept[String] { request =>
  val (out, channel) = Concurrent.broadcast[String]
  val in = Iteratee.foreach[String] { msg =>
    channel push ("message " + msg + " request Time: " + System.currentTimeMillis() / 1000)
    if (msg.equals("1")) {
      Thread.sleep(20000)
      println(msg)
    } else if (msg.equals("2")) {
      Thread.sleep(10000)
      println(msg)
    } else {
      println(msg)
    }
    channel push ("message " + msg + " response Time: " + System.currentTimeMillis() / 1000)
  }
  Future.successful(Right(in, out))
}
I have tested my WebSocket from http://www.websocket.org/echo.html.
I connected to my WebSocket and passed three messages sequentially: "1", "2" and "3". I got the response below while passing these messages.
SENT: 1
RESPONSE: message 1 request Time: 1457351625
SENT: 2
SENT: 3
RESPONSE: message 1 response Time: 1457351645
RESPONSE: message 2 request Time: 1457351646
RESPONSE: message 2 response Time: 1457351656
RESPONSE: message 3 request Time: 1457351656
RESPONSE: message 3 response Time: 1457351656
It seems that the WebSocket requests hit the server sequentially, not in parallel. The three messages are sent from the client immediately when I pass them, but they do not hit the server in parallel.
That is, the second request hits after the first response message, and the third message hits after the second response message.
Is this the default WebSocket behaviour?
Or do I need to implement multi-threading to handle this kind of request in the Scala Play Framework?
Or did I miss anything in the code to handle multiple requests from a single client?
I understand this is WebSocket behaviour. This SO question explains in detail how your WebSocket connection is uniquely identified by the (IP, port) pairs of both your client machine and the server, as well as by the protocol used.
So basically you can have only one "physical WebSocket connection" (using the same port) between your client and your server. Looking at the documentation for accept, I read:
If no pending connections are present on the queue, and the socket is not marked as nonblocking, accept() blocks the caller until a connection is present. If the socket is marked nonblocking and no pending connections are present on the queue, accept() fails with the error EAGAIN or EWOULDBLOCK.
I would love for someone more knowledgeable to confirm it, but I understand from this quote that since your potential connection is busy handling the first message, accept will tell your second request to "try later", hence the sequential effect.
If you really need parallel websockets for one client, I guess opening connections on different ports would do the trick.
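For what it's worth, two connections from the same client already differ in their client-side port, so you would not necessarily need a different server port for each. A hypothetical way to observe the behaviour from the outside is a small test client; this sketch uses Python's websockets package, and the URL is a placeholder for wherever the multi_request endpoint is mounted:

import asyncio
import websockets

async def on_own_connection(msg):
    # Each connect() opens its own TCP connection with its own
    # client-side port, so the server sees independent sessions.
    async with websockets.connect('ws://localhost:9000/multi_request') as ws:
        await ws.send(msg)
        print(await ws.recv())  # '... request Time ...'
        print(await ws.recv())  # '... response Time ...'

async def main():
    # '1' sleeps 20s and '2' sleeps 10s on the server side.
    await asyncio.gather(on_own_connection('1'), on_own_connection('2'))

asyncio.run(main())

If the handling really is per-connection, the replies for '2' should arrive while '1' is still sleeping; sending both messages over a single connection reproduces the strictly sequential transcript from the question.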
So I have a client-server based program, where the client sends a request to the server and the server does a computation and responds. This is done via ask.
Specifically, the client will receive a message from the client app and call ask:
val response = ask(actorRef, SessionMessage(token, message)).mapTo[ResponseMessage]
The server will receive it like so:
val response = sessionMessage.message match {
  case message: message1 =>
    ask(actorSet.actor1, message)
  case message: message2 =>
    ask(actorSet.actor2, message)
}
where actorSet is literally a set of the different actors.
I then collect the result and send it back to the sender:
val responseResult = response.mapTo[ResponseMessage]
responseResult pipeTo sender
The problem I'm running into is that for some of the requests the database query can take a while (5-10 minutes). When the query completes, the response goes to dead letters, I get a disassociation, and the server is unable to associate again, so everything keeps going to dead letters.
I thought that because it took so long, the sender (or specifically the sender reference) would time out, so I stored the sender reference in a val, and confirmed that by doing this the sender reference was not lost. However, as soon as the query finishes and I pipe the result to the correct sender, it disassociates. Even other queries that take a minute or so don't seem to suffer this problem; only ones that last for a few minutes disassociate, and then I need to restart the server or it will keep sending to dead letters.
Even if I use onComplete and send on success, or do an Await.result, the same issue occurs: as soon as it tries to send the message (after completion), the server disassociates and sends to dead letters.
I'm very much at a loss as to why this is happening.
The problem you have run into is that ask itself has a timeout, which is separate from any timeout you might specify in Await.result. The full signature of ask is:
def ask(actorRef: ActorRef, message: Any)(implicit timeout: Timeout): Future[Any]
This means that if you did not manually provide a value for timeout and did not define an implicit yourself, you must be inheriting one via one of your imports.
To extend the timeout for your particular ask, simply call it with one (the .minutes syntax needs import scala.concurrent.duration._ in scope):
ask(actorRef, SessionMessage(token, message))(15.minutes).mapTo[ResponseMessage]
or if this applies to all asks in scope, declare your own implicit:
implicit val timeout: Timeout = Timeout(15.minutes) // needs import akka.util.Timeout
When I get one message from a non-Akka client through a TCP socket, I need to reply with three messages. In the sample given below, only the first one goes through properly to the sender (the TCP client, which is non-Akka). The other two go to dead letters. Any idea? Thanks in advance.
import akka.actor.{ActorSystem, Props}
import akka.actor.Status.Failure
import akka.camel.{CamelMessage, Consumer}

object TcpExample {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("some-system")
    val tcpConsumer = system.actorOf(Props[TcpConsumer])
  }

  class TcpConsumer extends Consumer {
    def endpointUri = "mina2:tcp://localhost:6200?textline=true"

    def receive = {
      case msg: CamelMessage =>
        sender ! msg.bodyAs[String]
        sender ! msg.bodyAs[String] // This goes to dead letters
        sender ! msg.bodyAs[String] // This goes to dead letters
      case msg: Failure => sender ! msg
    }
  }
}
Without knowing too much about the internals of the Akka/Camel integration, let me try to demonstrate what's happening here. First, as I mentioned in my comment, the sender in your actor does not directly refer to the TCP client on the other side of the connection. It's lower-level than that; it's whatever ActorRef sent your Consumer the CamelMessage in the first place. So what actor is that? Let me try to explain what I think is happening.
1. When you set up a TCP-based Camel consumer, based on the endpointUri there will be a piece of code (from Camel) that binds to the host and port from the endpointUri.
2. When a new connection request comes in (based on an external client opening a connection to that socket), some sort of actor is probably spun up to handle that individual connection. So there will be 1-n "connection handler" actor instances, matching the number of open connections.
3. When a message comes inbound, it more than likely goes through that connection handler actor. From there, it is either sent to your consumer via ask (?), or another short-lived actor is spun up to handle that individual message.
4. Either way, the next stop is your consumer, where its receive function gets hit with a CamelMessage representing the payload of the message sent from the remote client. When this happens, the actor's sender is still whatever sent the message in step 3.
5. Your consumer now sends a message back to the sender, and from there it is eventually routed back to the connection handler for that connection. There, it is written back to the socket, in a conversational state. One message in, one message out.
I think your problem is that you are breaking the "one in, one out" paradigm here. When you get your CamelMessage, you are only supposed to respond to that message once, and that response will eventually trickle back up to the TCP client on the other end of the socket. I don't think the framework expects another response, and that's why you see dead letters for the other two responses.
So this begs the question: what scenario do you have that requires a "one in, three out" paradigm instead of the "one in, one out" one that the framework seems to expect?
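As a way to check the "one in, one out" expectation from the outside, here is a hypothetical non-Akka test client in Python for the textline endpoint in the question; the 2-second timeout is an arbitrary choice, not anything the framework requires:

import socket

# Connect to the mina2 textline endpoint from the question.
sock = socket.create_connection(('localhost', 6200))
sock.settimeout(2.0)         # arbitrary: stop waiting for further replies
sock.sendall(b'hello\r\n')   # textline=true frames messages by line

received = b''
try:
    while True:
        chunk = sock.recv(1024)
        if not chunk:
            break            # server closed the connection
        received += chunk
except socket.timeout:
    pass                     # no more replies within the timeout
print(received.decode().splitlines())

If the dead-letter behaviour described in the question holds, this prints a single reply line even though the consumer tried to send three.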
I've asked on the Play Framework forums, but figured I'd ask here as well for the additional coverage:
Using Play Framework 2.3, I have a WebSocket handled with an actor that I'm using to push "StatusUpdate" messages to connected clients:
def updateSocket = WebSocket.tryAcceptWithActor[StatusUpdate, StatusUpdate] { implicit request =>
  authorized(Set.empty[SecurityRole]).map {
    case Right(user) =>
      Right({ upstream => DashboardListener.props(upstream, user.dblocations) })
    case Left(_) =>
      Left(Forbidden)
  }
}
Everything is working wonderfully, except...
When a user connects via Internet Explorer and the IE window loses focus, the WebSocket is forcibly closed within 20 or so seconds. Firefox, so far, seems not to exhibit this behavior. I used Fiddler to inspect the WebSocket traffic, and it looks like IE sends a "pong" message after it loses focus:
{"doneTime": "02:08:39.462","messageType": "Pong","messageID": "Client.2",
"wsSession":"WSSession-1","payload": "", "requestPartCount": "1"}
Immediately, the server sends:
{"doneTime": "02:08:39.462","messageType": "Close","messageID": "Server.3",
"wsSession": "WSSession-1","payload": "03-EB-54-68-69-73-20-57-65-62-53-6F-
63-6B-65-74-20-64-6F-65-73-20-6E-6F-74-20-68-61-6E-64-6C-65-20-66-72-61-6D-
65-73-20-6F-66-20-74-68-61-74-20-74-79-70-65", "requestPartCount": "1"}
I'm assuming that this is because my WebSocket doesn't know how to handle pongs (since I've declared incoming and outgoing traffic to be of the StatusUpdate type). Moreover, the client receives a closeEvent with code 1003 ("the connection is being terminated because the endpoint received data of a type it cannot accept"); the hex payload above decodes to exactly that close code followed by the text "This WebSocket does not handle frames of that type". I've done some research, and it seems that this ping/pong is supposed to keep the connection alive without being exposed to the API. Has anyone run into this before, or does anyone know of a potential solution?
If it matters, the clients only receive StatusUpdates via this socket -- at no point is any sort of message ever explicitly sent on it. The StatusUpdate messages originate from elsewhere in my Actor system.