TCP data sometimes not received by Java (or Python) server - sockets

I'm developing a system that consists of an Arduino MKR1000 that sends data over WiFi to a Java server program running on my local network.
Everything works except the main part: data sent by the Arduino is sometimes not received by the server...
I'm using the Arduino WiFi101 library to connect to my WiFi, get a WiFiClient and send data.
The following code is just an example to demonstrate the problem:
for (int i = 0; i < 3; ++i) {
    Serial.println(F("Connecting to wifi"));
    const auto status = WiFi.begin("...", "...");
    if (status != WL_CONNECTED) {
        Serial.print(F("Could not connect to WiFi: "));
        switch (status) {
            case WL_CONNECT_FAILED:
                Serial.println(F("WL_CONNECT_FAILED"));
                break;
            case WL_DISCONNECTED:
                Serial.println(F("WL_DISCONNECTED"));
                break;
            default:
                Serial.print(F("Code "));
                Serial.println(status, DEC);
                break;
        }
    } else {
        Serial.println(F("WiFi status: WL_CONNECTED"));
        WiFiClient client;
        if (client.connect("192.168.0.102", 1234)) {
            delay(500);
            client.print(F("Test "));
            client.println(i, DEC);
            client.flush();
            Serial.println(F("Data written"));
            delay(5000);
            client.stop();
        } else {
            Serial.println(F("Could not connect"));
        }
        WiFi.end();
    }
    delay(2000);
}
The Java server is based on Netty, but doing the same thing by manually creating a Socket and reading from it yields the same result.
The test code is pretty standard, with only simple output (note: it's Kotlin):
val bossGroup = NioEventLoopGroup(1)
val workerGroup = NioEventLoopGroup(6)
val serverFuture = ServerBootstrap().run {
    group(bossGroup, workerGroup)
    channel(NioServerSocketChannel::class.java)
    childHandler(object : ChannelInitializer<NioSocketChannel>() {
        override fun initChannel(ch: NioSocketChannel) {
            ch.pipeline()
                .addLast(LineBasedFrameDecoder(Int.MAX_VALUE))
                .addLast(StringDecoder())
                .addLast(object : ChannelInboundHandlerAdapter() {
                    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
                        println("msg = $msg")
                        ctx.close()
                    }
                })
        }
    })
    bind(port).sync()
}
The Arduino reports that everything is OK (i.e. it writes Data written to the serial console for each iteration), but the server sometimes skips individual messages.
Adding Netty's LoggingHandler shows the following in those cases:
11:28:48.576 [nioEventLoopGroup-3-1] WARN i.n.handler.logging.LoggingHandler - [id: 0x9991c251, L:/192.168.0.20:1234 - R:/192.168.0.105:63845] REGISTERED
11:28:48.577 [nioEventLoopGroup-3-1] WARN i.n.handler.logging.LoggingHandler - [id: 0x9991c251, L:/192.168.0.20:1234 - R:/192.168.0.105:63845] ACTIVE
In the cases where the message is received, it shows:
11:30:01.392 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] REGISTERED
11:30:01.394 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] ACTIVE
11:30:01.439 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] READ: 8B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 54 65 73 74 20 32 0d 0a |Test 2.. |
+--------+-------------------------------------------------+----------------+
11:30:01.449 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] CLOSE
11:30:01.451 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] READ COMPLETE
11:30:01.453 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] INACTIVE
11:30:01.464 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] UNREGISTERED
As I understand it, this means the TCP packets are indeed received, but in the faulty cases Netty's IO thread waits to read the TCP data and never continues...
The same problem exists with a rudimentary Python server (just waiting for a connection and printing the received data).
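For reference, that rudimentary Python server was essentially the following (a minimal sketch; the bind address and port 1234 are assumptions, the rest is just accept-and-print):

import socket

# Minimal TCP server: accept one connection at a time and print whatever arrives.
# Assumes the same port (1234) the Arduino sketch connects to.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 1234))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print("Connection from", addr)
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                print("Received:", data)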
I confirmed the data is sent using tcpflow on Arch Linux with the arguments -i any -C -g port 1234.
I even tried the server on a Windows 7 machine with the same results (TCP packets confirmed with SmartSniff).
Strangely, when I send the data from a Java program instead, it is always and reproducibly received...
Does anybody have any idea how to solve this, or at least how to diagnose it further?
PS: Maybe it is important to note that with tcpflow (i.e. on Linux) I could watch the TCP packets being retransmitted to the server.
Does this mean the server is receiving the packets but not sending an ACK?
SmartSniff didn't show the same behavior (but maybe I used the wrong options to display retransmitted packets).

In the meantime I work around it at the application level: the receiver sends a message acknowledging each received message, and if the acknowledgement does not arrive, the original message is sent again.
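The idea, sketched here in Python purely for illustration (the real sender is the Arduino and the real receiver is the Netty server; the ACK format and names are made up):

import socket

def send_with_ack(host, port, message, retries=3, timeout=2.0):
    # Resend the message until the receiver answers with an "ACK" line,
    # or give up after a few attempts. Illustrative only.
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall((message + "\r\n").encode())
                sock.settimeout(timeout)
                reply = sock.recv(64)
                if reply.strip() == b"ACK":
                    return True
        except (socket.timeout, OSError):
            pass  # no acknowledgement, try again
    return False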
For anyone with the same problem:
While testing something different I updated the board's WiFi firmware to the latest version, 19.5.2. Since then I haven't noticed any lost data, so this may have been the problem.
See Check WiFi101 Firmware Version and Firmware and certificates Updater.
Note: I couldn't get the sketches to run from the Arduino IDE, but they worked with PlatformIO.

Related

MqttBrowserClient fails to connect due to missing CONNACK packet

I am trying to build a web app with Flutter that connects to the HiveMQ broker. I took the broker name from the official website and set the port number to 8000, just as mentioned there, but I still get the error message below:
error is mqtt-client::NoConnectionException: The maximum allowed connection attempts ({1}) were exceeded. The broker is not responding to the connection request message (Missing Connection Acknowledgement?
I really have no clue how to proceed. Can someone please help?
Below is my code:
MqttBrowserClient mq = MqttBrowserClient(
    'wss://broker.mqttdashboard.com:8000', '',
    maxConnectionAttempts: 1);
/*
MqttBrowserClient mq = MqttBrowserClient('ws://test.mosquitto.org', 'client-1',
    maxConnectionAttempts: 1);
*/

class mqttService {
  Future<MqttBrowserClient?> connectToServer() async {
    try {
      final connMess = MqttConnectMessage()
          .withClientIdentifier('clientz5tWzoydVL')
          .authenticateAs('a14guguliye', 'z5tWzoydVL')
          .withWillTopic('willtopic')
          .withWillMessage('My Will message')
          .startClean() // Non persistent session for testing
          .withWillQos(MqttQos.atLeastOnce);
      mq.port = 1883;
      mq.keepAlivePeriod = 50;
      mq.connectionMessage = connMess;
      mq.websocketProtocols = MqttClientConstants.protocolsSingleDefault;
      mq.onConnected = onConnected;
      var status = await mq.connect();
      return mq;
    } catch (e) {
      print("error is " + e.toString());
      mq.disconnect();
      return null;
    }
  }
}
That port 8000 may be open but the HiveMQ broker may not be listening.
Make sure that the broker is fully booted and binds to that IP:Port combo.
In the HiveMQ broker startup output, you should see something similar to:
Started Websocket Listener on address 0.0.0.0 and on port 8000
If needed, the HiveMQ Broker configuration documentation is here.
You can use the public HiveMQ MQTT Websocket demo client to test your connection to make sure it's not a local code issue.
As a last option, use Wireshark to monitor MQTT traffic with a filter of tcp.port == 8000 and mqtt.
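If you would rather test from code than from the demo client, a small sketch with paho-mqtt (Python, 1.x callback API) over websockets can check the broker independently of the Flutter app; the /mqtt websocket path is an assumption, adjust it to the broker's configured path:

import paho.mqtt.client as mqtt

# Connect to the public HiveMQ websocket listener to rule out local code issues.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)  # 0 means a CONNACK was received

client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/mqtt")  # assumed websocket path
client.on_connect = on_connect
client.connect("broker.mqttdashboard.com", 8000, keepalive=50)
client.loop_forever()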

Asyncio - RELIABLY Always Close Straggling TCP Connections

I have a program which connects to a bunch of hosts and checks whether they are "socket reflectors". Basically, it scans a bunch of IPs and does this:
Connect and check: if there is data, is it the same as what I am sending? Yes: return true; no: return false. No data: return false.
For some reason, asyncio is not reliably closing TCP connections after they time out. I attribute this to the fact that a lot of the hosts I am connecting to are god knows what, maybe just buggy servers. Be that as it may, there must be a way to force a timeout? When I run this, it hangs after a while. Out of 12,978 hosts, about 12,768 complete. Then I end up with a bunch of open ESTABLISHED connections! Why does this happen?
I need it to close the connection if nothing happens during the given timeout period.
async def tcp_echo_client(message, host_port, loop, connection_timeout=10):
    """
    Asyncio TCP echo client
    :param message: data to send
    :param host_port: host and port to connect to
    :param loop: asyncio loop
    """
    host_port_ = host_port.split(':')
    try:
        host = host_port_[0]
        port = host_port_[1]
    except IndexError:
        pass
    else:
        fut = asyncio.open_connection(host, port, loop=loop)
        try:
            reader, writer = await asyncio.wait_for(fut, timeout=connection_timeout)
        except asyncio.TimeoutError:
            print('[t] Connection Timeout')
            return 1
        except Exception:
            return 1
        else:
            if args.verbosity >= 1:
                print('[~] Send: %r' % message)
            writer.write(message.encode())
            writer.drain()
            data = await reader.read(1024)
            await asyncio.sleep(1)
            if data:
                if args.verbosity >= 1:
                    print(f'[~] Host: {host} Received: %r' % data.decode())
                if data.decode() == message:
                    honeypots.append(host_port)
                    writer.close()
                    return 0
                else:
                    filtered_list.append(host_port)
                    print(f'[~] Received: {data.decode()}')
                    writer.close()
                    return 1
            else:
                filtered_list.append(host_port)
                writer.close()
                if args.verbosity > 1:
                    print(f'[~] No data received for {host}')
                return 1
What am I doing wrong?
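A possible factor (not confirmed here): only open_connection is wrapped in wait_for, so reader.read(1024) has no timeout and a peer that accepts the connection but never sends anything can keep the socket alive indefinitely. A minimal sketch, assuming that is the cause, of bounding the read and always closing the writer:

import asyncio

async def read_with_timeout(reader, writer, n=1024, read_timeout=10):
    # Bound the read as well as the connect, and always close the socket.
    # Illustrative sketch only, not the original code.
    try:
        return await asyncio.wait_for(reader.read(n), timeout=read_timeout)
    except asyncio.TimeoutError:
        return b''
    finally:
        writer.close()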

wdt resets on sending REST API subscribe request to PubNub, via esp8266

I am using this code to connect a NodeMCU (ESP8266) to PubNub. Publish works fine, but subscribe works for a while and then causes the controller to reset. The code properly follows the subscribe message timestamp mechanism. Looking at the subscribe code, three steps are done in a loop:
//1. connecting to pubsub.pubnub.com
if (!client.connect(host, 80))
{
    Serial.println("connection failed");
    return;
}

//2. making and sending the GET request to subscribe
url = "/subscribe/";
url += subKey;
url += "/";
url += channel;
url += "/0/";
url += timeToken;
//Serial.println(url);
client.print(String("GET ") + url + " HTTP/1.1\r\n" +
             "Host: " + host + "\r\n" +
             "Connection: close\r\n\r\n");
delay(10);

//3. finally listening the received msg response
while (client.available())
{
    String line = client.readStringUntil('\r');
    if (line.endsWith("]"))
    {
        Serial.println(line);
        json_handler(string_parser(line)); // handling the received msg
    }
}
The first step is causing the controller to reset (soft WDT reset):
Soft WDT reset
ctx: cont
sp: 3ffef920 end: 3ffefcb0 offset: 01b0
>>>stack>>>
3ffefad0: 00000000 3ffefff8 3ffefff8 40204083
3ffefae0: 402017b8 00000000 3ffefff8 40204083
3ffefaf0: 00000000 3ffefff8 00000000 40202152
3ffefb00: 001e8480 3ffe0001 3ffefd38 40202152
3ffefb10: 00000000 00000040 00000000 00000000
I observed this by moving the first step (connecting to the host) from void loop() to setup(). I receive the initial GET response as [[],"14970123776801072"], but after that the connection is closed in step 2, so I don't receive any further messages from my subscribed channel. I tried NOT closing the connection in step 2 and it seems to work well, but with delays and sometimes the same message is received twice. I know this is not the ideal way of doing it. So my question is: do we need to continuously open a new connection, send the GET request to receive messages from our channel, and close the connection? If yes, then what is actually causing the controller to reset?
Or is there a way to send the subscribe GET request once and then always wait for incoming messages?
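For context, the HTTP pattern the sketch above implements is a long-poll loop: open a connection, send GET /subscribe/{subKey}/{channel}/0/{timeToken}, read the response, take the new time token from it, and repeat. A rough Python illustration of that loop (the URL scheme and the [[messages],"timetoken"] response shape are taken from the code and output above; everything else is simplified and assumed):

import json
import requests

def subscribe_loop(sub_key, channel):
    # Long-poll the subscribe endpoint: each request blocks until messages
    # arrive (or the server times out), then is repeated with the time token
    # returned by the previous response. Simplified sketch.
    time_token = "0"
    while True:
        url = f"http://pubsub.pubnub.com/subscribe/{sub_key}/{channel}/0/{time_token}"
        resp = requests.get(url, timeout=310)
        messages, time_token = json.loads(resp.text)[:2]
        for msg in messages:
            print("received:", msg)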

JavaMail IdleManager throws "Folder is not using SocketChannels" exception after a while

I'm using IdleManager, in Scala, to listen to a Gmail folder.
I already have this: props.setProperty("mail.imaps.usesocketchannels", "true")
The main part of my code is like this:
folder.addMessageCountListener(new MessageCountAdapter() {
  override def messagesAdded(ev: MessageCountEvent) {
    Logger.info("Got " + ev.getMessages.length + " new messages")
    idleManager.watch(folder)
  }
})

// timeLength = 20 minutes
system.scheduler.schedule(initialDelay = timeLength, interval = timeLength) {
  try {
    folder.asInstanceOf[IMAPFolder].doCommand(new IMAPFolder.ProtocolCommand() {
      def doCommand(p: IMAPProtocol) = {
        p.simpleCommand("NOOP", null)
        null
      }
    })
    Logger.debug("Continue after sending NOOP")
    idleManager.watch(folder)
  } catch {
    case e: Exception => Logger.error(s"MailHelper: ${e.getMessage}")
  }
}

idleManager.watch(folder)
You can see that I let the idleManager continue watching the folder after I get new messages and after I send a NOOP command. A scheduler is created to periodically (currently once every 20 minutes) send a NOOP command to the server to keep the connection alive. My program worked fine, but only for a while.
14 hours after the first call to idleManager.watch(folder), and about 12.5 hours after the last email was received, I still got the log Continue after sending NOOP, but right after that came an error log: MailHelper: Folder is not using SocketChannels.
Could you please help me with this?
Edited:
Thanks @BillShannon for your quick reply. I have updated from v1.5.2 to v1.5.6 and turned on the debug output. I'm sure the Properties object and the "store" instance (created from a Session with the "imaps" protocol) are unchanged.
The error has appeared again. After a call to idleManager.watch(folder), here is the log ([folder] is the imaps protocol string for my folder)
DEBUG IMAP: IdleManager watching [folder]
A385 IDLE
+ idling
DEBUG IMAP: startIdle: set to IDLE
DEBUG IMAP: startIdle: return true
DEBUG IMAP: IdleManager.watch startIdle succeeded for [folder]
DEBUG IMAP: IdleManager selected 0 channels
DEBUG IMAP: IdleManager adding [folder] to selector
DEBUG IMAP: IdleManager waiting...
DEBUG IMAP: IdleManager selected 1 channels
DEBUG IMAP: IdleManager selected folder: [folder]
DEBUG IMAP: handleIdle: set to RUNNING
DEBUG IMAP: IdleManager got exception for folder: [folder], THROW:
javax.mail.FolderClosedException: * BYE JavaMail Exception: java.io.IOException: Connection dropped by server?
at com.sun.mail.imap.IMAPFolder.handleIdle(IMAPFolder.java:3199)
at com.sun.mail.imap.IdleManager.processKeys(IdleManager.java:370)
at com.sun.mail.imap.IdleManager.select(IdleManager.java:281)
at com.sun.mail.imap.IdleManager.access$200(IdleManager.java:137)
at com.sun.mail.imap.IdleManager$1.run(IdleManager.java:164)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
DEBUG IMAP: IdleManager waiting...
20 minutes later, the program sent another "NOOP" and the status returned was "OK". The program then called idleManager.watch(folder) once again, and the error log Folder is not using SocketChannels reappeared.
Do you need anything else? Is this an issue with the library?

spray-routing with spray-can - Hand off to actor only works sometimes in my application

Sorry, this is kind of long, because I need to include various files.
Problem
I am not sure what is going on in my setup [using spray 1.3.3]. I am trying to do file uploads using chunked requests, which seemed to work as expected once or twice, but for some reason most of the time the actor never receives the chunks after the initial registration of the chunk handler is finished. The request just disappears into oblivion and my logs keep waiting the whole time. The 2 times out of 50 that it did work were when I ran through the debugger. However, even with the debugger it mostly doesn't work.
Based on various examples and discussions related to DemoService and FileUploadHandler, I am using spray-can to check the received HttpMessage for chunks, and at that point spawn off a separate route. I use curl with chunked encoding to test my output.
Please help! I have spent too many hours trying to get chunked requests mixed with routes working for my use case.
Code
Here is the code I have:
TestApp.scala
object TestApp extends App with GlobalConfig {
  implicit val system = ActorSystem("TestApp")
  implicit val ec = system.dispatcher

  val healthActor = system.actorOf(Props[HealthStateActor])
  val routes = new HealthcheckController(healthActor).route ~
    new ResourceController().route
  val requestRouter = system.actorOf(Props(new HttpRequestCustomHandler(routes)))

  IO(Http) ! Http.Bind(requestRouter, "0.0.0.0", HttpBindPort)
}
FileUploadActor.scala
class FileUploadActor(client: ActorRef, requestMetadata: RequestMetadata, request: HttpRequest, ctx: RequestContext)
  extends Actor with ActorLogging with GlobalConfig {

  import request._

  var bytesWritten = 0L
  var bytes: Array[Byte] = "".getBytes

  // client ! CommandWrapper(SetRequestTimeout(Duration.Inf)) // cancel timeout

  def receive = {
    case c: MessageChunk =>
      log.info(s"Got ${c.data.length} bytes of chunked request $method $uri")
      bytes ++= c.data.toByteArray
      bytesWritten += c.data.length

    case e: ChunkedMessageEnd =>
      log.info(s"Got end of chunked request $method $uri. Writing $bytesWritten bytes for upload: $requestMetadata")
      Try(saveFile(requestMetadata)) match {
        case Success(_) => ctx.complete(HttpResponse(StatusCodes.Created, entity = "success"))
        case Failure(f) => f.printStackTrace(); ctx.complete(HttpResponse(StatusCodes.InternalServerError, entity = "failure"))
      }
      // client ! CommandWrapper(SetRequestTimeout(UploadRequestTimeout.seconds)) // reset timeout to original value
      context.stop(self)
  }
}
FileUploadService.scala
The RegisterChunkHandler message is the last step where I see the debugger stop at breakpoints, and where the logs go quiet. When it does work, I can see MessageChunk messages being received by FileUploadActor.
trait FileUploadService extends Directives {
  this: Actor with ActorLogging with GlobalConfig =>

  def chunkedRoute() = {
    path(resourceAPI / "upload" / "resource" / Segment) { resourceId =>
      put {
        detach() {
          ctx => {
            val request = ctx.request
            val client = sender()
            val handler = context.actorOf(Props(new FileUploadActor(client,
              RequestMetadata(....),
              request, ctx)))
            sender ! RegisterChunkHandler(handler)
          }
        }
      }
    }
  }
}
HttpRequestCustomHandler.scala
class HttpRequestCustomHandler(routes: Route, resourceProviderRef: ResourceProvider)
  extends HttpServiceActor
  with FileUploadService
  with ActorLogging
  with GlobalConfig {

  val normal = routes
  val chunked = chunkedRoute()

  def resourceProvider = resourceProviderRef

  val customReceive: Receive = {
    // clients get connected to self (singleton handler)
    case _: Http.Connected => sender ! Http.Register(self)
    case r: HttpRequest =>
      normal(RequestContext(r, sender(), r.uri.path).withDefaultSender(sender()))
    case s @ ChunkedRequestStart(HttpRequest(PUT, path, _, _, _)) =>
      chunked(RequestContext(s.request, sender(), s.request.uri.path).withDefaultSender(sender()))
  }

  override def receive: Receive = customReceive
}
HttpRequestHandler.scala
abstract class HttpRequestHandler(routes: Route) extends HttpServiceActor {
  override def receive: Receive = runRoute(routes)
}
application.conf:
spray.can.server {
  request-timeout = 20 s
  pipelining-limit = disabled
  reaping-cycle = infinite
  stats-support = off
  request-chunk-aggregation-limit = 0
  parsing.max-content-length = 100000000
  parsing.incoming-auto-chunking-threshold-size = 15000000
  chunkless-streaming = on
  verbose-error-messages = on
  verbose-error-logging = on
}
curl Success:
curl -vvv -X PUT -H "Content-Type: multipart/form-data" \
  -d '@/Users/abc/Documents/test.json' \
  'http://localhost:8180/upload/resource/02521081'
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8100 (#0)
> PUT /upload/resource/02521081 HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8100
> Accept: */*
> Content-Type: multipart/form-data
> Content-Length: 82129103
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
* Server spray-can/1.3.3 is not blacklisted
< Server: spray-can/1.3.3
< Date: Mon, 17 Aug 2015 07:45:58 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 7
<
* Connection #0 to host localhost left intact
success
Failure with same curl (waits forever):
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8100 (#0)
> PUT /upload/resource/02521081 HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8100
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: multipart/form-data
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
^C
Failure (waits forever):
resource 2015-08-17 01:33:09.374 [Resource] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
resource 2015-08-17 01:33:09.396 08:33:09.382UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - logger log1-Slf4jLogger started
resource 2015-08-17 01:33:09.404 08:33:09.383UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - Default Loggers started
resource 2015-08-17 01:33:10.160 08:33:10.159UTC [Resource] INFO spray.can.server.HttpListener Resource-akka.actor.default-dispatcher-4 akka://resource/user/IO-HTTP/listener-0 - Bound to /0.0.0.0:8100
Success (logs edited for clarity):
resource 2015-08-17 00:42:12.283 [Resource] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
resource 2015-08-17 00:42:12.295 07:42:12.290UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - logger log1-Slf4jLogger started
resource 2015-08-17 00:42:12.308 07:42:12.291UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - Default Loggers started
resource 2015-08-17 00:42:13.007 07:42:13.005UTC [Resource] INFO spray.can.server.HttpListener Resource-akka.actor.default-dispatcher-4 akka://resource/user/IO-HTTP/listener-0 - Bound to /0.0.0.0:8100
resource 2015-08-17 00:43:47.615 07:43:47.529UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 131072 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
resource 2015-08-17 00:43:49.220 07:43:49.204UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 131072 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
.
.
.
resource 2015-08-17 00:44:05.605 07:44:05.605UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 45263 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
resource 2015-08-17 00:44:05.633 07:44:05.633UTC [Resource] INFO c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got end of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98. Writing 82129103 bytes for upload: RequestMetadata(...,multipart/form-data)
resource 2015-08-17 00:44:05.634 07:44:05.633UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - actor is akka://resource/user/$b/$b, sender is Actor[akka://resource/temp/$a], client is Actor[akka://resource/temp/$a]
resource 2015-08-17 00:45:58.445 [Resource] DEBUG com.abc.resource.io.FileClient$ - UploadResult#109a69fb
resource 2015-08-17 00:45:58.445 [Resource] DEBUG com.abc.resource.io.FileClient$ - upload is done: true
Please let me know if you see anything weird. What is usually the reason an actor would vanish like this? Thanks in advance for your help!
UPDATE:
I added further logging and see that the 'from' and 'to' actors apparently both turn into deadLetters, even though that is clearly not the case in the log line right above. This happens when the RegisterChunkHandler message is sent to sender in FileUploadService.scala:
sender ! RegisterChunkHandler(handler)
Related log:
resource 2015-08-17 21:14:32.173 20:14:32.173UTC [Resource] DEBUG c.l.a.io.HttpRequestCustomHandler Resource-akka.actor.default-dispatcher-3 akka://Resource/user/httpcustomactor - sender is Actor[akka://Resource/temp/$a]
resource 2015-08-17 21:14:32.175 20:14:32.175UTC [Resource] INFO akka.actor.DeadLetterActorRef A4Resource-akka.actor.default-dispatcher-6 akka://A4Resource/deadLetters - Message [spray.can.Http$RegisterChunkHandler] from Actor[akka://A4Resource/user/httpcustomactor#-1286373908] to Actor[akka://A4Resource/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
resource 2015-08-17 21:14:32.176 20:14:32.176UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://Resource/user/httpcustomactor/$a - pre-start
Any idea how this can be avoided?