I will explain the setup first.
Setup: I have a microcontroller board running a CoAP REST server (using Contiki OS) with an observable resource, and a client (using CoAPthon, a Python library for CoAP) running on a Linux SOM that observes that resource. I am successfully able to observe a small amount of data (64 bytes) sent from the server (microcontroller) to the client (Linux SOM). I will add the code at the end, after describing everything.
Question: I need help sending a bigger chunk of data (say 1024 bytes) from the CoAP server to the observing client. How can I do that? Thanks in advance for any help.
I am posting the Contiki observable resource code and the CoAPthon client code (the code shown does not yet send the big data).
Contiki Code:
char * temp_payload = "Behold empty data";
PERIODIC_RESOURCE(res_periodic_ext_temp_data,
                  "title=\"Temperature\";rt=\"Temperature\";obs",
                  res_get_handler_of_periodic_ext_temp_data,
                  NULL,
                  NULL,
                  res_delete_handler_ext_temp_data,
                  (15 * CLOCK_SECOND),
                  res_periodic_handler_of_ext_temp_data);

static void
res_get_handler_of_periodic_ext_temp_data(void *request, void *response, uint8_t *buffer, uint16_t preferred_size, int32_t *offset)
{
  /*
   * For minimal complexity, request query and options should be ignored for GET on observable resources.
   * Otherwise the requests must be stored with the observer list and passed by REST.notify_subscribers().
   * This would be a TODO in the corresponding files in contiki/apps/erbium/!
   */

  /* Check the offset for boundaries of the resource data. */
  if(*offset >= 1024) {
    REST.set_response_status(response, REST.status.BAD_OPTION);
    /* A block error message should not exceed the minimum block size (16). */
    const char *error_msg = "BlockOutOfScope";
    REST.set_response_payload(response, error_msg, strlen(error_msg));
    return;
  }

  REST.set_header_content_type(response, REST.type.TEXT_PLAIN);
  REST.set_response_payload(response, (temp_payload + *offset), MIN((int32_t)strlen(temp_payload) - *offset, preferred_size));
  REST.set_response_status(response, REST.status.OK);

  /* IMPORTANT for chunk-wise resources: signal chunk awareness to the REST engine. */
  *offset += preferred_size;

  /* Signal end of resource representation. */
  if(*offset >= (int32_t)strlen(temp_payload) + 1) {
    *offset = -1;
  }

  REST.set_header_max_age(response, MAX_AGE);
}
I am not including the code for the periodic handler; the GET handler is notified periodically by the periodic handler.
CoAPthon code:
# Imports assumed from the rest of the client module (CoAPthon, Python 2);
# `port` is assumed to be defined elsewhere.
from Queue import Empty
from coapthon import defines
from coapthon.client.helperclient import HelperClient
from coapthon.messages.request import Request

def ext_temp_data_callback_observe(response):
    print response.pretty_print()

def observe_ext_temp_data(host, callback):
    client = HelperClient(server=(host, port))
    request = Request()
    request.code = defines.Codes.GET.number
    request.type = defines.Types["CON"]
    request.destination = (host, port)
    request.uri_path = "data/res_periodic_ext_temp_data"
    request.content_type = defines.Content_types["text/plain"]
    request.observe = 0
    request.block2 = (0, 0, 64)
    try:
        response = client.send_request(request, callback)
        print response.pretty_print()
    except Empty as e:
        print("listener_post_observer_rate_of_change({0}) timed out".format(host))
Again, I need help implementing an observer with CoAP block-wise transfer (https://www.rfc-editor.org/rfc/rfc7959#page-26).
I can't tell much about the particular systems you use, but in general the combination of block-wise transfer and observe works like this: the server only sends the first block of the updated resource, and it is then up to the client to ask for the remaining blocks and to verify that their ETag options match.
The Contiki code looks like it should be sufficient, as it sets the offset to -1 to mark the final chunk, which is what governs the "more data" bit in the block header.
On the CoAPthon side, you may need to do the reassembly manually, or ask that CoAPthon do the reassembly automatically (its code does not indicate that it supports the combination of block-wise and observe, at least not at a short glance).
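In case manual reassembly turns out to be necessary, a minimal sketch of what it could look like, written against the client API already shown in the question, is below. It assumes (unverified against CoAPthon's internals) that the Response object exposes block2 as a (num, more, size) tuple and an etag attribute, mirroring the request attributes used above; the function name and parameters are made up for the example.

def fetch_remaining_blocks(client, server, uri_path, first_response, block_size=64):
    # Reassemble an observed representation: the notification carries only
    # block 0, so fetch blocks 1..N with follow-up GETs and check that the
    # ETag stays the same across all of them.
    payload = first_response.payload or ""
    etag = first_response.etag
    block2 = first_response.block2                # assumed (num, more, size) or None
    while block2 is not None and block2[1] == 1:  # "more" bit set
        request = Request()
        request.code = defines.Codes.GET.number
        request.type = defines.Types["CON"]
        request.destination = server
        request.uri_path = uri_path
        request.block2 = (block2[0] + 1, 0, block_size)
        response = client.send_request(request)  # blocking call, no callback
        if response.etag != etag:
            # The representation changed mid-transfer; the blocks collected so
            # far no longer belong together, so the transfer must be restarted.
            raise RuntimeError("ETag changed during block-wise reassembly")
        payload += response.payload
        block2 = response.block2
    return payload

You would call this from ext_temp_data_callback_observe, e.g. full = fetch_remaining_blocks(client, (host, port), "data/res_periodic_ext_temp_data", response), which means client, host and port have to be reachable from the callback (for instance as module-level variables).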
To "bootstrap" your development, you may consider using Eclipse Californium.
The simple client in demo-apps/cf-helloworld-client requires some changes for observe. If you need help, just open an issue on GitHub.
With two years of experience with that feature, let me mention that if your data changes faster than your "bandwidth" can transfer it (including the RTT of the block exchanges), you may send a lot of blocks in vain.
If the data changes just before your last block can be sent, that invalidates the complete transfer so far. Some then start to develop work-arounds, but from there you are on very thin ice :-).
Related
I'm brand new to working with networking, protocols, and sockets, but this issue has been bothering me for a few days and I just cannot seem to find a solution. I am using Seagull, an open-source multi-protocol traffic generator (source code), to create a client with a custom UDP protocol. Unfortunately, no one really maintains this software anymore; other people have had this problem, but there are no solutions, and it may be a bug in the generator itself. I was able to write the XML scripts needed to run the traffic generator, and when I ran the client on the local loopback (127.0.0.1) the generator worked fine; I was able to collect the packets and analyze them with Wireshark, and they contained the correct data.
I'm now trying to use this client to send messages to a server on my local network (192.x.x.x), but Seagull keeps failing to send the messages. It's not a network issue, because I've been able to ping the address with no packet loss. I've traced the source of the error back to the sendto() function, which keeps failing due to an invalid argument. I've stepped through the code with GDB when the destination was set to both the local loopback and the other IP, and the arguments passed to sendto() were exactly the same, with the exception of different IP addresses in the sockaddr struct, which is of course expected. However, when I look at the registers during the system call in sendto(), the one that contains the message length turns negative partway through the call, and that is the value returned from the function call; this does not happen on the local loopback. Here is the section of code that calls sendto() and fails:
size_t C_SocketWithData::send_buffer(unsigned char *P_data,
                                     size_t P_size) {
  // T_SockAddrStorage *P_remote_sockaddr,
  // tool_socklen_t *P_len_remote_sockaddr) {
  size_t L_size = 0 ;
  int L_rc ;

  if (m_write_buf_size != 0) {
    // Try to send pending data
    if (m_type == E_SOCKET_TCP_MODE) {
      L_rc = _call_write(m_write_buf, m_write_buf_size) ;
    } else {
      L_rc = _write(m_write_buf, m_write_buf_size) ;
    }
    if (L_rc < 0) {
      SOCKET_ERROR(0,
                   "send failed [" << L_rc << "] [" << strerror(errno) << "]");
      switch (errno) {
      case EAGAIN:
        SOCKET_ERROR(0, "Flow control not implemented");
        break ;
      case ECONNRESET:
        break ;
      default:
        SOCKET_ERROR(0, "process error [" << errno << "] not implemented");
        break ;
      }
      return(0);
where _write() is a wrapper for sendto().
I'm not really sure what is going on that causes this. I've spent hours looking through the source code and tracing what happens, but everything seems normal up until the buffer length is modified in the system call. I've looked at the socket() initialization, binding, and other functions, but everything seems fine. If anyone has any experience with Seagull or this problem, please let me know if you have any suggestions. I've looked through almost every sendto()-related question on this website and have not found a solution.
I am running the client on Ubuntu 14.04 in a VM (VirtualBox) on a Windows 10 host, from which I'm trying to send the messages. Thanks in advance!
I figured out the answer to this after days of debugging and reading source code, and I want to post it in case some poor soul has the same problem in the future. The original Seagull implementation always tries to bind the socket before calling send/sendto. Since sendto() automatically binds an unbound UDP socket, I was able to remove the explicit bind for this case.
Original implementation in C_SocketClient::_open (C_Socket.cpp Line 666):
} else {
  L_rc = call_bind(m_socket_id,
                   (sockaddr *)(void *)&(m_remote_addr_info->m_addr_src),
                   SOCKADDR_IN_SIZE(&(m_remote_addr_info->m_addr_src)));
Edited Version:
} else {
  /* UDP does not need to bind first */
  if (m_type != E_SOCKET_UDP_MODE) {
    L_rc = call_bind(m_socket_id,
                     (sockaddr *)(void *)&(m_remote_addr_info->m_addr_src),
                     SOCKADDR_IN_SIZE(&(m_remote_addr_info->m_addr_src)));
  } else {
    L_rc = 0;
  }
Now Seagull works and I am able to send my custom protocol! I opened a pull request for the original source code so that this can possibly be fixed.
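For anyone who wants to see this behaviour outside of Seagull: on an unbound UDP socket, the first sendto() makes the kernel pick an ephemeral local port automatically, so the explicit bind really is unnecessary for the UDP case. A tiny illustration in Python (the destination address is a placeholder):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Note: no bind() call here.
sock.sendto(b"hello", ("192.168.1.10", 9000))  # placeholder destination
print(sock.getsockname())  # the kernel has now assigned a local address and port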
I apologize beforehand if some of these questions seem obvious to expert network programmers. I have researched and read about network programming, and it is still not clear to me how to do this.
Assume that I want to write a TCP proxy (in Go) between some TCP client and some TCP server. Something like this:
First, assume that these connections are semi-permanent (they will be closed after a long, long while) and that I need the data to arrive in order.
The idea I want to implement is the following: whenever I get a request from the client, I want to forward that request to the backend server and wait (doing nothing) until the backend server responds to me (the proxy), and then forward that response to the client (assume that both TCP connections will be maintained in the common case).
There is one main problem that I am not sure how to solve. When I forward the request from the proxy to the server and get the response, how do I know when the server has sent me all the information I need, if I do not know beforehand the format of the data being sent from the server to the proxy (i.e. I don't know if the response from the server uses a type-length-value scheme, nor do I know if `\r\n` indicates the end of the message from the server)? I was told that I should assume that I have received all the data from the server connection whenever my read from the TCP connection returns zero bytes, or fewer bytes than I expected. However, this does not seem correct to me, for the following reason:
Assume that the server for some reason only writes one byte at a time to its socket, but the total length of the response to the "real" client is much, much longer. Isn't it then possible that when the proxy reads the TCP socket connected to the server, it only reads one byte, and if it loops fast enough (doing another read before more data arrives), it reads zero and incorrectly concludes that it got the whole message the client was meant to receive?
One way to fix this might be to wait after each read from the socket, so that the proxy doesn't loop faster than it receives bytes. The reason I am worried is this: assume there is a network partition and I can't talk to the server anymore, but it isn't disconnected long enough to time out the TCP connection. Isn't it then possible that I try to read from the TCP socket to the server again (faster than I receive data), read zero, and incorrectly conclude that this is all the data, and then send it back to the client? (Remember, the promise I want to keep is that I only write whole messages to the client connection. It would therefore be incorrect for the proxy to read the connection again at a later time, after it has already written to the client, and send the missing chunk later, perhaps in the middle of the response to a different request.)
The code that I have written is in go-playground.
The analogy I like to use to explain why I think this method doesn't work is the following:
Say we have a cup, and the proxy drinks half the cup every time it does a read from the server, but the server only puts in one teaspoon at a time. If the proxy drinks faster than it receives teaspoons, it might reach zero too soon and conclude that its socket is empty and that it's OK to move on, which is wrong if we want to guarantee we are sending full messages every time. Either this analogy is wrong and some "magic" in TCP makes it work, or the algorithm that reads until the socket is empty is just plain wrong.
A question that deals with a similar problem here suggests reading until EOF. However, I am unsure why that would be correct. Does reading EOF mean that I got the intended message? Is an EOF sent each time someone writes a chunk of bytes to a TCP socket (i.e. I am worried that if the server writes one byte at a time, it sends one EOF per byte)? Or is EOF part of the "magic" of how a TCP connection really works? Does sending an EOF close the connection? If it does, it's not a method I want to use. Also, I have no control over what the server might be doing (i.e. I do not know how often it wants to write to the socket to send data to the proxy, though it's reasonable to assume it writes to the socket with some "standard/normal" socket-writing pattern). I am just not convinced that reading until EOF from the server socket is correct. Why would it be? When can I even read to EOF? Are EOFs part of the data, or are they in the TCP header?
Also, regarding the idea of waiting just epsilon below the timeout: would that work in the worst case, or only on average? I also realized that if the Wait() call is longer than the timeout, then if you return to the TCP connection and it doesn't have anything, it's safe to move on. However, if it doesn't have anything and we don't know what happened to the server, then we would time out anyway, so it's safe to close the connection (because the timeout would have done that anyway). Thus, I think that if the Wait call is at least as long as the timeout, this procedure does work! What do people think?
I am also interested in an answer that can justify why this algorithm might work in some cases. For example, I was thinking: even if the server only writes one byte at a time, if the deployment scenario is a tight data centre, then on average, because delays are really small and the wait call is almost certainly long enough, wouldn't this algorithm be fine?
Also, are there any risks of the code I wrote getting into a "deadlock"?
package main

import (
	"fmt"
	"net"
)

type Proxy struct {
	ServerConnection *net.TCPConn
	ClientConnection *net.TCPConn
}

func (p *Proxy) Proxy() {
	fmt.Println("Running proxy...")
	for {
		request := p.receiveRequestClient()
		p.sendClientRequestToServer(request)
		response := p.receiveResponseFromServer() // <-- worried about this one.
		p.sendServerResponseToClient(response)
	}
}

func (p *Proxy) receiveRequestClient() (request []byte) {
	// Assume this function is a black box and that it works.
	// Maybe we know that the messages from the client always end in \r\n,
	// or that they are length-prefixed.
	return
}
func (p *Proxy) sendClientRequestToServer(request []byte) {
	bytesSent := 0
	bytesToSend := len(request)
	for bytesSent < bytesToSend {
		n, _ := p.ServerConnection.Write(request[bytesSent:]) // write only the unsent remainder
		bytesSent += n
	}
	return
}
// Intended behaviour: waits until ALL of the response from the backend server is obtained.
// What it actually does: assumes that if it reads zero, the server has not yet
// written to the proxy, and therefore waits. However, once the first byte has been read,
// it keeps reading until it has extracted all the data from the server and the socket is
// "empty" (signaled by reading zero in the second loop).
func (p *Proxy) receiveResponseFromServer() (response []byte) {
	bytesRead, _ := p.ServerConnection.Read(response)
	for bytesRead == 0 {
		bytesRead, _ = p.ServerConnection.Read(response)
	}
	for bytesRead != 0 {
		n, _ := p.ServerConnection.Read(response)
		bytesRead += n
		// Wait(n) could solve it here?
	}
	return
}
func (p *Proxy) sendServerResponseToClient(response []byte) {
	bytesSent := 0
	bytesToSend := len(response)
	for bytesSent < bytesToSend {
		n, _ := p.ClientConnection.Write(response[bytesSent:])
		bytesSent += n
	}
	return
}
func main() {
	proxy := &Proxy{}
	proxy.Proxy()
}
Unless you're working with a specific higher-level protocol, there is no "message" to read from the client to relay to the server. TCP is a stream protocol, and all you can do is shuttle bytes back and forth.
The good news is that this is amazingly easy in Go, and the core part of this proxy will be:
go io.Copy(server, client)
io.Copy(client, server)
This is obviously missing error handling, and doesn't shut down cleanly, but clearly shows how the core data transfer is handled.
I am trying to connect to the Apple Push Notification Service (APNS), which uses a simple binary protocol over TCP protected with TLS (or SSL). The protocol specifies that when an error is encountered (there are about 10 well-defined error conditions), APNS sends back an error response and then closes the connection. This results in a half-closed socket, because the remote peer closed the socket. I can see in tcpdump that it is a graceful shutdown, because APNS sends a FIN and then an RST.
Out of all the error conditions, I can deal with most of them by validating before sending. The case where this fails is when a notification is sent to an invalid device token, which cannot be dealt with that easily because the tokens could be malformed. Tokens are opaque 32-byte values that APNS provides to a device and that are then registered with me. I have no way of knowing whether a token is valid when it is submitted to my service. Presumably APNS checksums the tokens in some way so that it can validate them quickly.
Anyway,
I did what I thought was the right thing:
a. open the socket
b. try writing
c. if the write failed, read the error response
Unfortunately, this doesn't seem to work. I figure APNS is sending an error response and I am either not reading it back correctly or not setting the socket up right. I have tried the following techniques:
Use a separate thread per socket to try to read the response, if any, every 5 ms or so.
Use a blocking read after a write failure.
Do a final read after the remote disconnect.
I have tried this with C# + .NET 4.5 on Windows and Java 1.7 on Linux. In either case, I never seem to get the error response and the socket indicates that no data is available to read.
Are half-closed sockets supported on these operating systems and/or frameworks? There isn't anything that seems to indicate either way.
I know that the way I am setting things up works correctly because if I use a valid token with a valid notification, those do get delivered.
In response to one of the comments, I am using the enhanced notification format so a response should arrive from APNS.
Here is the code I have for C#:
X509Certificate certificate =
    new X509Certificate(@"Foo.cer", "password");
X509CertificateCollection collection = new X509CertificateCollection();
collection.Add(certificate);

Socket socket =
    new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect("gateway.sandbox.push.apple.com", 2195);

NetworkStream stream =
    new NetworkStream(socket, System.IO.FileAccess.ReadWrite, false);
stream.ReadTimeout = 1000;
stream.WriteTimeout = 1000;

sslStream =
    new SslStream(stream, true,
        new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
sslStream.AuthenticateAsClient("gateway.sandbox.push.apple.com", collection,
    SslProtocols.Default, false);
sslStream.ReadTimeout = 10000;
sslStream.WriteTimeout = 1000;

// Task rdr = Task.Factory.StartNew(this.reader);
// rdr is used for a parallel read of the socket, sleeping 5 ms between reads.
// Not used now, but another alternative that was tried.

Random r = new Random(DateTime.Now.Second);
byte[] buffer = new byte[32];
r.NextBytes(buffer);
byte[] resp = new byte[6];
String erroneousToken = toHex(buffer);
TimeSpan t = (DateTime.UtcNow - new DateTime(1970, 1, 1));
int timestamp = (int) t.TotalSeconds;

try
{
    for (int i = 0; i < 1000; ++i)
    {
        // Build the notification; the format is published in the APNS docs.
        var not = new ApplicationNotificationBuilder().withToken(buffer).withPayload(
            @"{""aps"": {""alert"":""foo"",""sound"":""default"",""badge"":1}}").withExpiration(
            timestamp).withIdentifier(i + 1).build();
        sslStream.Write(not);
        sslStream.Flush();
        Console.Out.WriteLine("Sent message # " + i);

        int rd = sslStream.Read(resp, 0, 6);
        if (rd > 0)
        {
            Console.Out.WriteLine("Found response: " + rd);
            break;
        }
        // It doesn't really matter how fast or how slow we send.
        Thread.Sleep(500);
    }
}
catch (Exception ex)
{
    Console.Out.WriteLine("Failed to write ...");
    int rd = sslStream.Read(resp, 0, 6);
    if (rd > 0)
    {
        Console.Out.WriteLine("Found response: " + rd);
    }
}
// rdr.Wait(); change to non-infinite timeout to allow error reader to terminate
I implemented the server side for APNS in Java and have had problems reading the error responses reliably (meaning never missing any error response), but I do manage to get error responses.
You can see this related question, though it has no adequate answer.
If you never manage to read the error response, there must be something wrong with your code.
Using a separate thread for reading worked for me, though it was not 100% reliable.
Using a blocking read after a write failure is what Apple suggests, but it doesn't always work. It's possible that you send 100 messages, the first has an invalid token, and only after the 100th message do you get a write failure. At that point it is sometimes too late to read the error response from the socket.
I'm not sure what you mean there.
If you want to guarantee that reading the error responses works, you should try to read after each write, with a sufficient timeout. This, of course, is not practical in production (since it's incredibly slow), but you can use it to verify that your code for reading and parsing the error response is correct. You can also use it to iterate over all the device tokens you have and find all the invalid ones, in order to clean up your DB.
You didn't post any code, so I don't know what binary format you are using to send messages to APNS. If you are using the simple format (that starts with a 0 byte and has no message ID), you won't get any responses from Apple.
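To illustrate the read-after-each-write validation approach, here is a minimal sketch in Python (rather than the C#/Java used above): write one already-serialized notification, then block on a short read with a timeout. The 6-byte error frame layout (command, status, 4-byte identifier) is that of the enhanced binary interface; depending on the Python/ssl version, the timeout on a TLS socket may surface as ssl.SSLError rather than socket.timeout.

import socket
import struct

def send_and_check(tls_sock, notification, timeout=2.0):
    # Write one enhanced-format notification, then try to read the 6-byte
    # error response. Returning None means no error arrived within the
    # timeout, which is treated here as "accepted".
    tls_sock.sendall(notification)
    tls_sock.settimeout(timeout)
    try:
        frame = tls_sock.recv(6)
    except socket.timeout:
        return None                  # no error response within the timeout
    if len(frame) < 6:
        return None                  # peer closed without a full error frame
    command, status, identifier = struct.unpack("!BBI", frame)
    return status, identifier

As the answer above says, this is only useful for validating the reading/parsing code or for cleaning up a token database; it is far too slow for production sending.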
I need to get the whole message (response), but socket.ReceiveBytes() returns just part of the message. I tried to loop it, but it fails with a timeout when there are no more bytes to receive.
List<byte> lb = new List<byte>();
byte[] receivedMsg = socket.ReceiveBytes();
while (receivedMsg.Length > 0)
{
    lb.AddRange(receivedMsg);
    receivedMsg = socket.ReceiveBytes();
}
So, how can I check whether there are bytes to read? How can I read the whole message?
Since it's a Chilkat implementation, you should probably contact the developer. But I found this, which could help: http://www.cknotes.com/?p=302
Ultimately, you need to know how much to read from the socket to constitute a whole message. For example, if the overlying protocol is a portmapper, then you know that you are expecting messages in the format that the RFC specifies (http://tools.ietf.org/html/rfc1833).
If you are rolling your own protocol over a socket connection, then use the method from the Chilkat blog post: put the size of the total message in the first 4 bytes.
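To illustrate the 4-byte length prefix idea, here is a minimal sketch in Python using plain sockets (not the Chilkat API); the helper names are made up for the example.

import socket
import struct

def recv_exact(sock, n):
    # Read exactly n bytes, looping over short reads; fail if the peer
    # closes the connection before the full amount has arrived.
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise EOFError("peer closed before the full message arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def recv_message(sock):
    # One length-prefixed message: a 4-byte big-endian length, then the body.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

The same framing has to be applied on the sending side, of course: prepend struct.pack("!I", len(body)) to every message you write.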
I started learning about the TCP protocol from the internet and am running some experiments. I read this in an article from http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I did my experiment: I wrote a block of code using a TCP socket:
while( !EOF(file) )
{
    data = read_from(file, 5KB); // read 5KB from the file
    write(data, socket);         // write the data to the socket to send it
}
I thought this would be fine because "TCP is reliable" and it "retransmits lost parts"... but it is not good at all. A small file is OK, but when it comes to about 2 MB, sometimes it works and sometimes it doesn't...
Now, I try another version:
while( !EOF(file) )
{
    wait_for_ACK(); // or sleep 5 seconds
    data = read_from(file, 5KB); // read 5KB from the file
    write(data, socket);         // write the data to the socket to send it
}
This one works...
All I can think of is that the first one fails because of:
1. a buffer overflow on the sender, because the program writes faster than TCP can send (the sending rate is controlled by TCP)
2. maybe the sending rate is greater than the writing rate, but some packets are lost (and after some retransmissions TCP still fails and gives up...)
Any ideas?
Thanks.
TCP will ensure that you don't lose data, but you should check how many bytes actually got accepted for transmission... The typical loop is:
while (size > 0)
{
    int sz = send(socket, bufptr, size, 0);
    if (sz == -1) { /* whoops, error ... */ }
    size -= sz; bufptr += sz;
}
When the send call accepts some data from your program, it becomes the job of the OS to get it to the destination (including retransmissions), but the buffer for sending may be smaller than the size you need to send, and that's why the resulting sz (the number of bytes accepted for transmission) may be less than size.
It's also important to consider that sending is asynchronous: after the send function returns, the data is not already at the destination; it has only been handed to the TCP transport system for delivery. If you want to know when it has been received, you'll have to use another mechanism (e.g. a reply message from your counterpart).
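For comparison with the pseudocode in the question, a minimal Python sketch of the sender side (the file path and chunk size are placeholders): socket.sendall() loops internally until the kernel has accepted every byte, which is the same fix as checking the return value of send() in the C loop above.

import socket

def send_file(sock, path, chunk_size=5 * 1024):
    # Stream a file over an already-connected TCP socket. sendall() blocks
    # until the whole chunk has been handed to the kernel, so no manual
    # pacing (sleeps or application-level ACK waits) is needed for the data
    # to arrive intact; the receiver still needs a way to know where the
    # stream ends, e.g. the sender closing its side or a length prefix.
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            sock.sendall(data)
    sock.shutdown(socket.SHUT_WR)  # signal end-of-stream to the receiver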
You have to check the return value of write(socket) to make sure it actually wrote as much as you asked.
Loop until you've sent everything or you've hit a timeout you've calculated.
Do not use indefinite timeouts on socket reads/writes. You're asking for trouble if you do, especially on Windows.