Need some clarifications on GSEvent - iPhone

I am looking at the file attached here, GSEvent.h.
I am interested in reading the following parameters when the user presses the screen,
i.e. pathPressure, pathMajorRadius, pathProximity, etc. (I do not want to set these values myself, but to receive them when the user presses the screen.)
typedef struct GSPathInfo {
    unsigned char pathIndex;     // 0x0 = 0x5C
    unsigned char pathIdentity;  // 0x1 = 0x5D
    unsigned char pathProximity; // 0x2 = 0x5E
    CGFloat pathPressure;        // 0x4 = 0x60
    CGFloat pathMajorRadius;     // 0x8 = 0x64
    CGPoint pathLocation;        // 0xC = 0x68
    GSWindowRef pathWindow;      // 0x14 = 0x70
} GSPathInfo; // sizeof = 0x18
Further down in the same file (GSEvent.h) there is
GSPathInfo GSEventGetPathInfoAtIndex(GSEventRef event, CFIndex index);
and I was wondering what I need to pass for the GSEventRef event and CFIndex index parameters.
So I searched for GSEventRef; scrolling to the top of the file, I saw it is a pointer to __GSEvent:
typedef struct __GSEvent* GSEventRef;
I am stuck here: what event do I create, and how? Something like
GSEventRef* eventRef = malloc(sizeof(GSEventRef));
Or do I need to do something like
__GSEvent* GSEventRef = malloc(sizeof(__GSEvent));
After allocating the memory, how do I set it? I mean, what values should I set in it?
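In case it helps to see the call itself: a minimal sketch, assuming GSEvent.h is imported and that you already have a GSEventRef that the system delivered to your process (GraphicsServices creates the event for you; you never malloc() one yourself). The function name dumpFirstPath and the index 0 are just illustration:

#include <stdio.h>

// 'event' is assumed to be handed to you by the system, e.g. inside an
// event handler on a jailbroken device; it is not something you allocate.
void dumpFirstPath(GSEventRef event) {
    GSPathInfo info = GSEventGetPathInfoAtIndex(event, 0); // 0 = first finger
    printf("pressure=%f radius=%f proximity=%u\n",
           (double)info.pathPressure,
           (double)info.pathMajorRadius,
           (unsigned)info.pathProximity);
}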


How can I convert from a character string to a hexadecimal one?

If I have a character string, how can I convert the values to hexadecimal in Objective-C? Likewise, how can I convert from a hexadecimal string to a character string?
As an exercise and in case it helps, I wrote a program to demonstrate how I might do this in pure C, which is 100% legal in Objective-C. I used the string-formatting functions in stdio.h to do the actual conversions.
Note that this can (should?) be tweaked for your setting. It will create a string twice as long as the passed-in string when going char->hex (converting 'Z' to '5a' for instance), and a string half as long going the other way.
I wrote this code in such a way that you can simply copy/paste and then compile/run to play around with it.
My favorite way to include C in Xcode is to make a .h file with the function declarations, separate from the .c file with the implementation. See the comments:
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

// Place these prototypes in a .h to #import from wherever you need 'em
// Do not import the .c file anywhere.
// Note: You must free() these char *s
//
// allocates space for strlen(arg) * 2 + 1 and fills
// that space with chars corresponding to the hex
// representations of the arg string
char *makeHexStringFromCharString(const char*);
//
// allocates space for about 1/2 strlen(arg)
// and fills it with the char representation
char *makeCharStringFromHexString(const char*);

// this is just sample code
int main() {
    char source[256];

    printf("Enter a Char string to convert to Hex:");
    scanf("%s", source);
    char *output = makeHexStringFromCharString(source);
    printf("converted '%s' TO: %s\n\n", source, output);
    free(output);

    printf("Enter a Hex string to convert to Char:");
    scanf("%s", source);
    output = makeCharStringFromHexString(source);
    printf("converted '%s' TO: %s\n\n", source, output);
    free(output);
}

// Place these in a .c file (named same as .h above)
// and include it in your target's build settings
// (should happen by default if you create the file in Xcode)
char *makeHexStringFromCharString(const char *input) {
    char *output = malloc(strlen(input) * 2 + 1);
    int i, limit;
    for (i = 0, limit = strlen(input); i < limit; i++) {
        // "%02x" keeps every byte two digits wide; the unsigned char cast
        // avoids sign extension for characters above 0x7F
        sprintf(output + (i * 2), "%02x", (unsigned char)input[i]);
    }
    output[strlen(input) * 2] = '\0';
    return output;
}

char *makeCharStringFromHexString(const char *input) {
    char *output = malloc(strlen(input) / 2 + 1);
    char sourceSnippet[3] = {[2] = '\0'};
    unsigned int byte;
    int i, limit;
    for (i = 0, limit = strlen(input); i < limit; i += 2) {
        sourceSnippet[0] = input[i];
        sourceSnippet[1] = input[i + 1];
        // read into a temporary int, then store a single byte, so we
        // never write past the end of the output buffer
        sscanf(sourceSnippet, "%x", &byte);
        output[i / 2] = (char)byte;
    }
    output[strlen(input) / 2] = '\0';
    return output;
}

Struct and Thread DWORD WINAPI

What's up guys, hope you are OK!
Well, the problem is that I'm writing a chat client/server application, but while doing some tests with the server I found out that I have a problem sending messages. I'm using a struct, sockets, and DWORD WINAPI threads.
So the code for the struct is:
DWORD WINAPI threadSendMessages(LPVOID vpParam); // THREAD

typedef struct messagesServerChat { // STRUCT
    const char *messageServEnv;
} MESSAGE, *SMESSAGES;
Then in the main method I declare a pointer to the struct so I can use the const char *messageServEnv, use HeapAlloc to allocate some memory for the message that is going to be sent to the thread, and declare a char array that I use to pre-store the message:
char mServer[1024] = ""; // variable to pre-store the message
SMESSAGES messages;      // pointer to the struct
messages = (SMESSAGES) HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(MESSAGE));
In the main method I ask the user to enter the message they want to send, store it in the struct, and pass it to the thread as a parameter:
cout<<"Dear user, please insert your message: ";
setbuf(stdin, NULL);
fgets(mServer, 1024, stdin);
messages->messageServEnv = mServer;
DWORD hSend; //send the parameters to the thread function
HANDLE sendThread = CreateThread(0, 0, threadSendMessages, mServer, 0, &hSend);
And finally the thread function:
DWORD WINAPI threadSendMessages(LPVOID lpParam) {
    SMESSAGES messages;
    messages = (SMESSAGES)lpParam;
    int mesa;
    mesa = send(sConnect, (char *)messages->messageServEnv, sizeof messages->messageServEnv, 0);
    // sConnect is the socket
    // messages = to use the struct, and messageServEnv is the struct data that should contain the message
    return 0;
}
--Edit-- I fixed a lot of problems using Remy's solution, but maybe I'm missing something... In the thread threadSendMessages(SMESSAGES lpMessage), with
char *ptr = messages->messageServEnv;
int len = strlen(messages->messageServEnv);
I get an error that says messages is undefined, so I changed it to:
SMESSAGES messages;
char *ptr = messages->messageServEnv;
int len = strlen(messages->messageServEnv);
Now I can use messages and the struct member messageServEnv, but if I start debugging in Visual Studio and try to send a message, I get an error that says messages is used without being initialized, so I changed that part to
SMESSAGES messages = new MESSAGE;
and now I can send messages to the client, but they arrive as just a few characters and garbage data.
You need to dynamically allocate the memory for each message's string data and then have the thread free the memory when finished sending it.
You are also passing the wrong pointer to the lpParameter parameter of CreateThread(); you are passing your char[] buffer instead of your allocated MESSAGE struct.
You are also using sizeof() when calling send(). Since your messageServEnv is a char* pointer, sizeof() will return 4 (32-bit) or 8 (64-bit) instead of the actual size of the string that is being pointed at.
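To see the difference concretely, here is a tiny standalone C program (illustrative only; the variable name simply mirrors your struct member):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *messageServEnv = "hello, world";
    printf("sizeof(messageServEnv) = %zu\n", sizeof(messageServEnv)); /* 4 or 8: size of the pointer itself */
    printf("strlen(messageServEnv) = %zu\n", strlen(messageServEnv)); /* 12: length of the string it points to */
    return 0;
}

That is why send() was only transmitting the first few bytes of your message.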
I would suggest moving the char[] buffer directly into the struct instead of using a pointer to an external buffer, eg:
typedef struct messagesServerChat
{
    char messageServEnv[1024];
}
MESSAGE, *SMESSAGES;

DWORD WINAPI threadSendMessages(SMESSAGES lpMessage);
.
cout << "Dear user, please insert your message: ";
setbuf(stdin, NULL);
SMESSAGES message = new MESSAGE;
fgets(message->messageServEnv, sizeof(message->messageServEnv), stdin);
DWORD hSend;
HANDLE sendThread = CreateThread(0, 0, (LPTHREAD_START_ROUTINE)&threadSendMessages, message, 0, &hSend);
if (!sendThread)
delete message;
.
DWORD WINAPI threadSendMessages(SMESSAGES lpMessage)
{
    // send() is not guaranteed to send the entire message
    // in one go, so call it in a loop...
    char *ptr = lpMessage->messageServEnv;
    int len = strlen(lpMessage->messageServEnv); // or sizeof() if you really want to send all 1024 bytes instead

    while (len > 0)
    {
        int mesa = send(sConnect, ptr, len, 0);
        if (mesa > 0)
        {
            ptr += mesa;
            len -= mesa;
            continue;
        }

        // this is only needed if you are using a non-blocking socket...
        if ((mesa == SOCKET_ERROR) && (WSAGetLastError() == WSAEWOULDBLOCK))
        {
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(sConnect, &fd);

            timeval tv;
            tv.tv_sec = 5;
            tv.tv_usec = 0;

            if (select(0, NULL, &fd, NULL, &tv) > 0)
                continue;
        }

        ... error handling ...
        break;
    }

    delete lpMessage; // free the struct that the caller allocated
    return 0;
}
If you want to pass a dynamically sized string instead, you are better off using a std::string instead of a char[]:
typedef struct messagesServerChat
{
    std::string messageServEnv;
}
MESSAGE, *SMESSAGES;

DWORD WINAPI threadSendMessages(SMESSAGES lpMessage);
.
cout << "Dear user, please insert your message: ";
setbuf(stdin, NULL);
SMESSAGES message = new MESSAGE;
getline(stdin, message->messageServEnv);
DWORD hSend;
HANDLE sendThread = CreateThread(0, 0, (LPTHREAD_START_ROUTINE)&threadSendMessages, message, 0, &hSend);
if (!sendThread)
delete message;
.
DWORD WINAPI threadSendMessages(SMESSAGES lpMessage)
{
    // send() is not guaranteed to send the entire message
    // in one go, so call it in a loop...
    const char *ptr = lpMessage->messageServEnv.c_str(); // c_str() returns a const char*
    int len = lpMessage->messageServEnv.length();

    while (len > 0)
    {
        int mesa = send(sConnect, ptr, len, 0);
        if (mesa > 0)
        {
            ptr += mesa;
            len -= mesa;
            continue;
        }

        // this is only needed if you are using a non-blocking socket...
        if ((mesa == SOCKET_ERROR) && (WSAGetLastError() == WSAEWOULDBLOCK))
        {
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(sConnect, &fd);

            timeval tv;
            tv.tv_sec = 5;
            tv.tv_usec = 0;

            if (select(0, NULL, &fd, NULL, &tv) > 0)
                continue;
        }

        ... error handling ...
        break;
    }

    delete lpMessage; // free the struct that the caller allocated
    return 0;
}

IPv6 raw socket programming with native C

I am working on IPv6 and need to craft an IPv6 packet from scratch and put it into a buffer. Unfortunately I do not have much experience with C. From a tutorial I have successfully done the same thing with IPv4 by defining
struct ipheader {
    unsigned char iph_ihl:5,   /* Little-endian */
                  iph_ver:4;
    unsigned char iph_tos;
    unsigned short int iph_len;
    unsigned short int iph_ident;
    unsigned char iph_flags;
    unsigned short int iph_offset;
    unsigned char iph_ttl;
    unsigned char iph_protocol;
    unsigned short int iph_chksum;
    unsigned int iph_sourceip;
    unsigned int iph_destip;
};
/* Structure of a TCP header */
struct tcpheader {
    unsigned short int tcph_srcport;
    unsigned short int tcph_destport;
    unsigned int tcph_seqnum;
    unsigned int tcph_acknum;
    unsigned char tcph_reserved:4, tcph_offset:4;
    // unsigned char tcph_flags;
    unsigned int
        tcp_res1:4,   /* little-endian */
        tcph_hlen:4,  /* length of tcp header in 32-bit words */
        tcph_fin:1,   /* Finish flag "fin" */
        tcph_syn:1,   /* Synchronize sequence numbers to start a connection */
        tcph_rst:1,   /* Reset flag */
        tcph_psh:1,   /* Push, sends data to the application */
        tcph_ack:1,   /* acknowledge */
        tcph_urg:1,   /* urgent pointer */
        tcph_res2:2;
    unsigned short int tcph_win;
    unsigned short int tcph_chksum;
    unsigned short int tcph_urgptr;
};
and fill the packet content in like this:
// IP structure
ip->iph_ihl = 5;
ip->iph_ver = 6;
ip->iph_tos = 16;
ip->iph_len = sizeof (struct ipheader) + sizeof (struct tcpheader);
ip->iph_ident = htons(54321);
ip->iph_offset = 0;
ip->iph_ttl = 64;
ip->iph_protocol = 6; // TCP
ip->iph_chksum = 0;   // Done by kernel

// Source IP, modify as needed, spoofed, we accept through command line argument
ip->iph_sourceip = inet_addr("192.168.1.128");
// Destination IP, modify as needed, but here we accept through command line argument
ip->iph_destip = inet_addr("192.168.1.1");

// The TCP structure. The source port, spoofed, we accept through the command line
tcp->tcph_srcport = htons(atoi("1024"));
// The destination port, we accept through command line
tcp->tcph_destport = htons(atoi("4201"));
tcp->tcph_seqnum = htons(1);
tcp->tcph_acknum = 0;
tcp->tcph_offset = 5;
tcp->tcph_syn = 1;
tcp->tcph_ack = 0;
tcp->tcph_win = htons(32767);
tcp->tcph_chksum = 0; // Done by kernel
tcp->tcph_urgptr = 0;

// IP checksum calculation
ip->iph_chksum = csum((unsigned short *) buffer, (sizeof (struct ipheader) + sizeof (struct tcpheader)));
However, for IPv6 I have not found a similar way. What I have found so far is this struct from the IETF:
struct ip6_hdr {
    union {
        struct ip6_hdrctl {
            uint32_t ip6_un1_flow; /* 4 bits version, 8 bits TC, 20 bits flow-ID */
            uint16_t ip6_un1_plen; /* payload length */
            uint8_t  ip6_un1_nxt;  /* next header */
            uint8_t  ip6_un1_hlim; /* hop limit */
        } ip6_un1;
        uint8_t ip6_un2_vfc;       /* 4 bits version, top 4 bits tclass */
    } ip6_ctlun;
    struct in6_addr ip6_src;       /* source address */
    struct in6_addr ip6_dst;       /* destination address */
};
But I do not know how to fill in the information; for example, how do I send a TCP SYN from 2001:220:806:22:aacc:ff:fe00:1 port 1024 to 2001:220:806:21::4 port 1025?
Could anybody help me, or point me to some references?
Thank you very much.
This is what I have done so far; however, there are mismatches between the code and the real packet captured by Wireshark (as discussed in the comments below). I'm not sure it is possible to post long code in the comment section, so I have just edited my question instead.
Can anyone help?
#define PCKT_LEN 2000

int main(void) {
    unsigned char buffer[PCKT_LEN];
    int s;
    struct sockaddr_in6 din;
    struct ipv6_header *ip = (struct ipv6_header *) buffer;
    struct tcpheader *tcp = (struct tcpheader *) (buffer + sizeof (struct ipv6_header));

    memset(buffer, 0, PCKT_LEN);

    din.sin6_family = AF_INET6;
    din.sin6_port = htons(0);
    inet_pton(AF_INET6, "::1", &(din.sin6_addr)); // For routing

    ip->version = 6;
    ip->traffic_class = 0;
    ip->flow_label = 0;
    ip->length = 40;
    ip->next_header = 6;
    ip->hop_limit = 64;
    inet_pton(AF_INET6, "::1", &(ip->dst)); // IPv6
    inet_pton(AF_INET6, "::1", &(ip->src)); // IPv6

    tcp->tcph_srcport = htons(atoi("11111"));
    tcp->tcph_destport = htons(atoi("13"));
    tcp->tcph_seqnum = htons(0);
    tcp->tcph_acknum = 0;
    tcp->tcph_offset = 5;
    tcp->tcph_syn = 1;
    tcp->tcph_ack = 0;
    tcp->tcph_win = htons(32752);
    tcp->tcph_chksum = 0; // Done by kernel
    tcp->tcph_urgptr = 0;

    s = socket(PF_INET6, SOCK_RAW, IPPROTO_RAW);
    if (s < 0) {
        perror("socket()");
        return 1;
    }

    unsigned short int packet_len = sizeof (struct ipv6_header) + sizeof (struct tcpheader);
    if (sendto(s, buffer, packet_len, 0, (struct sockaddr*) &din, sizeof (din)) == -1) {
        perror("sendto()");
        close(s);
        return 1;
    }

    close(s);
    return 0;
}
Maybe this article can help you get started?
Edit:
Using the Wikipedia article linked above, I made this structure (without knowing what some of the fields mean):
struct ipv6_header
{
    unsigned int
        version : 4,
        traffic_class : 8,
        flow_label : 20;
    uint16_t length;
    uint8_t next_header;
    uint8_t hop_limit;
    struct in6_addr src;
    struct in6_addr dst;
};
It's no different from how the header struct was made for IPv4 in your example: just create a struct containing the fields, in the right order and with the right sizes, and fill it with the right values.
Just do the same for the TCP headers.
Unfortunately the IPv6 RFCs don't provide the same raw socket interface that you get with IPv4. From what I've seen, to create IPv6 packets you have to go a level deeper and use an AF_PACKET socket to send an Ethernet frame that includes your IPv6 packet.
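If it helps, here is a minimal, Linux-only sketch of that approach. The helper name, the interface name, and the destination MAC are assumptions for illustration; it sends a caller-built IPv6 packet inside an Ethernet frame and needs root (or CAP_NET_RAW):

#include <arpa/inet.h>        /* htons */
#include <linux/if_ether.h>   /* ETH_P_IPV6, ETH_ALEN, ETH_FRAME_LEN, struct ethhdr */
#include <linux/if_packet.h>  /* struct sockaddr_ll */
#include <net/if.h>           /* if_nametoindex */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: wraps an already-built IPv6 packet (IPv6 header +
   TCP segment) in an Ethernet frame and sends it out the given interface. */
int send_ipv6_frame(const unsigned char *ipv6_packet, size_t len,
                    const unsigned char dst_mac[ETH_ALEN], const char *ifname)
{
    if (len > ETH_FRAME_LEN - sizeof(struct ethhdr))
        return -1;

    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IPV6));
    if (s < 0) { perror("socket(AF_PACKET)"); return -1; }

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ETH_P_IPV6);
    addr.sll_ifindex  = if_nametoindex(ifname);  /* e.g. "eth0" */
    addr.sll_halen    = ETH_ALEN;
    memcpy(addr.sll_addr, dst_mac, ETH_ALEN);

    /* Frame = Ethernet header + your hand-crafted IPv6 packet. */
    unsigned char frame[ETH_FRAME_LEN];
    struct ethhdr *eth = (struct ethhdr *)frame;
    memcpy(eth->h_dest, dst_mac, ETH_ALEN);
    memset(eth->h_source, 0, ETH_ALEN);  /* fill in the real source MAC if needed */
    eth->h_proto = htons(ETH_P_IPV6);
    memcpy(frame + sizeof(*eth), ipv6_packet, len);

    ssize_t n = sendto(s, frame, sizeof(*eth) + len, 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    if (n < 0) perror("sendto");
    close(s);
    return n < 0 ? -1 : 0;
}

You would build the IPv6 header and TCP segment into a buffer exactly as in your code above, and then hand that buffer to a function like this instead of calling sendto() on the AF_INET6 socket.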

How to resolve File not available

This is the code I am using for encryption, but it generates the error
"CCKeyDerivationPBKDF is unavailable" in the AESKeyForPassword method, even though it is declared before the implementation. How do I resolve it?
#ifndef _CC_PBKDF_H_
#define _CC_PBKDF_H_

#include <sys/types.h>
#include <sys/param.h>
#include <string.h>
#include <limits.h>
#include <stdlib.h>
#include <Availability.h>
#include <CommonCrypto/CommonDigest.h>
#include <CommonCrypto/CommonHMAC.h>

#ifdef __cplusplus
extern "C" {
#endif

enum {
    kCCPBKDF2 = 2,
};
typedef uint32_t CCPBKDFAlgorithm;

enum {
    kCCPRFHmacAlgSHA1 = 1,
    kCCPRFHmacAlgSHA224 = 2,
    kCCPRFHmacAlgSHA256 = 3,
    kCCPRFHmacAlgSHA384 = 4,
    kCCPRFHmacAlgSHA512 = 5,
};
typedef uint32_t CCPseudoRandomAlgorithm;

/*
 @function   CCKeyDerivationPBKDF
 @abstract   Derive a key from a text password/passphrase

 @param algorithm     Currently only PBKDF2 is available via kCCPBKDF2
 @param password      The text password used as input to the derivation
                      function. The actual octets present in this string
                      will be used with no additional processing. It's
                      extremely important that the same encoding and
                      normalization be used each time this routine is
                      called if the same key is expected to be derived.
 @param passwordLen   The length of the text password in bytes.
 @param salt          The salt byte values used as input to the derivation
                      function.
 @param saltLen       The length of the salt in bytes.
 @param prf           The Pseudo Random Algorithm to use for the derivation
                      iterations.
 @param rounds        The number of rounds of the Pseudo Random Algorithm
                      to use.
 @param derivedKey    The resulting derived key produced by the function.
                      The space for this must be provided by the caller.
 @param derivedKeyLen The expected length of the derived key in bytes.

 @discussion The following values are used to designate the PRF:
 * kCCPRFHmacAlgSHA1
 * kCCPRFHmacAlgSHA224
 * kCCPRFHmacAlgSHA256
 * kCCPRFHmacAlgSHA384
 * kCCPRFHmacAlgSHA512

 @result kCCParamError can result from bad values for the password, salt,
 and unwrapped key pointers as well as a bad value for the prf function.
 */
int CCKeyDerivationPBKDF( CCPBKDFAlgorithm algorithm, const char *password, size_t passwordLen,
                          const uint8_t *salt, size_t saltLen,
                          CCPseudoRandomAlgorithm prf, uint rounds,
                          uint8_t *derivedKey, size_t derivedKeyLen)
    __OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);

/*
 * All lengths are in bytes - not bits.
 */

/*
 @function   CCCalibratePBKDF
 @abstract   Determine the number of PRF rounds to use for a specific delay on
             the current platform.
 @param algorithm     Currently only PBKDF2 is available via kCCPBKDF2
 @param passwordLen   The length of the text password in bytes.
 @param saltLen       The length of the salt in bytes.
 @param prf           The Pseudo Random Algorithm to use for the derivation
                      iterations.
 @param derivedKeyLen The expected length of the derived key in bytes.
 @param msec          The targetted duration we want to achieve for a key
                      derivation with these parameters.

 @result the number of iterations to use for the desired processing time.
 */
uint CCCalibratePBKDF(CCPBKDFAlgorithm algorithm, size_t passwordLen, size_t saltLen,
                      CCPseudoRandomAlgorithm prf, size_t derivedKeyLen, uint32_t msec)
    __OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);

#ifdef __cplusplus
}
#endif

#endif /* _CC_PBKDF_H_ */
#import "AESEncryption.h"
#import <CommonCrypto/CommonCryptor.h>
//#import <CommonCrypto/CommonKeyDerivation.h>
//#import <CommonKeyDerivation.h>
#implementation AESEncryption
NSString * const
kRNCryptManagerErrorDomain = #"net.robnapier.RNCryptManager";
const CCAlgorithm kAlgorithm = kCCAlgorithmAES128;
const NSUInteger kAlgorithmKeySize = kCCKeySizeAES128;
const NSUInteger kAlgorithmBlockSize = kCCBlockSizeAES128;
const NSUInteger kAlgorithmIVSize = kCCBlockSizeAES128;
const NSUInteger kPBKDFSaltSize = 8;
const NSUInteger kPBKDFRounds = 1000;//0; // ~80ms on an iPhone 4
// ===================
+ (NSData *)encryptedDataForData:(NSData *)data
password:(NSString *)password
iv:(NSData **)iv
salt:(NSData **)salt
error:(NSError **)error {
NSAssert(iv, #"IV must not be NULL");
NSAssert(salt, #"salt must not be NULL");
*iv = [self randomDataOfLength:kAlgorithmIVSize];
*salt = [self randomDataOfLength:kPBKDFSaltSize];
NSData *key = [self AESKeyForPassword:password salt:*salt];
size_t outLength;
NSMutableData *
cipherData = [NSMutableData dataWithLength:data.length +
kAlgorithmBlockSize];
CCCryptorStatus
result = CCCrypt(kCCEncrypt, // operation
kAlgorithm, // Algorithm
kCCOptionPKCS7Padding, // options
key.bytes, // key
key.length, // keylength
(*iv).bytes,// iv
data.bytes, // dataIn
data.length, // dataInLength,
cipherData.mutableBytes, // dataOut
cipherData.length, // dataOutAvailable
&outLength); // dataOutMoved
if (result == kCCSuccess) {
cipherData.length = outLength;
}
else {
if (error) {
*error = [NSError errorWithDomain:kRNCryptManagerErrorDomain
code:result
userInfo:nil];
}
return nil;
}
return cipherData;
}
// ===================
+ (NSData *)randomDataOfLength:(size_t)length {
NSMutableData *data = [NSMutableData dataWithLength:length];
int result = SecRandomCopyBytes(kSecRandomDefault, length,data.mutableBytes);
NSLog(#"%d",result);
NSAssert1(result == 0, #"Unable to generate random bytes: %d", errno);
//NSAssert( #"Unable to generate random bytes: %d", errno);
return data;
}
// ===================
// Replace this with a 10,000 hash calls if you don't have CCKeyDerivationPBKDF
+ (NSData *)AESKeyForPassword:(NSString *)password
salt:(NSData *)salt {
NSMutableData *
derivedKey = [NSMutableData dataWithLength:kAlgorithmKeySize];
int result = CCKeyDerivationPBKDF(kCCPBKDF2, // algorithm
password.UTF8String, // password
password.length, // passwordLength
salt.bytes, // salt
salt.length, // saltLen
kCCPRFHmacAlgSHA1, // PRF
kPBKDFRounds, // rounds
derivedKey.mutableBytes, // derivedKey
derivedKey.length); // derivedKeyLen
NSLog(#"%d",result);
// Do not log password here
NSAssert1(result == kCCSuccess,#"Unable to create AES key for password: %d", result);
//NSAssert(#"Unable to create AES key for password: %d", result);
return derivedKey;
}
#end
The code placed above the implementation is from CommonCrypto/CommonKeyDerivation.h, which could not be found by my Xcode, so I put the code directly at the top.
Try to comment out these lines:
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);
I think they limit the function to a particular operating system, and that is exactly what you don't need.
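That would leave just the bare prototype, something like this (a sketch of what the suggestion above amounts to):

int CCKeyDerivationPBKDF( CCPBKDFAlgorithm algorithm, const char *password, size_t passwordLen,
                          const uint8_t *salt, size_t saltLen,
                          CCPseudoRandomAlgorithm prf, uint rounds,
                          uint8_t *derivedKey, size_t derivedKeyLen);
/* availability annotation removed, so the compiler no longer marks it unavailable */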
But I cannot guarantee that further issues won't appear. I'm trying to achieve the same thing.
You have merely declared two prototypes for CCKeyDerivationPBKDF and CCCalibratePBKDF. Either put the full code for the functions at this place, or declare them as extern and have them in a separate module or library.

Transmission of float values over TCP/IP and data corruption

I have an extremely strange bug.
I have two applications that communicate over TCP/IP.
Application A is the server, and application B is the client.
Application A sends a bunch of float values to application B every 100 milliseconds.
The bug is the following: sometimes some of the float values received by application B are not the same as the values transmitted by application A.
Initially, I thought there was a problem with the Ethernet or TCP/IP drivers (some sort of data corruption). I then tested the code in other Windows machines, but the problem persisted.
I then tested the code on Linux (Ubuntu 10.04.1 LTS) and the problem is still there!!!
The values are logged just before they are sent and just after they are received.
The code is pretty straightforward: the message protocol has a 4-byte header like this:
// message header
struct MESSAGE_HEADER {
    unsigned short type;
    unsigned short length;
};

// orientation message
struct ORIENTATION_MESSAGE : MESSAGE_HEADER
{
    float azimuth;
    float elevation;
    float speed_az;
    float speed_elev;
};

// any message
struct MESSAGE : MESSAGE_HEADER {
    char buffer[512];
};
// receive specific number of bytes from the socket
static int receive(SOCKET socket, void *buffer, size_t size) {
    int r;
    do {
        r = recv(socket, (char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

// send specific number of bytes to a socket
static int send(SOCKET socket, const void *buffer, size_t size) {
    int r;
    do {
        r = send(socket, (const char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

// get message from socket
static bool receive(SOCKET socket, MESSAGE &msg) {
    int r = receive(socket, &msg, sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    if (ntohs(msg.length) == 0) return true;
    r = receive(socket, msg.buffer, ntohs(msg.length));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}

// send message
static bool send(SOCKET socket, const MESSAGE &msg) {
    int r = send(socket, &msg, ntohs(msg.length) + sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}
When I receive the message 'orientation', sometimes the 'azimuth' value is different from the one sent by the server!
Shouldn't the data be the same all the time? Doesn't TCP/IP guarantee delivery of the data uncorrupted? Could it be that an exception in the math co-processor affects the TCP/IP stack? Is it a problem that I receive a small number of bytes first (4 bytes) and then the message body?
EDIT:
The problem is in the endianness swapping routine. The following code swaps the endianness of a specific float around, then swaps it again and prints the bytes:
#include <cstdio>
#include <iostream>
using namespace std;

float ntohf(float f)
{
    float r;
    unsigned char *s = (unsigned char *)&f;
    unsigned char *d = (unsigned char *)&r;
    d[0] = s[3];
    d[1] = s[2];
    d[2] = s[1];
    d[3] = s[0];
    return r;
}

int main() {
    unsigned long l = 3206974079;
    float f1 = (float &)l;
    float f2 = ntohf(ntohf(f1));

    unsigned char *c1 = (unsigned char *)&f1;
    unsigned char *c2 = (unsigned char *)&f2;

    printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
    printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);

    getchar();
    return 0;
}
The output is:
7F 8A 26 BF
7F CA 26 BF
I.e. the float assignment probably normalizes the value, producing a different value from the original.
Any input on this is welcome.
EDIT2:
Thank you all for your replies. It seems the problem is that the swapped float, when returned via the 'return' statement, is pushed onto the CPU's floating-point stack. The caller then pops the value from the stack; the value is rounded, but it is the swapped float, and therefore the rounding messes up the value.
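A minimal sketch of one way around this, assuming IEEE-754 single-precision floats and a 32-bit uint32_t on both ends: keep the byte-swapped pattern in an integer the whole time, so it is never handled as a float:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl/ntohl; use winsock2.h on Windows */

/* Serialize a float into network byte order without ever treating the
   swapped bit pattern as a float value. */
uint32_t float_to_net(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the bits, no conversion */
    return htonl(bits);
}

float net_to_float(uint32_t net) {
    uint32_t bits = ntohl(net);
    float f;
    memcpy(&f, &bits, sizeof f);     /* only now is it a float again */
    return f;
}

The swapped bytes only ever live in a uint32_t, so the floating-point unit never gets a chance to normalize or round them.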
TCP tries to deliver unaltered bytes, but unless the machines have similar CPUs and operating systems, there's no guarantee that the floating-point representation on one system is identical to that on the other. You need a mechanism for ensuring this, such as XDR or Google's protobuf.
You're sending binary data over the network, using implementation-defined padding for the struct layout, so this will only work if you're using the same hardware, OS and compiler for both application A and application B.
If that's OK, though, I can't see anything wrong with your code. One potential issue is that you're using ntohs to extract the length of the message, and that length is the total length minus the header length, so you need to make sure you're setting it properly. It needs to be done as
msg.length = htons(sizeof(ORIENTATION_MESSAGE) - sizeof(MESSAGE_HEADER));
but you don't show the code that sets up the message...