Swift: signed integer 2's complement

I have to port the following Java code to Swift.
// Java code
byte[] nodeId = {47, -29, 4, -121};
int numNodeID = 0;
for(int i=0; i < 4; i++){
numNodeID |= ((nodeId[i]&0x000000FFL) << (i*8));
}
I wrote
// Swift code
var nodeId: [Int8] = [47, -29, 4, -121]
var numNodeId: Int = 0
for index in 0..<4 {
numNodeId |= ((Int(nodeId[index]) & 255) << (index*8))
}
The results are different. In Java I obtain the int -2029722833; in Swift, the int 2265244463. The Java result is the signed two's-complement interpretation of the Swift one.
How can I obtain the signed integer (-2029722833) in Swift as well?

You're currently thinking that int translates to Int. But no!
On 32-bit platforms, Int is the same size as Int32, and on 64-bit
platforms, Int is the same size as Int64.
I can't tell what your code is actually doing, but a direct translation requires Int32.
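For instance, here is a minimal sketch of that direct translation (assuming the same input array as the question; only the accumulator type changes, so the fourth byte lands in the sign bit exactly as it does for Java's 32-bit int):
let nodeId: [Int8] = [47, -29, 4, -121]
var numNodeId: Int32 = 0
for index in 0..<4 {
    // -121 masks to 0x87; shifting it into bits 24...31 sets the sign bit
    numNodeId |= (Int32(nodeId[index]) & 255) << (index * 8)
}
print(numNodeId) // -2029722833
If an Int is needed afterwards, Int(numNodeId) preserves the sign and yields -2029722833 on 64-bit platforms too.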

Fatal error: Not enough bits to represent the passed value (Int16) in Swift

I am translating a library from Java (Android) to Swift (iPhone).
Java code, works OK:
long a = 48590108397870L;
short b = ((short)(a & 65535)); // b == -28370
Swift code:
let a : Int64 = 48590108397870
let b: Int16 = Int16(a & 65535)//Fatal error: Not enough bits to represent the passed value
a & 65535 is a value between 0 and 2^16 - 1, which fits in a UInt16, but not in an Int16. Contrary to many other languages, Swift does not truncate values implicitly.
Integers have an init(truncatingIfNeeded:) initializer which does what you want:
When the bit width of T (the type of source) is equal to or greater than this type’s bit width, the result is the truncated least-significant bits of source.
Example:
let a : Int64 = 48590108397870
let b = Int16(truncatingIfNeeded: a)
print(b) // -28370
Another option is to create an unsigned integer first, which is then converted to a signed integer with the same bit pattern:
let a : Int64 = 48590108397870
let b = Int16(bitPattern: UInt16(a & 0xFFFF))
print(b) // -28370
You can use withUnsafeBytes(of:_:) to convert the types.
Code:
let a: Int64 = 48590108397870
let b = a & 65535
let c: Int16? = withUnsafeBytes(of: b) { ptr -> Int16? in
let binded = ptr.bindMemory(to: Int16.self)
return binded.first
}
print("c: \(c)")
// Prints: c: Optional(-28370)
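A slightly shorter variant of the same idea, if you can assume Swift's raw-buffer load(as:) API, reads the value directly instead of binding memory (like the other answers here, this sketch relies on a little-endian host):
let a: Int64 = 48590108397870
let b = withUnsafeBytes(of: a & 65535) { $0.load(as: Int16.self) } // low 16 bits
print(b) // -28370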

Access element of fixed-length C array in Swift

I'm trying to convert some C code to Swift.
(Why? To use CoreMIDI on OS X, in case you asked.)
The C code is like this
void printPacketInfo(const MIDIPacket* packet) {
int i;
for (i=0; i<packet->length; i++) {
printf("%d ", packet->data[i]);
}
}
And MIDIPacket is defined like this
struct MIDIPacket
{
MIDITimeStamp timeStamp;
UInt16 length;
Byte data[256];
};
My Swift is like this
func printPacketInfo(packet: UnsafeMutablePointer<MIDIPacket>){
// print some things
print("length", packet.memory.length)
print("time", packet.memory.timeStamp)
print("data[0]", packet.memory.data.1)
for i in 0 ..< packet.memory.length {
print("data", i, packet.memory.data[i])
}
}
But this gives a compiler error
error: type '(UInt8, UInt8, .. cut .. UInt8, UInt8, UInt8)'
has no subscript members
So how can I dereference the i-th element of a fixed-size array?
In your case you could try to use something like this...
// this is a tuple with 8 Int values; in your case it would be a tuple of 256 Byte (UInt8?) values
var t = (1,2,3,4,5,6,7,8)
t.0
t.1
// ....
t.7
func arrayFromTuple<T,R>(tuple:T) -> [R] {
let reflection = Mirror(reflecting: tuple)
var arr : [R] = []
for i in reflection.children {
// better would be to throw an error if i.value is not R
arr.append(i.value as! R)
}
return arr
}
let arr:[Int] = arrayFromTuple(t)
print(arr) // [1, 2, 3, 4, 5, 6, 7, 8]
...
let t2 = ("alfa","beta","gama")
let arr2:[String] = arrayFromTuple(t2)
arr2[1] // "beta"
This was suggested by https://gist.github.com/jckarter/ec630221890c39e3f8b9
func printPacketInfo(packet: UnsafeMutablePointer<MIDIPacket>){
// print some things
print("length", packet.memory.length)
print("time", packet.memory.timeStamp)
let len = Int(packet.memory.length)
withUnsafePointer(&packet.memory.data) { p in
let p = UnsafeMutablePointer<UInt8>(p)
for i:Int in 0 ..< len {
print(i, p[i])
}
}
}
This is horrible - I hope the compiler turns this nonsense into some good code
The error message is a hint: it shows that MIDIPacket.data is imported not as an array, but as a tuple. (Yes, that's how all fixed-length arrays import in Swift.) You seem to have noticed this in the preceding line:
print("data[0]", packet.memory.data.0)
Tuples in Swift are very static, so there isn't a way to dynamically access a tuple element. Thus, in some sense the only "safe" or idiomatic way to print your packet (in the way that you're hinting at) would be 256 lines of code (or up to 256, since the packet's length field tells you when it's safe to stop):
print("data[1]", packet.memory.data.2)
print("data[2]", packet.memory.data.3)
print("data[3]", packet.memory.data.4)
/// ...
print("data[254]", packet.memory.data.255)
print("data[255]", packet.memory.data.256)
Clearly that's not a great solution. Using reflection, per user3441734's answer, is one (cumbersome) alternative. Unsafe memory access, per your own answer (via jckarter), is another (but as the name of the API says, it's "unsafe"). And, of course, you can always work with the packet through (Obj)C.
If you need to do something beyond printing the packet, you can extend the UnsafePointer-based solution to convert it to an array like so:
extension MIDIPacket {
var dataBytes: [UInt8] {
mutating get {
return withUnsafePointer(&data) { tuplePointer in
let elementPointer = UnsafePointer<UInt8>(tuplePointer)
return (0..<Int(length)).map { elementPointer[$0] }
}
}
}
}
Notice that this uses the packet's existing length property to expose an array that has only as many valid bytes as the packet claims to have (rather than filling up the rest of a 256-element array with zeroes). This does allocate memory, however, so it might not be good for the kinds of real-time run conditions you might be using CoreMIDI in.
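A hypothetical usage sketch (the getter is mutating, so it must be called on a var copy of the packet):
func dumpPacket(packet: UnsafeMutablePointer<MIDIPacket>) {
    var p = packet.memory // local copy; dataBytes has a mutating getter
    print("length", p.length)
    print("data", p.dataBytes) // only the first `length` bytes, as [UInt8]
}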
Should this:
for i in 0 ..< packet.memory.length
Be this?
for i in 0 ..< packet.memory.data.length

Binary operator < cannot be applied to CLong in Swift

I am trying to implement the following code in Swift, but my i variable refuses to talk to my MAXADDRS: it says binary operator < cannot be applied to CLong. If I use CInt the problem goes away, but then I get an error on the variable i when assigning theAddr = ip_addrs[i].
InitAddresses();
GetIPAddresses();
GetHWAddresses();
var i = CLong()
var deviceIP = NSString()
for (i=0; i < MAXADDRS; ++i)
{
var localHost = 0x7F000001; // 127.0.0.1
var theAddr = CLong()
theAddr = ip_addrs[i]
if (theAddr == 0) {return}
if (theAddr == localHost){continue}
NSLog("Name: %s MAC: %s IP: %s\n", if_names[i], hw_addrs[i], ip_names[i]);
//decided what adapter you want details for
if (strncmp(if_names[i], "en", 2) == 0)
{
NSLog("Adapter en has a IP of %s", ip_names[i]);
}
}
// Do any additional setup after loading the view, typically from a nib.
}
The MAXADDRS it intends to compare against comes from the following Objective-C header.
Source files here
http://www.chrisandtennille.com/code/IPAddress.h
http://www.chrisandtennille.com/code/IPAddress.c
My bridging header
#include "IPAddress.h"
#include "IPAddress.c"
#define MAXADDRS 32
is imported to Swift as
public var MAXADDRS: Int32 { get }
On the other hand, CLong is an alias for Int ("The C 'long' type.")
Therefore you need to convert all values to a common type. Since
array subscripting requires an Int index, converting MAXADDRS
to Int might be the easiest solution:
var i = 0 // Int
for (i=0; i < Int(MAXADDRS); ++i) {
}
or more simply:
for i in 0 ..< Int(MAXADDRS) {
}

IPv6 raw socket programming with native C

I am working on IPv6 and need to craft an IPv6 packet from scratch and put it into a buffer. Unfortunately I do not have much experience with C. From a tutorial I have successfully done the same thing with IPv4 by defining
struct ipheader {
unsigned char iph_ihl:4, /* Little-endian */
iph_ver:4;
unsigned char iph_tos;
unsigned short int iph_len;
unsigned short int iph_ident;
unsigned char iph_flags;
unsigned short int iph_offset;
unsigned char iph_ttl;
unsigned char iph_protocol;
unsigned short int iph_chksum;
unsigned int iph_sourceip;
unsigned int iph_destip;
};
/* Structure of a TCP header */
struct tcpheader {
unsigned short int tcph_srcport;
unsigned short int tcph_destport;
unsigned int tcph_seqnum;
unsigned int tcph_acknum;
unsigned char tcph_reserved:4, tcph_offset:4;
// unsigned char tcph_flags;
unsigned int
tcp_res1:4, /*little-endian*/
tcph_hlen:4, /*length of tcp header in 32-bit words*/
tcph_fin:1, /*Finish flag "fin"*/
tcph_syn:1, /*Synchronize sequence numbers to start a connection*/
tcph_rst:1, /*Reset flag */
tcph_psh:1, /*Push, sends data to the application*/
tcph_ack:1, /*acknowledge*/
tcph_urg:1, /*urgent pointer*/
tcph_res2:2;
unsigned short int tcph_win;
unsigned short int tcph_chksum;
unsigned short int tcph_urgptr;
};
and fill the packet content in like this:
// IP structure
ip->iph_ihl = 5;
ip->iph_ver = 6;
ip->iph_tos = 16;
ip->iph_len = sizeof (struct ipheader) + sizeof (struct tcpheader);
ip->iph_ident = htons(54321);
ip->iph_offset = 0;
ip->iph_ttl = 64;
ip->iph_protocol = 6; // TCP
ip->iph_chksum = 0; // Done by kernel
// Source IP, modify as needed, spoofed, we accept through command line argument
ip->iph_sourceip = inet_addr("192.168.1.128");
// Destination IP, modify as needed, but here we accept through command line argument
ip->iph_destip = inet_addr("192.168.1.1");
// The TCP structure. The source port, spoofed, we accept through the command line
tcp->tcph_srcport = htons(atoi("1024"));
// The destination port, we accept through command line
tcp->tcph_destport = htons(atoi("4201"));
tcp->tcph_seqnum = htonl(1);
tcp->tcph_acknum = 0;
tcp->tcph_offset = 5;
tcp->tcph_syn = 1;
tcp->tcph_ack = 0;
tcp->tcph_win = htons(32767);
tcp->tcph_chksum = 0; // Done by kernel
tcp->tcph_urgptr = 0;
// IP checksum calculation
ip->iph_chksum = csum((unsigned short *) buffer, (sizeof (struct ipheader) + sizeof (struct tcpheader)));
However, for IPv6 I have not found a similar way. What I have found is this struct from the IETF:
struct ip6_hdr {
union {
struct ip6_hdrctl {
uint32_t ip6_un1_flow; /* 4 bits version, 8 bits TC, 20 bits
flow-ID */
uint16_t ip6_un1_plen; /* payload length */
uint8_t ip6_un1_nxt; /* next header */
uint8_t ip6_un1_hlim; /* hop limit */
} ip6_un1;
uint8_t ip6_un2_vfc; /* 4 bits version, top 4 bits
tclass */
} ip6_ctlun;
struct in6_addr ip6_src; /* source address */
struct in6_addr ip6_dst; /* destination address */
};
But I do not know how to fill in the information. For example, how do I send a TCP SYN from 2001:220:806:22:aacc:ff:fe00:1 port 1024 to 2001:220:806:21::4 port 1025?
Could anybody help me, or are there any references?
Thank you very much.
This is what I have done so far; however, there are mismatches between the code and the real packet captured by Wireshark (as discussed in the comments below). Since I cannot post long code in the comment section, I have edited my question instead.
Can anyone help?
#define PCKT_LEN 2000
int main(void) {
unsigned char buffer[PCKT_LEN];
int s;
struct sockaddr_in6 din;
struct ipv6_header *ip = (struct ipv6_header *) buffer;
struct tcpheader *tcp = (struct tcpheader *) (buffer + sizeof (struct ipv6_header));
memset(buffer, 0, PCKT_LEN);
din.sin6_family = AF_INET6;
din.sin6_port = htons(0);
inet_pton(AF_INET6, "::1", &(din.sin6_addr)); // For routing
ip->version = 6;
ip->traffic_class = 0;
ip->flow_label = 0;
ip->length = 40;
ip->next_header = 6;
ip->hop_limit = 64;
inet_pton(AF_INET6, "::1", &(ip->dst)); // IPv6
inet_pton(AF_INET6, "::1", &(ip->src)); // IPv6
tcp->tcph_srcport = htons(atoi("11111"));
tcp->tcph_destport = htons(atoi("13"));
tcp->tcph_seqnum = htons(0);
tcp->tcph_acknum = 0;
tcp->tcph_offset = 5;
tcp->tcph_syn = 1;
tcp->tcph_ack = 0;
tcp->tcph_win = htons(32752);
tcp->tcph_chksum = 0; // Done by kernel
tcp->tcph_urgptr = 0;
s = socket(PF_INET6, SOCK_RAW, IPPROTO_RAW);
if (s < 0) {
perror("socket()");
return 1;
}
unsigned short int packet_len = sizeof (struct ipv6_header) + sizeof (struct tcpheader);
if (sendto(s, buffer, packet_len, 0, (struct sockaddr*) &din, sizeof (din)) == -1) {
perror("sendto()");
close(s);
return 1;
}
close(s);
return 0;
}
Maybe this article can help you get started?
Edit:
Using the Wikipedia article linked above I made this structure (without knowing what some of the fields mean):
struct ipv6_header
{
unsigned int
version : 4,
traffic_class : 8,
flow_label : 20;
uint16_t length;
uint8_t next_header;
uint8_t hop_limit;
struct in6_addr src;
struct in6_addr dst;
};
It's no different from how the header struct was made for IPv4 in your example. Just create a struct containing the fields, in the right order and with the right sizes, and fill it with the right values.
Then do the same for the TCP header.
Unfortunately the IPv6 RFCs don't provide the same raw socket interface that you get with IPv4. From what I've seen, to craft complete IPv6 packets you have to go a level deeper and use an AF_PACKET socket to send an Ethernet frame that includes your IPv6 packet.

Transmission of float values over TCP/IP and data corruption

I have an extremely strange bug.
I have two applications that communicate over TCP/IP.
Application A is the server, and application B is the client.
Application A sends a bunch of float values to application B every 100 milliseconds.
The bug is the following: sometimes some of the float values received by application B are not the same as the values transmitted by application A.
Initially, I thought there was a problem with the Ethernet or TCP/IP drivers (some sort of data corruption). I then tested the code in other Windows machines, but the problem persisted.
I then tested the code on Linux (Ubuntu 10.04.1 LTS) and the problem is still there!!!
The values are logged just before they are sent and just after they are received.
The code is pretty straightforward: the message protocol has a 4-byte header like this:
//message header
struct MESSAGE_HEADER {
unsigned short type;
unsigned short length;
};
//orientation message
struct ORIENTATION_MESSAGE : MESSAGE_HEADER
{
float azimuth;
float elevation;
float speed_az;
float speed_elev;
};
//any message
struct MESSAGE : MESSAGE_HEADER {
char buffer[512];
};
//receive specific size of bytes from the socket
static int receive(SOCKET socket, void *buffer, size_t size) {
int r;
do {
r = recv(socket, (char *)buffer, size, 0);
if (r == 0 || r == SOCKET_ERROR) break;
buffer = (char *)buffer + r;
size -= r;
} while (size);
return r;
}
//send specific size of bytes to a socket
static int send(SOCKET socket, const void *buffer, size_t size) {
int r;
do {
r = send(socket, (const char *)buffer, size, 0);
if (r == 0 || r == SOCKET_ERROR) break;
buffer = (char *)buffer + r;
size -= r;
} while (size);
return r;
}
//get message from socket
static bool receive(SOCKET socket, MESSAGE &msg) {
int r = receive(socket, &msg, sizeof(MESSAGE_HEADER));
if (r == SOCKET_ERROR || r == 0) return false;
if (ntohs(msg.length) == 0) return true;
r = receive(socket, msg.buffer, ntohs(msg.length));
if (r == SOCKET_ERROR || r == 0) return false;
return true;
}
//send message
static bool send(SOCKET socket, const MESSAGE &msg) {
int r = send(socket, &msg, ntohs(msg.length) + sizeof(MESSAGE_HEADER));
if (r == SOCKET_ERROR || r == 0) return false;
return true;
}
When I receive the message 'orientation', sometimes the 'azimuth' value is different from the one sent by the server!
Shouldn't the data be the same all the time? Doesn't TCP/IP guarantee delivery of the data uncorrupted? Could it be that an exception in the math co-processor affects the TCP/IP stack? Is it a problem that I receive a small number of bytes first (4 bytes) and then the message body?
EDIT:
The problem is in the endianness swapping routine. The following code swaps the endianness of a specific float, then swaps it back and prints the bytes:
#include <cstdio> // for printf and getchar
#include <iostream>
using namespace std;
float ntohf(float f)
{
float r;
unsigned char *s = (unsigned char *)&f;
unsigned char *d = (unsigned char *)&r;
d[0] = s[3];
d[1] = s[2];
d[2] = s[1];
d[3] = s[0];
return r;
}
int main() {
unsigned long l = 3206974079;
float f1 = (float &)l;
float f2 = ntohf(ntohf(f1));
unsigned char *c1 = (unsigned char *)&f1;
unsigned char *c2 = (unsigned char *)&f2;
printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);
getchar();
return 0;
}
The output is:
7F 8A 26 BF
7F CA 26 BF
That is, the float assignment probably normalizes the value, producing a different value from the original.
Any input on this is welcomed.
EDIT2:
Thank you all for your replies. It seems the problem is that the swapped float, when returned via the 'return' statement, is pushed onto the CPU's floating-point stack. The caller then pops the value from the stack; the value gets rounded, but it is the swapped float, and therefore the rounding messes up the value.
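In other words, the byte-swapped form of many perfectly ordinary floats is a signaling-NaN bit pattern, and the x87 FPU silently converts signaling NaNs to quiet NaNs as they pass through its register stack: that is exactly the 8A -> CA flip in the output above. The robust fix is to perform the swap while the bits are held in an integer, and to reinterpret them as a float only once they are back in host order. A minimal sketch in Swift (the language used elsewhere in this collection; the same idea in C is a memcpy into a uint32_t followed by ntohl):
// Keep the bits in an integer while swapping; reinterpret as Float
// only after they are back in host byte order.
func floatFromNetwork(_ raw: UInt32) -> Float {
    // UInt32(bigEndian:) byte-swaps on a little-endian host, like ntohl
    return Float(bitPattern: UInt32(bigEndian: raw))
}

// The pattern from the question: 0xBF268A7F is an ordinary float,
// but its byte-swapped form 0x7F8A26BF is a signaling NaN.
let wire: UInt32 = 0x7F8A26BF // network bytes BF 26 8A 7F, read on a little-endian host
print(String(floatFromNetwork(wire).bitPattern, radix: 16)) // bf268a7f: bits unmodified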
TCP tries to deliver unaltered bytes, but unless the machines have similar CPUs and operating systems, there's no guarantee that the floating-point representation on one system is identical to that on the other. You need a mechanism for ensuring this, such as XDR or Google's protobuf.
You're sending binary data over the network, using implementation-defined padding for the struct layout, so this will only work if you're using the same hardware, OS and compiler for both application A and application B.
If that's OK, though, I can't see anything wrong with your code. One potential issue is that you're using ntohs to extract the length of the message, and that length is the total length minus the header length, so you need to make sure you're setting it properly. It needs to be set as
msg.length = htons(sizeof(ORIENTATION_MESSAGE) - sizeof(MESSAGE_HEADER));
but you don't show the code that sets up the message...