C How can I write a value to an entire data structure, rather than just an element? - type-conversion

I am attempting to write a little C test program for reading data from a vending machine. I have the circuit hooked up; now the hard part is the code.
The machine uses a UART protocol that calls for 9 total data bits. Here is my code. Writing directly to the full 9-bit data type does not work, but writing to an individual field of it does.
struct nineBit { // To make 9-bit character types for an array.
    unsigned int data : 8; // 8 data bits.
    unsigned int mode : 1; // 1 'mode' bit.
} data[35]; // Make an array of 9-bit data to store the incoming data block.

void setup() {
    Serial1.begin(9600, SERIAL_9N1); // Start the UART.
}

void loop() {
    data[0] = Serial1.read(); // Works if 'data[0].data' is entered instead.
    // How can I transfer this all in one command?
}
The errors are:
rx.cpp: In function 'void loop()':
rx.cpp:11:12: error: no match for 'operator=' (operand types are 'nineBit' and 'int')
void setup() {
^
rx.cpp:11:12: note: candidates are:
rx.cpp:1:8: note: nineBit& nineBit::operator=(const nineBit&)
^
rx.cpp:1:8: note: no known conversion for argument 1 from 'int' to 'const nineBit&'
rx.cpp:1:8: note: nineBit& nineBit::operator=(nineBit&&)
rx.cpp:1:8: note: no known conversion for argument 1 from 'int' to 'nineBit&&'
make[1]: *** [../build/target/user/platform-6rx.o] Error 1
make: *** [user] Error 2
Error: Could not compile. Please review your code.

I assume you are using Arduino or something similar. Serial1.read() returns the next incoming byte (as an int, with -1 meaning no data available), so a single call gives you at most 8 bits, while your struct nineBit holds 9 bits. You cannot fill a 9-bit structure from one 8-bit read.
A note about your structure: its size is not 9 bits. Objects can only be sized in whole bytes, so storing 9 bits requires a structure of at least two bytes.
In fact, sizeof(nineBit) is 4 here, because your bit fields have type unsigned int. If you want to reduce the size of the structure, change the bit-field type to short or char.
Let's assume your serial link transports two bytes for every structure. Then you read two bytes and assign them separately:
struct nineBit {
    char data : 8; // 8 data bits.
    char mode : 1; // 1 'mode' bit.
} data[35];

void setup() {
    Serial1.begin(9600, SERIAL_9N1); // Start the UART.
}

void loop() {
    char byte1 = Serial1.read();
    char byte2 = Serial1.read();
    data[0].data = byte1;
    data[0].mode = byte2;
}
If you want to do it in a single statement, you have to write a helper function in C, or overload operator= if you use C++.
C way
struct nineBit {
    char data : 8; // 8 data bits.
    char mode : 1; // 1 'mode' bit.
} data[35];

void writeToNineBit(struct nineBit *value) {
    char byte1 = Serial1.read();
    char byte2 = Serial1.read();
    value->data = byte1;
    value->mode = byte2;
}

void setup() {
    Serial1.begin(9600, SERIAL_9N1); // Start the UART.
}

void loop() {
    writeToNineBit(data + 0); // or &data[0]; 0 is the array index.
}
C++ way
struct nineBit {
    char data : 8; // 8 data bits.
    char mode : 1; // 1 'mode' bit.

    // Assume you want to assign data without touching mode.
    nineBit& operator=(char b) {
        this->data = b;
        return *this; // Required: the operator is declared to return nineBit&.
    }
} data[35];

void setup() {
    Serial1.begin(9600, SERIAL_9N1); // Start the UART.
}

void loop() {
    data[0] = Serial1.read(); // Works now, because the struct overloads operator=.
}

Related

How to parse extended integer type in python C extension module?

I am trying to pass a (large) integer from Python to an extension module, but I am unable to parse Python's arbitrary-precision integers into 256-bit unsigned integers (uint256). Here is the C callee:
#include <Python.h>

typedef unsigned _ExtInt(256) uint256;

static PyObject* test(PyObject* self, PyObject* args)
{
    uint256 x;
    if (!PyArg_ParseTuple(args, "O", &x)) {
        puts("Could not parse the python arg");
        return NULL;
    }
    // simple addition
    x += (uint256) 1;
    return Py_BuildValue("O", x);
}
// ... initialize extension module here ...
In python I run something like
import extension_module
extension_module.test(1)
And I get the error:
Bus error: 10
Or
Segmentation fault: 11
However, if I remove the simple addition x += (uint256) 1;, it at least does not throw any error and returns the argument.
How do I parse extended-integer types in my C extension module?

How do I copy 16 bytes from a Data into a uuid_t?

Given a variable of type Data, how do I copy 16 bytes out of it and directly into a variable of type uuid_t?
I'm writing some Swift code that exchanges data with an external service using Google Protocol Buffers. The service returns a data structure that contains two properties: an Int representing a count, and an Array of raw bytes (UInt8) representing sequential UUIDs of 16 bytes each. In Swift, this is represented as the following struct:
struct UUIDCollection {
    var count: Int        // Number of 16 byte identifiers.
    var identifiers: Data // Array of bytes where every group of 16 bytes, starting at index 0, is a uuid.
}
I'm unable to figure out the correct usage of Swift pointers to allow me to do something like this:
for i in 0..<count {
    let offset = Int(i * 16)
    var bytes: uuid_t
    let range = offset..<(offset + 16)
    withUnsafeMutablePointer(to: &bytes) { (b: UnsafeMutablePointer<UInt8>) -> Void in
        identifiers.copyBytes(to: b, from: range)
        let uuid = UUID(uuid: bytes)
        print("UUID: \(uuid.uuidString)")
    }
}
The Xcode error I receive is:
Cannot convert value of type '(UnsafeMutablePointer) -> Void'
to expected argument type '(UnsafeMutablePointer<_>) -> _'
What is the correct, and ideally most efficient, way of converting such an array of bytes into an array of uuid_t?
Note: The Swift code is designed to work with an existing API, which vends identifiers as a single array of bytes. Changing that API to vend a vector of identifiers or a vector of string UUIDs isn't really an option at the moment.
You can stride over your identifiers subdata and load your UUIDs as follows:
extension Data {
    func object<T>() -> T { withUnsafeBytes { $0.load(as: T.self) } }
}

extension UUIDCollection {
    var uuids: [UUID] {
        stride(from: 0, to: count * 16, by: 16)
            .map { identifiers[$0..<$0.advanced(by: 16)].object() }
    }
}

Freeglut doesn't initialize when using it from Swift

I've tried to use the Freeglut library in a Swift 4 project. When the
void glutInit(int *argcp, char **argv);
function is imported into Swift, its declaration is
func glutInit(_ pargc: UnsafeMutablePointer<Int32>!, _ argv: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>!)
Since I don't need the real command-line arguments, I want to make up the two arguments myself. I tried to define argv in the Bridging-Header.h file:
#include <OpenGL/gl.h>
#include <GL/glut.h>
char ** argv[1] = {"t"};
and use them in main.swift
func main() {
    var argcp: Int32 = 1
    glutInit(&argcp, argv!) // EXC_BAD_ACCESS
    glutInitDisplayMode(UInt32(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH))
    glutCreateWindow("my project")
    glutDisplayFunc(display)
    initOpenGL()
    glutMainLoop()
}
but with that I get Thread 1: EXC_BAD_ACCESS (code=1, address=0x74) at the line with glutInit().
How can I initialize glut properly? How can I get an UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>! so that it works?
The reason the correct C code, char * argv[1] = {"t"};, does not work from Swift is that Swift imports a fixed-size C array as a tuple, not as a pointer to its first element.
But your char ** argv[1] = {"t"}; is wrong in any case: each element of argv needs to be a char **, yet you assign a char * ("t"). Xcode must have shown you a warning on the first build:
warning: incompatible pointer types initializing 'char **' with an expression of type 'char [2]'
You should treat incompatible-pointer-types warnings as errors, unless you know exactly what you are doing.
In general, avoid putting definitions that generate actual code or data, like char * argv[1] = {"t"};, in a header file.
You can do this in pure Swift instead.
As you know, to pass a pointer to a single element of type T, you declare a var of type T and pass &varName to the function you call, as with argcp in your code.
Likewise, to pass a pointer to multiple elements of type T, you declare a var of type [T] (Array<T>) and pass &arrName to the function you call (ignoring the immutable case to simplify).
The parameter argv matches this case with T == UnsafeMutablePointer<Int8>?, so declare a var of type [UnsafeMutablePointer<Int8>?]:
func main() {
    var argc: Int32 = 1
    var argv: [UnsafeMutablePointer<Int8>?] = [
        strdup("t")
    ]
    defer { argv.forEach { free($0) } }
    glutInit(&argc, &argv)
    //...
}
But I wonder whether you really need to pass anything to glutInit() at all.
You can try something like this:
func main() {
    var argc: Int32 = 0 //<- 0
    glutInit(&argc, nil)
    //...
}
I'm not sure whether freeglut accepts this, but you can find articles on the web saying that it works in some implementations of Glut.

SWIFT: Paste Bytes from NSData to Struct?

I am writing a Bluetooth packet protocol for communication between an iPhone and a peripheral device. The device will send me some bits, possibly 128 or 256, and the communication protocol I am using lets me access this incoming data as an NSData variable. My question is: can I take the bytes from the NSData, or use the NSData directly somehow, to paste the bytes into a struct whose fields have predefined sizes? For example, in C you would have a struct like:
struct CD2_CONFIG_TYPE {
    uint8_t header;   // '<' or '>' start of a new packet from device or base station
    uint8_t type;     // 'c' indicates device configuration packet
    uint8_t devSN[8]; // device serial number
};
and let's say the data we received is an NSData object that has 8 + 8 + 64 bits, which are the header, type, and devSN, respectively. So if I take these bits (or bytes) and make them a pointer using something like:
// swift code
func dataBytesToPointer(incomingDataPacket: NSData) {
    var packetBytes = UnsafePointer<UInt8>(incomingDataPacket.bytes)
}
Is there any way to copy the pointer into the struct and have the struct fields header, type, and devSN populated properly based on the sizes allocated to them? In C you would use memcpy to copy the pointed-to bytes into a struct. I don't see how to declare the struct with predefined sizes as in the example, nor how to fill it from a pointer. Any help would be appreciated.
You can think of a C method as shown below:
struct CD2_CONFIG_TYPE* commandFromData(uint8_t *data) {
    struct CD2_CONFIG_TYPE *command = (struct CD2_CONFIG_TYPE*)malloc(sizeof(struct CD2_CONFIG_TYPE));
    command->header = data[0]; // header comes first on the wire
    command->type = data[1];
    memcpy(command->devSN, &data[2], 8 * sizeof(uint8_t));
    return command;
}
You can export this function signature in a .h file and import it into the bridging header so that Swift code can access this.
In your Swift code, you can call this as:
let bytes = UnsafePointer<UInt8>(incomingDataPacket.bytes)
commandFromData(bytes)

Swift: How to use sizeof?

In order to integrate with C APIs while using Swift, I need to use the sizeof function. In C, this was easy. In Swift, I am in a labyrinth of type errors.
I have this code:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt)
The second line has the error "'NSNumber' is not a subtype of 'T.Type'". Why is this and how do I fix it?
Updated for Swift 3
Be careful: MemoryLayout<T>.size means something different from sizeof in C/Obj-C. You can read this old thread https://devforums.apple.com/message/1086617#1086617
Swift uses a generic type to make it explicit that the size is known at compile time.
To summarize, MemoryLayout<Type>.size is the space required for a single instance, while MemoryLayout<Type>.stride is the distance between successive elements in a contiguous array. MemoryLayout<Type>.stride in Swift is the same as sizeof(type) in C/Obj-C.
To give a more concrete example:
struct Foo {
    let x: Int
    let y: Bool
}

MemoryLayout<Int>.size      // returns 8 on 64-bit
MemoryLayout<Bool>.size     // returns 1
MemoryLayout<Foo>.size      // returns 9
MemoryLayout<Foo>.stride    // returns 16 because of alignment requirements
MemoryLayout<Foo>.alignment // returns 8, addresses must be multiples of 8
Use sizeof as follows:
let size = sizeof(Int)
sizeof takes the type as its parameter.
If you want the size of the anInt variable, you can pass its dynamicType to sizeof.
Like so:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt.dynamicType)
Or more simply (pointed out by user102008):
var anInt: Int = 5
var anIntSize: Int = sizeofValue(anInt)
Swift 3 now has MemoryLayout.size(ofValue:), which can look up the size dynamically.
A generic function that in turn uses MemoryLayout<Type> can give unexpected results if you pass it, for example, a reference of protocol type. As far as I know, this is because the compiler fills in the values at compile time from the static type information at the call site, which is not apparent from the function call itself. You would then get the size of the protocol, not of the current value.
In Xcode 8 with Swift 3 beta 6 there is no sizeof() function. But if you want, you can define one for your needs. This new sizeof function works as expected with an array, which was not possible with the old built-in sizeof.
let bb: UInt8 = 1
let dd: Double = 1.23456

func sizeof<T>(_: T.Type) -> Int {
    return MemoryLayout<T>.size
}

func sizeof<T>(_: T) -> Int {
    return MemoryLayout<T>.size
}

func sizeof<T>(_ value: [T]) -> Int {
    return MemoryLayout<T>.size * value.count
}
sizeof(UInt8.self) // 1
sizeof(Bool.self) // 1
sizeof(Double.self) // 8
sizeof(dd) // 8
sizeof(bb) // 1
var testArray: [Int32] = [1,2,3,4]
var arrayLength = sizeof(testArray) // 16
You need all three versions of the sizeof function: to get the size of a variable, the correct size of a data type, and the size of an array.
If you defined only the second function, then sizeof(UInt8.self) and sizeof(Bool.self) would return 8, because the metatype value itself would be measured. If you defined only the first two functions, then sizeof(testArray) would return 8.
Swift 4
From Xcode 9 onwards there is a property called .bitWidth, which provides another way of writing sizeof functions for instances and integer types:
func sizeof<T: FixedWidthInteger>(_ int: T) -> Int {
    return int.bitWidth / UInt8.bitWidth
}

func sizeof<T: FixedWidthInteger>(_ intType: T.Type) -> Int {
    return intType.bitWidth / UInt8.bitWidth
}

sizeof(UInt16.self) // 2
sizeof(20)          // 8
sizeof(UInt16.self) // 2
sizeof(20) // 8
But for consistency it would make more sense to replace sizeof with a .byteWidth property:
extension FixedWidthInteger {
    var byteWidth: Int {
        return self.bitWidth / UInt8.bitWidth
    }
    static var byteWidth: Int {
        return Self.bitWidth / UInt8.bitWidth
    }
}
1.byteWidth // 8
UInt32.byteWidth // 4
It is easy to see why sizeof was considered ambiguous, but I'm not sure that burying it in MemoryLayout was the right thing to do. See the reasoning behind the move of sizeof to MemoryLayout here.