Storing iOS UI component internal usage flags - iPhone

In most of Apple's implementations I can see that they use a struct to store flag bits. Why do they do it like that? Why can't we use a BOOL to handle this instead?
See Apple's sample code below, taken from the table view:
struct {
    unsigned int delegateheightForHeaderInSection:1;
    unsigned int dataSourceCellForRow:1;
    unsigned int delegateHeightForRow:1;
    unsigned int style:1;
} _tableFlags;
and internally they may be using them with something similar to this:
_tableFlags.delegateheightForHeaderInSection = [delegate respondsToSelector:@selector(tableView:heightForHeaderInSection:)];
and then use "_tableFlags.delegateheightForHeaderInSection" everywhere to check whether the user has implemented this delegate method.
So instead of having the struct to store the flags, can't we implement it like below?
BOOL delegateheightForHeaderInSection;
and use it like this:
delegateheightForHeaderInSection = [delegate respondsToSelector:@selector(tableView:heightForHeaderInSection:)];
What difference is there between these two approaches?

unsigned int delegateheightForHeaderInSection:1;
defines a bit field of length 1 in the structure (see e.g. http://en.wikipedia.org/wiki/Bit_field).
Bit fields can be used to save space. So in this case, the four members delegateheightForHeaderInSection, ..., style are stored in contiguous bits of one integer.
Note that no space is saved in this particular case. The size of _tableFlags is the size of unsigned int, which is 4. The size of four BOOL (a typedef for signed char) members is also 4.
But for example, 32 bit fields of length 1 also take 4 bytes, whereas 32 BOOL members would take 32 bytes.
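To make that concrete, here is a minimal sketch (plain C, with hypothetical struct and flag names) comparing eight one-bit fields against eight one-byte BOOL-style flags:

#include <stdio.h>

// Hypothetical flag names. Eight one-bit fields share a single unsigned int:
struct PackedFlags {
    unsigned int a:1, b:1, c:1, d:1, e:1, f:1, g:1, h:1;
};

// Eight BOOL-sized members (BOOL is a typedef for signed char) take one byte each:
struct UnpackedFlags {
    signed char a, b, c, d, e, f, g, h;
};

int main(void)
{
    printf("packed:   %zu bytes\n", sizeof(struct PackedFlags));   // typically 4
    printf("unpacked: %zu bytes\n", sizeof(struct UnpackedFlags)); // typically 8
    return 0;
}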

Why are the hex numbers for big endian different than little endian?

#include <stdio.h>
#include <stddef.h>

typedef unsigned char *byte_pointer;

void show_bytes(byte_pointer start, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]);
    printf("\n");
}

void show_int(int x)
{
    show_bytes((byte_pointer) &x, sizeof(int));
}

void show_float(float x)
{
    show_bytes((byte_pointer) &x, sizeof(float));
}

void show_pointer(void *x)
{
    show_bytes((byte_pointer) &x, sizeof(void *));
}

int main(void)
{
    int a = 0x12345678;
    byte_pointer ap = (byte_pointer) &a;
    show_bytes(ap, 3);
    return 0;
}
(Solutions according to the CS:APP book)
Big endian: 12 34 56
Little endian: 78 56 34
I know systems have different conventions for storage allocation, but if two systems use the same convention yet differ in endianness, why are the hex values different?
Endianness is an issue that arises when we use more than one storage location for a value/type, which we do because some things won't fit in a single storage location.
As soon as we use multiple storage locations for a single value that gives rise to the question of:  What part of the value will we store in each storage location?
The first byte of a two-byte item will have a lower address than the second byte, and in particular, the address of the second byte will be at +1 from the address of the lower byte.
Storing a two-byte item in two bytes of storage, do we store the most significant byte first and the least significant byte second, or vice versa?
We choose to use directly consecutive bytes for the two bytes of the two-byte item, so no matter which (endian) way we choose to store such an item, we refer to the whole two-byte item by the lower address (the address of its first byte).
We can express these storage choices with a formula; here item[0] refers to the first byte and item[1] to the second byte.
item[0] = value >> 8 // also value / 256
item[1] = value & 0xFF // also value % 256
value = (item[0]<<8) | item[1] // also item[0]*256 | item[1]
--vs--
item[0] = value & 0xFF // also value % 256
item[1] = value >> 8 // also value / 256
value = item[0] | (item[1]<<8) // also item[0] | item[1]*256
The first set of formulas is for big endian, and the second for little endian.
By these formulas, it doesn't matter what order we access memory as to whether item[0] first, then item[1], or vice versa, or both at the same time (common in hardware), as long as the formulas for one endian are consistently used.
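As a runnable illustration of those two sets of formulas (plain C, hypothetical variable names), the following stores the same 16-bit value both ways and reassembles it:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t value = 0x1234;
    uint8_t big[2], little[2];

    big[0] = value >> 8;      // big endian: most significant byte first
    big[1] = value & 0xFF;

    little[0] = value & 0xFF; // little endian: least significant byte first
    little[1] = value >> 8;

    printf("big:    %.2x %.2x\n", big[0], big[1]);       // 12 34
    printf("little: %.2x %.2x\n", little[0], little[1]); // 34 12

    // Reassembling with the matching formula recovers the value either way.
    printf("%.4x %.4x\n",
           (big[0] << 8) | big[1],
           little[0] | (little[1] << 8));                // 1234 1234
    return 0;
}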
If the item in question is a four-byte value, then there are 4! = 24 possible byte orderings, though only two of them are truly sensible.
For efficiency, the hardware offers us multibyte memory access in one instruction (and with one reference, namely to the lowest address of the multibyte item), and therefore, the hardware itself needs to define and consistently use one of the two possible/reasonable orderings.
If the hardware did not offer multibyte memory access, then the ordering would be entirely up to the software program itself to define (accessing memory one byte at a time), and the program could choose big or little endian, even differently for each variable, as long as it consistently accesses the multiple bytes of memory in the same manner to reassemble the values stored there.
In a similar manner, when we define a structure of multiple items (e.g. struct point { int x; int y; }), software chooses whether x comes first or y comes first in memory ordering. However, since programmers (and compilers) will still choose to use hardware instructions to access individual fields such as x in one go, the hardware's endian configuration remains necessary.
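To see both choices at once, a small sketch (plain C; the variable names are mine) dumps the raw bytes of such a struct. The field order comes from the struct definition, while the byte order within each int comes from the hardware:

#include <stdio.h>
#include <stddef.h>

struct point { int x; int y; };

int main(void)
{
    struct point p = { 0x11223344, 0x55667788 };
    unsigned char *bytes = (unsigned char *) &p;
    size_t i;
    // x occupies the first four bytes and y the next four; on a little
    // endian machine this prints: 44 33 22 11 88 77 66 55
    for (i = 0; i < sizeof p; i++)
        printf(" %.2x", bytes[i]);
    printf("\n");
    return 0;
}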

NSNumber, what to be aware of when using and storing different primitive types

An NSNumber can store different primitive types, like short, int, long, long long, float, and double.
But does the size change when I do
@(long long)
in comparison to
@(int)?
When modeling the Core Data model I use Integer 16, Integer 32, and Integer 64, but does that have an effect on the size of the database, given that everything is an NSNumber?
Given a Core Data property that is defined in the model as Integer 16:
long long tmp = 83324;
NSNumber *numberStoringLongLong = @(tmp);
cdEntity.propertyInteger16 = numberStoringLongLong;
long long tmp2 = [cdEntity.propertyInteger16 longLongValue];
Would propertyInteger16 behave right? Would tmp2 be valid?
Your first example will not work as intended. Even though NSNumber can store short, int, long and long long, Core Data dynamically creates custom accessors for the properties, and those accessors depend on how you defined the type in the Core Data model.
A quick test with "Integer 16/32/64" attributes shows that behaviour:
NSNumber *n = @(0x11223344556677);
[cdEntity setValue:n forKey:@"i16"];
[cdEntity setValue:n forKey:@"i32"];
[cdEntity setValue:n forKey:@"i64"];
NSLog(@"%@", cdEntity);
Output:
<NSManagedObject: 0x7491030> (entity: Entity; id: 0x7491090 <x-coredata:///Entity/t4521AA03-435E-4683-9EAF-ED6EED5A5E6A2> ; data: {
i16 = 26231;
i32 = 1146447479;
i64 = 4822678189205111;
})
As you can see, storing an integer that does not fit into the declared size of the attribute (silently) truncates the value.
So in your example, storing 83324 = 0x1457C in an Integer 16 attribute will truncate that value to 17788 = 0x457C, and that is what you will get back, even if you use longLongValue.
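If values might exceed the declared width, one option is to check the range before assigning. A minimal sketch, reusing the hypothetical cdEntity/propertyInteger16 names from the question (INT16_MIN/INT16_MAX come from <stdint.h>):

long long tmp = 83324;
// An Integer 16 attribute holds -32768 ... 32767; refuse (or widen the
// model type to Integer 32/64) instead of silently truncating.
if (tmp >= INT16_MIN && tmp <= INT16_MAX) {
    cdEntity.propertyInteger16 = @(tmp);
} else {
    NSLog(@"%lld does not fit into an Integer 16 attribute", tmp);
}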
It shouldn't make a difference. As I understand it, when you allocate an NSNumber, it reserves a specific amount of memory as required to represent itself. When you change the value of an NSNumber it should still take up the same amount of space in the database regardless of what actual value it holds (the same way an int is still an int whether it is set to 1 or 2147483647).
I'm too lazy to check, but in Xcode under Open Developer Tool > Instruments there are an Allocations tool and a Leaks tool. You could run a for-loop that intentionally leaks @(int) values, then @(long long) values, and see if there is a difference in how fast it consumes heap.

Obj-C Read Little Endian From Binary File? (xcode)

I'm looking for a way to get an int value from a binary file. So let's say I have this file "myfile.dat" where I stored a lot of stuff from my PC...
Now I need to read that file from my iPhone and show the data...
In "myfile.dat" I have this (binary, and all ints are little endian):
0-12: A signature string
13-16: An int number (note that length = 4)
So using NSData I know I can read from pos 13 to pos 16 and get those bytes... I can get the first 0-12 string correctly, but I cannot read pos 13-16 and convert it to an int value in Obj-C... ;(
I have something like this (OSReadLittleInt32 comes from <libkern/OSByteOrder.h>):
unsigned char bytes[length];
[_data getBytes:bytes range:NSMakeRange(offset, length)];
int32_t elem = OSReadLittleInt32(bytes, 0);
Now, I'm a newbie when it comes to Obj-C and C/C++... all my life I have been working with C# (sad)...
Can anyone help? Thanks in advance!
Give this a try:
uint32_t value;
// Read the four little-endian bytes that follow the 13-byte signature.
[_data getBytes:&value range:NSMakeRange(13, sizeof(value))];
NSLog(@"%u", NSSwapLittleIntToHost(value));

int and ++new goes up by 2 every time

Just a silly question:
I have a simple counter, but it seems that it gives the double value of what I expect.
short int *new = 0;
++new;
NSLog(#"Items: %hi", new);
And this returns:
Items: 2
Relatively new to Cocoa, and still working out the little details as is clear form above...
You don't have an integer variable, you have a pointer to an integer variable (a short integer, to be specific). It increments by 2 because short integers are two bytes long. A pointer variable holds a memory address of another value. Incrementing a pointer means, "make this pointer point to the next thing in memory", where "thing" is the type of value the pointer was declared to point at. A pointer to double would increment by 8 each time.
The "*" in the declaration makes it a pointer. If you just want an int, you'd just write
short int new = 0;
++new;
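To see the scaling rule directly, here is a small sketch (plain C, hypothetical variable names) measuring how far one increment moves pointers of different types:

#include <stdio.h>

int main(void)
{
    short s[2];
    double d[2];
    short *sp = &s[0];
    double *dp = &d[0];
    // ++pointer advances the address by sizeof(*pointer) bytes, not by 1.
    printf("short*  step: %td bytes\n", (char *)(sp + 1) - (char *)sp);   // 2
    printf("double* step: %td bytes\n", (char *)(dp + 1) - (char *)dp);   // 8
    return 0;
}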
Aah, when you increment a pointer, it increments by the size of the type it points to. You're looking at an address, not a number.
do this, and see:
short int *new = 0;
NSLog(@"Items now: %p", (void *)new);
++new;
NSLog(@"Items then: %p", (void *)new);  // two bytes higher
Because of the way you define new: as a pointer to a short int (short int *new). A short int is a 16-bit integer, so it takes up two bytes in memory, and incrementing the pointer on the second line advances the memory address by 2.
I don't think you intend to deal with memory locations. It's kind of odd to define an integer and also control its location in memory, except in specific situations. Code that would do what you want is:
short int new = 0;
++new;
NSLog(#"Items: %hi", new);

What's the largest variable value on the iPhone?

I need to assign 2,554,416,000 to a variable. What would be the primitive to use, and what would be the object representation class to use? Thanks.
Chuck is right, but in answer to the "object representation", you want NSNumber used with the unsignedInt methods.
NSNumber *myNum = [NSNumber numberWithUnsignedInt:2554416000];
NSUInteger myInt = [myNum unsignedIntValue];
2,554,416,000 = 0x9841,4B80 ≤ 0xFFFF,FFFF (UINT_MAX), so uint32_t (unsigned int) or int64_t (long long).
A signed int32_t (int) cannot represent this because 0x9841,4B80 > 0x7FFF,FFFF (INT_MAX). Storing it in an int will make it negative.
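A quick snippet of that sign issue (hypothetical variable names; uint32_t/int32_t come from <stdint.h>):

uint32_t u = 2554416000u;  // fits: 2,554,416,000 <= UINT32_MAX
int32_t  s = (int32_t)u;   // exceeds INT32_MAX, so it wraps negative
NSLog(@"%u %d", u, s);     // 2554416000 -1740551296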
This can be represented by a 32-bit unsigned integer (UINT_MAX is about 4 billion). That's actually what NSUInteger is on the iPhone, but if you want to be very specific about the bit width, you could specify a uint32_t.
You could store it in a regular int scaled down by 1000 if you wanted, if this represents a score whose bottom 3 digits never hold any information or something similar. This would be a way to save a few bits, and possibly an entire extra int of space, if that matters.
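For example, a sketch of that scaling idea (assuming the low digits really carry no information):

int scaled = (int)(2554416000LL / 1000);        // 2554416 fits easily in a signed int
long long restored = (long long)scaled * 1000;  // 2554416000 again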