NSCoding and integer arrays - iPhone

How do you use NSCoding to code (and decode) an array of ten values of the primitive type int? I could encode each integer individually in a for-loop, but what if my array held one million integers? Is there a more satisfying alternative to using a for-loop here?
Edit (after first answer): And decode? (@Justin: I'll then tick your answer.)

If performance is your concern here: CFData/NSData is NSCoding compliant, so just wrap your serialized representation of the array in an NSData object.
Edit to detail encoding/decoding:
Your array of ints will need to be converted to a common endian format, i.e. always stored as either little or big endian regardless of the machine's native byte order. During encoding, convert the values to the chosen endianness and hand them to an NSData object, then pass that NSData representation to the NSCoder instance. At decode time, you'll receive an NSData object for the key, which you conditionally convert back to the native endianness of the machine. One set of byte-swapping routines available on OS X and iOS begins with OSSwap*.
Alternatively, see -[NSCoder encodeBytes:length:forKey:]. This routine also requires the client to swap endianness.
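For concreteness, here is a minimal sketch of that approach written in Swift (the same Foundation calls are available from Objective-C); the key name "ints", the Int32 element type, and the function names are illustrative assumptions, not something taken from the question:

import Foundation

// Encode: convert to a fixed byte order (little-endian here), wrap the
// result in Data, and hand the Data to the coder, since Data/NSData is
// NSCoding compliant.
func encodeInts(_ values: [Int32], with coder: NSCoder) {
    let littleEndian = values.map { $0.littleEndian }
    let data = littleEndian.withUnsafeBufferPointer { Data(buffer: $0) }
    coder.encode(data, forKey: "ints")
}

// Decode: read the Data back and reassemble each value from its
// little-endian bytes, which yields native-endian integers on any host.
func decodeInts(with coder: NSCoder) -> [Int32] {
    guard let data = coder.decodeObject(forKey: "ints") as? Data else { return [] }
    return stride(from: 0, to: data.count, by: 4).map { i -> Int32 in
        let u = UInt32(data[i]) | (UInt32(data[i + 1]) << 8) |
                (UInt32(data[i + 2]) << 16) | (UInt32(data[i + 3]) << 24)
        return Int32(bitPattern: u)
    }
}

The same bytes could also be written with encodeBytes(_:length:forKey:) as mentioned above, but you would still be responsible for the endian swap yourself.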

Related

Swift String from byte array without validating encoding?

I'm trying to work with this Argon2 implementation. I'm implementing an existing protocol, so I don't have any flexibility in design, and the protocol treats various inputs to the function as byte sequences. However, that implementation treats inputs as Strings. Is there any encoding that I can use which will allow me to convert an arbitrary byte sequence to a String without any validity constraints - that is, such that all possible byte sequences will convert without error?
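As an illustration of the constraint being asked about (a sketch only, not a recommendation for the Argon2 protocol in question): UTF-8 decoding validates its input and can reject arbitrary bytes, while an 8-bit encoding such as ISO Latin-1 defines a character for every byte value, so decoding never fails and the original bytes can be recovered. The byte values below are chosen only to show a failing UTF-8 case.

import Foundation

let bytes: [UInt8] = [0xC3, 0x28, 0x00, 0xFF]       // deliberately not valid UTF-8

// UTF-8 decoding validates the byte sequence, so this can return nil:
let utf8String = String(bytes: bytes, encoding: .utf8)          // nil for this input

// ISO Latin-1 maps every byte value 0x00-0xFF to a character, so this
// always succeeds and round-trips back to the same bytes:
let latin1String = String(bytes: bytes, encoding: .isoLatin1)!
let roundTrip = [UInt8](latin1String.data(using: .isoLatin1)!)  // == bytes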

Transmitting floating-point numbers over a TLM port from SystemVerilog to SystemC

I implemented a specific filter in C/C++, "encapsulated" in a SystemC module. I want to use this filter in my actual verification environment (VE), which is based on SystemVerilog. To transfer data from and to the filter, I want to implement a TLM connection. For TLM, there is something called a "generic payload", which basically defines what can be transmitted via TLM: a byte array.
Because of this, I need to convert the data samples in the VE from the datatype real to a byte array. What I tried to do is create a union type, such that I can store a real value and read back a byte array.
typedef union packed {
  real value;
  byte unsigned array[8];
} real_u;
However, I get the following error message.
real value;
|
ncvlog: *E,SVBPSE (Initiator.sv,7|11): The data type of a packed struct/union member must be a SystemVerilog integral type.
byte unsigned array[8];
|
ncvlog: *E,SVBPSE (Initiator.sv,8|20): The data type of a packed struct/union member must be a SystemVerilog integral type.
How could I resolve that issue? Are there other convenient ways to convert floating-point numbers to byte-arrays in SV/C++?
Packed unions and structs may only contain packed members, and in your case both real and byte unsigned array[8] are unpacked types. You could potentially use an unpacked union instead, but not every vendor implements those.
Moreover, the byte size of real is not defined in the standard, so your union most likely would not work at all anyway. However, SystemVerilog provides a set of functions to convert real values to variables of a defined size; in your case $realtobits, which returns 64 bits, will probably work.
So I suggest you just pass the real value after converting it to bits:
bit[63:0] realBits = $realtobits(value);

In Swift, how to get an estimate of a String's length in constant time?

In Swift 3, you can count the characters in a String with:
str.characters.count
I need to do this frequently, and that line above looks like it could be O(N). Is there a way to get a string length, or a length of something — maybe the underlying Unicode buffer — with an operation that is guaranteed to not have to walk the entire string? Maybe:
str.utf16.count
I ask because I'm checking the length of some text every time the user types a character, to limit the size of a UITextView. The call doesn't need to be an exact count of the glyphs, like characters.count.
This is a good question. The answer is... complicated. Converting from UTF-8 to UTF-16, or vice versa, or converting to or from some other encoding, will all require examining the string, since the characters can be made up of more than one code unit. So if you want to get the count in constant time, it's going to come down to what the internal representation is. If the string is using UTF-16 internally, then it's a reasonable assumption that string.utf16.count would be in constant time, but if the internal representation is UTF-8 or something else, then the string will need to be analyzed to determine what the length in UTF-16 would be.
So what's String using internally? Well:
https://github.com/apple/swift/blob/master/stdlib/public/core/StringCore.swift
/// The core implementation of a highly-optimizable String that
/// can store both ASCII and UTF-16, and can wrap native Swift
/// _StringBuffer or NSString instances.
This is discouraging. The internal representation could be ASCII or UTF-16, or it could be wrapping a Foundation NSString. Hrm. We do know that NSString uses UTF-16 internally, since this is actually documented, so that's good. So the main outlier here is when the string stores ASCII. The saving grace is that since the first 128 Unicode code points have the same values as the ASCII character set, any ASCII character 0xXX corresponds to the single UTF-16 code unit 0x00XX, so the UTF-16 code-unit count of an ASCII-backed string is simply equal to its ASCII length (the byte length doubles, but the count does not), and is thus calculable in constant time. Is this the case in the implementation? Let's look.
In the UTF16View source, there is no implementation of count. It appears that count is inherited from Collection's implementation, which is implemented via distance():
public var count: IndexDistance {
    return distance(from: startIndex, to: endIndex)
}
UTF16View's implementation of distance() looks like this:
public func distance(from start: Index, to end: Index) -> IndexDistance {
    // FIXME: swift-3-indexing-model: range check start and end?
    return start.encodedOffset.distance(to: end.encodedOffset)
}
And in the String.Index source, encodedOffset looks like this:
public var encodedOffset : Int {
    return Int(_compoundOffset >> _Self._strideBits)
}
where _compoundOffset appears to be a simple 64-bit integer:
internal var _compoundOffset : UInt64
and _strideBits appears to be a straight integer as well:
internal static var _strideBits : Int { return 2 }
So it... looks... like you should get constant time from string.utf16.count, since unless I'm making a mistake somewhere, you're just bit-shifting a couple of integers and then comparing the results (I'd probably still run some tests to be sure). The caveat is, of course, that this isn't documented, and thus could change in the future—particularly since the documentation for String does claim that it needs to iterate through the string:
Unlike with isEmpty, calculating a view’s count property requires iterating through the elements of the string.
With all that said, you're using a UITextView, which is implemented in Objective-C via NSAttributedString. If you're willing to incur the Objective-C message-passing overhead (which, let's be honest, is probably occurring behind the scenes anyway to generate the String), you can just call its length property, which, since NSAttributedString is built on top of NSString, which does guarantee that it uses UTF-16 internally, is almost certain to be in constant time.
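If it helps, here is a sketch of that last suggestion applied to the original use case; the delegate class name, the 280-character limit, and the UTF-16-based arithmetic are illustrative assumptions rather than anything from the question:

import UIKit

class LimitingTextViewDelegate: NSObject, UITextViewDelegate {
    let maxLength = 280   // hypothetical limit

    func textView(_ textView: UITextView,
                  shouldChangeTextIn range: NSRange,
                  replacementText text: String) -> Bool {
        // NSRange and NSString.length both count UTF-16 code units,
        // so all of the arithmetic below stays in one unit system.
        let current = (textView.text ?? "") as NSString
        let newLength = current.length - range.length + (text as NSString).length
        return newLength <= maxLength
    }
}

Alternatively, textView.text.utf16.count gives the same UTF-16 count directly, which is exactly the quantity discussed above.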

Serialize [Bit] to NSData in Swift

I'm implementing Huffman coding in Swift, and to get any benefit from this coding I need to serialize my one-zero sequences as efficiently as possible. The usual approach in such situations is to use bit arrays. Swift contains a Bit type, and there is no problem converting sequences into [Bit], but I haven't found any standard solution to get NSData from this array. The only way I see is to create an [Int8] (which can be serialized) and fill in all the bits manually using bitwise operators; in the worst case I will waste 7 bits in the last element.
So, is there any standard (ready-to-use) solution for [Bit] serialization?
The simple answer is NO.
Bit in Swift is implemented as public enum Bit : Int, Comparable, RandomAccessIndexType, _Reflectable { ... }. I don't see any advantage in using the Bit type over Int, except for the well-defined level of abstraction. A minimal NSData instance uses at least one byte (in reality the amount of memory used depends on the underlying processor's capabilities). Serialization is also just an abstraction; you could serialize your [Bit] as a sequence of the words "Bit with binary value One", "Bit with binary value Zero", .... To save your bit sequence to NSData and be able to reconstruct (deserialize) it, you still need to define some kind of 'binary protocol'. At the very least you need to store the number of bits as part of your data. If you use Huffman coding, you need to save your symbol table too ...
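To make the 'binary protocol' idea concrete, here is a minimal sketch of the manual packing described in the question and the answer, using [Bool] in place of [Bit] for brevity; the 4-byte little-endian bit-count header and the MSB-first bit order are arbitrary choices, not a standard format:

import Foundation

func pack(_ bits: [Bool]) -> Data {
    var data = Data()
    // Header: the exact number of bits, as 4 little-endian bytes, so the
    // trailing padding bits can be ignored when unpacking.
    let n = UInt32(bits.count)
    data.append(contentsOf: [UInt8(truncatingIfNeeded: n),
                             UInt8(truncatingIfNeeded: n >> 8),
                             UInt8(truncatingIfNeeded: n >> 16),
                             UInt8(truncatingIfNeeded: n >> 24)])
    var byte: UInt8 = 0
    for (i, bit) in bits.enumerated() {
        let shift = 7 - (i % 8)                   // MSB-first within each byte
        if bit { byte |= UInt8(1 << shift) }
        if shift == 0 { data.append(byte); byte = 0 }
    }
    if bits.count % 8 != 0 { data.append(byte) }  // last byte carries up to 7 unused bits
    return data
}

func unpack(_ data: Data) -> [Bool] {
    let bytes = [UInt8](data)
    let count = (0..<4).reduce(0) { $0 | Int(bytes[$1]) << (8 * $1) }
    return (0..<count).map { i in
        (bytes[4 + i / 8] >> (7 - i % 8)) & 1 == 1
    }
}

Because the bit count is stored explicitly, the up-to-7 padding bits in the last byte are harmless when the sequence is reconstructed.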

How do you use the different Number Types in Objective-C

So I am trying to do a few things with numbers in Objective-C and realize there is a plethora of options, and I am just bewildered as to which type to use for my app.
So here are the types:
NSNumber (which is a class)
NSDecimal (which is a struct)
NSDecimalNumber (which is a class)
float/double (which are primitive types)
So essentially what I need to do is take an NSString, which represents decimal-based hours (10.4 would be 10 hours and (4/10)*60 = 24 minutes), and convert it into:
a string representation D H:M (this needs division, multiplication and basic arithmetic)
a Number type to store for easy calculations later (this will mostly be converting to and from NSTimeIntervals and doing subtractions)
Oh, and I need to be able to take an absolute value of these as well.
It appears that the hard part is actually transitioning between the types.
To me this is a very trivial problem, so I'm not sure if it's because it's getting late or because Objective-C numerical types suck, but I could use a hand.
Use primitive types (double, CGFloat, NSInteger) for typical arithmetic and when you need to store a number as an instance variable that's going to be used primarily for arithmetic in other places. You can use the C math functions (fabs(), pow(), etc.) as needed. NSTimeInterval is a typedef for double, so you can interchange the two.
Use NSNumber when you need to store a number as an object, for example if you're creating an NSArray of numbers. Some parts of Cocoa, like Core Data or key-value coding, deal more with NSNumber than with primitive types, so you may find yourself using NSNumber more than usual in those situations. For example, if you write [timeKeepersArray valueForKeyPath:@"@sum.seconds"] you'll get back an NSNumber, so you may find it easier just to keep that variable instead of converting it to a primitive.
Since it's a small amount of extra code to convert between NSNumber and primitive types, usually your application will end up favoring one or the other depending on what you're doing with numbers.
Oh, and NSDecimal and NSDecimalNumber? Don't worry too much about them; they only come up when you need really precise decimal operations, such as when you're storing financial data.
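To make that concrete for the original question, here is a sketch of the decimal-hours conversion using plain double arithmetic; it is written in Swift for brevity, but the same operations map directly onto double/NSTimeInterval in Objective-C, and all names and the example value are illustrative:

import Foundation

let decimalHours = abs(Double("10.4") ?? 0)          // parse the string, take the absolute value
let totalMinutes = Int((decimalHours * 60).rounded())

let days    = totalMinutes / (24 * 60)
let hours   = (totalMinutes % (24 * 60)) / 60
let minutes = totalMinutes % 60
let display = "\(days) \(hours):\(String(format: "%02d", minutes))"   // "0 10:24"

// For later calculations, keep the value as a TimeInterval (a Double) in
// seconds, and box it in NSNumber only when an object is required
// (NSArray, key-value coding, Core Data).
let seconds: TimeInterval = decimalHours * 3600
let boxed = NSNumber(value: seconds)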