'Int?' is not identical to 'UInt8' in Swift

func add(numbers: Int...) -> Int? {
    var total: Int?
    for i in numbers {
        total += i // >> 'Int?' is not identical to 'UInt8'
    }
    return total
}
The values in numbers are Ints, so why can't they be added to total?

The error you're getting might be a little deceiving. The underlying issue here is that you declared a variable total of type Int?, but never actually assigned it a value. Since total hasn't been given an integer value, it doesn't make sense to try to increment it by i.
You can fix this by initializing the total variable to zero. It's also worth noting that your total and return type do not need to be optionals here, since you're taking a variable number of non-optional Ints as input, meaning that your inputs will always have a total when added together.
If you're dead set on keeping optionals involved here, the following code will work.
func add(numbers: Int...) -> Int? {
    var total: Int? = 0
    for i in numbers {
        total! += i
    }
    return total
}
Notice that the variable total is being forcibly unwrapped on incrementation. This will crash if total is ever nil (i.e. not given an initial value). But this really isn't necessary. As I explained above, there is no need to use optionals here at all. Instead, I recommend implementing the function like this.
func add(numbers: Int...) -> Int {
    var total = 0
    for i in numbers {
        total += i
    }
    return total
}
If you're interested in alternatives to your function that are perhaps more Swifty, you can rewrite your entire function like this:
func add(numbers: Int...) -> Int {
    return reduce(numbers, 0, +)
}
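Note that the free-function form of reduce shown above is Swift 1-era syntax; as a hedged aside, in Swift 3 and later the same one-liner is written with the instance-method form of reduce:
func add(numbers: Int...) -> Int {
    return numbers.reduce(0, +)
}
add(numbers: 1, 2, 3) // 6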

Your code won't work anyway: you're trying to add a number to a variable you haven't initialized yet, so the return value will always be nil. Change this:
var total: Int?
to this:
var total: Int? = 0
You could also remove the ? entirely, because it isn't necessary here: you only take non-optional parameters.
var total: Int = 0

total is not an Int, it is an Optional<Int> (Int?); you have to unwrap it for it to become an Int.
You can do this in two ways, faster or safer:
Faster: return total!
Safer:
if let total = total { return total }
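To put the safer option in context, here is a hedged sketch of unwrapping the Int? that the optional-returning add(numbers:) gives you at the call site (Swift 3 call syntax assumed):
// Unwrapping the optional result at the call site
if let sum = add(numbers: 1, 2, 3) {
    print("total is \(sum)") // total is 6
} else {
    print("no total") // only reachable if add ever returned nil
}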
Hope this helps, have fun with Swift!


I cannot understand a method's behaviour

func mymethod(getno: Int) -> (myval: Int, defaults: Int) {
    return (getno, 8)
}
print(mymethod(getno: 2))
The output for the above program is
(myval: 2, defaults: 8)
OK, I can understand the above program, but the method below is confusing to me:
func makeIncrementer() -> ((Int) -> Int) {
    func addOne(number: Int) -> Int {
        return 1 + number
    }
    return addOne
}
var increment = makeIncrementer()
increment(7)
I can't understand this line (var increment = makeIncrementer()). What's happening there?
Can somebody explain this to me briefly?
makeIncrementer returns a function that accepts an Int and returns an Int, so this
var increment = makeIncrementer()
makes the increment variable equal to
func addOne(number: Int) -> Int {
    return 1 + number
}
so increment(7) = addOne(7)
You may want to read the Functions chapter in the Swift book.
You are not alone. I found it hard to wrap my head around this when I first started as well :)
What you are seeing is "a function returning another function". Think of functions as black boxes that take stuff in and spit stuff out. Now imagine passing these around as if they were values like 10 or "Hello".
makeIncrementer returns a function. Which function? This:
func addOne(number: Int) -> Int {
    return 1 + number
}
See the line return addOne? That is not calling the function, but returning it!
So when you assign the return value of makeIncrementer to increment:
var increment = makeIncrementer()
increment now has the value of addOne. Calling increment would be the same as calling addOne. The above line is syntactically no different from:
var myString = getMyString()
The difference is that you are working with functions instead of strings.
Now you understand why increment(7) is 8.
If you want to burn your brain more, try looking up currying functions.
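If it helps, here is a hedged sketch in the spirit of the Swift book's makeIncrementer(forIncrement:) example, where the returned function also captures a value from its enclosing function (the body below is illustrative rather than the book's exact code):
// The returned function captures `amount` from the enclosing function
func makeIncrementer(forIncrement amount: Int) -> (Int) -> Int {
    func addAmount(number: Int) -> Int {
        return number + amount // `amount` lives on inside the returned function
    }
    return addAmount
}

let incrementByTen = makeIncrementer(forIncrement: 10)
incrementByTen(7) // 17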

Swift 3: How to check the type of a generic array

I declare a generic array
fileprivate var array: [T?]
I have a method average(), which should calculate the average if 'T' is Int or Float; otherwise it returns 0.
public func average() -> Float {
    var mean = 0
    if T is Int or Float {
        for index in (0..<array.count-1) {
            mean = mean + array[index]
        }
        mean = mean / array.count
    }
    return mean
}
Question: how do I check whether the array is holding Int or Float (the if T is Int or Float line in the code above)?
This is a job for protocols. The useful protocols for your problem are FloatingPoint, to handle floating-point types (like Float), and Integer, to handle signed integer types (like Int). These need slightly different implementations, so it's best to write each one separately. Doing this ensures that the method is only available for appropriate kinds of T (rather than for all possible types, and just returning 0 in the unsupported cases).
extension MyStruct where T: FloatingPoint {
    func average() -> T {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = T(array.count)
        return sum.divided(by: count)
    }
}
extension MyStruct where T: Integer {
    func average() -> Float {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = array.count
        return Float(sum.toIntMax()) / Float(count.toIntMax())
    }
}
EDIT: Following up a bit more on Caleb's comments below, you may be tempted to think it's ok to just convert integers into floats to generate their average. But this is not safe in general without careful consideration of your ranges. For example, consider the average of [Int.min, Int.max]. That's [-9223372036854775808, 9223372036854775807]. The average of that should be -0.5, and that's what's returned by my example above. However, if you convert everything to floats in order to sum it, you'll get 0, because Float cannot express Int.max precisely. I've seen this bite people in live code when they do not remember that for very large floats, x == x+1.
Float(Int.max) == Float(Int.max - 1) // true
You can use the type(of:) function introduced in Swift 3 in the following way:
let type = type(of: array)
print("type: \(type)") // if T is String, you will see Array<Optional<String>> here
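That tells you the type as a string; if you need to branch on it, a hedged alternative is to compare the generic parameter's metatype directly (the function name below is made up for illustration):
// Comparing T.self against concrete metatypes
func describeElementType<T>(of array: [T?]) -> String {
    if T.self == Int.self { return "Int" }
    if T.self == Float.self { return "Float" }
    return "\(T.self)" // fall back to the type's own description
}

describeElementType(of: [1, 2, nil]) // "Int"
describeElementType(of: [1.5 as Float, nil]) // "Float"
describeElementType(of: ["a", "b"]) // "String"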
You will need to iterate over your array and use "if let" (a conditional cast) to check the type of the values in the array. If they are Ints, handle them one way; if they are Floats, handle them another way.
// while iterating through your array, check each element to see if it is a Float or an Int
if let intValue = element as? Int {
    // element is an Int; do something with intValue
} else {
    // element is not an Int (check for Float in the same way)
}
Here's a high level description:
// ... inside a loop which allows indexing
if let ex = array[index] as? Int {
    // your code
    continue // go around the loop again, you're all done here
}
if let ex = array[index] as? Float {
    // other code
    continue // go around the loop again, you're all done here
}
// if you got here it isn't either of them
// code to handle that
// ... end of inside the loop
I can explain further if that isn't clear enough.
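To make that sketch concrete, here is a hedged, self-contained version written against [Any?] for illustration (the generic [T?] case reads the same way; the function name is made up):
// Concrete version of the loop sketched above, using [Any?] for illustration
func averageOfNumbers(_ array: [Any?]) -> Float {
    guard !array.isEmpty else { return 0 }
    var sum: Float = 0
    for index in 0..<array.count {
        if let intValue = array[index] as? Int {
            sum += Float(intValue)
            continue // handled as Int, go around again
        }
        if let floatValue = array[index] as? Float {
            sum += floatValue
            continue // handled as Float, go around again
        }
        // neither Int nor Float (or nil): ignore it
    }
    return sum / Float(array.count)
}

averageOfNumbers([1, 2.5 as Float, nil, "x"]) // 0.875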
This is probably the simplest way to do it:
var average: Float {
    let total = array.reduce(0.0) { (runningTotal, item) -> Float in
        if let itemAsFloat = item as? Float {
            return runningTotal + itemAsFloat
        }
        else if let itemAsInt = item as? Int {
            return runningTotal + Float(itemAsInt)
        }
        else {
            return runningTotal
        }
    }
    return total / Float(array.count)
}
Obviously it can be a function instead if you want, and depending on how you intend to use it you may need to tweak it.
*Note that it's possible to have both Int and Float values in an array of T, e.g. if T is Any.

Swift Generic "Numeric" Protocol that Validates Numeric Values

Swift 3.
Ultimately my functions need to receive UInt8 data types, but I'm never sure if the arguments I will receive from callers will be Int, Int64, UInt, Float, etc. I know they will be numeric types, I just don't know which flavor.
I could do:
func foo(value: Int) { }
func foo(value: Float) {}
func foo(value: UInt) {}
But that's crazy. So I thought I could do something like create a protocol
protocol ValidEncodable {
}
And then pass in types that conform:
func foo(value: ValidEncodable) { }
And then in that function I could get my values into the correct format
func foo(value: ValidEncodable) -> UInt8 {
    let correctedValue = min(max(floor(value), 0), 100)
    return UInt8(correctedValue)
}
I'm really struggling to figure out
1) How to create this ValidEncodable protocol that contains all the numeric types
2) How to do things like floor(value) when the value I get is an Int, without writing an overload for every possible numeric type (floor(x) is only available on floating-point types)
Ultimately I need the values to be UInt8 in the range of 0-100. The whole reason for this madness is that I'm parsing XML files to my own internal data structures and I want to bake in some validation to my values.
This can be done without a protocol, by making use of compiler checks, which greatly reduces the chances of bugs.
My recommendation is to use a partial function - i.e. a function that instead of taking an int, it takes an already validated value. Check this article for a more in-depth description of why partial functions are great.
You can build a Int0to100 struct, which has either a failable or throwable initializer (depending on taste):
struct Int0to100 {
    let value: UInt8
    init?(_ anInt: Int) {
        guard anInt >= 0 && anInt <= 100 else { return nil }
        value = UInt8(anInt)
    }
    init?(_ aFloat: Float) {
        let flooredValue = floor(aFloat)
        guard flooredValue >= 0 && flooredValue <= 100 else { return nil }
        value = UInt8(flooredValue)
    }
    // ... other initializers can be added the same way
}
and change foo to allow to be called with this argument instead:
func foo(value: Int0to100) {
    // do your stuff here, you know for sure, at compile time,
    // that the function can be called with a valid value only
}
This moves the responsibility of validating the integer value to the caller, but the validation resolves to checking an optional, which is easy and lets you handle an invalid number with minimal effort.
Another important aspect is that you explicitly declare the domain of the foo function, which improves the overall design of the code.
And not least, constraints enforced at compile time greatly reduce the potential for runtime issues.
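For illustration, a call site under this setup might look roughly like this (parsedValue is a made-up stand-in for whatever Int came out of the XML):
let parsedValue = 87 // hypothetical value parsed from the XML
if let validValue = Int0to100(parsedValue) {
    foo(value: validValue) // foo can only ever see a validated value
} else {
    // handle the out-of-range value here (skip it, log it, use a default, ...)
}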
If you know your incoming values will lie in 0..<256, then you can just construct a UInt8 and pass it to the function.
func foo(value: UInt8) -> UInt8 { return value }
let number = arbitraryNumber()
print(foo(value: UInt8(number)))
This will throw a runtime exception if the value is too large to fit in a byte, but otherwise will work. You could protect against this type of error by doing some bounds checking between the second and third lines.
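A hedged sketch of that bounds check (arbitraryNumber() is not defined in the answer, so a stand-in is used here):
func foo(value: UInt8) -> UInt8 { return value } // same foo as above

// Stand-in for the answer's arbitraryNumber()
func arbitraryNumber() -> Int { return 300 }

let number = arbitraryNumber()
if number >= 0 && number <= Int(UInt8.max) {
    print(foo(value: UInt8(number))) // safe: the conversion can no longer trap
} else {
    print("\(number) does not fit in a byte") // handled instead of crashing
}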

How to handle initial nil value for reduce functions

I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and using it.
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
    if let definiteMaxSofar = maxSofar {
        return max(definiteMaxSofar, newElement)
    }
    return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type ‘(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion when reading the code later, I explicitly label the accumulator as a and the element coming in from the sequence as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This way is clearer than $0 and $1 when both the accumulator and the single element are used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
Second go at this after writing some wrong code: this will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use
toDoItems.reduce(nil as String?) { $0 == nil || count($1) > count($0!) ? $1 : $0 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil, and you also need to handle the optional accumulator $0 correctly (force-unwrapping it unconditionally would crash on the first element, while it is still nil).
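As an aside, in later Swift versions the nil-handling question can be sidestepped entirely, because max(by:) already returns an optional that is nil only for an empty array (the snippet below uses Swift 4-style String.count):
let toDoItems = ["buy milk", "walk the dog", "tidy up"]
let longest = toDoItems.max(by: { $0.count < $1.count })
// longest is Optional("walk the dog"); an empty array would give nil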

Swift: How to use sizeof?

In order to integrate with C API's while using Swift, I need to use the sizeof function. In C, this was easy. In Swift, I am in a labyrinth of type errors.
I have this code:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt)
The second line has the error "'NSNumber' is not a subtype of 'T.Type'". Why is this and how do I fix it?
Updated for Swift 3
Be careful that MemoryLayout<T>.size means something different than sizeof in C/Obj-C. You can read this old thread https://devforums.apple.com/message/1086617#1086617
Swift uses a generic type to make it explicit that the size is known at compile time.
To summarize, MemoryLayout<Type>.size is the space required for a single instance while MemoryLayout<Type>.stride is the distance between successive elements in a contiguous array. MemoryLayout<Type>.stride in Swift is the same as sizeof(type) in C/Obj-C.
To give a more concrete example:
struct Foo {
    let x: Int
    let y: Bool
}
MemoryLayout<Int>.size      // returns 8 on 64-bit
MemoryLayout<Bool>.size     // returns 1
MemoryLayout<Foo>.size      // returns 9
MemoryLayout<Foo>.stride    // returns 16 because of alignment requirements
MemoryLayout<Foo>.alignment // returns 8, addresses must be multiples of 8
Use sizeof as follows:
let size = sizeof(Int)
sizeof uses the type as the parameter.
If you want the size of the anInt variable you can pass the dynamicType field to sizeof.
Like so:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt.dynamicType)
Or more simply (pointed out by user102008):
var anInt: Int = 5
var anIntSize: Int = sizeofValue(anInt)
Swift 3 also has MemoryLayout.size(ofValue:), which takes a value instead of an explicit type.
Be aware that a generic function that in turn uses MemoryLayout<T> will give unexpected results if you, for example, pass it a reference of protocol type. This is because, as far as I know, the generic parameter is filled in from the static type information available at the call site, which is not apparent when looking at the function call. You would then get the size of the protocol's existential container, not of the current value.
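A minimal sketch of that pitfall (the Shape and Square names are made up; sizes shown are for a 64-bit platform):
protocol Shape {}
struct Square: Shape { var side: Double } // one 8-byte stored property

func genericSize<T>(of value: T) -> Int {
    return MemoryLayout<T>.size // T is fixed from the static type at the call site
}

let square = Square(side: 2)
let shape: Shape = square

genericSize(of: square) // 8  (T == Square)
genericSize(of: shape)  // 40 (T == Shape: the existential container, not the Square inside it)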
In Xcode 8 with Swift 3 beta 6 there is no sizeof() function. But if you want, you can define one for your needs. This new sizeof function works as expected with an array, which was not possible with the old builtin sizeof function.
let bb: UInt8 = 1
let dd: Double = 1.23456

func sizeof<T>(_: T.Type) -> Int {
    return MemoryLayout<T>.size
}
func sizeof<T>(_: T) -> Int {
    return MemoryLayout<T>.size
}
func sizeof<T>(_ value: [T]) -> Int {
    return MemoryLayout<T>.size * value.count
}

sizeof(UInt8.self)  // 1
sizeof(Bool.self)   // 1
sizeof(Double.self) // 8
sizeof(dd) // 8
sizeof(bb) // 1

var testArray: [Int32] = [1, 2, 3, 4]
var arrayLength = sizeof(testArray) // 16
You need all three versions of the sizeof function: to get the size of a variable, and to get the correct size of a data type and of an array.
If you only define the second function, then sizeof(UInt8.self) and sizeof(Bool.self) will result in "8". If you only define the first two functions, then sizeof(testArray) will result in "8".
Swift 4
From Xcode 9 onwards there is a property called .bitWidth; this provides another way of writing sizeof functions for instances and integer types:
func sizeof<T: FixedWidthInteger>(_ int: T) -> Int {
    return int.bitWidth / UInt8.bitWidth
}
func sizeof<T: FixedWidthInteger>(_ intType: T.Type) -> Int {
    return intType.bitWidth / UInt8.bitWidth
}
sizeof(UInt16.self) // 2
sizeof(20) // 8
But it would make more sense for consistency to replace sizeof: with .byteWidth:
extension FixedWidthInteger {
    var byteWidth: Int {
        return self.bitWidth / UInt8.bitWidth
    }
    static var byteWidth: Int {
        return Self.bitWidth / UInt8.bitWidth
    }
}
1.byteWidth // 8
UInt32.byteWidth // 4
It is easy to see why sizeof was considered ambiguous, but I'm not sure that burying it in MemoryLayout was the right thing to do. See the reasoning behind the move of sizeof to MemoryLayout here.