Is it possible to create a "Positive Number" type in Swift? - swift

Sorry if this is a stupid question, but I'm wondering if there's a way in Swift to create a type that exclusively holds numbers that are strictly greater than zero, and where the "positiveness" of the values is enforced at compile time.
For example, can I somehow write code like
func divide(x: PositiveNumber, y: PositiveNumber) -> Float {
    return x / y
}
such that
divide(1, 3)
works but
divide(1, 0)
won't compile?
The closest thing I could come up with was a struct with a single failable initializer, so that an instance either holds a positive value or the initializer returns nil:
struct PositiveNumber {
    let value: Float

    init?(value: Float) {
        if value > 0 {
            self.value = value
        } else {
            return nil
        }
    }
}

func / (left: PositiveNumber, right: PositiveNumber) -> Float {
    return left.value / right.value
}
func divide(x: PositiveNumber?, y: PositiveNumber?) -> Float? {
    if let x = x, y = y {
        return x / y
    }
    return nil
}
let x1 = PositiveNumber(value: 1)
let y1 = PositiveNumber(value: 3)
let x2 = PositiveNumber(value: -1)
let y2 = PositiveNumber(value: 0)
divide(x1, y: y1)! // .333
divide(x2, y: y2)! // runtime error
That's not terrible but we still have to deal with a lot of optional handling/unwrapping. I'm asking this question because I have many places in my code where I need to check that a value is not zero and I'm curious if there's a way to remove that code and let the compiler handle it. The struct-based solution requires pretty much the same amount of code.

Number Type built on Float
This gist contains a struct that conforms to pretty much everything Float conforms to. It is just a vanilla copy of Float; change it to your liking.
Have you considered a custom operator?
infix operator /+ { associativity left precedence 150 }

func /+ (lhs: Float, rhs: Float) -> Float? {
    guard rhs > 0 else {
        return nil
    }
    return lhs / rhs
}
let test = 2 /+ -1     // nil
let test2 = 2 /+ 1     // Optional(2.0)
let test3 = 2 /+ 1 + 2 // error: the Float? result must be unwrapped before adding 2
It doesn't really matter whether you return an optional, an enum value, or different protocols: you will have to handle the return value either way, and the compiler forces you to do so.
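For example, handling the optional result of /+ might look like this (a minimal sketch using the operator above; speed and time are made-up values):
let speed: Float = 120
let time: Float = 0

if let pace = speed /+ time {
    print("pace: \(pace)")
} else {
    print("the divisor was not strictly positive")
}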
Limited Number Type with just an operator to handle divisions:
You could change the math altogether and create a PositiveFloat type whose division operators return NaN whenever the divisor is not strictly positive.
public struct PositiveFloat {
    public var value: Float

    /// Create an instance initialized to zero.
    public init() {
        self.value = 0
    }

    /// Create an instance initialized to `value`.
    public init(_ value: Float) {
        self.value = value
    }

    public init(_ value: PositiveFloat) {
        self.value = value.value
    }
}
extension Float {
    public var positive: PositiveFloat {
        return PositiveFloat(self)
    }
}
public func / (lhs: Float, rhs: PositiveFloat) -> Float {
    if rhs.value > 0 {
        return lhs / rhs.value
    } else {
        return Float.NaN
    }
}

public func / (lhs: PositiveFloat, rhs: PositiveFloat) -> Float {
    if rhs.value > 0 {
        return lhs.value / rhs.value
    } else {
        return Float.NaN
    }
}
let testNormal: Float = 10
let testFloat: Float = -5

let test = testFloat / testNormal.positive
if test.isNaN {
    // do stuff
}

Related

When declaring static variable for conformance to AdditiveArithmetic, cannot call instance member from same type

I know this sounds crazy for a 10-year-old, but because S4TF doesn't work for me, I'm building my own neural network library in Swift. (I haven't gotten that far.) I'm creating a structure that conforms to AdditiveArithmetic. It also uses Philip Turner's Differentiable, but that's unimportant.
Anyway, when defining the zero variable, I call another variable dimen, defined in the same structure. This raises an error: instance member 'dimen' cannot be used on type 'Electron<T>'
note: the structure I am creating is going to be used to create a multi-dimensional array for neural networks.
Total code (stripped down to remove unimportant bits):
public struct Electron<T> where T: ExpressibleByFloatLiteral, T: AdditiveArithmetic {
    var energy: [[Int]: T] = [:]
    var dimen: [Int]

    public init(_ dim: [Int], with: ElectronInitializer) {
        self.dimen = dim
        self.energy = [:]
        var curlay = [Int](repeating: 0, count: dimen.count)
        curlay[curlay.count-1] = -1
        while true {
            var max: Int = -1
            for x in 0..<curlay.count {
                if curlay[curlay.count-1-x] == dimen[curlay.count-1-x]-1 {
                    max = curlay.count-1-x
                } else {
                    break
                }
            }
            if max == 0 {
                break
            } else if max != -1 {
                for n in max..<curlay.count {
                    curlay[n] = -1
                }
                curlay[max-1] += 1
            }
            curlay[curlay.count-1] += 1
            print(curlay)
            energy[curlay] = { () -> T in
                switch with {
                case ElectronInitializer.repeating(let value):
                    return value as! T
                case ElectronInitializer.random(let minimum, let maximum):
                    return Double.random(in: minimum..<maximum) as! T
                }
            }()
        }
    }

    subscript(_ coordinate: Int...) -> T {
        var convertList: [Int] = []
        for conversion in coordinate {
            convertList.append(conversion)
        }
        return self.energy[convertList]!
    }

    public mutating func setQuantum(_ replace: T, at: [Int]) {
        self.energy[at]! = replace
    }
}
extension Electron: AdditiveArithmetic {
    public static func - (lhs: Electron<T>, rhs: Electron<T>) -> Electron<T> where T: AdditiveArithmetic, T: ExpressibleByFloatLiteral {
        var output: Electron<T> = lhs
        for value in lhs.energy {
            output.energy[value.key] = output.energy[value.key]! - rhs.energy[value.key]!
        }
        return output
    }

    public static var zero: Electron<T> {
        return Electron.init(dimen, with: ElectronInitializer.repeating(0.0))
    }

    static prefix func + (x: Electron) -> Electron {
        return x
    }

    public static func + (lhs: Electron<T>, rhs: Electron<T>) -> Electron<T> where T: AdditiveArithmetic, T: ExpressibleByFloatLiteral {
        var output: Electron<T> = lhs
        for value in lhs.energy {
            output.energy[value.key] = output.energy[value.key]! + rhs.energy[value.key]!
        }
        return output
    }
}

public enum ElectronInitializer {
    case random(Double, Double)
    case repeating(Double)
}
Error:
NeuralNetwork.xcodeproj:59:30: error: instance member 'dimen' cannot be used on type 'Electron'
return Electron.init(dimen, with: ElectronInitializer.repeating(0.0))
I don't know what's happening, but thanks in advance. I'm new to Stack Overflow, so sorry if I did something wrong.
The root of the problem is that dimen is an instance property, while zero is a static property. In a static context, you don't have an instance from which to access dimen, and so the compiler gives you the error. static properties and methods are a lot like global variables and free-functions with respect to accessing instance properties and methods. You'd have to make an instance available somehow. For a static function, you could pass it in, but for a static computed property, you'd either have to store an instance in a stored static property, which isn't allowed for generics, or you'd have to store it in a global variable, which isn't good either, and would be tricky to make work for all the possible T.
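To see why, here is a minimal sketch with a hypothetical Box type (not from the question) that triggers the same diagnostic:
struct Box {
    var size: Int   // instance property

    static var empty: Box {
        // return Box(size: size)   // error: instance member 'size' cannot be used on type 'Box'
        return Box(size: 0)         // a static context has to supply its own values
    }
}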
There are ways to do what you need though. They all involve implementing some special behavior for a zero Electron rather than relying on access to an instance property in your static .zero implementation. I made some off-the-cuff suggestions in comments, which would work; however, I think a more elegant solution is to solve the problem by creating a custom type for energy, which would require very few changes to your existing code. Specifically you could make an Energy type nested in your Electron type:
internal struct Energy: Equatable, Sequence {
    public typealias Value = T
    public typealias Key = [Int]
    public typealias Element = (key: Key, value: Value)
    public typealias Storage = [Key: Value]
    public typealias Iterator = Storage.Iterator
    public typealias Keys = Storage.Keys
    public typealias Values = Storage.Values

    private var storage = Storage()

    public var keys: Keys { storage.keys }
    public var values: Values { storage.values }
    public var count: Int { storage.count }

    public init() { }

    public subscript(key: Key) -> Value? {
        get { storage.isEmpty ? .zero : storage[key] }
        set { storage[key] = newValue }
    }

    public func makeIterator() -> Iterator {
        storage.makeIterator()
    }
}
The idea here is that when energy's storage is empty, the subscript returns 0 for any key, which allows you to use it as a .zero value. I've made it internal because energy defaults to internal, and I've done a minimalist job of wrapping a Dictionary: mainly providing the subscript operator and conforming to Sequence, which is all the code you provided needs.
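As a quick illustration of that empty-storage behaviour (hypothetical usage from within the same module, assuming T is Double):
var energy = Electron<Double>.Energy()
energy[[0, 0]]           // Optional(0.0): empty storage answers every key with zero
energy[[0, 0]] = 1.5
energy[[0, 0]]           // Optional(1.5)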
The only changes needed in the rest of your code are to change the definition of energy
var energy: Energy
Then to set it in your initializer, bypassing the bulk of your init when dim is empty:
public init(_ dim: [Int], with: ElectronInitializer) {
    self.dimen = dim
    self.energy = Energy() // <- Initialize `energy`

    // Empty dim indicates a zero electron which doesn't need the
    // rest of the initialization
    guard dim.count > 0 else { return }

    var curlay = [Int](repeating: 0, count: dimen.count)
    curlay[curlay.count-1] = -1
    while true {
        var max: Int = -1
        for x in 0..<curlay.count {
            if curlay[curlay.count-1-x] == dimen[curlay.count-1-x]-1 {
                max = curlay.count-1-x
            } else {
                break
            }
        }
        if max == 0 {
            break
        } else if max != -1 {
            for n in max..<curlay.count {
                curlay[n] = -1
            }
            curlay[max-1] += 1
        }
        curlay[curlay.count-1] += 1
        print(curlay)
        energy[curlay] = { () -> T in
            switch with {
            case ElectronInitializer.repeating(let value):
                return value as! T
            case ElectronInitializer.random(let minimum, let maximum):
                return Double.random(in: minimum..<maximum) as! T
            }
        }()
    }
}
And then of course, to change how you create it in your zero property
public static var zero: Electron<T> {
    return Electron.init([], with: ElectronInitializer.repeating(0.0))
}
ElectronInitializer isn't actually used in this case. It's just a required parameter of your existing init. This suggests an opportunity to refactor initialization, so you could have an init() that creates a zero Electron in addition to your existing init(_:with:).
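Such a refactor might be sketched like this (my suggestion, not code from the question):
// Inside Electron<T>:
public init() {
    self.dimen = []
    self.energy = Energy()   // empty storage, so every key reads as zero
}

// zero then no longer needs a dummy initializer argument:
public static var zero: Electron<T> {
    return Electron()
}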

Swift Extension and 'Element' explicit initialization

I wonder if there is any way to explicitly initialize an Element object in a Swift extension?
For example, I would like to do this, but I get non-nominal type 'Element' does not support explicit initialization:
extension Array where Element: Numeric {
    func findDuplicate() -> Int {
        guard self.count > 0 else { return -1 }
        let sum = self.reduce(0, +)
        let expectedSum = Element((self.count - 1) * self.count / 2)
        return sum - expectedSum
    }
}
Of course, if I remove the forced Element conversion in the expectedSum assignment and let the compiler infer Int, I get an error because sum (Element) and expectedSum (Int) cannot be mixed in the subtraction.
I can easily make my extension work with where Element == Int, but of course it is no longer generic that way.
Any hints?
Conversion of an integer to a Numeric type is done with init?(exactly:). Taking Leo's suggestions into account:
extension Array where Element: Numeric {
    func findDuplicate() -> Element? {
        guard !isEmpty else { return nil }
        let sum = self.reduce(0, +)
        guard let expectedSum = Element(exactly: (count - 1) * count / 2) else { return nil }
        return sum - expectedSum
    }
}
On the other hand, this seems to be a programming task that
is specifically about integers, and then it might make more sense
to restrict the element type to BinaryInteger (and use Int
for the intermediate calculations to avoid overflow):
extension Array where Element: BinaryInteger {
    func findDuplicate() -> Element? {
        guard !isEmpty else { return nil }
        let sum = Int(reduce(0, +))
        let expectedSum = (count - 1) * count / 2
        return Element(sum - expectedSum)
    }
}
or even Element == Int.
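For completeness, the Int-only variant might read (same idea, just no longer generic):
extension Array where Element == Int {
    func findDuplicate() -> Int? {
        guard !isEmpty else { return nil }
        let expectedSum = (count - 1) * count / 2
        return reduce(0, +) - expectedSum
    }
}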

Cannot convert type of Int to expected value [MyCustomType]

Xcode 8.3.3 is giving me this Swift 3 error on this line
values2[index] = nextValue(currentValue)
Cannot convert value of type 'Int' to expected argument type 'Card'
Here's my code:
//
// Card.swift
// match
//
// Created by quantum on 05/09/2017.
// Copyright © 2017 Quantum Productions. All rights reserved.
//
import UIKit

class Card: NSObject {
    var quantity = 0
    var fill = 0
    var shape = 0
    var color = 0

    override var description: String {
        return "Q" + String(quantity) + "/F" + String(fill) + "/S" + String(shape) + "/C" + String(color)
    }

    override init() {
        super.init()
    }

    static func properties() -> [String] {
        return ["quantity", "fill", "shape", "color"]
    }

    static func isMatch(cards: [Card]) -> Bool {
        for property in self.properties() {
            var sum = 0
            for card in cards {
                sum = sum + (card.value(forKey: property) as! Int)
            }
            if !([3, 6, 9, 7].contains(sum)) {
                return false
            }
        }
        return true
    }

    static func deck(_ values: [Int], _ index: Int, _ max: Int, _ acc: [Card]) -> [Card] {
        let currentValue = values[index]
        var values2 = values
        if currentValue >= max {
            if index == 0 {
                return acc
            }
            values2[index] = 0
            values2[index-1] = values2[index-1] + 1
            return deck(values, index - 1, max, acc)
        } else {
            var acc2 = acc
            let card = Card()
            for (index, element) in self.properties().enumerated() {
                card.setValue(values[index], forKey: element)
            }
            acc2.append(Card())
            values2[index] = nextValue(Card())
            return deck(values2, index, max, acc2)
        }
    }

    func nextValue(_ v: Int) -> Int {
        if (v == 0) {
            return 1
        } else if (v == 1) {
            return 2
        }
        return 4
    }

    static func deck() -> [Card] {
        return deck([1,1,1,1], 4, 3, [Card]())
    }
}
This is inside my Card class.
Strangely, if I try (this is wrong, I'm testing the compiler error)
values2[index] = nextValue(Card())
I get the error Cannot assign the value of type (Int) -> Int to type 'Int'.
Swift thinks my Card is an Int? I'm confused as to what's happening.
I expected to be able to call nextValue with the variable currentValue, which should be an Int.
It's a bad error message from the compiler.
Your problem is that deck is declared static, but you're trying to call nextValue which is not declared static. This means that nextValue implicitly takes a hidden argument, self, but deck isn't providing it.
If you add static to the func nextValue declaration, it will work like you expect. (You'll get an error on the line referring to self.properties instead, but you'll be closer.)
To make this work properly, you probably want all these functions to be non-static instead. Just think about how this code gets called initially (i.e. how you get your first instance of Card).
A static method cannot call an instance method: the idea makes no sense, as there is no instance. Thus your reference to nextValue is impossible. That is why the line is problematic. How can a static method deck call an instance method nextValue?
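For illustration, the first suggestion amounts to adding static to the declaration, leaving the body exactly as in the question:
// Making nextValue static removes the hidden `self` parameter,
// so the static deck(_:_:_:_:) method can call it directly.
static func nextValue(_ v: Int) -> Int {
    if v == 0 {
        return 1
    } else if v == 1 {
        return 2
    }
    return 4
}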

Propagate an optional through a function (or Init) in Swift

Does anyone have a (better) way to do this?
Let's say I have an optional Float:
let f: Float? = 2
Now I want to cast it to a Double
let d = Double(f) //fail
This will obviously fail but is there a way to chain the optional through the function like you can with calculated variables? What I am doing now is this:
extension Float {
    var double: Double { return Double(self) }
}
let d: Double? = f?.double
But I really do not like putting a cast as a calculated variable.
Another option I have considered using is this:
public func optionalize<A, B>(_ λ: @escaping (A) -> B) -> (A?) -> B? {
    return { a in
        guard let a = a else { return nil }
        return λ(a)
    }
}
let d: Double? = optionalize(Double.init)(f)
I realize I can guard the value of 'f' to unwrap it. However in many cases the optional value will be the parameter for a function that returns an optional. This leads to intermediate values in the guard. As seen in this example:
func foo(_ a: String?) throws -> Float {
    guard
        let a = a,
        let intermediate = Float(a)
    else { throw .something }
    return intermediate
}
Here it is possible for the cast from String to Float to fail also.
At least with a calculated variable this foo function is a bit cleaner
extension String {
    var float: Float? { return Float(self) }
}
func foo(_ a: String?) throws -> Float {
    guard
        let a = a?.float
    else { throw .something }
    return a
}
I do not want to rewrite optional versions of frequent inits.
Any ideas will be much appreciated. Thanks!
You can simply use Optional's map(_:) method, which will return the wrapped value with a given transform applied if it's non-nil, else it will return nil.
let f : Float? = 2
// If f is non-nil, return the result from the wrapped value passed to Double(_:),
// else return nil.
let d = f.map { Double($0) }
Which, as you point out in the comments below, can also be said as:
let d = f.map(Double.init)
This is because map(_:) expects a transformation function of type (Float) -> Double in this case, and Double's init(_: Float) initialiser is exactly such a function.
If the transform also returns an optional (such as when converting a String to an Int), you can use flatMap(_:), which simply propagates a nil transform result back to the caller:
let s : String? = "3"
// If s is non-nil, return the result from the wrapped value being passed to the Int(_:)
// initialiser. If s is nil, or Int($0) returns nil, return nil.
let i = s.flatMap { Int($0) }
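Applied to the throwing foo from the question, this might look like the following sketch (keeping the question's placeholder .something error):
func foo(_ a: String?) throws -> Float {
    guard let value = a.flatMap(Float.init) else { throw .something }
    return value
}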

Swift Custom Variable Type

For working with complex numbers, this is how I've been doing it:
import Foundation

class Complex {
    var real: Float
    var imaginary: Float

    init(re: Float, im: Float) {
        self.imaginary = im
        self.real = re
    }

    func abs() -> Float {
        return sqrtf(powf(self.real, 2) + powf(self.imaginary, 2))
    }

    func string() -> String {
        if (ceilf(self.real) == self.real) && (ceilf(self.imaginary) == self.imaginary) {
            return "\(Int(self.real))+\(Int(self.imaginary))i"
        }
        return "\(self.real)+\(self.imaginary)i"
    }

    func arg() -> Float {
        return atan2f(self.imaginary, self.real)
    }
}
var someComplex = Complex(re: 2, im: 3)
var someComplexString = someComplex.string() //"2+3i"
var someComplexAbsolute = someComplex.abs() // 3.60...
var someComplexArgument = someComplex.arg() //0.98...
Now I was wondering if there is any way to define a custom type that would let me write it as someComplex: Complex = 3i, for example. Is it possible to create a new type "from the ground up"?
I'm slightly uncertain if this is what you're looking for, but based on your comment to the other answer
"Thanks, but what I meant was something like a custom type that would accept all floats and i for example",
you could create a wrapper that holds a single generic value, where the generic type of this value is constrained to some protocol; you then extend the types you wish to make wrappable so that they conform to that protocol. E.g., below, the types Int, Float and String are allowed to be wrapped by the wrapper:
protocol MyGenericTypes { }

extension Int: MyGenericTypes { }
extension Float: MyGenericTypes { }
extension String: MyGenericTypes { }

struct SomeWrapper<T: MyGenericTypes> {
    var value: T

    init(_ value: T) {
        self.value = value
    }
}
let myInt = 42
let myFloat: Float = 4.2
let myString = "forty-two"
let wrappedInt = SomeWrapper(myInt) // T inferred as "Int"
let wrappedFloat = SomeWrapper(myFloat) // T inferred as "Float"
let wrappedString = SomeWrapper(myString) // T inferred as "String"
print(wrappedInt.dynamicType) // SomeWrapper<Int>
print(wrappedFloat.dynamicType) // SomeWrapper<Float>
print(wrappedString.dynamicType) // SomeWrapper<String>
You can naturally write generic functions that take SomeWrapper<T> arguments, constraining T in the same fashion as in the struct definition above:
func foo<T: MyGenericTypes>(bar: SomeWrapper<T>) -> () {
    print(bar.value)
}

foo(wrappedInt)    // 42
foo(wrappedFloat)  // 4.2
foo(wrappedString) // forty-two