I'm back again with what is likely a simple issue, but it has me stumped.
I've written a very small, very basic piece of code in an Xcode playground.
My code simply calls a function 10 times, printing the output each time.
var start = 0
var x = 0
var answer = 2 * x
func spin() {
    print(answer)
}
while start < 10 {
    spin()
    x++
    start++
}
Now for my issue: my code properly increments the 'start' variable, running and printing 10 times. However, it prints out a list of 0s. For some reason the 'x' variable isn't incrementing.
I've consulted the few ebooks I have for Swift, as well as the documentation, and as far as I can see my code should work.
Any ideas?
P.S. As per the documentation I have also tried ++x, to no avail.
Edit
Updated, working code thanks to the answers below:
var start = 0
var x = 0
var answer = 2 * x
func spin() {
    print("The variable is", x, "and doubled it is", answer)
}
while start <= 10 {
    spin()
    x++
    start++
    answer = 2 * x
}
You assigned 2 * x to answer just once, at the beginning of the program, when x == 0, and answer keeps that initial value throughout the program. That's how value types work in Swift, as in almost any other language.
If you want answer to always be 2 times x, write it like this:
var start = 0
var x = 0
var answer = 2 * x
func spin() {
    print(answer)
}
while start < 10 {
    spin()
    x++    // note: ++ was removed in Swift 3; write x += 1 there
    start++
    answer = 2 * x  // recompute answer after x changes
}
And thanks to Leo Dabus's answer, you may also define a computed property that calculates 2 * x each time you read answer. This way, answer becomes read-only and you cannot assign other values to it, and every read of answer performs the 2 * x calculation.
var start = 0
var x = 0
var answer: Int {
    return 2 * x
}
func spin() {
    print(answer)
}
while start < 10 {
    spin()
    x++
    start++
}
What you need is a read-only computed property. Try it like this:
var answer: Int { return 2 * x }
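A quick check (hypothetical values, not from the original answer) that answer now tracks x:
x = 3
print(answer) // 6
x = 7
print(answer) // 14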
Related
I am trying to implement this code, which I got from an Apple WWDC video. However, the video is from 2016 and I think the syntax has changed. How do I call sizeof(Float)? This produces an error.
func render(buffer: AudioBuffer) {
    let nFrames = Int(buffer.mDataByteSize) / sizeof(Float) // this line produces the error
    var ptr = UnsafeMutableRawPointer(buffer.mData)
    var j = self.counter
    let cycleLength = self.sampleRate / self.frequency
    let halfCycleLength = cycleLength / 2
    let amp = self.amplitude, minusAmp = -amp
    for _ in 0..<nFrames {
        if j < halfCycleLength {
            ptr.pointee = amp
        } else {
            ptr.pointee = minusAmp
        }
        ptr = ptr.successor()
        j += 1.0
        if j > cycleLength {
            j -= cycleLength // wrap back to the start of the cycle
        }
    }
    self.counter = j
}
The sizeof() function is no longer supported in Swift.
As Leo Dabus said in his comment, you want MemoryLayout<Type>.size, or in your case, MemoryLayout<Float>.size.
Note that this tells you the abstract size of an item of that type. However, due to alignment padding, you should not assume that a struct's size is simply the sum of the sizes of its members. Also, consider the device the code runs on: on a 64-bit device, Int is 8 bytes; on a 32-bit device, it's 4 bytes.
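For example, a minimal sketch (the Sample struct is hypothetical; the values shown are for a 64-bit platform):
struct Sample {
    var flag: Bool    // 1 byte
    var value: Double // 8 bytes, 8-byte aligned
}
MemoryLayout<Float>.size    // 4
MemoryLayout<Sample>.size   // 16, not 9: padding is inserted before 'value'
MemoryLayout<Sample>.stride // 16: spacing between consecutive array elements
MemoryLayout<Int>.size      // 8 on 64-bit devices, 4 on 32-bit devices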
See the article on MemoryLayout at SwiftDoc.org for more information.
I am trying to solve the task
Using a standard for-in loop add all odd numbers less than or equal to 100 to the oddNumbers array
I tried the following:
var oddNumbers = [Int]()
var numbt = 0
for newNumt in 0..<100 {
    var newNumt = numbt + 1; numbt += 2; oddNumbers.append(newNumt)
}
print(oddNumbers)
This results in:
1, 3, 5, 7, 9, ..., 199
My question is: Why does it print numbers above 100 although I specify the range between 0 and <100?
You're making a mistake:
for newNumt in 0..<100 {
    var newNumt = numbt + 1; numbt += 2; oddNumbers.append(newNumt)
}
The newNumt declared inside the body shadows the loop variable declared in the for statement and takes its value from numbt, which keeps growing past 100. So the loop appends the first 100 odd numbers, not the odd numbers between 0 and 100.
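A minimal illustration of the shadowing (hypothetical values):
for i in 0..<3 {
    var i = 100 // a new variable that shadows the loop's i
    i += 1
    print(i)    // prints 101 three times; the loop variable is unaffected
}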
If you need to use a for loop:
var odds = [Int]()
for number in 0...100 where number % 2 == 1 {
    odds.append(number)
}
Alternatively:
let odds = (0...100).filter { $0 % 2 == 1 }
will keep the odd numbers among the values from 0 to 100. For an even better implementation, use the stride function:
let odds = Array(stride(from: 1, to: 100, by: 2))
If you want all the odd numbers between 0 and 100 you can write
let oddNums = (0...100).filter { $0 % 2 == 1 }
or
let oddNums = Array(stride(from: 1, to: 100, by: 2))
Why does it print numbers above 100 although I specify the range between 0 and <100?
Look again at your code:
for newNumt in 0..<100 {
    var newNumt = numbt + 1; numbt += 2; oddNumbers.append(newNumt)
}
The newNumt used inside the loop is different from the loop variable; the var newNumt declares a new variable whose scope is the body of the loop, so it gets created and destroyed each time through the loop. Meanwhile, numbt is declared outside the loop, so it keeps being incremented by 2 each time through the loop.
I see that this is an old question, but none of the answers specifically address looping over odd numbers, so I'll add another. The stride() function that Luca Angeletti pointed to is the right way to go, but you can use it directly in a for loop like this:
for oddNumber in stride(from: 1, to: 100, by: 2) {
    // your code here
}
stride(from:to:by:) creates a sequence of any Strideable type starting at the from: value, going up to but not including the to: value, in increments of the by: value; so in this case oddNumber starts at 1 and takes the values 3, 5, 7, 9...99. If you want to include the upper limit, there's a stride(from:through:by:) form where the through: value is included.
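For instance, a small check of the two forms (hypothetical values):
Array(stride(from: 1, to: 9, by: 2))      // [1, 3, 5, 7]
Array(stride(from: 1, through: 9, by: 2)) // [1, 3, 5, 7, 9]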
If you want all the odd numbers between 0 and 100 you can write
for i in 1...100 {
    if i % 2 == 1 {
        continue // skip odd i; for the remaining even i, i - 1 is odd
    }
    print(i - 1)
}
For Swift 4.2
extension Collection {
    func everyOther(_ body: (Element) -> Void) {
        let start = self.startIndex
        let end = self.endIndex
        var iter = start
        while iter != end {
            body(self[iter])
            let next = index(after: iter)
            if next == end { break }
            iter = index(after: next)
        }
    }
}
And then you can use it like this:
import UIKit

class OddsEvent: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        (1...900000).everyOther { print($0) } // odds
        (0...100000).everyOther { print($0) } // evens
    }
}
This is more efficient than
let oddNums = (0...100).filter { $0 % 2 == 1 }
or
let oddNums = Array(stride(from: 1, to: 100, by: 2))
because it visits the elements directly instead of materializing an intermediate Array, so it supports larger collections.
Source: https://developer.apple.com/videos/play/wwdc2018/229/
I wrote a tiny Swift programme that adds a number to the previous number until it reaches infinity. However, infinity is reached BEFORE the Double maximum is reached.
Double limit is 1.79769313486232e+308
Distance to limit is 4.90703911098917e+307
Yet, 8.07763763215622e+307 + 1.3069892237634e+308 reached infinity
Why is this? (I answered this below.)
Run it for yourselves:
import Foundation
import Darwin
var current: Double = 1
var previous: Double = 0
var register: Double = 0
var infinity = Double.infinity
var isInfinite = infinity.isInfinite
var n = 1
while current < infinity {
    register = current
    current = previous + register
    print("\(n): \(current)")
    guard current != infinity else { break }
    previous = register
    n += 1
}
print("\n")
print("Double limit is \(DBL_MAX)")
print("Distance to limit is \(DBL_MAX - register)")
print("Yet, \(previous) + \(register) reached infinity")
After adding:
print((DBL_MAX - register) - previous)
to the end of my code, I realised my error was in not fully grasping e+ notation.
Thus, the above prints out:
-3.17059852116705e+307
showing that the Double maximum is overshot in the final calculation, which is why infinity is reached.
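In other words, a quick check (Double.greatestFiniteMagnitude is the modern name for DBL_MAX):
let a = 8.07763763215622e+307
let b = 1.3069892237634e+308
// a + b ≈ 2.11e+308, which exceeds Double.greatestFiniteMagnitude
// (about 1.7977e+308), so the sum rounds up to infinity:
(a + b).isInfinite // true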
Well, I've done my learning in public now!
The code below shows two ways of building a spreadsheet:
by using:
str = str + "\(number) ; "
or
str.append("\(number)")
Both are really slow because, I think, they discard both strings and make a third one that is the concatenation of the first two.
Now, if I repeat this operation hundreds of thousands of times to grow a spreadsheet, that makes a lot of allocations.
For instance, the code below takes 11 seconds to execute on my MacBook Pro 2016:
let start = Date()
var str = ""
for i in 0 ..< 86400 {
    for j in 0 ..< 80 {
        // Use either one, no difference
        // str = str + "\(Double(j) * 1.23456789086756 + Double(i)) ; "
        str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ")
    }
    str.append("\n")
}
let duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration)
How can I solve this issue without having to convert the doubles to strings myself? I have been stuck on this for 3 days; my programming skills are pretty limited, as you can probably see from the code above.
I tried:
var str = NSMutableString(capacity: 86400 * 80 * 20)
but the compiler tells me:
Variable 'str' was never mutated; consider changing to 'let' constant
despite the
str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ")
So apparently, calling append does not mutate the string...
I tried writing it to an array, and the limiting factor seems to be the conversion of a Double to a String.
The code below takes 13 seconds or so on my MacBook Air.
Doing this:
arr[i][j] = "1.23456789086756"
drops the execution time to 2 seconds, so about 11 seconds are spent converting Double to String. You might be able to shave off some time by writing your own conversion routine, but that seems to be the limiting factor. I tried using memory streams and that seemed even slower.
var start = Date()
var arr = Array(repeating: Array(repeating: "1.23456789086756", count: 80), count: 86400)
var duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration) // 0.007
start = Date()
var a = 1.23456789086756
for i in 0 ..< 86400 {
    for j in 0 ..< 80 {
        arr[i][j] = "\(a)" // "1.23456789086756" // String(a)
    }
}
duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration) // 13.46, or 2.3 with the constant string
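If the repeated reallocation is the concern, one option is to reserve capacity up front in a native Swift String (a hedged sketch: reserveCapacity(_:) is a standard String method, but the capacity figure is a rough guess, and the Double-to-String conversion cost remains):
var out = ""
out.reserveCapacity(86400 * 80 * 20) // rough upper bound on the final size
for i in 0 ..< 86400 {
    for j in 0 ..< 80 {
        out += "\(Double(j) * 1.23456789086756 + Double(i)) ; "
    }
    out += "\n"
}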
This is a LeetCode question. I wrote 4 answers for different versions of that question, but when I tried to use bit manipulation, I got the wrong output. Since no one on LeetCode could answer my question, and I can't find any Swift doc about this, I thought I would ask here.
The question asks for the majority element (appearing more than n/2 times) in a given array. The following code works in other languages like Java, so I think it might be a general question about Swift.
func majorityElement(nums: [Int]) -> Int {
    var bit = Array(count: 32, repeatedValue: 0) // Swift 2 syntax; Array(repeating:count:) in Swift 3+
    for num in nums {
        for i in 0..<32 {
            if (num >> (31 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<32 {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (31 - i))
    }
    return ret
}
When the input is [-2147483648], the output is 2147483648, but in Java the same approach successfully outputs the right negative number.
The Swift docs say:
Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Well, the maximum is 2,147,483,647, and the input is 1 larger in magnitude than that number. When I ran pow(2.0, 31.0) in a playground, it showed 2147483648. I got confused. What's wrong with my code, or what did I miss about Swift's Int?
A Java int is a 32-bit integer. The Swift Int is 32-bit or 64-bit depending on the platform. In particular, it is 64-bit on all OS X platforms where Swift is available.
Your code handles only the lower 32 bits of the given integers, so that
-2147483648 = 0xffffffff80000000
becomes
2147483648 = 0x0000000080000000
To solve the problem, you can either change the function to take 32-bit integers as arguments:
func majorityElement(nums: [Int32]) -> Int32 { ... }
or make it work with arbitrarily sized integers by computing the actual bit width and using that instead of the constant 32:
func majorityElement(nums: [Int]) -> Int {
    let numBits = sizeof(Int) * 8 // Swift 2; use MemoryLayout<Int>.size in Swift 3+
    var bit = Array(count: numBits, repeatedValue: 0)
    for num in nums {
        for i in 0..<numBits {
            if (num >> (numBits - 1 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<numBits {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (numBits - 1 - i))
    }
    return ret
}
A more Swifty way would be to use map() and reduce():
func majorityElement(nums: [Int]) -> Int {
    let numBits = sizeof(Int) * 8
    let bitCounts = (0 ..< numBits).map { i in
        nums.reduce(0) { $0 + ($1 >> i) & 1 }
    }
    let major = (0 ..< numBits).reduce(0) {
        $0 | (bitCounts[$1] > nums.count / 2 ? 1 << $1 : 0)
    }
    return major
}
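As a quick sanity check (a hypothetical call, written in the Swift 2 calling convention the code above assumes), the problematic input now round-trips correctly:
majorityElement([-2147483648]) // -2147483648 on a 64-bit platform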