Really fast addition of Strings in Swift

The code below shows two ways of building a spreadsheet-style string, by using either:
str = str + "\(number) ; "
or
str.append("\(number)");
Both are really slow because, I think, they discard both strings and allocate a third one that is the concatenation of the first two.
Now, if I repeat this operation hundreds of thousands of times to grow a spreadsheet... that makes a lot of allocations.
For instance, the code below takes 11 seconds to execute on my MacBook Pro 2016:
let start = Date()
var str = "";
for i in 0 ..< 86400
{
for j in 0 ..< 80
{
// Use either one, no difference
// str = str + "\(Double(j) * 1.23456789086756 + Double(i)) ; "
str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ");
}
str.append("\n")
}
let duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate;
print(duration);
How can I solve this issue without having to convert the doubles to strings myself? I have been stuck on this for 3 days... my programming skills are pretty limited, as you can probably see from the code above...
I tried:
var str = NSMutableString(capacity: 86400*80*20);
but the compiler tells me:
Variable 'str' was never mutated; consider changing to 'let' constant
despite the
str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ");
So apparently, calling append does not mutate the string...

I tried writing it to an array, and the limiting factor seems to be the conversion of a Double to a String.
The code below takes 13 seconds or so on my MacBook Air.
Doing this instead:
arr[i][j] = "1.23456789086756"
drops the execution time to 2 seconds, so roughly 11 seconds are spent converting Double to String. You might be able to shave off some time by writing your own conversion routine, but that conversion seems to be the limiting factor; a rough sketch of such a routine follows the timing code below. I tried using memory streams and that seems even slower.
var start = Date()
var arr = Array(repeating: Array(repeating: "1.23456789086756", count: 80), count: 86400 )
var duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate;
print(duration); //0.007
start = Date()
var a = 1.23456789086756
for i in 0 ..< 86400
{
for j in 0 ..< 80
{
arr[i][j] = "\(a)" // "1.23456789086756" //String(a)
}
}
duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate;
print(duration); //13.46 or 2.3 with the string
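For what it's worth, here is a minimal sketch of what such a hand-rolled conversion routine could look like. The helper name fixedString is made up for illustration, I have not benchmarked it against interpolation, and it assumes the scaled value fits in Int64 (which holds for the ranges used above):
import Foundation

// Hypothetical helper (not from the answer above): fixed-precision Double formatting
// using integer math instead of the general-purpose "\(value)" conversion.
func fixedString(_ value: Double, decimals: Int = 6) -> String {
    let scale = pow(10.0, Double(decimals))
    let scaled = Int64((value * scale).rounded())
    let sign = scaled < 0 ? "-" : ""
    let magnitude = scaled.magnitude
    let whole = magnitude / UInt64(scale)
    let frac = magnitude % UInt64(scale)
    // Left-pad the fractional part with zeros to the requested width.
    var fracDigits = String(frac)
    while fracDigits.count < decimals { fracDigits = "0" + fracDigits }
    return sign + String(whole) + "." + fracDigits
}

// Example: fixedString(1.23456789086756) == "1.234568"
Whether this actually beats "\(value)" in the loop above would need measuring.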

Related


How do I convert a string to an integer in JavaScript?
The simplest way would be to use the native Number function:
var x = Number("1000")
If that doesn't work for you, then there are the parseInt, unary plus, parseFloat with floor, and Math.round methods.
parseInt()
var x = parseInt("1000", 10); // You want to use radix 10
// So you get a decimal number even with a leading 0 and an old browser (IE8, Firefox 20, Chrome 22 and older)
Unary plus
If your string is already in the form of an integer:
var x = +"1000";
floor()
If your string is or might be a float and you want an integer:
var x = Math.floor("1000.01"); // floor() automatically converts string to number
Or, if you're going to be using Math.floor several times:
var floor = Math.floor;
var x = floor("1000.01");
parseFloat()
If you're the type who forgets to put the radix in when you call parseInt, you can use parseFloat and round it however you like. Here I use floor.
var floor = Math.floor;
var x = floor(parseFloat("1000.01"));
round()
Interestingly, Math.round (like Math.floor) will do a string to number conversion, so if you want the number rounded (or if you have an integer in the string), this is a great way, maybe my favorite:
var round = Math.round;
var x = round("1000"); // Equivalent to round("1000", 0)
Try parseInt function:
var number = parseInt("10");
But there is a problem. If you try to convert "010" using the parseInt function, it may be detected as an octal number and return 8. So you need to specify a radix (from 2 to 36), in this case base 10.
parseInt(string, radix)
Example:
var result = parseInt("010", 10) == 10; // Returns true
var result = parseInt("010") == 10; // Returns false
Note that parseInt ignores bad data after parsing anything valid.
This GUID will parse as 51:
var result = parseInt('51e3daf6-b521-446a-9f5b-a1bb4d8bac36', 10) == 51; // Returns true
There are two main ways to convert a string to a number in JavaScript. One way is to parse it and the other way is to change its type to a Number. All of the tricks in the other answers (e.g., unary plus) involve implicitly coercing the type of the string to a number. You can also do the same thing explicitly with the Number function.
Parsing
var parsed = parseInt("97", 10);
parseInt and parseFloat are the two functions used for parsing strings to numbers. Parsing will stop silently if it hits a character it doesn't recognise, which can be useful for parsing strings like "92px", but it's also somewhat dangerous, since it won't give you any kind of error on bad input; instead you'll get back NaN unless the string starts with a number. Whitespace at the beginning of the string is ignored. Here's an example of it doing something different from what you want, and giving no indication that anything went wrong:
var widgetsSold = parseInt("97,800", 10); // widgetsSold is now 97
It's good practice to always specify the radix as the second argument. In older browsers, if the string started with a 0, it would be interpreted as octal if the radix wasn't specified, which took a lot of people by surprise. The behaviour for hexadecimal is triggered by having the string start with 0x if no radix is specified, e.g., 0xff. The standard actually changed with ECMAScript 5, so modern browsers no longer trigger octal when there's a leading 0 if no radix has been specified. parseInt understands radixes up to base 36, in which case both upper and lower case letters are treated as equivalent.
Changing the Type of a String to a Number
All of the other tricks mentioned above that don't use parseInt involve implicitly coercing the string into a number. I prefer to do this explicitly:
var cast = Number("97");
This has different behavior from the parse methods (although it still ignores whitespace). It's more strict: if it doesn't understand the whole of the string, it returns NaN, so you can't use it for strings like 97px. Since you want a primitive number rather than a Number wrapper object, make sure you don't put new in front of the Number function.
Obviously, converting to a Number gives you a value that might be a float rather than an integer, so if you want an integer, you need to modify it. There are a few ways of doing this:
var rounded = Math.floor(Number("97.654")); // other options are Math.ceil, Math.round
var fixed = Number("97.654").toFixed(0); // rounded rather than truncated
var bitwised = Number("97.654")|0; // do not use for large numbers
Any bitwise operator (here I've done a bitwise or, but you could also do double negation as in an earlier answer or a bit shift) will convert the value to a 32-bit integer, and most of them will convert to a signed integer. Note that this will not do what you want for large integers. If the integer cannot be represented in 32 bits, it will wrap.
~~"3000000000.654" === -1294967296
// This is the same as
Number("3000000000.654")|0
"3000000000.654" >>> 0 === 3000000000 // unsigned right shift gives you an extra bit
"300000000000.654" >>> 0 === 3647256576 // but still fails with larger numbers
To work correctly with larger numbers, you should use the rounding methods
Math.floor("3000000000.654") === 3000000000
// This is the same as
Math.floor(Number("3000000000.654"))
Bear in mind that coercion understands exponential notation and Infinity, so Number("2e2") is 200 rather than NaN, while parseInt does not treat it that way (parseInt("2e2", 10) gives 2).
Custom
It's unlikely that either of these methods does exactly what you want. For example, usually I would want an error thrown if parsing fails, and I don't need support for Infinity, exponentials or leading whitespace. Depending on your use case, sometimes it makes sense to write a custom conversion function.
Always check that the output of Number or one of the parse methods is the sort of number you expect. You will almost certainly want to use isNaN to make sure the number is not NaN (usually the only way you find out that the parse failed).
parseInt() and + are different
parseInt("10.3456") // returns 10
+"10.3456" // returns 10.3456
Fastest
var x = "1000"*1;
Test
Here is a little comparison of speed (macOS only)... :)
For Chrome, 'plus' and 'mul' are fastest (>700,000,000 ops/sec), and 'Math.floor' is slowest. For Firefox, 'plus' is slowest (!) and 'mul' is fastest (>900,000,000 ops/sec). In Safari, 'parseInt' is fastest and 'number' is slowest (but the results are quite similar, >13,000,000 and <31,000,000). So for casting a string to an int, Safari is more than 10x slower than the other browsers. So the winner is 'mul' :)
You can run it in your browser via this link:
https://jsperf.com/js-cast-str-to-number/1
I also tested var x = ~~"1000";. On Chrome and Safari, it is a little bit slower than var x = "1000"*1 (<1%), and on Firefox it is a little bit faster (<1%).
I use this way of converting string to number:
var str = "25"; // String
var number = str*1; // Number
So, when multiplying by 1, the value does not change, but JavaScript automatically returns a number.
But as shown below, this should only be used if you are sure that str is a number (or can be represented as one); otherwise it will return NaN, not a number.
You can create simple function to use, e.g.,
function toNumber(str) {
return str*1;
}
Try parseInt.
var number = parseInt("10", 10); //number will have value of 10.
I love this trick:
~~"2.123"; //2
~~"5"; //5
The double bitwise negative drops off anything after the decimal point AND converts it to a number format. I've been told it's slightly faster than calling functions and whatnot, but I'm not entirely convinced.
Another method I just saw (in a question about the JavaScript >>> operator, which is a zero-fill right shift) shows that shifting a number by 0 with this operator converts the number to a uint32, which is nice if you also want it unsigned. Again, this converts to an unsigned integer, which can lead to strange behavior if you use a signed number.
"-2.123" >>> 0; // 4294967294
"2.123" >>> 0; // 2
"-5" >>> 0; // 4294967291
"5" >>> 0; // 5
In JavaScript, you can do the following:
ParseInt
parseInt("10.5") // Returns 10
Multiplying with 1
var s = "10";
s = s*1; // Returns 10
Using the unary operator (+)
var s = "10";
s = +s; // Returns 10
Using a bitwise operator
(Note: It starts to break after 2140000000. Example: ~~"2150000000" = -2144967296)
var s = "10.5";
s = ~~s; // Returns 10
Using Math.floor() or Math.ceil()
var s = "10";
s = Math.floor(s) || Math.ceil(s); // Returns 10
Please see the example below. It will help answer your question.
Example                 Result
parseInt("4")           4
parseInt("5aaa")        5
parseInt("4.33333")     4
parseInt("aaa")         NaN (means "Not a Number")
The parseInt function returns only the integer part that is present, not the rest of the string.
Beware if you use parseInt to convert a float in scientific notation!
For example:
parseInt("5.6e-14")
will result in
5
instead of
0
Also as a side note: MooTools has the function toInt() which is used on any native string (or float (or integer)).
"2".toInt() // 2
"2px".toInt() // 2
(2).toInt() // 2
We can use +(stringOfNumber) instead of using parseInt(stringOfNumber).
Example: +("21") returns an int of 21, just like parseInt("21").
We can use this unary "+" operator for parsing floats too...
To convert a String into Integer, I recommend using parseFloat and not parseInt. Here's why:
Using parseFloat:
parseFloat('2.34cms') //Output: 2.34
parseFloat('12.5') //Output: 12.5
parseFloat('012.3') //Output: 12.3
Using parseInt:
parseInt('2.34cms') //Output: 2
parseInt('12.5') //Output: 12
parseInt('012.3') //Output: 12
So, as you may have noticed, parseInt discards the values after the decimal point, whereas parseFloat lets you work with floating-point numbers, which makes it more suitable if you want to retain the values after the decimal point. Use parseInt if and only if you are sure that you want the integer value.
There are many ways in JavaScript to convert a string to a number value... All are simple and handy. Choose the one that works for you:
var num = Number("999.5"); //999.5
var num = parseInt("999.5", 10); //999
var num = parseFloat("999.5"); //999.5
var num = +"999.5"; //999.5
Also, any Math operation converts them to number, for example...
var num = "999.5" / 1; //999.5
var num = "999.5" * 1; //999.5
var num = "999.5" - 1 + 1; //999.5
var num = "999.5" - 0; //999.5
var num = Math.floor("999.5"); //999
var num = ~~"999.5"; //999
My preferred way is using the + sign, which is an elegant way to convert a string to a number in JavaScript.
Try str - 0 to convert string to number.
> str = '0'
> str - 0
0
> str = '123'
> str - 0
123
> str = '-12'
> str - 0
-12
> str = 'asdf'
> str - 0
NaN
> str = '12.34'
> str - 0
12.34
Here are two links to compare the performance of several ways to convert string to int
https://jsperf.com/number-vs-parseint-vs-plus
http://phrogz.net/js/string_to_number.html
Here is the easiest solution
let myNumber = "123" | 0;
An even easier solution
let myNumber = +"123";
In my opinion, no answer covers all edge cases, as parsing a float should result in an error.
function parseInteger(value) {
if(value === '') return NaN;
const number = Number(value);
return Number.isInteger(number) ? number : NaN;
}
parseInteger("4") // 4
parseInteger("5aaa") // NaN
parseInteger("4.33333") // NaN
parseInteger("aaa"); // NaN
The easiest way would be to use + like this
const strTen = "10"
const numTen = +strTen // string to number conversion
console.log(typeof strTen) // string
console.log(typeof numTen) // number
I actually needed to "save" a string as an integer, for a binding between C and JavaScript, so I convert the string into an integer value:
/*
Examples:
int2str( str2int("test") ) == "test" // true
int2str( str2int("t€st") ) // "t¬st", because "€".charCodeAt(0) is 8364, will be AND'ed with 0xff
Limitations:
maximum 4 characters, so it fits into an integer
*/
function str2int(the_str) {
var ret = 0;
var len = the_str.length;
if (len >= 1) ret += (the_str.charCodeAt(0) & 0xff) << 0;
if (len >= 2) ret += (the_str.charCodeAt(1) & 0xff) << 8;
if (len >= 3) ret += (the_str.charCodeAt(2) & 0xff) << 16;
if (len >= 4) ret += (the_str.charCodeAt(3) & 0xff) << 24;
return ret;
}
function int2str(the_int) {
var tmp = [
(the_int & 0x000000ff) >> 0,
(the_int & 0x0000ff00) >> 8,
(the_int & 0x00ff0000) >> 16,
(the_int & 0xff000000) >> 24
];
var ret = "";
for (var i=0; i<4; i++) {
if (tmp[i] == 0)
break;
ret += String.fromCharCode(tmp[i]);
}
return ret;
}
String to Number in JavaScript:
Unary + (most recommended)
+numStr is easy to use and has better performance compared with others
Supports both integers and decimals
console.log(+'123.45') // => 123.45
Some other options:
Parsing Strings:
parseInt(numStr) for integers
parseFloat(numStr) for both integers and decimals
console.log(parseInt('123.456')) // => 123
console.log(parseFloat('123')) // => 123
JavaScript Functions
Math functions like round(numStr), floor(numStr), ceil(numStr) for integers
Number(numStr) for both integers and decimals
console.log(Math.floor('123')) // => 123
console.log(Math.round('123.456')) // => 123
console.log(Math.ceil('123.454')) // => 124
console.log(Number('123.123')) // => 123.123
Arithmetic Operators
The unary +numStr, along with identity operations such as numStr-0, 1*numStr, numStr*1, and numStr/1
All support both integers and decimals
Be cautious about numStr+0. It returns a string.
console.log(+'123') // => 123
console.log('002'-0) // => 2
console.log(1*'5') // => 5
console.log('7.7'*1) // => 7.7
console.log('3.3'/1) // => 3.3
console.log('123.123'+0, typeof ('123.123' + 0)) // => 123.1230 string
Bitwise Operators
Double tilde (~~numStr) or left shift by 0 (numStr<<0)
Supports only integers, but not decimals
console.log(~~'123') // => 123
console.log('0123'<<0) // => 123
console.log(~~'123.123') // => 123
console.log('123.123'<<0) // => 123
function parseIntSmarter(str) {
// parseInt is bad because it returns 22 for "22thisendsintext"
// Number() returns NaN if the string ends in non-numbers, but it returns 0 for empty or whitespace strings.
return isNaN(Number(str)) ? NaN : parseInt(str, 10);
}
You can use plus.
For example:
var personAge = '24';
var personAge1 = (+personAge)
Then you can see the new variable's type with typeof personAge1, which is number.
Summing the digits multiplied by their respective powers of ten:
i.e., 123 = 100 + 20 + 3 = 1*100 + 2*10 + 3*1 = 1*(10^2) + 2*(10^1) + 3*(10^0)
function atoi(array) {
// Use exp as (length - i), other option would be
// to reverse the array.
// Multiply a[i] * 10^(exp) and sum
let sum = 0;
for (let i = 0; i < array.length; i++) {
let exp = array.length - (i+1);
let value = array[i] * Math.pow(10, exp);
sum += value;
}
return sum;
}
The safest way to ensure you get a valid integer:
let integer = (parseInt(value, 10) || 0);
Examples:
// Example 1 - Invalid value:
let value = null;
let integer = (parseInt(value, 10) || 0);
// => integer = 0
// Example 2 - Valid value:
let value = "1230.42";
let integer = (parseInt(value, 10) || 0);
// => integer = 1230
// Example 3 - Invalid value:
let value = () => { return 412 };
let integer = (parseInt(value, 10) || 0);
// => integer = 0
Another option is to double XOR the value with itself:
var i = 12.34;
console.log('i = ' + i);
console.log('i ⊕ i ⊕ i = ' + (i ^ i ^ i));
This will output:
i = 12.34
i ⊕ i ⊕ i = 12
I only added one plus (+) before the string, and that was the solution!
+"052254" // 52254
Number()
Number(" 200.12 ") // Returns 200.12
Number("200.12") // Returns 200.12
Number("200") // Returns 200
parseInt()
parseInt(" 200.12 ") // Return 200
parseInt("200.12") // Return 200
parseInt("200") // Return 200
parseInt("Text information") // Returns NaN
parseFloat()
It will return the first number
parseFloat("200 400") // Returns 200
parseFloat("200") // Returns 200
parseFloat("Text information") // Returns NaN
parseFloat("200.10") // Return 200.10
Math.floor()
Rounds a number down to the largest integer less than or equal to it
Math.floor(" 200.12 ") // Return 200
Math.floor("200.12") // Return 200
Math.floor("200") // Return 200
function doSth(){
var a = document.getElementById('input').value;
document.getElementById('number').innerHTML = toNumber(a) + 1;
}
function toNumber(str){
return +str;
}
<input id="input" type="text">
<input onclick="doSth()" type="submit">
<span id="number"></span>
This (probably) isn't the best solution for parsing an integer, but if you need to "extract" one, for example:
"1a2b3c" === 123
"198some text2hello world!30" === 198230
// ...
this would work (only for integers):
var str = '3a9b0c3d2e9f8g'
function extractInteger(str) {
var result = 0;
var factor = 1
for (var i = str.length; i > 0; i--) {
if (!isNaN(str[i - 1])) {
result += parseInt(str[i - 1]) * factor
factor *= 10
}
}
return result
}
console.log(extractInteger(str))
Of course, this would also work for parsing an integer, but would be slower than other methods.
You could also parse integers with this method and return NaN if the string isn't a number, but I don't see why you'd want to since this relies on parseInt internally and parseInt is probably faster.
var str = '3a9b0c3d2e9f8g'
function extractInteger(str) {
var result = 0;
var factor = 1
for (var i = str.length; i > 0; i--) {
if (isNaN(str[i - 1])) return NaN
result += parseInt(str[i - 1]) * factor
factor *= 10
}
return result
}
console.log(extractInteger(str))

Difficulty getting readLine() to work as desired on HackerRank

I'm attempting to submit the HackerRank Day 6 Challenge for 30 Days of Code.
I'm able to complete the task without issue in an Xcode Playground; however, HackerRank's site says there is no output from my method. I encountered an issue yesterday due to browser flakiness, but cleaning caches, switching from Safari to Chrome, etc. don't seem to resolve the issue I'm encountering here. I think my problem lies in inputString.
Task
Given a string, S, of length N that is indexed from 0 to N-1, print its even-indexed and odd-indexed characters as 2 space-separated strings on a single line (see the Sample below for more detail).
Input Format
The first line contains an integer, T (the number of test cases).
Each of the subsequent T lines contains a String, S.
Constraints
1 <= T <= 10
2 <= length of S < 10,000
Output Format
For each String S_j (where 0 <= j <= T-1), print S_j's even-indexed characters, followed by a space, followed by S_j's odd-indexed characters.
This is the code I'm submitting:
import Foundation
let inputString = readLine()!
func tweakString(string: String) {
// split string into an array of lines based on char set
var lineArray = string.components(separatedBy: .newlines)
// extract the number of test cases
let testCases = Int(lineArray[0])
// remove the first line containing the number of unit tests
lineArray.remove(at: 0)
/*
Satisfy constraints specified in the task
*/
guard lineArray.count >= 1 && lineArray.count <= 10 && testCases == lineArray.count else { return }
for line in lineArray {
switch line.characters.count {
// to match constraint specified in the task
case 2...10000:
let characterArray = Array(line.characters)
let evenCharacters = characterArray.enumerated().filter({$0.0 % 2 == 0}).map({$0.1})
let oddCharacters = characterArray.enumerated().filter({$0.0 % 2 == 1}).map({$0.1})
print(String(evenCharacters) + " " + String(oddCharacters))
default:
break
}
}
}
tweakString(string: inputString)
I think my issue lies in the inputString. I'm taking it "as-is" and formatting it within my method. I've found solutions for Day 6, but I can't seem to find any current ones in Swift.
Thank you for reading. I welcome thoughts on how to get this thing to pass.
readLine() reads a single line from standard input, which
means that your inputString contains only the first line from
the input data. You have to call readLine() in a loop to get
the remaining input data.
So your program could look like this:
func tweakString(string: String) -> String {
// For a single input string, compute the output string according to the challenge rules ...
return result
}
let N = Int(readLine()!)! // Number of test cases
// For each test case:
for _ in 1...N {
let input = readLine()!
let output = tweakString(string: input)
print(output)
}
(The forced unwraps are acceptable here because the format of
the input data is documented in the challenge description.)
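For completeness, the body of tweakString can be filled in along the lines of the even/odd filtering already used in the question. This is just a sketch, assuming a Swift version in which a String can be turned directly into an array of Characters:
func tweakString(string: String) -> String {
    // Split into characters, then keep the even and odd offsets separately.
    let characters = Array(string)
    let even = characters.enumerated().filter { $0.offset % 2 == 0 }.map { $0.element }
    let odd = characters.enumerated().filter { $0.offset % 2 == 1 }.map { $0.element }
    return String(even) + " " + String(odd)
}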
Hi Adrian, you should call readLine() for every row. Here is an example answer for that challenge:
import Foundation
func letsReview(str:String){
var evenCharacters = ""
var oddCharacters = ""
var index = 0
for char in str.characters{
if index % 2 == 0 {
evenCharacters += String(char)
}
else{
oddCharacters += String(char)
}
index += 1
}
print (evenCharacters + " " + oddCharacters)
}
let rowCount = Int(readLine()!)!
for _ in 0..<rowCount {
letsReview(str: readLine()!)
}

Why does Double reach `Double.infinity` BEFORE Double Max is reached?

I wrote a tiny Swift programme to add a number to the previous number until it reached infinity. However, infinity is reached BEFORE the Double Maximum is reached.
Double limit is 1.79769313486232e+308
Distance to limit is 4.90703911098917e+307
Yet, 8.07763763215622e+307 + 1.3069892237634e+308 reached infinity
Why is this? (I answered this below.)
Run it for yourselves:
import Foundation
import Darwin
var current: Double = 1
var previous: Double = 0
var register: Double = 0
var infinity = Double.infinity
var isInfinite = infinity.isInfinite
var n = 1
while current < infinity {
register = current
current = previous + register
print("\(n): \(current)")
guard current != infinity else { break }
previous = register
n += 1
}
print("\n")
print("Double limit is \(DBL_MAX)")
print("Distance to limit is \(DBL_MAX - register)")
print("Yet, \(previous) + \(register) reached infinity")
After adding:
print((DBL_MAX - register) - previous)
to the end of my code, I realised my error was in not fully grasping e+ notation.
Thus, the above prints out:
-3.17059852116705e+307
showing that Double Max is overshot in the final calculation, which is why infinity is reached.
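To spell the arithmetic out, using the values printed above: 8.07763763215622e+307 + 1.3069892237634e+308 ≈ 2.11e+308, which is larger than DBL_MAX ≈ 1.797e+308, so the final addition overflows the representable range and is rounded to infinity.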
Well, I've done my learning in public now!

Learning swift, Issues with incrementing variables

I'm back again with what is likely a simple issue, however it's got me stumped.
I've written a very small, very basic piece of code in an Xcode playground.
My code simply runs a function 10 times in a loop, printing the output each time.
var start = 0
var x = 0
var answer = 2 * x
func spin() {
print(answer)
}
while start < 10 {
spin()
x++
start++
}
Now for my issue: it seems my code properly increments the 'start' variable, running and printing 10 times. However, it prints out a list of 0's. For some reason the 'x' variable isn't incrementing.
I've consulted the few ebooks I have for Swift, as well as the documentation, and as far as I can see my code should work.
Any ideas?
P.S. As per the documentation I have also tried ++x, to no avail.
edit
Updated, working code thanks to answers below:
var start = 0
var x = 0
var answer = 2 * x
func spin() {
print("The variable is", x, "and doubled it is", answer)
}
while start <= 10 {
spin()
x++
start++
answer = 2 * x
}
You have just assigned 2 * x to answer at the beginning of the program, when x == 0, and answer keeps that initial value throughout the program. That's how value types work in Swift, as well as in almost any other language.
If you wish answer to always be 2 times x, you should write it like this:
var start = 0
var x = 0
var answer = 2 * x
func spin() {
print(answer)
}
while start < 10 {
spin()
x++
start++
answer = 2 * x
}
And thanks to Leo Dabus's answer, you may also define a computed property to calculate the value of 2 * x each time you try to get the value of answer. In this way, answer becomes read-only and you cannot assign other values to it; each time you read answer, it performs the 2 * x calculation.
var start = 0
var x = 0
var answer: Int {
return 2 * x
}
func spin() {
print(answer)
}
while start < 10 {
spin()
x++
start++
}
What you need is a read only computed property. Try like this:
var answer: Int { return 2 * x }

Why can't I divide integers correctly within reduce in Swift?

I'm trying to get the average of an array of Ints using the following code:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 / numbers.count }
print(avg) // 1
Which is obviously incorrect. However, if I move the division outside the closure:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 } / numbers.count
print(avg) // 3
Bingo! I think I remember reading somewhere (can't recall if it was in relation to Swift, JavaScript or programming math in general) that this has something to do with the fact that dividing the sum by the length yields a float / double, e.g. (1 + 2) / 5 = 0.6, which will be rounded down within the sum to 0. However, I would expect ((1 + 2) + 3) / 5 = 1.2 to return 1, yet it too seems to return 0.
With doubles, the calculation works as expected whichever way it's calculated, as long as I convert the count integer to a Double:
let numbers = [1.0,2.0,3.0,4.0,5.0]
let avg = numbers.reduce(0) { return $0 + $1 / Double(numbers.count) }
print(avg) // 3
I think I understand the why (maybe not?). But I can't come up with a solid example to prove it.
Any help and / or explanation is very much appreciated. Thanks.
The division does not yield a double; you're doing integer division.
You're not getting ((1 + 2) + 3 etc.) / 5.
In the first case, you're getting (((((0 + (1/5 = 0)) + (2/5 = 0)) + (3/5 = 0)) + (4/5 = 0)) + (5/5 = 1)) = 0 + 0 + 0 + 0 + 0 + 1 = 1.
In the second case, you're getting ((((((0 + 1) + 2) + 3) + 4) + 5) / 5) = 15 / 5 = 3.
In the third case, the precision loss with doubles is much smaller than with integers, and you get something like (((((0 + (1/5.0 = 0.2)) + (2/5.0 = 0.4)) + (3/5.0 = 0.6)) + (4/5.0 = 0.8)) + (5/5.0 = 1.0)).
The problem is that what you are attempting with the first piece of code does not make sense mathematically.
The average of a sequence is the sum of the entire sequence divided by the number of elements.
reduce calls the closure for every element of the collection it is called on, so you end up dividing at every step instead of once at the end. A correct version is sketched below.
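As a minimal sketch of the right order of operations (sum first with reduce, divide once at the end, and convert to Double if a fractional average is wanted):
let numbers = [1, 2, 3, 4, 5]

// Integer average (truncated): sum everything, then divide once.
let intAvg = numbers.reduce(0, +) / numbers.count // 3

// Fractional average: convert before dividing so the division happens in Double.
let avg = Double(numbers.reduce(0, +)) / Double(numbers.count) // 3.0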
For people finding it hard to understand the original answer, consider:
let x = 4
let y = 3
let answer = x/y
You might expect the answer to be a Double, but no, it is an Int. To get an answer which is not a rounded-down Int, you must explicitly state the values to be Double. See below:
let doubleAnswer = Double(x)/Double(y)
Hope this helped.