Why does this conversion from Int to Double not work in a Swift playground? - swift

let f = 802
var df = Double(f / 4)
print(df)
The result is 200.0, but I expected 200.5.

Your expression creates a Double from the division of the integers 802 by 4, which is 200.
If you want floating-point division, you have to do the conversion before the division. Or, simpler, skip the conversion entirely and declare f as a Double:
let f = 802.0
let df = f / 4 // 200.5
It's good practice anyway to write numeric literals in the intended type. I would even write
let df = f / 4.0
The benefit is that the compiler complains if the types don't match.
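For example, if f accidentally ends up as an Int, the Double literal catches the mistake at compile time (a minimal sketch; the exact diagnostic wording varies by Swift version):
let f = 802            // inferred as Int
let a = f / 4          // compiles silently: integer division, a == 200
// let b = f / 4.0     // error: binary operator '/' cannot be applied
//                     // to operands of type 'Int' and 'Double'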

This is a common and easy mistake to make. When you want a floating-point result, you need to ensure that the operands of your arithmetic expressions are floating-point numbers rather than integers. This produces the expected result of 200.5:
var df = Double(f) / 4.0
(Edit: if your variable f really is going to be a hard-coded constant 802, I actually recommend vadian's solution of declaring f itself as Double rather than Int.)
A more detailed explanation:
Looking at the order of operations of var df = Double(f / 4):
The innermost expression, f / 4, is evaluated first. f and 4 are both integers, so this is calculated using integer division, which truncates (discards the fractional part), so 802 / 4 => 200.
Then the result 200 is used in the Double() conversion, thus the result of 200.0. Finally, the result is assigned to the newly-declared variable df, which Swift infers to have the type Double based on the expression to the right of the equals sign.
Compare this to var df = Double(f) / 4.0: the Double(f) is evaluated first, converting the integer 802 to a double value 802.0. Now the division is performed, and since both operands of the division sign are floating point, floating-point division is performed and you get the result 802.0 / 4.0 => 200.5. This result is a Double value, so the variable df is declared to be a Double and assigned the value 200.5.
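The two orders of evaluation, side by side:
let f = 802
print(Double(f / 4))    // 200.0: integer division runs first, then the conversion
print(Double(f) / 4.0)  // 200.5: the conversion runs first, then floating-point division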
Some other approaches that don't work (collected in the snippet after this list):
var df = f / 4: f and 4 are both integers, so integer division is performed automatically, and df is a variable of type Int with value 200
var df: Double = f / 4: trying to explicitly declare df as Double produces a compiler error. The right side of the equals sign is still an integer division, and Swift won't automatically convert from Int to Double; it wants you to decide explicitly how to convert
var df = f / 4.0: in some languages, this kind of expression would automatically convert f to a Double and thus perform the floating-point division you want. But again, Swift will not convert automatically and wants you to be explicit… which leads to my recommended solution, Double(f)/4.0
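All three in one place, with their outcomes (the error messages are paraphrased; exact wording varies by compiler version):
let f = 802
let a = f / 4             // compiles: integer division, a is an Int with value 200
// let b: Double = f / 4  // error: cannot convert value of type 'Int' to specified type 'Double'
// let c = f / 4.0        // error: binary operator '/' cannot be applied to operands of type 'Int' and 'Double'
let df = Double(f) / 4.0  // 200.5, the recommended fix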

In your example you are dividing integers and then converting the result to Double.
Fix:
let f = 802
var df = Double(f) / 4
print(df)

Related

Why is Swift inferring double values when using literals?

I understand that Swift is very strict about types and does not implicitly cast one type to another.
Why does this not generate an error and the output is an array of doubles?
let myDoubles = [Double.pi, 42]
But this does?
let fortyTwoInt = 42
let myDoubles = [Double.pi, fortyTwoInt]
Why does it implicitly cast 42 to 42.0 in the first example? And if it is not casting, what else is happening?
Simpler example:
let someFloat = 2.0 + 2
versus
let twoInt = 2
let someFloat = 2.0 + twoInt
Again, the latter one does not work.
"Cast" is not really the right word here. It interprets the character sequence "4" and "2" as an integer literal. Double conforms to ExpressibleByIntegerLiteral, so the Double can be constructed from it.
fortyTwoInt is not an integer literal. It's an Int. And Swift will not automatically convert between Int and Double.
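A minimal sketch of that mechanism, using a hypothetical Meters type (the type and its name are illustrative, not from the question):
struct Meters: ExpressibleByIntegerLiteral {
    let value: Double
    // The compiler calls this initializer at compile time whenever an
    // integer literal appears where a Meters value is expected
    init(integerLiteral value: Int) {
        self.value = Double(value)
    }
}

let m: Meters = 42     // fine: 42 is a literal, so init(integerLiteral:) runs
let n = 42             // n is now an Int, no longer a literal
// let m2: Meters = n  // error: Swift will not convert an Int to Meters
Double works the same way: in [Double.pi, 42] the array type is inferred as [Double], so the literal 42 is built as a Double from the start, while fortyTwoInt has already committed to the type Int.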

Swift type inference and basic addition

Pretty new to Swift and learning about data types.
let partNumber = 3.2
let wholeNumber = 2
partNumber + wholeNumber //Binary operator '+' cannot be applied to operands of type 'Double' and 'Int'
3.2 + 2 // outputs 5.2
I understand that partNumber is a Double and wholeNumber is an Int. What I don't understand is why the playground errors out when I attempt to add the two constants together. Adding to the confusion, the addition works when the values are not assigned to constants.
The + operator does not support adding a Double and an Int together in this way.
If you change your code to make sure wholeNumber is a Double, then it'll work:
let partNumber = 3.2
let wholeNumber: Double = 2
let result = partNumber + wholeNumber
This is all covered in the Swift book under Numeric Type Conversion.
Some relevant quotes from the subsection titled "Integer and Floating-Point Conversion":
Conversions between integer and floating-point numeric types must be made explicit
This is followed by an example similar to your code. Your code needs an explicit conversion:
let partNumber = 3.2
let wholeNumber = 2
partNumber + Double(wholeNumber)
and:
The rules for combining numeric constants and variables are different from the rules for numeric literals. The literal value 3 can be added directly to the literal value 0.14159, because number literals don’t have an explicit type in and of themselves. Their type is inferred only at the point that they’re evaluated by the compiler.
Which covers the second part of your question.
To add confusion the addition works when not assigned as a constant.
That doesn't "add to the confusion" at all. It's the answer. There is implicit coercion between numeric types for literals (what you call a "constant") but not for variables. It's as simple as that.
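The same rule in four lines (error wording paraphrased):
let a = 3.2 + 2            // fine: 2 is a literal, so it is inferred as the Double 2.0
let two = 2                // two now has the fixed type Int
// let b = 3.2 + two       // error: binary operator '+' cannot be applied to 'Double' and 'Int'
let c = 3.2 + Double(two)  // 5.2, via explicit conversion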

convert number string into float with specific precision (without getting rounding errors)

I have a vector of cells (say, of size 50x1, called tokens), each of which is a struct with fields x, f1, f2, which are strings representing numbers. For example, tokens{15} gives:
x: "-1.4343429"
f1: "15.7947111"
f2: "-5.8196158"
and I am trying to put those numbers into 3 vectors (each is also 50x1) whose type is float. So I create 3 vectors:
x = zeros(50,1,'single');
f1 = zeros(50,1,'single');
f2 = zeros(50,1,'single');
and that works fine (why wouldn't it?). But then when I try to populate those vectors (L is a for-loop index):
x(L)=tokens{L}.x;
(and likewise for f1 and f2)
I get :
The following error occurred converting from string to single:
Conversion to single from string is not possible.
Which I can understand; implicit conversion doesn't work for single. It does work if x, f1 and f2 are of type 50x1 double.
The reason I am doing it with floats is that the data I get comes from a C program which writes some floats into a file to be read by MATLAB. If I try to convert the values to doubles in the C program I get rounding errors...
So, after what I hope is a good question: how might I get the numbers in those strings at the right precision? (All the strings have the same number of decimal places: 7.)
The MCVE:
filedata = fopen('fname1.txt','rt');
%fname1.txt is created by a C program. I am quite sure that the problem isn't there.
scanned = textscan(filedata,'%s','Delimiter','\n');
raw = scanned{1};
stringValues = strings(50,1);
for K=1:length(raw)
    stringValues(K)=raw{K};
end
clear K %purely for convenience
regex = 'x=(?<x>[\-\.0-9]*),f1=(?<f1>[\-\.0-9]*),f2=(?<f2>[\-\.0-9]*)';
tokens = regexp(stringValues,regex,'names');
x = zeros(50,1,'single');
f1 = zeros(50,1,'single');
f2 = zeros(50,1,'single');
for L=1:length(tokens)
    x(L)=tokens{L}.x;
    f1(L)=tokens{L}.f1;
    f2(L)=tokens{L}.f2;
end
Use the function str2double before assigning into your arrays (and then cast the result to single if you want). Strings (char arrays) must be explicitly converted to numbers before they can be used as numbers.
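Applied to the loop from the question, that looks like this (a sketch; str2double returns a double, which single() then narrows to the precision the C program wrote):
for L=1:length(tokens)
    % str2double parses the text; single() narrows the result explicitly
    x(L)  = single(str2double(tokens{L}.x));
    f1(L) = single(str2double(tokens{L}.f1));
    f2(L) = single(str2double(tokens{L}.f2));
end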

java double to BigDecimal conversion adding extra trailing zero

double d =0.0000001;
BigDecimal d1 = BigDecimal.valueOf(d);
System.out.println(d1.toPlainString());
This prints 0.00000010, but I am expecting 0.0000001.
First, to get a precise BigDecimal value, avoid using a float or double to initialize it. The conversion from decimal text to a floating-point type is by definition not guaranteed to be exact, and the conversion from such a type to BigDecimal, while exact, may not yield the result you expected. Here, BigDecimal.valueOf(double) is documented to use the double's canonical string form from Double.toString, which renders 0.0000001 as "1.0E-7"; parsed back, that is the unscaled value 10 at scale 8, which is exactly where the trailing zero comes from.
If you can avoid using a double or float for initialization, then do something like this instead:
BigDecimal d = new BigDecimal("0.0000001");
This will already give you the desired output, because the conversion from (decimal) text to BigDecimal is precise.
But if you really must convert from a double, then try:
double d = 0.0000001;
BigDecimal d1 = BigDecimal.valueOf(d);
System.out.println(d1.stripTrailingZeros().toPlainString());
This will strip (remove) any trailing zeroes.
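A small demonstration of where the zero comes from (BigDecimal.valueOf(double) is documented to go through Double.toString):
import java.math.BigDecimal;

public class TrailingZeroDemo {
    public static void main(String[] args) {
        double d = 0.0000001;
        // Double.toString renders this value in scientific notation
        System.out.println(Double.toString(d));  // 1.0E-7
        BigDecimal d1 = BigDecimal.valueOf(d);
        // "1.0E-7" parses to unscaled value 10 at scale 8, hence the zero
        System.out.println(d1.unscaledValue() + ", scale " + d1.scale()); // 10, scale 8
        System.out.println(d1.toPlainString());                      // 0.00000010
        System.out.println(d1.stripTrailingZeros().toPlainString()); // 0.0000001
    }
}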
You can use a DecimalFormat to format your BigDecimal.
double d = 0.0000001;
BigDecimal d1 = BigDecimal.valueOf(d);
DecimalFormat df = new DecimalFormat("#0.0000000");
System.out.println(df.format(d1));

Swift float multiplication error

This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types and/or when the expected type differs from the type of the expression's result.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the literal 2, although usually considered an integer, is automatically inferred to be a Float by the compiler here, so an explicit conversion is not required in that case.
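A self-contained sketch of the fixed loop (the getElement stub and the Int columnCount are assumptions standing in for the question's matrix type):
import Foundation

// Stub standing in for the question's getElement(row:column:)
func getElement(row: Int, column: Int) -> Float {
    return Float(row + column)
}

let columnCount = 3  // assumed Int here, unlike the question's Float
for j in 0...columnCount {
    let element = getElement(row: 1, column: j)
    // Float(j) makes the second argument a Float; the literal 2 is then
    // inferred as Float too, so no conversion is needed for it
    let multiplier = powf(-1, Float(j) + 2) * element
    print(multiplier)
}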
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.