Inputting decimals in an iPad calculator with Objective-C - iPhone

I'm a complete novice at Objective-C/iPad development, but I am trying to build a simple four-function calculator on the iPad.
My problem is trying to display a number with a decimal in it, e.g. 5.273 or .0021.
My entire calculator uses float, but when doing
[resultfield setText:[NSString stringWithFormat:@"%lf", display]];
the output (say the 1 button was pressed) would be 1.000000. I know that if I change the format to @"%.0lf", all the extra zeroes disappear and the output would be 1.
However, that doesn't help me when trying to input a decimal number, or when doing division. If I divide 10/3 the result would be displayed as 3 rather than 3.333333333.
Any ideas on how to keep my numbers as float but not display trailing zeroes?

Check out the NSNumberFormatter class. It will let you set rounding and truncation of your decimals.
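For example, here is a minimal sketch of the idea in Swift (the same Foundation class is exposed there as NumberFormatter; the NSNumberFormatter calls in Objective-C are analogous):

import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.minimumFractionDigits = 0 // drop trailing zeroes entirely
formatter.maximumFractionDigits = 9 // but keep up to nine decimals

formatter.string(from: NSNumber(value: 1.0))        // "1"
formatter.string(from: NSNumber(value: 10.0 / 3.0)) // "3.333333333"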

Related

How to only show a certain number of digits after the decimal point for a Double in Swift

I am using doubles to log hours for my app. My problem is that I only want to record the whole number and the first digit after the decimal place; otherwise the number sometimes comes out like x.9000000000001 when I only need x.9.
I have tried rounding the double value, but it still shows this weird tail of extra zeros.
Is there any way to get the double to show only the first digit after the decimal place?
Thanks
The easiest way to achieve rounding to the first decimal place is to simply do the following:
let x = 4.9000000001
let roundedX = Double(round(x * 10) / 10) // roundedX = 4.9
roundedX will be a Double representing x rounded to the first decimal. To get 2 decimal places, just multiply and divide by 100 instead of 10.
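If you need different precisions in different places, the same multiply-and-divide trick generalises into a small helper. This is a sketch; the rounded(_:toPlaces:) name is my own invention, not a standard API:

import Foundation

// Round a Double to a given number of decimal places.
func rounded(_ value: Double, toPlaces places: Int) -> Double {
    let factor = pow(10.0, Double(places)) // 10 for 1 place, 100 for 2, ...
    return (value * factor).rounded() / factor
}

rounded(4.9000000001, toPlaces: 1) // 4.9
rounded(1.23456789, toPlaces: 2)   // 1.23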

Why did MATLAB delete my decimals?

Let's say I create some number A, of the order 10^4:
A = 81472.368639;
disp(A)
8.1472e+04
That wasn't what I wanted. Where are my decimals? There should be six more of them. Checking the variable editor shows me the same rounded value.
Again, I lost my decimals. How do I keep these for further calculations?
Scientific notation, or why you didn't lose any decimals
You didn't lose any decimals; this is just MATLAB's way of displaying large numbers. MATLAB rounds the display of numbers, both in the command window and in the variable editor, to one digit before the dot and four after it, using scientific notation. Scientific notation is the Xe+y notation, where X is some number and y is an integer. It means X times 10 to the power of y, which you can visualise as "shift the dot y places to the right" (or to the left if y is negative).
Force MATLAB to show you all your decimals
Now that we know what MATLAB does, can we force it to show us our number? Of course: there are several options for that, and the easiest is setting a longer format. The two most used for displaying long numbers are format long and format longG, whose difference becomes apparent when we use them:
format long
A
A =
8.147236863900000e+04
format longG
A
A =
81472.368639
format long displays all decimals (up to 16 in total) using scientific notation; format longG tries to display numbers without scientific notation, again with as many decimals as there are, up to 16 digits in total counting both sides of the dot.
A fancier solution is using disp(sprintf(...)) or fprintf if you want an exact number of digits before the dot, after the dot, or both:
fprintf('A = %5.3f\n',A) % \n is just to force a line break
A = 81472.369
disp(sprintf('A = %5.2f\n',A))
A = 81472.37
Finally, remember the variable editor? How do we get that to show our variable completely? Simple: click on the cell containing the number, and it will show the full stored value.
So, in short: we didn't lose any decimals along the way; MATLAB still stores them internally, it just displays fewer decimals by default.
Other uses of format
format has another nice property in that you can set format compact, which gets rid of all the additional empty lines which MATLAB normally adds in the command window:
format compact
format long
A
A =
8.147236863931789e+04
format longG
A
A =
81472.3686393179
which in my opinion is very handy when you don't want to make your command window very big, but don't want to scroll a lot either.
format shortG and format longG are useful when your array contains numbers of very different magnitudes:
b = 10.^(-3:3);
A.*b
ans =
1.0e+07 *
0.0000 0.0001 0.0008 0.0081 0.0815 0.8147 8.1472
format longG
A.*b
ans =
Columns 1 through 3
81.472368639 814.72368639 8147.2368639
Columns 4 through 6
81472.368639 814723.68639 8147236.8639
Column 7
81472368.639
format shortG
A.*b
ans =
81.472 814.72 8147.2 81472 8.1472e+05 8.1472e+06 8.1472e+07
i.e. they work like long and short on single numbers, but choose the most convenient display format for each of the numbers.
There are a few more exotic options, like shortE, shortEng, hex etc., but those you can find well documented in The MathWorks' own documentation on format.

Why does this happen in floating-point conversion?

I noticed that some floating-point values convert differently. This question helped me with floating point, but I still don't know why this happens. I added two screenshots from debug mode of the sample code. Example values: 7.37 and 9.37. I encountered this in Swift, and Swift surely uses the IEEE 754 floating-point standard. Please explain how this happens: how did the conversions end up different?
if let text = textField.text {
    if let number = formatter.number(from: text) {
        return number.doubleValue // read the parsed NSNumber back as a Double
    }
    return nil
}
Double floating point numbers are stored in base-2, and cannot represent all decimals exactly.
In this case, 7.37 and 9.37 are rounded to the nearest floating point numbers which are 7.37000000000000010658141036401502788066864013671875 and 9.3699999999999992184029906638897955417633056640625, respectively.
Of course, such decimal representations are too unwieldy for general use, so programming languages typically print shorter approximate decimal representations. Two popular choices are
The shortest string that will be correctly rounded to the original number (which in this case are 7.37 and 9.37, respectively).
Rounding to 17 significant digits, which is guaranteed to give the correct value when converting back to binary.
These appear to correspond to the two debug output values that you are seeing.
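You can reproduce both forms from Swift itself; a minimal sketch (String(format:) requires Foundation):

import Foundation

let x = 7.37
print(x)                          // "7.37", the shortest round-trip string
print(String(format: "%.17g", x)) // "7.3700000000000001", 17 significant digits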

How to multiply numbers in Swift?

I am working on a calculator in Swift, but I have a small problem:
when multiplying two numbers, I don't get the same results as a regular calculator.
For example:
In Swift :
0.333328247070312 * 16 = 5.33325195312499
In a regular calculator:
0.333328247070312 * 16 = 5.333251953
What should I do to get the same results as a regular calculator in Swift?
Your "regular calculator" seems wrong or weird in result printing, so, you should not rely on it. I've rechecked your calculation in Python3 which is known to calculate all in binary64 double and print the most exact decimal form:
>>> 0.333328247070312 * 16
5.333251953124992
It's even more detailed (by one digit) than the Swift output. Your output also can't come from a binary32 calculation, because binary32 has only ~7 correct decimal digits and usually isn't printed with more. What is this calculator? I'd guess some Pascal-based tool, given its custom 6-byte float.
Try to get your calculator to print its most detailed form. If it can't, throw it away and use a more exact tool to verify; or, if your task really is to get the same result, figure out more details about how it processes numbers.
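For reference, Swift can show you the same most-detailed form directly; a small sketch (the %.17g width prints enough digits to uniquely identify a 64-bit double):

import Foundation

let product = 0.333328247070312 * 16
print(product)                          // 5.333251953124992, matching the Python output
print(String(format: "%.17g", product)) // the full 17-significant-digit form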

Number of decimal digits to show

How do I change the number of decimal digits displayed?
By changing the format, MATLAB can show only 4 (if short) or 15 (if long) decimals. But I want exactly 3 digits to show.
To elaborate on Hamataro's answer, you could also use the roundn function to round to a specific decimal precision, e.g. roundn(1.23456789,-3) will yield 1.235. However, MATLAB will still display the result in whichever of the formats you have mentioned, i.e. 1.2350 if format is set to short, and 1.235000000000000 if format is set to long.
Alternatively, if you use sprintf, you can use the %g formatting option to display only a set number of digits, regardless of where the decimal point is. sprintf('%0.3g',1.23456789) yields 1.23; sprintf('%0.3g',12.3456789) yields 12.3
You can either use sprintf, or round manually:
var2 = round(var1*1000)/1000