I'm in trouble. Decimal numbers don't appear when I do math

def call_result(label_result, n1, n2, n3):
    num1 = (n1.get())
    num2 = (n2.get())
    num3 = (n3.get())
    num4 = 100
    num5 = 12
    num6 = 2
    main1 = float(int(num1))*float(int(num2)) / 12 / 2 / 100
    main2 = float(int(num1)) / float(int(num3))
    main3 = float(int(main2))+float(int(main1))
    label_result.config(text="Result = %d" % float(main3))
    return
The answer is supposed to be 20.84 but this code produces 20.

The problem is due to the conversion between int and float.
Assuming the three nums are strings, you should just convert them to float directly and skip the intermediate step.
Moreover, when you convert from float to int and then back to float, you lose the decimal data because int does not store it.
For example
main2 = 23.34 # main2 is a float with value 23.34
main2 = int(main2) # main2 is now an int with value 23
# Remember int (short for integer) cannot store floating point values and will truncate the number to make an integer
main2 = float(main2) # main2 is now a float but the value is 23.0 because you lost the precision when it was converted to int
I recommend doing something like this
def call_result(label_result, n1, n2, n3):
    num1 = float(n1.get())
    num2 = float(n2.get())
    num3 = float(n3.get())
    num4 = 100
    num5 = 12
    num6 = 2
    main1 = num1 * num2 / 12 / 2 / 100
    main2 = num1 / num3
    main3 = main2 + main1
    label_result.config(text="Result = %.2f" % main3)
    return
Another thing to take care of is the format specifier you use for the string. %d formats an integer, while %f is for floats. Even if main3 had kept its decimal part, %d would still have put an integer into your string.
%.2f formats a float with a precision of 2 decimal places, rounding as needed:
"%.2f" % 123.45678 becomes 123.46
Take a look at the string formatting section of the Python documentation for more details.

Related

How to count digits in BigDecimal?

I'm dealing with BigDecimal in Java and I need to make 2 checks against BigDecimal fields in my DTO:
Number of digits of the whole part (before the point) < 15
Total number of digits < 32, including the scale (digits after the point)
What is the best way to implement this? I really don't want to use toBigInteger().toString() and .toString()
I think this will work.
BigDecimal d = new BigDecimal("921229392299229.2922929292920000");
int fractionCount = d.scale();
System.out.println(fractionCount);
int wholeCount = (int) (Math.ceil(Math.log10(d.longValue())));
System.out.println(wholeCount);
I did some testing of the above method vs using indexOf and subtracting lengths of strings. The above seems to be significantly faster, if my testing methodology is reasonable. Here is how I tested it.
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

public class DigitCountBenchmark { // wrapper class added so the fragment compiles
    public static void main(String[] args) {
        Random r = new Random(29);
        int nRuns = 1_000_000;
        // create a list of 1 million BigDecimals
        List<BigDecimal> testData = new ArrayList<>();
        for (int j = 0; j < nRuns; j++) {
            String wholePart = r.ints(r.nextInt(15) + 1, 0, 10).mapToObj(
                    String::valueOf).collect(Collectors.joining());
            String fractionalPart = r.ints(r.nextInt(31) + 1, 0, 10).mapToObj(
                    String::valueOf).collect(Collectors.joining());
            BigDecimal d = new BigDecimal(wholePart + "." + fractionalPart);
            testData.add(d);
        }

        long start = System.nanoTime();
        // Using math
        for (BigDecimal d : testData) {
            int fractionCount = d.scale();
            int wholeCount = (int) (Math.ceil(Math.log10(d.longValue())));
        }
        long time = System.nanoTime() - start;
        System.out.println(time / 1_000_000.);

        start = System.nanoTime();
        // Using strings
        for (BigDecimal d : testData) {
            String sd = d.toPlainString();
            int n = sd.indexOf(".");
            int m = sd.length() - n - 1;
        }
        time = System.nanoTime() - start;
        System.out.println(time / 1_000_000.);
    }
}

How to convert from 8-byte hex to real

I'm currently working on a file converter. I've never done any binary file reading before. There are many converters available for this file type (GDSII to text), but none in Swift that I can find.
I've gotten all the other data types working (2-byte int, 4-byte int), but I'm really struggling with the real data type.
From a spec document :
http://www.cnf.cornell.edu/cnf_spie9.html
Real numbers are not represented in IEEE format. A floating point number is made up of three parts: the sign, the exponent, and the mantissa. The value of the number is defined to be (mantissa) × 16^(exponent). If "S" is the sign bit, "E" is exponent bits, and "M" are mantissa bits then an 8-byte real number has the format
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM
MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The exponent is in "excess 64" notation; that is, the 7-bit field shows a number that is 64 greater than the actual exponent. The mantissa is always a positive fraction greater than or equal to 1/16 and less than 1. For an 8-byte real, the mantissa is in bits 8 to 63. The decimal point of the binary mantissa is just to the left of bit 8. Bit 8 represents the value 1/2, bit 9 represents 1/4, and so on.
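In other words, the stored value is sign × (M / 2^56) × 16^(E - 64), where M is the 56-bit mantissa field and E is the 7-bit excess-64 exponent. A minimal Python sketch of that formula (illustration only, just restating the spec; the two sample values are the ones given in the spec document):
import struct

def gdsii_real8(raw8: bytes) -> float:
    """Decode one 8-byte GDSII Real (excess-64 exponent, base-16)."""
    (bits,) = struct.unpack(">Q", raw8)            # big-endian unsigned 64-bit
    sign = -1.0 if bits >> 63 else 1.0
    exponent = (bits >> 56) & 0x7F                 # 7-bit excess-64 exponent
    mantissa = bits & 0x00FF_FFFF_FFFF_FFFF        # 56-bit fraction, binary point left of bit 8
    return sign * (mantissa / 2**56) * 16.0 ** (exponent - 64)

print(gdsii_real8(bytes.fromhex("3e4189374bc6a7ef")))  # ~1.0e-03
print(gdsii_real8(bytes.fromhex("3944b82fa09b5a54")))  # ~1.0e-09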
I've tried implementing something similar to what I've seen in Python or Perl, but each language has features that Swift doesn't have, and the type conversions get very confusing.
This is one method I tried, based on Perl. Doesn't seem to get the right value. Bitwise math is new to me.
var sgn = 1.0
let andSgn = 0x8000000000000000 & bytes8_test
if( andSgn > 0) { sgn = -1.0 }
// var sgn = -1 if 0x8000000000000000 & num else 1
let manta = bytes8_test & 0x00ffffffffffffff
let exp = (bytes8_test >> 56) & 0x7f
let powBase = sgn * Double(manta)
let expPow = (4.0 * (Double(exp) - 64.0) - 56.0)
var testReal = pow( powBase , expPow )
Another I tried:
let bitArrayDecode = decodeBitArray(bitArray: bitArray)
let valueArray = calcValueOfArray(bitArray: bitArrayDecode)
var exponent:Int16
//calculate exponent
if(negative){
    exponent = valueArray - 192
} else {
    exponent = valueArray - 64
}
//calculate mantessa
var mantissa = 0.0
//sgn = -1 if 0x8000000000000000 & num else 1
//mant = num & 0x00ffffffffffffff
//exp = (num >> 56) & 0x7f
//return math.ldexp(sgn * mant, 4 * (exp - 64) - 56)
for index in 0...7 {
    //let mantaByte = bytes8_1st[index]
    //mantissa += Double(mantaByte) / pow(256.0, Double(index))
    let bit = pow(2.0, Double(7-index))
    let scaleBit = pow(2.0, Double( index ))
    var mantab = (8.0 * Double( bytes8_1st[1] & UInt8(bit)))/(bit*scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double( bytes8_1st[2] & UInt8(bit)))/(256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double( bytes8_1st[3] & UInt8(bit)))/(256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
}
let real = mantissa * pow(16.0, Double(exponent))
UPDATE:
The following part seems to work for the exponent. It returns -9 for the data set I'm working with, which is what I expect.
var exp = Int16((bytes8 >> 56) & 0x7f)
exp = exp - 65 //change from excess 64
print(exp)
var sgnVal = 0x8000000000000000 & bytes8
var sgn = 1.0
if(sgnVal == 1){
sgn = -1.0
}
For the mantissa, though, I can't get the calculation right somehow.
The data set:
3d 68 db 8b ac 71 0c b4
38 6d f3 7f 67 5e f6 ec
I think it should return 1e-9 for exponent and 0.0001
The closest I've gotten is real Double 0.0000000000034907316148746757
var bytes7 = Array<UInt8>()
for (index, by) in data.enumerated(){
    if(index < 4) {
        bytes7.append(by[0])
        bytes7.append(by[1])
    }
}
for index in 0...7 {
    mantissa += Double(bytes7[index]) / (pow(256.0, Double(index) + 1.0 ))
}
var real = mantissa * pow(16.0, Double(exp));
print(mantissa)
END OF UPDATE.
This also doesn't seem to produce the correct values. This one was based on a C file.
If anyone can help me out with an English explanation of what the spec means, or any pointers on what to do I would really appreciate it.
Thanks!
According to the doc, this code returns the 8-byte Real data as Double.
extension Data {
    func readUInt64BE(_ offset: Int) -> UInt64 {
        var value: UInt64 = 0
        _ = Swift.withUnsafeMutableBytes(of: &value) {bytes in
            copyBytes(to: bytes, from: offset..<offset+8)
        }
        return value.bigEndian
    }

    func readReal64(_ offset: Int) -> Double {
        let bitPattern = readUInt64BE(offset)
        let sign: FloatingPointSign = (bitPattern & 0x80000000_00000000) != 0 ? .minus : .plus
        let exponent = (Int((bitPattern >> 56) & 0x00000000_0000007F) - 64) * 4 - 56
        let significand = Double(bitPattern & 0x00FFFFFF_FFFFFFFF)
        let result = Double(sign: sign, exponent: exponent, significand: significand)
        return result
    }
}
Usage:
//Two 8-byte Real data taken from the example in the doc
let data = Data([
//1.0000000000000E-03
0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef,
//1.0000000000000E-09
0x39, 0x44, 0xb8, 0x2f, 0xa0, 0x9b, 0x5a, 0x54,
])
let real1 = data.readReal64(0)
let real2 = data.readReal64(8)
print(real1, real2) //->0.001 1e-09
Another example from "UPDATE":
//0.0001 in "UPDATE"
let data = Data([0x3d, 0x68, 0xdb, 0x8b, 0xac, 0x71, 0x0c, 0xb4, 0x38, 0x6d, 0xf3, 0x7f, 0x67, 0x5e, 0xf6, 0xec])
let real = data.readReal64(0)
print(real) //->0.0001
Please remember that Double has only a 52-bit significand (mantissa), so this code loses some of the significant bits in the original 8-byte Real. I'm not sure whether that will be an issue for you or not.
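As a quick illustration of that caveat (Python, just to show the rounding): the widest possible 56-bit mantissa no longer survives a round trip through a 64-bit double.
m = 0x00FFFFFFFFFFFFFF        # all 56 mantissa bits set
print(m == int(float(m)))     # False: the double rounds away the lowest bits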

What is the Swift code to round a sum amount up or down?

I'm trying this in an Xcode playground.
This is my code. I'm a beginner.
========
import UIKit
var num1 : Double = 0.055 // Stock Price
var num2 : Double = 18 // Lots
var num3 : Double = 1000 // Share Per Lots
var sum1 : Double = num1 * num2 * num3 // Gross Share Price
var sum5 : Double = sum1 * (0.03/100) // Clearing Charges // Answer Playground Return is " 0.297 "
My question is: for sum5, I want the answer rounded up and displayed as " 0.30 ".
Is this possible in Swift?
Thanks.
You can get it this way:
var roundOfSum5 : Double = Double(round(100 * sum5)/100) //0.3
Following up on that answer:
If you need to round to a specific place, then you multiply by pow(10.0, numberOfPlaces), round, and then divide by pow(10.0, numberOfPlaces). In your case the number of places is 2.0:
let numberOfPlaces = 2.0
let multiplier = pow(10.0, numberOfPlaces)
let rounded = round(sum5 * multiplier) / multiplier
print(rounded) // 0.3
If you have a number like sum5 = 0.3465 and you want to round to the third place after the decimal, you can use 3.0 for numberOfPlaces and get 0.347 as the result.

Scaling Up a Number

How do I scale a number up to the nearest ten, hundred, thousand, etc...
Ex.
num = 11 round up to 20
num = 15 round up to 20
num = 115 round up to 200
num = 4334 round up to 5000
I guess this formula might work? Unless you have more examples to show.
power = floor(log10(n))
result = (floor(n/(10^power)) + 1) * 10^power
import math
exp = math.log10(num)
exp = math.floor(exp)
out = math.ceil(num/10**exp)
out = out * 10**exp
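Wrapped in a small function (same approach, just packaged for reuse), it reproduces the examples from the question:
import math

def scale_up(num):
    # round up to one significant digit: 11 -> 20, 115 -> 200, 4334 -> 5000
    exp = math.floor(math.log10(num))
    return math.ceil(num / 10**exp) * 10**exp

for n in (11, 15, 115, 4334):
    print(n, scale_up(n))   # 20, 20, 200, 5000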
Convert the number to a decimal (i.e. 11 goes to 1.1, 115 goes to 1.15), then take the ceiling of the number, then multiply it back. Example:
public static int roundByScale(int toRound) {
    int scale = (int) Math.pow(10.0, Math.floor(Math.log10(toRound)));
    double dec = (double) toRound / scale; // cast so this isn't integer division
    int roundDec = (int) Math.ceil(dec);
    return roundDec * scale;
}
In this case, if you input 15, it will be divided by 10 to become 1.5, then rounded up to 2, then the method will return 2 * 10 which is 20.
public static int ceilingHighestPlaceValue(int toCeil)
{
    int placeValue = (int) Math.pow(10, String.valueOf(toCeil).length() - 1);
    double temp = (double) toCeil / placeValue;
    return (int) Math.ceil(temp) * placeValue;
}

"Nearly divisible"

I want to check if a floating point value is "nearly" a multiple of 32. E.g. 64.1 is "nearly" divisible by 32, and so is 63.9.
Right now I'm doing this:
#define NEARLY_DIVISIBLE 0.1f
float offset = fmodf( val, 32.0f ) ;
if( offset < NEARLY_DIVISIBLE )
{
// its near from above
}
// if it was 63.9, then the remainder would be large, so add some then and check again
else if( fmodf( val + 2*NEARLY_DIVISIBLE, 32.0f ) < NEARLY_DIVISIBLE )
{
// its near from below
}
Got a better way to do this?
well, you could cut out the second fmodf by just subtracting 32 one more time to get the mod from below.
if( offset < NEARLY_DIVISIBLE )
{
// it's near from above
}
else if( offset-32.0f>-1*NEARLY_DIVISIBLE)
{
// it's near from below
}
In a standard-compliant C implementation, one would use the remainder function instead of fmod:
#define NEARLY_DIVISIBLE 0.1f
float offset = remainderf(val, 32.0f);
if (fabsf(offset) < NEARLY_DIVISIBLE) {
// Stuff
}
If one is on a non-compliant platform (MSVC++, for example), then remainder isn't available, sadly. I think that fastmultiplication's answer is quite reasonable in that case.
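To see why remainder fits this problem better than fmod, compare the two on values just below and just above a multiple of 32 (sketched here in Python purely for illustration; C's remainderf/fmodf behave the same way):
import math

for v in (63.9, 64.0, 64.1):
    print(v, math.fmod(v, 32.0), math.remainder(v, 32.0))
# 63.9 -> fmod ≈ 31.9, remainder ≈ -0.1  (remainder is the signed distance to the nearest multiple)
# 64.0 -> fmod = 0.0,  remainder = 0.0
# 64.1 -> fmod ≈ 0.1,  remainder ≈ 0.1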
You mention that you have to test near-divisibility with 32. The following theory ought to hold true for near-divisibility testing against powers of two:
#define THRESHOLD 0.11

int nearly_divisible(float f) {
    // printf(" %f\n", (a - (float)((long) a)));
    register long l1, l2;
    l1 = (long) (f + THRESHOLD);
    l2 = (long) f;
    return !(l1 & 31) && (l2 & 31 ? 1 : f - (float) l2 <= THRESHOLD);
}
What we're doing is coercing the float, and the float + THRESHOLD, to long.
f        (long) f    (long) (f + THRESHOLD)
63.9     63          64
64       64          64
64.1     64          64
Now we test whether (long) f is divisible by 32: just check the lower five bits; if they are all zero, the number is divisible by 32. On its own this leads to a series of false positives: 64.2 to 64.8, when converted to long, are also 64 and would pass the first test. So we also check whether the difference between f and its truncated form is less than or equal to THRESHOLD.
This, too, has a problem: f - (float) l2 <= THRESHOLD holds true for 64 and 64.1, but not for 63.9. So we add an exception for numbers just below a multiple (which, when incremented by THRESHOLD and then coerced to long -- note that this test has to be taken together with the first one -- become divisible by 32): if the lower 5 bits of (long) f are not zero, the number is accepted. This holds true for 63 (1000000 - 1 == 111111).
A combination of these three tests indicates whether the number is nearly divisible by 32. I hope this is clear.
I just tested the extensibility to other powers of two -- the following program checks which numbers between 383.5 and 388.4 are nearly divisible by 128.
#include <stdio.h>

#define THRESHOLD 0.11

int main(void) {
    int nearly_divisible(float);
    int i;
    float f = 383.5;
    for (i = 0; i < 50; i++) {
        printf("%6.1f %s\n", f, (nearly_divisible(f) ? "true" : "false"));
        f += 0.1;
    }
    return 0;
}

int nearly_divisible(float f) {
    // printf(" %f\n", (a - (float)((long) a)));
    register long l1, l2;
    l1 = (long) (f + THRESHOLD);
    l2 = (long) f;
    return !(l1 & 127) && (l2 & 127 ? 1 : f - (float) l2 <= THRESHOLD);
}
Seems to work well so far!
I think this is right:
#include <math.h>
#include <stdbool.h>

bool nearlyDivisible(float num, float div){
    float f = fmodf(num, div);   // % doesn't work on floats, so use fmodf
    if(f > div/2.0f){
        f = f - div;             // measure the distance to the nearer multiple
    }
    f = f > 0 ? f : 0.0f - f;    // absolute value
    return f < 0.1f;
}
From what I gather, you want to detect whether a number is nearly divisible by another, right?
I'd do something like this:
#define NEARLY_DIVISIBLE 0.1f

bool IsNearlyDivisible(float n1, float n2)
{
    float remainder = (fmodf(n1, n2) / n2);
    remainder = remainder < 0.0f ? -remainder : remainder;
    remainder = remainder > 0.5f ? 1.0f - remainder : remainder;
    return (remainder <= NEARLY_DIVISIBLE);
}
Why wouldn't you just divide by 32, then round and take the difference between the rounded number and the actual result?
Something like this (forgive the untested pseudo-code; no time to look it up):
#define NEARLY_DIVISIBLE 0.1f

float result = val / 32.0f;
float nearest_int = nearbyintf(result);
float difference = fabsf(result - nearest_int);
if( difference < NEARLY_DIVISIBLE )
{
    // It's nearly divisible
}
If you still wanted to do checks from above and below, you could remove the fabsf and check whether the difference is > 0 or < 0.
This is without using fmodf twice.
#include <stdio.h>
#include <math.h>

int main(void)
{
#define NEARLY_DIVISIBLE 0.1f
#define DIVISOR 32.0f
#define ARRAY_SIZE 4

    double test_var1[ARRAY_SIZE] = {63.9, 64.1, 65, 63.8};
    int i;
    double rest;

    for(i = 0; i < ARRAY_SIZE; i++)
    {
        rest = fmod(test_var1[i], DIVISOR);
        if(rest < NEARLY_DIVISIBLE)
        {
            printf("Number %f is at most %f above a multiple of the divisor %f\n",
                   test_var1[i], NEARLY_DIVISIBLE, DIVISOR);
        }
        else if( -(rest - DIVISOR) < NEARLY_DIVISIBLE)
        {
            printf("Number %f is at most %f below a multiple of the divisor %f\n",
                   test_var1[i], NEARLY_DIVISIBLE, DIVISOR);
        }
    }
    return 0;
}