How does my scientific calculator calculate a p/q ratio so accurately when programming languages can't? How are they programmed?

I have seen that my scientific calculator stores 99 digits after the decimal point. Why don't programming languages use such precision? Moreover, how can I achieve such precision if I want to?

This is a good question!
To understand what happens, you must first familiarize yourself with how computers store floating point numbers: http://grouper.ieee.org/groups/754/
Typically programming languages offer two binary representations: one that uses 32 bits (single precision) and one that uses 64 bits (double precision).
If you need more precision, you need a better representation, and you can implement a division algorithm that carries the result to whatever precision you need.
You can take a look at Java's implementation of BigDecimal.
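To make that concrete, here is a minimal Java sketch (the class name and the 99-digit MathContext are mine, chosen to mirror the calculator in the question) showing how BigDecimal lets you pick the precision of a division yourself:

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class PreciseDivision {
        public static void main(String[] args) {
            // A plain double carries only ~15-17 significant decimal digits:
            System.out.println(1.0 / 3.0);  // 0.3333333333333333

            // BigDecimal lets you request, say, 99 digits of precision:
            MathContext mc = new MathContext(99, RoundingMode.HALF_UP);
            BigDecimal p = new BigDecimal(1);
            BigDecimal q = new BigDecimal(3);
            System.out.println(p.divide(q, mc));  // 0.333...3, 99 digits of 3
        }
    }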

Related

Irrational number representation in computer

We can write a simple Rational Number class using two integers representing A/B with B != 0.
If we want to represent an irrational number class (storing and computing), the first thing that comes to mind is to use floating point, which means the IEEE 754 standard (binary fractions). This is because an irrational number must be approximated.
Is there another way to write an irrational number class, other than using binary fractions (whether it conserves memory space or not)?
I studied jsbeuno's solution using Python: Irrational number representation in any programming language?
He's still using the built-in floating point to store.
This is not homework.
Thank you for your time.
By a cardinality argument, there are vastly more irrational numbers than rational ones (and the number of IEEE 754 floating-point values is finite, no more than 2^64 for a 64-bit format).
You can represent numbers with something other than fractions (e.g. logarithmically).
jsbeuno is storing the number as a base and a radix and using those when doing calcs with other irrational numbers; he's only using the float representation for output.
If you want to get fancier, you can define the base and the radix as rational numbers (with two integers) as described above, or make them themselves irrational numbers.
To make something thoroughly useful, though, you'll end up replicating a symbolic math package.
You can always use symbolic math, where items are stored exactly as they are and calculations are deferred until they can be performed with precision above some threshold.
For example, say you performed two operations on a non-irrational number like 2: one to take the square root and then one to square the result. With limited precision, you may get something like:
(√2)²
= 1.414213562²
= 1.999999999
However, storing symbolic math would allow you to store the result of √2 as √2 rather than an approximation of it, then realise that (√x)² is equivalent to x, removing the possibility of error.
Now, that obviously involves a more complicated encoding than simple IEEE 754, but it's not impossible to achieve.
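For a sense of how that might look in code, here is a toy Java sketch (the Expr classes are invented for illustration, not any real symbolic package); it stores √2 symbolically and simplifies (√x)² back to x instead of squaring an approximation:

    // A tiny symbolic expression: a plain value, a square root, or a square.
    abstract class Expr {
        abstract double approx();   // numeric approximation, only for output

        Expr sqrt() { return new Sqrt(this); }

        Expr squared() {
            // (√x)² = x, exactly: no rounding error introduced.
            if (this instanceof Sqrt) return ((Sqrt) this).arg;
            return new Square(this);
        }
    }

    class Val extends Expr {
        final double v;
        Val(double v) { this.v = v; }
        double approx() { return v; }
    }

    class Sqrt extends Expr {
        final Expr arg;
        Sqrt(Expr arg) { this.arg = arg; }
        double approx() { return Math.sqrt(arg.approx()); }
    }

    class Square extends Expr {
        final Expr arg;
        Square(Expr arg) { this.arg = arg; }
        double approx() { double a = arg.approx(); return a * a; }
    }

    class Demo {
        public static void main(String[] args) {
            Expr two = new Val(2);
            // Prints exactly 2.0, not 1.999999999:
            System.out.println(two.sqrt().squared().approx());
        }
    }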

Really Large Numbers

I'm writing an iPhone application. Say I have a whole number of LENGTH 256; i.e. 94736 has length 5, 3745 has length 4, etc. What kind of data type can fit a number of length 256?
Does it have to be a number? Will you be doing math operations with it? If not, you should just use NSString.
The maximum value for an NSInteger is NSIntegerMax, which on the iPhone is 32-bit: 2,147,483,647 (from the Foundation Constants Reference).
But you should treat a number of 256 digits as an NSString.
I think you can use double, but you'd be limited to its available precision. Or you could store the number as a string and write your own functions to manipulate such strings, with operations such as plus and minus. This could be hard.
You should look at NSDecimalNumber. It's an immutable wrapper for doing arithmetic on numbers expressed as mantissa * 10^exponent, where the mantissa is a decimal integer up to 38 digits long, and the exponent is an integer from –128 through 127.
It is not a fit if your requirement of a 256 digit mantissa is a hard requirement. It is a fit if you want to be able to work on numbers on that order.
If you really need numbers of that size with perfect precision, then look into the GNU MP Bignum Library at http://gmplib.org/. The size of the numbers GNU MP can support is limited only by available RAM. It is written in C, so it is easily usable as-is in an iPhone application.
For a school project I worked with numbers of length 256+. The way I got around it was to build a class that stores the numbers as arrays.
For example, I would store 345 as [3, 4, 5]. This way you are limited only by the amount of memory available.
I wrote methods for multiplication, addition and subtraction of positive numbers. Not too hard, and it works well.
I would suggest the same thing if you are looking at doing math with the numbers; then just implement the functions you need.
This was done in C++, using Xcode.
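As a rough illustration of that digit-array approach, here is a minimal sketch in Java (rather than the original C++; the class and method names are mine), implementing only addition:

    import java.util.Arrays;

    class DigitArrayAdd {
        // Adds two non-negative numbers stored most-significant-digit first,
        // e.g. 345 is {3, 4, 5}. Length is limited only by available memory.
        static int[] add(int[] a, int[] b) {
            int n = Math.max(a.length, b.length);
            int[] result = new int[n + 1];   // +1 for a possible final carry
            int carry = 0;
            for (int i = 0; i < n; i++) {
                int da = i < a.length ? a[a.length - 1 - i] : 0;
                int db = i < b.length ? b[b.length - 1 - i] : 0;
                int sum = da + db + carry;
                result[n - i] = sum % 10;
                carry = sum / 10;
            }
            result[0] = carry;
            return result;                   // may carry a leading zero
        }

        public static void main(String[] args) {
            // 999 + 1 = 1000 -> prints [1, 0, 0, 0]
            System.out.println(Arrays.toString(add(new int[]{9, 9, 9}, new int[]{1})));
        }
    }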
The maximum 256-digit number is 10^256 - 1 (10 to the power of 256, minus 1).

Are there any real-world uses for converting numbers between different bases?

I know that we need to convert decimal, octal, and hexadecimal into binary, but I am confused about conversions from decimal to octal, octal to hexadecimal, or decimal to hexadecimal.
Why and where do we need these types of conversion?
Different bases are good for different purposes.
Decimal is obviously what most people know how to deal with, so is good for output of real quantities to end users.
Hex is short and has an even ratio of exactly 2 characters per byte, so it's good for expressing large numbers like SHA1 hashes or private keys and the like in a type-able format, particularly since those numbers don't really represent a quantity, so users don't need to be able to understand them as numbers.
Octal is mostly for legacy reasons -- UNIX file permission codes are traditionally expressed as octal numbers, for example, because three bits per digit corresponds nicely to the three bits per user-category of the UNIX permission encoding scheme.
One sometimes wants to use numbers in one base for a purpose where another base is preferred; hence the various conversion functions available. In truth, however, my experience is that in practice you almost never convert from one base to another, except to convert numbers from some non-binary base into binary (in the form of your language of choice's native integral type) and back out into whatever base you need for output.

Most of the time one goes from one non-binary base to another when learning about bases and getting a feel for what numbers in different bases look like, or when debugging using hexadecimal output. Even then, when a computer does it, the main method is to convert to binary and then back out, because current computers are just inherently good at dealing with base-2 numbers and not so good at anything else.
One important place you see numbers actually stored and operated on in decimal is in some financial applications, or others where it's important that "number-of-decimal-places" precision be preserved. Sometimes fixed-point arithmetic can work for currency, but not always, and if it doesn't, using binary floating point is a bad idea. Older systems actually had built-in support for this in the form of binary-coded-decimal (BCD) arithmetic. In BCD, each group of 4 bits acts as a decimal digit, so you give up part of every 4 bits of storage in exchange for maintaining your level of precision in the base of choice of the non-computing world.
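A quick Java illustration of the point, with BigDecimal standing in for decimal (BCD-style) arithmetic:

    import java.math.BigDecimal;

    class MoneyDemo {
        public static void main(String[] args) {
            // Binary floating point cannot represent 0.10 or 0.20 exactly:
            System.out.println(0.1 + 0.2);  // 0.30000000000000004

            // Decimal arithmetic preserves number-of-decimal-places precision:
            BigDecimal a = new BigDecimal("0.10");
            BigDecimal b = new BigDecimal("0.20");
            System.out.println(a.add(b));   // 0.30
        }
    }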
Oddly enough, there is one common use case for other bases that's a bit hidden. Modern languages with large-number support (e.g. Python 2.x's long type or Java's BigInteger and BigDecimal types) will usually store the numbers internally in an array, with each element being a digit in some base, and then implement the math they support on strings of digits of that base. Really efficient bigint implementations may actually use a base approaching 2^(bits in machine native word size); a base-2^64 number is obviously impossible to usefully output in that form, but doing the calculations in chunks of that size ends up making the best use of space and the CPU. (I don't know if that's the best base; it may be best to use a base of half that number of bits to simplify overflow handling from one digit to the next. It's been awhile since I wrote my own bigint, and I never implemented the faster/more-complicated versions of multiplication and division.)
MIME uses the hexadecimal system for Quoted-Printable encoding (e.g. mail subjects in Unicode) and a base-64 system for Base64 encoding.
If your workplace is stuck with IPv4 CIDR, you'll be doing quite a lot of bin -> hex -> decimal conversions managing most of the networking equipment until you get them memorized (or just use some random, simple tool).
Even that usage is a bit few-and-far-between - most businesses just adopt the lazy "/24 everything" approach.
If you do a lot of graphics work - there's the chance you'll want to convert colors between systems and need to convert from hex -> dec... most tools have this built in to the color picker, though.
I suppose there's no practical reason to be able to do it, other than that it's really simple and there's no point in not learning how. :)
... unless, for some reason, you're trying to do mantissa binary math in your head.
All of these bases have their uses. Hexadecimal in particular is useful as a shorthand for binary. Every hexadecimal digit is equivalent to 4 bits, so you can write a full 32-bit value as a string of 8 hex digits. Likewise, octal digits are equivalent to 3 bits, and are used frequently as a shorthand for things like Unix file permissions (777 = set read, write, execute bits for user/group/other).
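In Java, for example, you can see that correspondence directly (the values below are arbitrary):

    class BaseShorthand {
        public static void main(String[] args) {
            int perms = 0777;   // octal literal: rwxrwxrwx
            // 111111111 -> each octal digit expands to exactly 3 bits:
            System.out.println(Integer.toBinaryString(perms));

            int word = 0xCAFEBABE;   // a full 32-bit value
            // cafebabe -> 8 hex digits, 4 bits each:
            System.out.println(Integer.toHexString(word));
        }
    }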
No one base is special--they all have their (obscure) uses. Decimal is special to us because it reflects human experience (10 fingers) but that's really the only reason.
A real-world use case: a program prints an error code in decimal; to get info from a database or the internet you need the hexadecimal format; and because the bits of the error 'number' convey extra info, you need to look at it in binary.
I'm sure there are occasional uses for this. One use case would be a little app that allows a user to convert decimal to octal ... like you can with lots of calculators.
But I'm not sure I understand the point of the question. Standard libraries typically don't provide methods like String toOctal(String decimal). Instead, you would normally convert from a decimal String to a primitive integer and then from the primitive integer to (say) an octal String.
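In Java, for instance, that two-step conversion might look like this (the values are chosen arbitrarily):

    class BaseConversion {
        public static void main(String[] args) {
            // decimal String -> primitive int -> octal/hex Strings
            int n = Integer.parseInt("255");             // parse in base 10
            System.out.println(Integer.toString(n, 8));  // "377"
            System.out.println(Integer.toString(n, 16)); // "ff"

            // and back: octal String -> int -> decimal String
            int m = Integer.parseInt("377", 8);
            System.out.println(m);                       // 255
        }
    }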

Is it a good idea to use NSDecimalNumber for floating point arithmetics instead of plain double?

I wonder what's the point of NSDecimalNumber. It offers some arithmetic methods, but why should I use NSDecimalNumber and not just double or NSNumber? Did Apple take care of some floating-point arithmetic ugliness there? Would it make life easier when making heavy use of high-precision, big floating-point math?
This all depends on your needs.
It is a trade-off between precision, speed, and size of data.
If you are writing an accounting application, you cannot lose any precision and so might well use NSDecimalNumber.
If you are doing complex numerical analysis, speed could matter, and then NSDecimalNumber would be too slow. But even in that case your analysis would look at the precision and errors you can afford, and there could be cases where you need more precision than doubles etc. give you.
NSNumber is a separate case: it is a class cluster that allows storage of C-type numbers in other objects, and has other uses in Cocoa.
If your software deals with money, or other non-integer numbers of interest to accountants, you are well advised to use decimal numbers for that (rather than the binary ones that the underlying HW is optimized to process); that's why all sorts of general purpose languages and databases bend over backwards to support decimal non-integer numbers, not just binary ones.
Rounding issues with binary non-integers might easily result in fractions-of-a-cent discrepancies that, at the limit, might even land you in legal trouble, and, more realistically, will be perceived by accountants and others dealing with money &c as errors in your program, no matter how staunchly you may argue otherwise!-)
NSDecimalNumber is a fixed-precision (and fixed-scale) representation: an integer scaled by a power of ten to represent fractional numbers. This is a little different from a floating-point number (where the point, obviously, floats...).
As an example, say you need to represent money from 0.00 to 999.99: you could store this in an integer from 0 to 99999, as an amount in pennies. The scale (in digits) is 2 and the precision is 5. A floating-point number with a precision of 5 digits could represent .00001 or 99999, but not 999.999, for example.
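Java's BigDecimal uses the same representation (an unscaled integer plus a decimal scale), which makes the idea easy to poke at; the number below is just the one from the example above:

    import java.math.BigDecimal;

    class MantissaScale {
        public static void main(String[] args) {
            // 999.99 is stored as the integer 99999 with a scale of 2,
            // i.e. mantissa * 10^-scale = 99999 * 10^-2.
            BigDecimal price = new BigDecimal("999.99");
            System.out.println(price.unscaledValue()); // 99999
            System.out.println(price.scale());         // 2
        }
    }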

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this as literals in source code. Do I have to wrap it in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers as literals?
The problem is with floating-point arithmetic, not with writing literals in source code. It is not designed to be exact. The best way around it is to not use the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final useful figure after rounding.
Standard floating-point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly small precision levels. But fast. More here.
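If you want to see the exact value your double actually holds, one way (in Java here, but the same applies to any IEEE 754 double) is the BigDecimal(double) constructor, which converts the stored binary value without rounding:

    import java.math.BigDecimal;

    class SmallDouble {
        public static void main(String[] args) {
            double verySmall = 0.000000001;
            // The nearest representable double is not exactly 10^-9;
            // this prints the exact stored value, which begins
            // 1.0000000000000000622...E-9 (truncated here).
            System.out.println(new BigDecimal(verySmall));
        }
    }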
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. That class is precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any random fraction with perfect results, just as you can't expect to store any random integer. There are bits at the bottom, and their number is limited.