Legacy code - looking for a function which emulates QBasic decimal-to-text conversion - porting

I could sit down and write this, but in the interests of not reinventing the wheel, I wanted to check that someone hasn't already done this.
I have to migrate a little legacy tool which generates a text file containing a table of numeric values. It was written many years back in DOS QBasic.
The only problem with the task is that QBasic had quite a few peculiarities in decimal-to-text conversion - lots of small exceptions.
The resulting file is imported into an old machine which works perfectly with the QBasic-generated file, but when I pass it decimals with 6 or 7 digits of precision the results are not correct. QBasic's output when writing decimals varies from 7 down to 3 decimal digits depending on the whole-number part, and it also generates decimals in the 0.0000E+1 format if there is no whole-number part and there are zeros after the decimal point.
Has anyone seen a collection of functions which behave the same way as QBasic? Language doesn't matter. Googling hasn't turned up anything so far.
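To illustrate the general shape I'm after: a rough Python sketch like the one below gets the "at most 7 significant digits, E notation otherwise" part, but none of the specific quirks described above (the function name is just something I made up):

def qbasic_like_single(value):
    # Roughly mimic QBasic single-precision PRINT: at most 7 significant
    # digits, trailing zeros dropped, scientific notation when fixed
    # notation will not fit. NOT a faithful emulation of the quirks above.
    return f"{value:.7G}"

print(qbasic_like_single(0.000000123456))  # 1.23456E-07
print(qbasic_like_single(123.4567891))     # 123.4568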

Related

how to change variable type to accept numbers more than 100 quintillion

I am a beginner in Scratch and I'm working on a simple program that speaks out numbers in multiples of 10. The highest number it will speak using the text-to-speech tool is 100 quintillion; after this it starts speaking in exponential format.
How can I change the data type of the variable so that it accepts numbers larger than 100 quintillion and won't switch to exponential format?
How can I change the data type of the variable
Well, first of all, you can't change the data type of a variable in Scratch.
accepts numbers larger than 100 quintillion and won't switch to exponential format
That is simply how Scratch works, and there is no workaround I could find for it, but you could try searching the Scratch Forums in depth.
My first thought would be to use a base-64 counting system rather than the base 10 that humans normally use.
You are currently storing your number in base 10, where you are limited to the digits 0-9; whenever you use up all of these digits, you must use another character position. However, since Scratch doesn't restrict variables to only numbers, we can make use of non-numerical characters too.
Base 64 uses capital A to Z, then lowercase a to z, then the digits 0-9, and finally the symbols + and /. Luckily, Scratch supports all of these characters!
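In Scratch itself you would build this out of custom blocks and the join/letter operators; the Python sketch below only illustrates the arithmetic, and the alphabet string and function names are my own:

# Represent an integer with a 64-character alphabet so each "digit" carries
# more information than a base-10 digit. Classic Base64 ordering: A-Z, a-z,
# 0-9, then + and /.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+/")

def to_base64_digits(n):
    # Convert a non-negative integer to a base-64 digit string.
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 64)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def from_base64_digits(s):
    # Convert a base-64 digit string back to an integer.
    n = 0
    for ch in s:
        n = n * 64 + ALPHABET.index(ch)
    return n

big = 100_000_000_000_000_000_000   # 100 quintillion
encoded = to_base64_digits(big)
print(encoded, from_base64_digits(encoded) == big)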

Language for working on big numbers

I am working on a task that consists of different operations on very big numbers, for example multiplying two 50-digit numbers. Numbers that big cannot be handled with C's built-in types.
Can someone suggest a programming language that can handle operations on such big numbers without using any special libraries, so that I can learn that language to implement my algorithm?
Python 3 can work with very large numbers (you could say there is almost no limit) and it does so automatically.
https://stackoverflow.com/a/7604998/3156085
You can try it yourself by entering very large numbers into the Python shell.
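For example, multiplying two 50-digit numbers works with the ordinary * operator because Python integers have arbitrary precision (the values below are arbitrary):

a = 12345678901234567890123456789012345678901234567890
b = 98765432109876543210987654321098765432109876543210
product = a * b          # exact, no library needed
print(product)
print(len(str(product)), "digits")   # 100 digits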
Java's BigDecimal class can work with numbers as large as you need, without using any extra library.

Format number of decimals in cells containing complex numbers

I have a spreadsheet containing both real and complex numbers. Some of them are like
0.48686
while others are like
4.85609+j3.24184
I am trying to round them, in order to have only two decimal places.
While Format > Cells works on the real numbers, it doesn't work on the complex ones, because LibreOffice interprets them as strings.
I have looked it up on Google, but couldn't find anyone with the same problem.
I wanted to know if there was anyone who had already developed a macro for that, before trying to do it myself.
You could round the complex number by combining the ROUND() and the COMPLEX() functions:
A3 has the formula
=COMPLEX(A1;A2)
while A4 has
=COMPLEX( ROUND(A1;2) ; ROUND(A2;2) )
(adapting a solution from a German MS Office forum to OOo Calc / LO Calc)
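If the complex values are already stored as text in the a+jb form shown in the question, a macro would have to split, round and reassemble the string itself. Here is a Python sketch of just that logic (the regex and function name are my own; an actual Calc macro would live in Basic or LibreOffice's Python scripting):

import re

def round_complex_string(value, ndigits=2):
    # Round the real and imaginary parts of a string like "4.85609+j3.24184";
    # plain real numbers are rounded directly.
    m = re.fullmatch(r"\s*([+-]?\d+(?:\.\d+)?)\s*([+-])\s*j\s*(\d+(?:\.\d+)?)\s*", value)
    if m is None:
        return f"{round(float(value), ndigits)}"
    real = round(float(m.group(1)), ndigits)
    imag = round(float(m.group(3)), ndigits)
    return f"{real}{m.group(2)}j{imag}"

print(round_complex_string("4.85609+j3.24184"))   # 4.86+j3.24
print(round_complex_string("0.48686"))            # 0.49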

Are there any real-world uses for converting numbers between different bases?

I know that we need to convert decimal, octal, and hexadecimal into binary, but I am confused about conversion from decimal to octal, octal to hexadecimal, or decimal to hexadecimal.
Why and where do we need these types of conversion?
Different bases are good for different purposes.
Decimal is obviously what most people know how to deal with, so is good for output of real quantities to end users.
Hex is short and has an even ratio of exactly 2 characters per byte, so it's good for expressing large numbers like SHA1 hashes or private keys and the like in a type-able format, particularly since those numbers don't really represent a quantity, so users don't need to be able to understand them as numbers.
Octal is mostly for legacy reasons -- UNIX file permission codes are traditionally expressed as octal numbers, for example, because three bits per digit corresponds nicely to the three bits per user-category of the UNIX permission encoding scheme.
One sometimes wants to use numbers in one base for a purpose where another base is desired; hence the various conversion functions available. In truth, however, my experience is that in practice you almost never convert directly from one base to another, except to convert numbers from some non-binary base into binary (in the form of your language of choice's native integral type) and back out into whatever base you need for output. Most of the time one goes from one non-binary base to another is when learning about bases and getting a feel for what numbers in different bases look like, or when debugging using hexadecimal output. Even then, when a computer does it the main method is to convert to binary and then back out, because current computers are inherently good at dealing with base-2 numbers and not so good at anything else.
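A quick Python illustration of that round-trip - text in one base to a native integer, then back out in whatever base you need (the values are arbitrary):

n = int("755", 8)            # octal text -> native integer (493)
print(hex(n))                # 0x1ed
print(format(n, "b"))        # 111101101
print(oct(int("1ED", 16)))   # back to octal: 0o755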
One important place you see numbers actually stored and operated on in decimal is in some financial applications, or others where it's important that "number-of-decimal-places" level precision be preserved. Sometimes fixed-point arithmetic can work for currency, but not always, and if it doesn't, using binary floating point is a bad idea. Older systems actually had built-in support for this in the form of binary-coded-decimal arithmetic. In BCD, each 4 bits acts as a decimal digit, so you give up a chunk of every 4 bits of storage in exchange for maintaining your level of precision in the base of choice of the non-computing world.
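A small Python illustration of why that matters for currency; decimal.Decimal plays the role BCD or BigDecimal would (the values are arbitrary):

from decimal import Decimal

# 0.10 has no exact binary representation, so repeated float additions drift.
print(sum(0.10 for _ in range(1000)))             # slightly off 100.0
print(sum(Decimal("0.10") for _ in range(1000)))  # exactly 100.00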
Oddly enough, there is one common use case for other bases that's a bit hidden. Modern languages with large-number support (e.g. Python 2.x's long type or Java's BigInteger and BigDecimal types) will usually store the numbers internally in an array, with each element being a digit in some base. Then they implement the math they support on strings of digits in that base. Really efficient bigint implementations may actually use a base approaching 2^(bits in machine native word size); a base-2^64 number is obviously impossible to usefully output in that form, but doing the calculations in chunks of that size ends up making the best use of space and the CPU. (I don't know if that's the best base; it may be better to use a base of half that number of bits to simplify overflow handling from one digit to the next. It's been a while since I wrote my own bigint, and I never implemented the faster/more complicated versions of multiplication and division.)
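A toy version of that digits-in-a-large-base idea, using base 10^9 so the chunks stay readable; real implementations pick a base near the machine word size, and all the names below are invented for the example:

BASE = 10**9

def to_digits(n):
    # Split a non-negative integer into little-endian base-10**9 "digits".
    digits = []
    while n:
        n, rem = divmod(n, BASE)
        digits.append(rem)
    return digits or [0]

def add_digits(a, b):
    # Schoolbook addition on digit lists, carrying between chunks.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        carry, digit = divmod(total, BASE)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

x = to_digits(12345678901234567890)
y = to_digits(98765432109876543210)
print(add_digits(x, y))   # [111111100, 111111011, 111], least significant chunk first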
MIME uses the hexadecimal system for Quoted-Printable encoding (e.g. mail subjects in Unicode) and a base-64 system for Base64 encoding.
If your workplace is stuck in IPv4 CIDR - you'll be doing quite a lot of bin -> hex -> decimal conversions managing most of the networking equipment until you get them memorized (or just use some random, simple tool).
Even that usage is a bit few-and-far-between - most businesses just adopt the lazy "/24 everything" approach.
If you do a lot of graphics work - there's the chance you'll want to convert colors between systems and need to convert from hex -> dec... most tools have this built into the color picker, though.
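That color conversion is a one-liner if you ever need it outside a picker (Python, arbitrary example value):

color = "FF8800"
r, g, b = (int(color[i:i + 2], 16) for i in range(0, 6, 2))
print(r, g, b)   # 255 136 0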
I suppose there's no practical reason to be able to do it, other than that it's really simple and there's no point not learning how. :)
... unless, for some reason, you're trying to do mantissa binary math in your head.
All of these bases have their uses. Hexadecimal in particular is useful as a shorthand for binary. Every hexadecimal digit is equivalent to 4 bits, so you can write a full 32-bit value as a string of 8 hex digits. Likewise, octal digits are equivalent to 3 bits, and are used frequently as a shorthand for things like Unix file permissions (777 = set read, write, execute bits for user/group/other).
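The 4-bits-per-hex-digit and 3-bits-per-octal-digit correspondence is easy to see by printing the bits (Python, arbitrary values):

print(format(0xDEADBEEF, "032b"))  # 11011110101011011011111011101111
print(format(0o777, "09b"))        # 111111111  (rwxrwxrwx)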
No one base is special--they all have their (obscure) uses. Decimal is special to us because it reflects human experience (10 fingers) but that's really the only reason.
A real-world use case: a program prints an error code in decimal; to get info from a database or the internet you need the hexadecimal format, and because the bits of the error 'number' convey extra info, you need to look at it in binary.
I'm sure there are occasional uses for this. One use case would be a little app that allows a user to convert decimal to octal ... like you can with lots of calculators.
But I'm not sure I understand the point of the question. Standard libraries typically don't provide methods like String toOctal(String decimal). Instead, you would normally convert from a decimal String to a primitive integer and then from the primitive integer to (say) an octal String.

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there were problems with writing numbers like this as plain text in source code. Do I have to wrap it in some function or a directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers in as text?
The problem is with floating-point arithmetic, not with writing literals in source code. Floating point is not designed to be exact. The best way around it is to not use the built-in double - use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final figure after rounding.
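A small Python sketch of that integer-scaling idea (the nano-unit scale and variable names are just for illustration):

# Keep values as integer nano-units, do exact integer arithmetic, and only
# divide/format when displaying the result.
very_small_nano = 1                   # 0.000000001 stored as 1 nano-unit
total_nano = very_small_nano * 10     # exact
print(f"{total_nano / 10**9:.9f}")    # 0.000000010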
Standard floating point numbers are not stored in a perfect format, they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly small precision levels. But fast. More here.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to Java's java.math.BigDecimal class. It is designed precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any random fraction with perfect results, just as you can't expect to store any random integer. There are bits at the bottom, and their numbers are limited.