Is there any freeware tool that can convert fractional values of various numeral systems to each other?

I have already searched Google and I am finding no tool to convert fractional values of various numeral systems into each other.
Can anyone give me a hand?
I need a freeware executable for Windows XP.

WolframAlpha can do it for you, e.g. the query 7/3 in base 5 converts it.
Without a specified operating system, suggesting tools is just guessing that we're targeting the correct OS :).
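If rolling your own script is an acceptable fallback, the algorithm is simple: convert the integer part by repeated division and the fractional part by repeated multiplication. A minimal Python sketch (the to_base helper is my own illustration, not part of any existing tool):

    # Convert a non-negative value to another base (2..16), as a string.
    # Integer part: repeated division; fractional part: repeated multiplication.
    def to_base(value, base, frac_digits=10):
        digits = "0123456789ABCDEF"
        ipart = int(value)
        fpart = value - ipart

        int_str = ""
        while True:
            ipart, r = divmod(ipart, base)
            int_str = digits[r] + int_str
            if ipart == 0:
                break

        frac_str = ""
        for _ in range(frac_digits):
            fpart *= base
            d = int(fpart)
            frac_str += digits[d]
            fpart -= d

        return int_str + "." + frac_str

    print(to_base(7 / 3, 5))  # 2.1313131313 -- matches WolframAlpha's 7/3 in base 5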

Related

Language for working on big numbers

I am working on a task that involves different operations on very big numbers. Example: multiplying two 50-digit numbers. Numbers that big cannot be handled using C's built-in types.
Can someone suggest a programming language that can handle operations on such big numbers without any special libraries, so that I can learn that language to implement my algorithm?
Python 3 can work with very large integers (you could say there is almost no limit), and it does so automatically.
https://stackoverflow.com/a/7604998/3156085
You can try it yourself by entering very large numbers in the Python shell.
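For example, this works out of the box in plain Python, with no imports (the operands below are arbitrary 50-digit examples):

    # Python integers are arbitrary-precision by default.
    a = 12345678901234567890123456789012345678901234567890
    b = 98765432109876543210987654321098765432109876543210
    print(a * b)            # exact product, no overflow
    print(len(str(a * b)))  # 100 digits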
Java's BigDecimal class can work with numbers as large as you need, without any extra library.

How probable is that two exact same calculations give different results?

I am currently working on remaking an old invoicing program that was originally written in VB6.
It has two parts, one on an Android tablet, the other on a PC. The old database stored derived values, because there was a chance that the calculations would give a different result if repeated.
For example, if one sold 5 items whose price was 10 euros, at a 10% discount and a tax rate of 23%, it would store those 4 values but also the result of the calculation (5 * (10 * 1.23)) * 0.9 = 55.35.
I do not really like having duplicate or derivable information in my database, but the actual sale value must be the same whether it is viewed on a tablet or a PC.
So my question is: is there a chance (even the slightest one) that the above calculation (to a precision of three decimal places) would give different results on different operating systems (such as an Android device and a desktop computer)?
Thanks in advance for any help you can provide
Yes, it's possible. Floating-point arithmetic is always subject to rounding errors and different languages (and architectures) deal with those errors in different ways. There are best practices in dealing with these issues, though I don't consider myself knowledgeable enough to speak to them. But here are a couple of options for you.
Use a data type meant for exact decimal arithmetic rather than binary floating point. For example, VB6 has Single and Double types for floating point, but also a Currency type for accurate decimal math.
Scale your floating-point values to integers and perform your calculations on these integer values. You can even store the results as integers in your DB. The ERP system we use does this and includes a data dictionary that defines how each type was scaled so that it can be "unscaled" before display.
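To make the scaling approach concrete, here is a rough sketch in Python (the scale factor of 1000 matches the three-decimal precision from the question; the names and rounding rule are illustrative, not from any particular system):

    # Store money as integer thousandths and keep all math in integers.
    SCALE = 1000  # three decimal places

    def to_scaled(euros):
        return round(euros * SCALE)  # round once, at the input boundary

    price    = to_scaled(10.00)  # 10 euros -> 10000
    qty      = 5
    tax_pct  = 123               # the factor 1.23, scaled by 100
    disc_pct = 90                # the factor 0.90, scaled by 100

    # (5 * (10 * 1.23)) * 0.9 entirely in integer arithmetic; dividing by
    # 100 after each percentage keeps intermediate values scaled by SCALE.
    total = qty * price * tax_pct // 100 * disc_pct // 100
    print(total / SCALE)  # 55.35, identical on every platform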
Hope that helps.

Numerical Integral of large numbers in Fortran 90

So I have the following integral that I need to evaluate numerically:
Int[exp(0.5*(a*cos(x) + b*sin(x) + c*cos(2x) + d*sin(2x)))] dx, x = 0..2*Pi
The problem is that the integrand at any given value of x can be extremely large, on the order of e^2000, so larger than I can deal with in double precision.
I haven't had much luck googling the following: how do you deal with large numbers in Fortran? I don't need high precision, I don't care about knowing the result beyond double precision, and at the end I'll just be taking the log, but I need to be able to handle the large numbers until I can take the log.
Are there integration packages that can handle arbitrarily large numbers? Mathematica clearly can, so there must be something like this out there.
Cheers
This is probably an extended comment rather than an answer but here goes anyway ...
As you've already observed, Fortran isn't equipped, out of the box, with the facility for handling numbers as large as e^2000. I think you have 3 options.
Use mathematics to reduce your problem to one which does (or a number of related ones which do) fall within the numerical range that your Fortran compiler can compute (a sketch of this approach follows at the end of this answer).
Use Mathematica or one of the other computer algebra systems (e.g. Maple, SAGE, Maxima). All (I think) of these can be integrated with a Fortran program, with varying degrees of difficulty.
Use a library for high-precision (often also called arbitrary-precision or multiple-precision) arithmetic. Your favourite search engine will turn up a number of these for you, some written in Fortran (and therefore easy to integrate), some written in C/C++ or other languages (and therefore slightly harder to integrate). You might start your search at Lawrence Berkeley or the GNU bignum library.
(Yes, I know I wrote that you have 3 options, but your question suggests that you aren't ready to consider this one yet.) You could write your own high-/arbitrary-/multiple-precision functions. Fortran provides everything you need to construct such a library, there is a lot of work already done in the field to learn from, and it might be something of interest to you.
In practice it generally makes sense to apply as much mathematics as possible to a problem before resorting to a computer; that process can not only assist in solving the problem but also guide your selection or construction of a program to solve what's left of it.
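To make option 1 concrete: a standard trick for integrals of exp(f(x)) is to factor out M = max f(x), because log(Int[exp(f(x))]) = M + log(Int[exp(f(x) - M)]) and the shifted integrand never exceeds 1, so it cannot overflow. A rough Python sketch using a plain trapezoidal rule (the coefficients are made-up values chosen so that exp(f(x)) itself would overflow a double):

    import math

    # Made-up coefficients; exp(f(x)) can exceed e^900 here, past the
    # ~e^709 ceiling of double precision.
    a, b, c, d = 2000.0, 500.0, -300.0, 100.0

    def f(x):
        return 0.5 * (a * math.cos(x) + b * math.sin(x)
                      + c * math.cos(2 * x) + d * math.sin(2 * x))

    n = 20000
    h = 2 * math.pi / n
    xs = [i * h for i in range(n + 1)]

    # Shift by the maximum so exp() stays in (0, 1].
    M = max(f(x) for x in xs)
    shifted = [math.exp(f(x) - M) for x in xs]

    # Composite trapezoidal rule on the shifted integrand.
    integral_shifted = h * (sum(shifted) - 0.5 * (shifted[0] + shifted[-1]))
    print("log of the integral:", M + math.log(integral_shifted))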
I agree with High Performance Mark that the best option here numerically is to use analysis to scale or simplify the result first.
I will mention that if you do want to brute force it, gfortran (as of 4.6, with the libquadmath library) has support for quadruple precision reals, which you can use by selecting the appropriate kind. As long as your answers (and the intermediate results!) don't get too much bigger than what you're describing, that may work, but it will generally be much slower than double precision.
This requires looking deeper at the problem you are trying to solve and the behavior of the underlying mathematics. To add to the good advice already provided by Mark and Jonathan, consider expanding the exponential and trig functions into Taylor series and truncating to the desired level of precision.
Also, take a step back and ask what you are trying to accomplish by calculating this value. As an example, I recently had to debug why I was getting outlandish results from a property correlation which was calculating the vapor pressure of a fluid to see if condensation was occurring. I spent a long time trying to understand what was wrong with the temperature being fed into the correlation until I realized the case causing the error was a simulation of vapor detonation. The problem was not in the numerics but in the logic of checking for condensation during a literal explosion; physically, a condensation check made no sense. The real problem was that the code was asking an unnecessary question; it already had the answer.
I highly recommend Forman Acton's Numerical Methods That (Usually) Work and Real Computing Made Real. Both focus on problems like this and suggest techniques to tame ill-mannered computations.

Compilation optimization for iPhone : floating point or fixed point?

I'm building a library for iPhone (speex, but I'm sure it will apply to a lot of other libs too) and the make script has an option to use fixed point instead of floating point.
As the iPhone's ARM processor has the VFP extension and performs floating-point calculations very well, do you think it's a better choice to use the fixed-point option?
If someone has already benchmarked this and wants to share, I would be really grateful.
Well, it depends on the setup of your application; here are some guidelines:
First, try setting the optimization level to -Os (Fastest, Smallest).
Turn on Relax IEEE Compliance.
If your application can easily process floating-point numbers in contiguous memory locations independently, you should look at the ARM NEON intrinsics and assembly instructions; they can process up to 4 floating-point numbers in a single instruction.
If you are already heavily using floating-point math, try switching some of your logic to fixed point (but keep in mind that moving from a NEON register to an integer register results in a full pipeline stall); a fixed-point sketch follows after this list.
If you are already heavily using integer math, try changing some of your logic to floating-point math.
Remember to profile before optimizing.
And above all, better algorithms will always beat micro-optimizations such as the above.
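For what it's worth, fixed point just means carrying values as scaled integers. A minimal Q16.16 sketch in Python, purely to illustrate the idea (speex's fixed-point build uses its own formats, not this one):

    # Q16.16 fixed point: 16 integer bits, 16 fractional bits.
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS  # represents 1.0

    def to_fixed(x):
        return int(round(x * ONE))

    def fixed_mul(a, b):
        # Product of two Q16.16 values is Q32.32; shift back down.
        return (a * b) >> FRAC_BITS

    def to_float(a):
        return a / ONE

    x = to_fixed(1.5)
    y = to_fixed(2.25)
    print(to_float(fixed_mul(x, y)))  # 3.375, using integer ops only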
If you are dealing with large blocks of sequential data, NEON is definitely the way to go.
Float or fixed, that's a good question. NEON is somewhat faster dealing with fixed point, but I'd keep the native input format, since conversions take time and possibly extra memory.
Even if the lib offers different output formats as an option, it almost always means lib-internal conversions. So I guess float is the native one in this case. Stick to it.
No one prevents you from micro-optimizing better algorithms. And usually, the better the algorithm, the more performance gain can be achieved through micro-optimizations, due to the pipelining on modern machines.
I'd stay away from intrinsics, though. There are so many posts on the net complaining about intrinsics doing something crazy, especially when dealing with immediate values.
It can and will get very troublesome, and you can hardly optimize anything with intrinsics anyway.

16-bit float data type

Does anyone have any experience with using the 16-bit floating-point type in an application? This relatively new data type is used in computer graphics. It's defined by several specs: OpenEXR, DirectX, and the new IEEE 754-2008 standard.
At WinHEC 2008 Microsoft's Chas Boyd had a presentation evangelizing this data type. (I wasn't there, but I saw the slide deck.) "float-16 is the new byte".
My questions are: is anyone using this data type for anything outside of DirectX textures?
If so, why? What is your application doing?
If so, do you require full IEEE support, including denormals, NaNs and ±Inf?
I have encountered this type in the DSP libraries for uClinux, including a software implementation of all the major operations. It is very nice for 8- and 16-bit processors, and far easier to handle in software than 32-bit or larger types.
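If you just want to experiment with the format, Python's struct module has supported IEEE 754 binary16 through the 'e' format character since Python 3.6, which makes the type's limited precision easy to demonstrate:

    import struct

    def roundtrip_half(x):
        # Pack to IEEE 754 binary16 and back, exposing the rounding
        # imposed by the format's 11-bit significand.
        return struct.unpack('<e', struct.pack('<e', x))[0]

    print(roundtrip_half(0.1))      # 0.0999755859375
    print(roundtrip_half(2049.0))   # 2048.0 -- spacing is 2 above 2048
    print(roundtrip_half(65504.0))  # 65504.0, the largest finite value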