I just want to know whether these two MiniZinc solvers use infinite-precision arithmetic by default.
Neither Gecode nor the G12 solvers support infinite precision. Both of these solvers, and all other MiniZinc solvers that I'm aware of, only support floating-point arithmetic. This is partly because the MiniZinc compiler itself does not support infinite precision (see How to obtain an exact infinite-precision representation of rational numbers via a non-standard FlatZinc extension?).
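To illustrate why the distinction matters, here is a generic floating-point demonstration (shown in MATLAB only because the rest of this page uses it; nothing here is MiniZinc-specific): binary floating point cannot represent most decimal fractions exactly, so equalities that hold mathematically fail after rounding.

0.1 + 0.2 == 0.3          % returns logical 0: each decimal literal is rounded
fprintf('%.20f\n', 0.1)   % 0.10000000000000000555..., the nearest double to 1/10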
It's clear that float16 can save bandwidth, but can float16 also save compute cycles when computing transcendental functions like exp()?
If your hardware has full support for it, not just conversion to float32, then yes, definitely: e.g. on a GPU, or on Intel Alder Lake with AVX-512 enabled, or on Sapphire Rapids.
See Half-precision floating-point arithmetic on Intel chips. Apparently Apple M2 CPUs support it too.
If you can do two 64-byte SIMD vectors of FMAs per clock on a core, you go twice as fast if that's 32 half-precision FMAs per vector instead of 16 single-precision FMAs.
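To put numbers on that (assuming two 512-bit FMA units per core, which is what those figures imply): counting each FMA as 2 FLOPs, that's 2 vectors × 16 lanes × 2 = 64 float32 FLOPs per clock, versus 2 × 32 × 2 = 128 float16 FLOPs per clock, at the same instruction throughput.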
Speed vs. precision tradeoff: only enough precision for FP16 is needed
Without hardware ALU support for FP16, you can only save cycles by not requiring as much precision, because you know you're eventually going to round to fp16. So you'd use polynomial approximations of lower degree, and thus fewer FMA operations, even though you're computing with float32.
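As a rough illustration of how the required degree drops with the target precision, here is a MATLAB sketch using a least-squares fit via polyfit (real math libraries use minimax polynomials, which do somewhat better; the interval [0, log 2] is a typical range-reduction target for exp):

x = linspace(0, log(2), 1000);          % typical reduced interval for exp
for deg = 2:6
    p = polyfit(x, exp(x), deg);        % least-squares polynomial fit
    err = max(abs(polyval(p, x) - exp(x)));
    fprintf('degree %d: max abs error %.2e\n', deg, err);
end
% fp16 rounds at ~2^-11 (about 4.9e-4), so a degree as low as 3 already
% suffices here; fp32 rounds at ~2^-24 (about 6e-8) and needs several
% more terms, i.e. several more FMAs per element.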
BTW, exp and log are interesting for floating point because the format itself is built around an exponential representation. So you can compute an exponential by converting fp->int and stuffing that integer into the exponent field of an FP bit pattern. Then, with the fractional part of your FP number, you use a polynomial approximation to get the mantissa of the result. A log implementation is the reverse: extract the exponent field and use a polynomial approximation of log of the mantissa, over a range like 1.0 to 2.0. (A small numeric sketch follows the links below.)
See:
Efficient implementation of log2(__m256d) in AVX2
Fastest Implementation of Exponential Function Using AVX
Very fast approximate Logarithm (natural log) function in C++?
vgetmantps vs andpd instructions for getting the mantissa of float
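To make the exponent/mantissa split concrete, here is a small MATLAB sketch that pulls apart an IEEE-754 single the way a log implementation's range reduction would (typecast gives bit-level access here; a real SIMD version would use integer instructions or vgetmantps/vgetexpps):

x = single(6.0);                          % 6.0 = 1.5 * 2^2
bits = typecast(x, 'uint32');
expField = bitshift(bitand(bits, uint32(hex2dec('7F800000'))), -23);
exponent = double(expField) - 127        % unbiased exponent: 2
mantBits = bitor(bitand(bits, uint32(hex2dec('007FFFFF'))), ...
                 uint32(hex2dec('3F800000')));   % force exponent field to 2^0
mantissa = typecast(mantBits, 'single')  % 1.5, always in [1.0, 2.0)
% log(x) = exponent*log(2) + log(mantissa); only log(mantissa) needs a
% polynomial, over the narrow range [1.0, 2.0). exp() runs this in reverse.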
Normally you do want some FP operations, so I don't think it would be worth trying to use only 16-bit integer operations to avoid unpacking to float32 even for exp or log, which are somewhat special and intimately connected with floating point's significand * 2^exponent format, unlike sin/cos/tan or other transcendental functions.
So I think your best bet would normally still be to start by converting fp16 to fp32, unless you have instructions like AVX-512 FP16 that can do actual FP math on fp16 directly. But you can still gain performance from not needing as much precision, since implementing these functions normally involves a speed vs. precision tradeoff.
I am working on a Simulink model that involves floating-point data types. Using the Fixed-Point Tool available in Simulink, I am trying to convert my floating-point system to a fixed-point one. I am following the tutorial available here to achieve the conversion.
Link to the tutorial on converting the floating-point system to the fixed point
In the data type proposal step, I got underflow values for some of the variables. My question is how to bring those underflowing values into range as well. Or can I ignore them and proceed with the further steps? In general, how does one tackle this kind of underflow/overflow issue?
Using fixed-point arithmetic can be faster and use less resources than floating-point arithmetic, but a significant disadvantage is that underflow and overflow are not handled gracefully. If you try to detect and recover from these conditions you will lose much of the advantage provided by fixed-point.
In practice, you should select a fixed-point format for your variables that provides enough bits for the integer part (the bits to the left of the radix point) so that overflow cannot occur. This requires careful analysis of your algorithms and the potential ranges of all variables. Your format should also provide enough fraction bits (to the right of the radix point) so that underflows do not cause significant problems with your algorithm.
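For example (a sketch requiring the Fixed-Point Designer toolbox; the 16-bit format here is purely for illustration):

a = fi(pi, 1, 16, 12)      % signed, 16-bit word, 12 fraction bits
range(a)                   % representable range: [-8, 8 - 2^-12]
eps(a)                     % resolution: 2^-12, about 2.4e-4
fi(1e-6, 1, 16, 12)        % underflows to 0: the value is below one step
fi(20, 1, 16, 12)          % saturates at the upper range limit by default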
I am wondering how to tell MATLAB that all computations should be done and carried forward with, let's say, 4 digits.
format long and the other format settings, as I understand them, only control how results are displayed with a specific precision, not the precision of the values themselves.
Are you hoping to increase performance or save memory by not using double precision? If so, then variable precision arithmetic and vpa (part of the Symbolic Math toolbox) are probably not the right choice. Instead, you might consider fixed point arithmetic. This type of math is mostly used on microcontrollers and architectures with memory address widths of less than 32 bits or that lack dedicated floating point hardware. In Matlab you'll need the Fixed Point Designer toolbox. You can read more about the capabilities here.
Use digits: http://www.mathworks.com/help/symbolic/digits.html
digits(d) sets the current VPA accuracy to d significant decimal digits. The value d must be a positive integer greater than 1 and less than 2^29 + 1.
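A short usage sketch (the comments show what vpa displays at each setting):

digits(4)          % all subsequent vpa calls carry 4 significant digits
vpa(pi)            % 3.142
vpa(sym(1)/3)      % 0.3333
digits(32)         % restore the default precision when needed
vpa(pi)            % 3.1415926535897932384626433832795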
I have to solve a nonlinear system of 2 equations with 2 unknowns in MATLAB. I used to solve such systems using vpasolve, but someone told me that this method isn't very efficient, that I should not overuse symbolic programming in MATLAB, and that I should use fsolve instead. Does this hold true every time? What are the differences between fsolve and vpasolve in terms of precision and performance?
Basically, that's the question of when to use variable-precision arithmetic (vpa) vs floating-point arithmetic. Floating-point arithmetic uses a constant precision; the most common type is the 64-bit double, which is supported natively by your CPU and can thus be executed fast. When you need higher precision than double offers, you could switch to a larger bit length, but this requires you to know in advance which precision you need. vpa allows you to do this the other way round: using digits you specify the precision of the result, and the Symbolic Math toolbox performs all intermediate steps with sufficient precision.
An example where fsolve produces a significant error:
f = @(x) log(log(log(exp(exp(exp(x+1)))))) - exp(1);  % analytically x + 1 - e
vpasolve(f(sym('x')))   % symbolic solve: returns e - 1 to full vpa precision
fsolve(f, 0)            % double precision: the nested exp/log round trips
                        % amplify rounding error, so the root is visibly off
Can someone please explain this (MATLAB's fixed-point arithmetic support)? As I understand it, it provides less precision. Is it a speed-up that one wishes to get by using it? When is it good to use? Should I use it in MATLAB Coder?
Not all the computers in the world use floating-point arithmetic. In particular, many devices which have a connection to the world (such as sensors and the computers which process their data) use fixed-point representations of numbers. Some researchers into algorithms and similar matters also want to use fixed-point numbers. Matlab's fixed-point toolbox allows its users to do fixed-point arithmetic on their PCs, and to write code targeted at execution on devices which implement it.
It's not (necessarily) true that MATLAB's fixed-point arithmetic provides less precision; it can be used to provide more precision than IEEE floating-point types.
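For instance (illustrative format; needs Fixed-Point Designer): over a narrow, known range, a fixed-point type can spend nearly all its bits on the fraction and resolve finer steps than a double can near the same magnitude:

a = fi(0.1, 1, 64, 60);    % signed 64-bit word, 60 fraction bits
eps(a)                     % step size 2^-60, about 8.7e-19
eps(0.1)                   % double's step near 0.1 is 2^-56, about 1.4e-17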
Is it a speed-up? That's beside the point (read on).
When is it good to use? When you need fixed-point arithmetic. I'm not sure anyone would recommend it as a general-purpose replacement for floating-point arithmetic.
Should you use it? Your question suggests the answer is almost certainly 'no': if you ought to be using fixed-point arithmetic, you would already know it.