After a Polyspace code check I am getting the warning "conversion from int16 to unsigned int16 may overflow".
uint16 lData = 0x00u;
sint16 AnalogInputValue;
lData = (uint16)AnalogInputValue; /* this line causes the Polyspace error */
Should the type cast do the job? According to Polyspace, no :)
The following two lines do the same thing:
lData = AnalogInputValue;
lData = (uint16)AnalogInputValue;
Why? The target of the assignment, lData, is of type uint16, so the value stored in the variable AnalogInputValue has to be converted to uint16 in either case. The variable AnalogInputValue, however, is of type sint16.
The warning comes from the following fact: variables of type uint16 can hold values in the range 0..65535, but variables of type sint16 can typically hold values in the range -32768..32767. Therefore, if AnalogInputValue happens to hold a value in the range -32768..-1, that value cannot be represented by a uint16.
Therefore, before doing the assignment, you might add some code around it that checks that AnalogInputValue is not negative, i.e. that it holds a value in 0..32767; all of these values can be represented in uint16. For the other case, namely that the check reveals that AnalogInputValue is negative, you have to find some acceptable solution.
There is one potential third scenario here: you are 100% sure that AnalogInputValue will never hold a negative value, but the logic is too complex for Polyspace to deduce that fact, or the data comes from an external source (which seems to be the case here, given the name AnalogInputValue). Then adding an assertion before the assignment instructs Polyspace to make this assumption.
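A minimal sketch of both options (store_analog_value is a hypothetical wrapper, the fallback value in the negative branch is only a placeholder, and the typedefs assume the project maps uint16/sint16 to the stdint types):
#include <assert.h>
#include <stdint.h>

typedef uint16_t uint16;   /* assumption: project typedefs map to stdint types */
typedef int16_t  sint16;

void store_analog_value(sint16 AnalogInputValue, uint16 *lData)
{
    /* Option 1: explicit range check; values 0..32767 always fit in a uint16 */
    if (AnalogInputValue >= 0) {
        *lData = (uint16)AnalogInputValue;
    } else {
        *lData = 0u;   /* placeholder: substitute whatever is acceptable here */
    }

    /* Option 2: assert the invariant so Polyspace may assume non-negativity */
    assert(AnalogInputValue >= 0);
    *lData = (uint16)AnalogInputValue;
}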
You first need to ensure that AnalogInputValue does not contain a negative number before you cast it to uint16. If you do not do that, you risk losing data in the type cast.
Related
I use GNU Octave, version 6.4.0. Is there any command for getting the size of the usual data classes, at least for integral data types? Really I want to get the maximum possible value for the data type of a matrix which is related to an image. For example, for a uint8 image I it should return 255 for the argument class(I). I am looking for a builtin command and do not want to write a switch myself.
The intmax() function does what you want for integer types. Either intmax("uint8") or intmax(a) where a is of type uint8 (or any other integer type).
https://octave.sourceforge.io/octave/function/intmax.html
A similar function realmax() exists for floating point types.
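For example (I below is just a stand-in uint8 variable):
intmax("uint8")        % ans = 255
I = uint8(zeros(2));   % stand-in for a uint8 image
intmax(class(I))       % ans = 255, using the class name of the variable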
The value is an exact integer, not a floating point value to be doubted; also, it is not about an overflow, since a double value can hold values up to about 2^1024.
fprintf('%f',realmax)
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
The problem I am facing with the nchoosek function is that it doesn't produce exact values:
fprintf('%f\n',nchoosek(55,24));
2488589544741302.000000
This is an absolute error of 2, and the result is even inconsistent with the recurrence binomial(n,m) = binomial(n-1,m) + binomial(n-1,m-1), as follows:
fprintf('%f',nchoosek(55-1,24)+nchoosek(55-1,24-1))
2488589544741301.000000
PS: the exact value is 2488589544741300, as this demo shows.
What is wrong with MATLAB?
Your understanding of the realmax function is wrong. It is the maximum value which can be stored, but with such large numbers you have a floating point precision error far above 1. The first integer which cannot be stored in a double value is 2^53+1; try 2^53 == 2^53+1 for a simple example.
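Concretely:
2^53 == 2^53 + 1   % evaluates to true: 2^53 + 1 rounds back to 2^53 in double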
If the symbolic toolbox is available, the easiest solution to implement is to use it:
>> nchoosek(sym(55),sym(24))
ans =
2488589544741300
There is a difference between something that looks like an integer (55) and something that's actually an integer (in terms of variable type).
The way you're calculating it, your values are stored as floating point (which is what realmax is pointing you to: the largest positive floating point number; check intmax('int64') for the largest possible integer value), so you can get floating point errors. An absolute difference of 2 in a large value is not that unexpected; the actual percentage error is tiny.
Plus, you're using %f in your format string, i.e. asking it to display as floating point.
For nchoosek specifically, from the docs, the output is returned as a nonnegative scalar value, of the same type as inputs n and k, or, if they are different types, of the non-double type (you can only have different input types if one is a double).
In Matlab, when you type a number directly into a function input, it defaults to a double. You have to force it to be an integer.
Try instead:
fprintf('%d\n',nchoosek(int64(55),int64(24)));
Note: %d not %f, converting both inputs to specifically integer. The output of nchoosek here should be of type int64.
I don't have access to MATLAB, but since you're obviously okay working with Octave I'll post my observations based on that.
If you look at the Octave source code using edit nchoosek, you'll see that the equation for calculating the binomial coefficient is quite simple:
A = round (prod ((v-k+1:v)./(1:k)));
As you can see, there are k divisions, each with the possibility of introducing some small error. The next line attempts to be helpful and warn you of the possibility of loss of precision:
if (A*2*k*eps >= 0.5)
  warning ("nchoosek", "nchoosek: possible loss of precision");
endif
So, if I may slightly modify your final question, what is wrong with Octave? I would say nothing is wrong. The authors obviously knew of the possibility of imprecision and included a check to warn users when that possibility arises. So the function is working as intended. If you require greater precision for your application than the built-in function provides, it looks as though you'll need to code (or find) something that calculates the intermediate results with greater precision.
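For instance, here is a minimal sketch of such a function (nchoosek_exact is a hypothetical helper name, not a builtin; it assumes the result and every intermediate product fit in uint64):
function b = nchoosek_exact (n, k)
  % Exact binomial coefficient using integer arithmetic.
  % Each division below is exact, because after step i the running
  % product equals C(n-k+i, i), which is always an integer.
  k = min (k, n - k);                        % symmetry: C(n,k) == C(n,n-k)
  b = uint64 (1);
  for i = 1:k
    b = b * uint64 (n - k + i) / uint64 (i); % exact at every step
  end
end
With this, fprintf('%d\n', nchoosek_exact(55, 24)) prints 2488589544741300, matching the exact value quoted above.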
I have discovered an inconsistency for uint64 when using vectors in Matlab. It seems as if an array of uint64 is not exact over all 64 bits. This did not give the output I expected:
p=uint64([0;0]);
p(1)=13286492335502040542
p =
13286492335502041088
0
However
q = uint64(13286492335502040542)
q =
13286492335502040542
does. It also works with
p(1)=uint64(13286492335502040542)
p =
13286492335502040542
0
Working with unsigned integers one expects special behaviour and usually also perfect precision. This seems weird and even a bit uncanny. I do not see this problem with smaller numbers. Does anyone know more? I do not expect this to be an unknown problem, so I guess there must be some explanation for it. It would be good to know why this happens, and when, so as to be able to avoid it. As usual, this kind of issue is mentioned nowhere in the documentation.
Matlab 2014a, Windows 7.
EDIT
It is worth mentioning that I can see the same behaviour when defining arrays directly.
p=uint64([13286492335502040542;13286492335502040543])
p =
13286492335502041088
13286492335502041088
This is the root of why I ask this question. I find it hard to see a workaround for this case.
While it might be surprising, this is a floating point precision issue. :-)
The thing is, all numeric literals are by default of type double in MATLAB; that's why:
13286492335502040542 == 13286492335502041088
will return true; the floating point representation in double precision of 13286492335502040542 is 13286492335502041088. Since p has the class uint64, all assignments done to it will cast the right-hand-side to its class.
On another hand, the uint64(13286492335502040542) "call" will be optimized by the MATLAB interpreter to avoid the overhead of calling the uint64 function for the double argument, and will convert the literal directly to its unsigned integer representation (which is exact).
On a third hand [sic], the function call optimization doesn't apply to
p = uint64([13286492335502040542;13286492335502040543])
because the argument of uint64 is not a literal, but the result of an expression, i.e. the result of the vertcat operator applied to two double operands. In this case the MATLAB interpreter is not smart enough to figure out that the two function calls should "commute" (concatenation of uint should be the same as uint of concatenation), so it evaluates the concatenation (which gives an array of two equal doubles, because of FP precision) and then converts the two identical double values to uint64.
TLDR: the difference between
p = uint64(13286492335502040542);
and
u = 13286492335502040542; p = uint64(u);
is a side effect of function call optimization.
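A minimal sketch reproducing the behaviour described above (the second part relies on the interpreter optimization, so results may vary by MATLAB version):
x = 13286492335502040542;           % the literal alone is stored as a double
fprintf('%.0f\n', x)                % prints 13286492335502041088
q = uint64(13286492335502040542);   % literal passed straight to uint64
fprintf('%d\n', q)                  % prints 13286492335502040542 (exact)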
Matlab, unless told otherwise, reads numbers as double, then casts to the relevant datatype. The Matlab double datatype has 52 bits for the floating point fraction (mantissa), giving the possibility to store 53-bit integers without loss of precision. Notice that 13286492335502041088 is just 13286492335502040542 rounded to the nearest representable double; its last 12 bits are zero.
The solution, as you said, is to convert the literals directly: uint64(13286492335502040543).
p=uint64([13286492335502040542;13286492335502040543]) does not work because it first creates a double array and then converts it to uint64.
This issue is mentioned in the uint64 documentation, under 'More About', although it doesn't mention that literals are read as doubles unless otherwise specified.
I agree this seems weird and I don't have an explanation. I do have a workaround:
p=[uint64(13286492335502040542);uint64(13286492335502040543)]
i.e., cast the separate values to uint64s.
Typically in Matlab, colours are represented by three-element vectors of RGB intensity values, with precision uint8 (range 0 - 255) or double (range 0 - 1). Matlab functions such as imshow work with either representation, making both easy to use in a program.
It is equally easy, however, to introduce a bug when assigning colour values from a matrix of one type to that of another (because the value is converted silently, but not re-scaled to the new range). Having just spent a number of hours finding such a bug, I would like to make sure it is never introduced again.
How do I make Matlab display a warning when type conversion takes place?
Ideally it would only be when the conversion is between double and uint8. It should also be difficult to deactivate (i.e. the option is not reset when loading a workspace, or when matlab crashes).
A possible solution is to define your own uint8 function that casts to uint8 and issues a warning if some value has been truncated.
You should place this function in a folder where it shadows the builtin uint8 function. For example, your user folder is a good choice, as it usually appears first in the path.
Or, as noted by Sam Roberts, if you want this function to be called only when converting from double into uint8 (not when converting from any other type into uint8), put it in a folder named @double within your path.
function y = uint8(x)
y = builtin('uint8', x);   % perform the actual cast
if any(x(:) > 255) || any(x(:) < 0)
    warning('MATLAB:castTruncation', 'Values truncated during conversion to uint8')
end
The warning is on by default. You can switch it on or off with the commands warning('on','MATLAB:castTruncation') and warning('off','MATLAB:castTruncation') (thanks to CitizenInsane for the suggestion).
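For example, with the shadowed function on the path:
uint8([0 128 255])   % in range: converts silently, no warning
uint8([-1 300])      % out of range: warns 'Values truncated during conversion to uint8'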
The find function in Matlab returns the indices at which the given logical argument evaluates to true.
Thus I'm wondering why the return values (the indices) are of type double, and not uint32 or uint64, like the biggest possible index into a matrix.
Another strange thing, which might be connected to this, is that running
[~,max_num_of_elem]=computer
returns the maximal number of elements allowed for a matrix in the variable max_num_of_elem, which is also of type double.
I can only guess, but it is probably because a wide range of functions only supports double. Run
setdiff(methods('double'), methods('uint32'))
to see what functions are defined for double and not for uint32 on your version of MATLAB.
Also, there is the overflow behaviour of integer data types in MATLAB (they saturate at intmax/intmin rather than wrapping), which can introduce some hard-to-detect bugs.
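For example:
uint8(200) + uint8(100)   % ans = 255: saturates at intmax('uint8') instead of wrapping to 44
int8(-100) - int8(100)    % ans = -128: saturates at intmin('int8')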