Assign a standard tolerance to the isequal function in MATLAB

I am using MATLAB 2012b and came across a simple problem when using the function isequal. I have to round floating-point values, but I got some basic errors because of a few exceptional cases.
In my file the values are rounded to the nearest integer in most cases, but there are exceptions. For example, if I have a variable with the value
a = X.4675
it is rounded to X in most cases, but in a few cases it is rounded to X+1.
My task is just to compare and check equality, and the comparison should be true in both the X and X+1 cases. Hence, I need something like isequal with a tolerance of 1:
isequal({b1, b2, b3, b4},{B1, B2, B3, B4})
b1, b2, ... are the values after rounding the originals, and B1, B2, ... are the standard values to compare against. I now want to allow a tolerance of 1, i.e. accept
B1 == round(b1) or B1 == round(b1)+1
Note: the B1, B2, ... values are standard; I need to compare all of them at once.
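For reference, a minimal sketch of such a tolerance-based check (assuming b1..b4 and B1..B4 are numeric scalars; isequal itself takes no tolerance argument in R2012b) could look like this:
% Hedged sketch: compare element-wise with an absolute tolerance of 1
% instead of the exact comparison performed by isequal.
b = [b1, b2, b3, b4];      % rounded values
B = [B1, B2, B3, B4];      % standard reference values
withinTol = all(abs(b - B) <= 1)   % true if every pair differs by at most 1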

Related

What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits?

I recently came across tf.nn.sparse_softmax_cross_entropy_with_logits and I can not figure out what the difference is compared to tf.nn.softmax_cross_entropy_with_logits.
Is the only difference that training vectors y have to be one-hot encoded when using sparse_softmax_cross_entropy_with_logits?
Reading the API, I was unable to find any other difference compared to softmax_cross_entropy_with_logits. But why do we need the extra function then?
Shouldn't softmax_cross_entropy_with_logits produce the same results as sparse_softmax_cross_entropy_with_logits, if it is supplied with one-hot encoded training data/vectors?
Having two different functions is a convenience, as they produce the same result.
The difference is simple:
For sparse_softmax_cross_entropy_with_logits, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in range [0, num_classes-1].
For softmax_cross_entropy_with_logits, labels must have the shape [batch_size, num_classes] and dtype float32 or float64.
The labels used in softmax_cross_entropy_with_logits are the one-hot version of the labels used in sparse_softmax_cross_entropy_with_logits.
Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.
I would just like to add two things to the accepted answer that you can also find in the TF documentation.
First:
tf.nn.softmax_cross_entropy_with_logits
NOTE: While the classes are mutually exclusive, their probabilities
need not be. All that is required is that each row of labels is a
valid probability distribution. If they are not, the computation of
the gradient will be incorrect.
Second:
tf.nn.sparse_softmax_cross_entropy_with_logits
NOTE: For this operation, the probability of a given label is
considered exclusive. That is, soft classes are not allowed, and the
labels vector must provide a single specific index for the true class
for each row of logits (each minibatch entry).
Both functions compute the same result; sparse_softmax_cross_entropy_with_logits computes the cross entropy directly on the sparse labels instead of requiring them to be converted with one-hot encoding.
You can verify this by running the following program:
import tensorflow as tf
from random import randint

dims = 8
pos = randint(0, dims - 1)

logits = tf.random_uniform([dims], maxval=3, dtype=tf.float32)
labels = tf.one_hot(pos, dims)

res1 = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
res2 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=tf.constant(pos))

with tf.Session() as sess:
    a, b = sess.run([res1, res2])
    print(a, b)
    print(a == b)
Here I create a random logits vector of length dims and generate one-hot encoded labels (the element at pos is 1 and the others are 0).
After that I calculate the softmax and sparse softmax cross entropies and compare their outputs. Try rerunning it a few times to make sure that it always produces the same output.

Multiply two variables in Matlab with vpa - high precision

I want to be sure that two variables, a and b, are multiplied with high precision, i.e., perform the product c of a and b with arbitrary precision (in my case 50 correct decimal digits):
a = vpa(10/3,50);
b = vpa(7/13,50);
c = eval(vpa(vpa(a,50)*vpa(b,50),50)); % I basically want to do just c = a*b;
which gives me
a = 3.3333333333333333333333333333333333333333333333333
b = 0.53846153846153846153846153846153846153846153846154
c = 1.7948717948717948717948717948718
Testing
d = eval(vpa(c*13,50))
gives
d = 23.333333333333333333333333333333333333335292490585
which shows that the multiplication to get c was not carried out with 50 significant digits.
What's wrong here, but, more importantly, how do I get a correct result for a*b and for other operations such as exp?
First, you should use vpa('7/13',50) or vpa(7,50)/13 to avoid the possibility of losing precision due to 7/13 first being calculated in double-precision floating point (I believe that vpa, like sym, tries to guess common constants and rational fractions, but you shouldn't rely on it).
The issue is that while a and b are stored as 50-digit variable precision values, your multiplication is still being performed according to the default value of digits (32). The second argument to vpa only appears to specify the precision of the variable, not any subsequent operations on or with it (the documentation is not particularly helpful in this respect).
One way to accomplish what you want would be:
old_digits = digits(50);
a = vpa('10/3')
b = vpa('7/13')
c = a*b
d = 13*c
digits(old_digits);
Another would be to use exact symbolic expressions for all of the math (potentially more expensive) and then convert the result to 50-digit variable precision at the end:
a = sym('10/3')
b = sym('7/13')
c = a*b
d = vpa(13*c,50)
Both methods return 23.333333333333333333333333333333333333333333333333 for d.
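As for the follow-up about operations such as exp: the same digits-based approach should apply, since elementary functions of a vpa argument are evaluated at the current value of digits. A small, untested sketch:
old_digits = digits(50);
x = vpa('10/3');
y = exp(x)            % exp evaluated with 50 significant digits
digits(old_digits);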

MATLAB function for sine curves

I have a question about MATLAB programming involving sine curves.
The question is as follows:
Consider the definition: function [s1, s2, sums] = sines(pts,amp,f1,f2). The input, pts, is an integer, but amp, f1, and f2 are not necessarily integers. Output argument s1 is a row vector whose length (number of elements) equals pts. The elements of s1 are the values of the sine function when it is given equally spaced arguments that start at zero and extend through f1 periods of the sine. (Note that we ask for full periods, so if f1 is an integer, both the first and the last element of s1 will be 0, apart from a very small rounding error.) The amplitude of the sine wave equals amp. The vector s2 is the same as s1 except that s2 contains f2 periods. The vector sums is the sum of s1 and s2. If f2 is omitted, then it should be set to a value that is 5% greater than f1. If f1 is omitted also, then it should be set to 100. If amp is not provided, then it should default to 1. Finally, if pts is omitted as well, then it should be set to 1000.
Here is what I am confused about: how to define the step length from pts. I used the following method, but it fails to work. Please help me fix it.
function [s1, s2, sums] = sines(pts,amp,f1,f2)
.................
t = linspace(0, 1, pts);
s1=amp*sin(2*pi*f1*t);
s2=amp*sin(2*pi*f2*t);
Thanks.
As far as the part of the code you are confused about, this should work for you:
n = pts - 1;
t = 0:n;
s1 = amp*sin(2*pi*f1/n*t);
s2 = amp*sin(2*pi*f2/n*t);
Then sums = s1 + s2. You still need to handle the missing inputs, if any; see the sketch below.
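Putting the assignment text and this answer together, a minimal, untested sketch of the complete function (including the default handling) might look like this:
function [s1, s2, sums] = sines(pts, amp, f1, f2)
    % Defaults taken from the assignment text above.
    if nargin < 1, pts = 1000;    end
    if nargin < 2, amp = 1;       end
    if nargin < 3, f1  = 100;     end
    if nargin < 4, f2  = 1.05*f1; end   % 5% greater than f1
    t  = linspace(0, 1, pts);  % pts equally spaced points spanning the full periods
    s1 = amp*sin(2*pi*f1*t);   % f1 full periods, amplitude amp
    s2 = amp*sin(2*pi*f2*t);   % f2 full periods, amplitude amp
    sums = s1 + s2;
end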

Is there a built-in way to hash, or digest, or string-serialize arbitrary cell array entries?

Short version:
I'm looking for a built-in hashing function u such that the expressions isequal(u(A), u(B)) and isequal(A, B) always produce the same result, for any values A and B.
(Or, less formally: two values A and B should be equal if and only if their u-transforms are equal.)
Long version:
Some code of mine applies unique(..., 'stable') to an input table X as follows:
[~, IX, IZ] = unique(X, 'stable');
Unfortunately, this code fails if any column X.(j) of X violates the constraint
~iscell(X.(j)) || iscellstr(X.(j))
Granted, this behavior is as described in the documentation for unique, but it narrows the scope of my code unnecessarily. After all, all I need are the index vectors IX and IZ.
I'd like to get around this restriction by pre-processing X to generate an intermediate table Y in which every column X.(j) of X that violates the constraint above is replaced by one in which every entry has been replaced by a suitable value compatible with unique's limitations. More specifically, I'm looking for some transform u such that, for any column X.(j) of X,
isequal(u(X(i1, j)), u(X(i2, j))) is equivalent to isequal(X(i1, j), X(i2, j)), for any pair of row indices i1 and i2; and
u(X.(j)) is a suitable argument for unique.
(The first condition above can be stated as: two column entries are equal if and only if their u-transforms are equal.)
(FWIW, as far as my application goes, the columns of X may be safely assumed not to contain NaN or <undefined> values.)
There are many possible ways one can envision implementing such a transform u, but I'm not sure how best to go about it in MATLAB.
Through Google, I've found a few 3rd-party functions that may fit the bill, but if there's a built-in alternative, I'd prefer to go with that.
As noted in this blog post, you can use the undocumented built-in getByteStreamFromArray:
x = {1, 1, 1, 2, 2, 3, 'foo', 'foo', 'bar', 1e7};
% unique(x); % error
if ~(~iscell(x) || iscellstr(x))
    y = cellfun(@(c) char(getByteStreamFromArray(c)), x, 'UniformOutput', false);
end
[vals, inds] = unique(y, 'stable');
xUnique = cellfun(@(c) getArrayFromByteStream(uint8(c)), vals, 'UniformOutput', false)
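Applied to the table use case from the question, a hedged, untested sketch (the per-column loop is mine, not from the linked post) could be:
% Replace only the columns that unique cannot handle with per-entry
% byte-stream digests, then take unique rows as before.
u = @(v) char(getByteStreamFromArray(v));   % undocumented built-in
Y = X;
for j = 1:width(X)
    col = X.(j);
    if iscell(col) && ~iscellstr(col)
        Y.(j) = cellfun(u, col, 'UniformOutput', false);
    end
end
[~, IX, IZ] = unique(Y, 'stable');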

Finding values that aren't in an array in matlab/octave [duplicate]

I have two arrays in MATLAB/Octave: a1 is calculated and a2 is given. How can I create a third array a3 that compares a1 to a2 and shows the values that are missing from a1?
a1 = [1,4,5,8,13]
a2 = [1,2,3,4,5,6,7,8,9,10,11,12,13]
a3 = [3,6,7,9,10,11,12]
Also, can this work for floating-point numbers, say if a1 = [1,4,5,8.6,13], or would I have to convert a1 to integers only?
Thanks
setdiff returns the elements of one array that aren't in another. This works with floating-point values, but requires exact equality.
a3 = setdiff(a2, a1)
Alternatively, you can write the check yourself:
function missing = comparray(a1, a2)
% array of numbers that are missing from the input
missing = [];
% for each element of a2, check whether it appears in a1
for ii = 1:length(a2)
    num = a2(ii);
    deltas = abs(a1 - num);
    if min(deltas) ~= 0
        missing = [missing, num];
    end
end
end
Floating-point numbers can be tricky. To get the above code to work with them, check min(deltas) > 0.001 (or a suitably small value given the precision of your input numbers). For more information, see here.
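In newer MATLAB releases (R2015a and later; plain Octave may not provide it), ismembertol offers a built-in tolerance-based alternative. A small, untested sketch:
a1 = [1, 4, 5, 8.6, 13];
a2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13];
tol = 1e-3;                                          % absolute tolerance
a3 = a2(~ismembertol(a2, a1, tol, 'DataScale', 1))   % elements of a2 with no match in a1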