I have a Matlab program that outputs some binary variables based on some constraints.
As an example with three groups of n=3 bits, {x_1 x_2 x_3, x_4 x_5 x_6, x_7 x_8 x_9}, my program will output all these bits based on the constraints.
At this point, I have no objective function.
However, the goal of the objective function is to minimize the total Hamming distance (HD) between some of the n bits pairs.
Say I want to minimize HD(x_1 x_2 x_3, x_4 x_5 x_6) + HD(x_1 x_2 x_3, x_7 x_8 x_9).
Needless to say, n can vary as can the number of pairs compared for HD.
How do I perform this with intlinprog? I am not finding this helpful. A little bit of direction would do. Do I need to change my A,b,Aeq,etc?
There may be better implementations for this type of objective, but I like to go back to basics to understand what is going on. A simple approach is to think about what the Hamming distance means for a pair of binary variables. The Hamming distance will be zero if the variables have the same value (0 and 0, or 1 and 1), and the Hamming distance will be 1 if the variables don't have the same value.
Assume you have two binary variables V1 and V2. Create another variable Z and add the constraints:
Z >= V1 - V2
Z >= V2 - V1
Now Z must be greater than or equal to the Hamming distance between V1 and V2. So if we minimise Z we will minimise the Hamming distance. The generalisation to multiple pairs of variables is obvious - create a variable like Z for all pairs of variables like V1 and V2, and then minimise the sum of those Z variables.
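A sketch of how this construction could be wired into intlinprog for the n = 3 example above. The variable layout (bits first, then the auxiliary Z's) and the pairs matrix are just illustrative choices; your existing constraint matrices would need zero columns appended for the Z variables.

```matlab
% Sketch: variables 1..9 are the original bits, variables 10..15 are
% the auxiliary Z's (3 bits x 2 compared pairs).
n = 3;
pairs = [1 4; 1 7];               % first bit index of each compared group
nZ = n * size(pairs, 1);
nVar = 9 + nZ;

f = [zeros(9, 1); ones(nZ, 1)];   % objective: minimise the sum of the Z's

% Z >= Vi - Vj and Z >= Vj - Vi, rewritten as A*x <= b
A = zeros(2*nZ, nVar);
b = zeros(2*nZ, 1);
row = 0; z = 9;
for p = 1:size(pairs, 1)
    for k = 0:n-1
        i = pairs(p, 1) + k; j = pairs(p, 2) + k; z = z + 1;
        row = row + 1; A(row, [i j z]) = [ 1 -1 -1];  % Vi - Vj - Z <= 0
        row = row + 1; A(row, [i j z]) = [-1  1 -1];  % Vj - Vi - Z <= 0
    end
end

% Pad your existing A, b, Aeq, beq with zero columns for the Z variables,
% stack them with the rows built above, then solve with everything binary:
intcon = 1:nVar;
lb = zeros(nVar, 1); ub = ones(nVar, 1);
x = intlinprog(f, intcon, A, b, [], [], lb, ub);
```

So to answer the last part of the question: yes, you extend A and b (and pad Aeq) to cover the new Z columns; the original bit constraints stay as they were.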
I have a binary vector V, in which each entry describes success (1) or failure (0) in the relevant trial out of a whole session.
(the length of the vector denotes the number of trials in the session).
I can easily calculate the success rate of the session (by taking the mean of the vector i.e. (sum(V)/length(V))).
However I also need to know the variance or std of each session.
In order to calculate that, is it OK to use the Matlab std function (i.e. to take std(V)/length(V))?
Or, should I use something which is specifically suited for the binomial distribution?
Is there a Matlab std (or variance) function which is specific for a "success/failure" distribution?
Thanks
If you satisfy the assumptions of the Binomial distribution,
a fixed number of n independent Bernoulli trials,
each with constant success probability p,
then I'm not sure a distribution-specific function is necessary, since the parameters n and p are available from your data.
Note that we model number of successes (in n trials) as a random variable distributed with the Binomial(n,p) distribution.
n = length(V);
p = mean(V); % equivalently, sum(V)/length(V)
% the mean is the maximum likelihood estimator (MLE) for p
% note: need large n or replication to get true p
Then the standard deviation of the number of successes in n independent Bernoulli trials with constant success probability p is just sqrt(n*p*(1-p)).
Of course you can assess this from your data if you have multiple samples. Note this is different from std(V). In your data formatting, it would require having multiple vectors, V1, V2, V3, etc. (replication); then the sample standard deviation of the number of successes would be obtained from the following.
% Given V1, V2, V3 sets of Bernoulli trials
std([sum(V1) sum(V2) sum(V3)])
If you already know your parameters: n, p
You can obtain it easily enough.
n = 10;
p = 0.65;
pd = makedist('Binomial',n, p)
std(pd) % 1.5083
or
sqrt(n*p*(1-p)) % 1.5083
as discussed earlier.
Does the standard deviation increase with n?
The OP has asked:
Something is bothering me.. if std = sqrt(n*p*(1-p)), then it increases with n. Shouldn't the std decrease when n increases?
Confirmation & Derivation:
Definitions: let X_1, ..., X_n be independent Bernoulli(p) trials and let X = X_1 + ... + X_n be the number of successes, so that X ~ Binomial(n,p).
Then we know that Var(X_i) = p*(1-p) for each trial, and since the trials are independent,
Var(X) = Var(X_1) + ... + Var(X_n) = n*p*(1-p),
so just from the definitions of expectation and variance the variance increases with n. Since the square root is a non-decreasing function, the same relationship holds for the standard deviation.
(Note that the standard deviation of the proportion of successes X/n is sqrt(p*(1-p)/n), which does decrease as n grows; that is probably the quantity you have in mind.)
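A quick numeric illustration of the two scalings (p = 0.65 is an arbitrary choice): the standard deviation of the success count sqrt(n*p*(1-p)) grows with n, while the standard deviation of the success proportion sqrt(p*(1-p)/n) shrinks.

```matlab
p = 0.65;
for n = [10 100 1000]
    fprintf('n = %4d   std(count) = %7.3f   std(proportion) = %6.4f\n', ...
            n, sqrt(n*p*(1-p)), sqrt(p*(1-p)/n));
end
```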
I have a set of points describing a closed curve in the complex plane, call it Z = [z_1, ..., z_N]. I'd like to interpolate this curve, and since it's periodic, trigonometric interpolation seemed a natural choice (especially because of its increased accuracy). By performing the FFT, we obtain the Fourier coefficients:
F = fft(Z);
At this point, we could get Z back by the formula (where 1i is the imaginary unit, and we use (k-1)*(n-1) because MATLAB indexing starts at 1)
Z(n) = (1/N) * sum_{k=1}^{N} F(k)*exp(1i*2*pi*(k-1)*(n-1)/N),   1 <= n <= N.
My question
Is there any reason why n must be an integer? Presumably, if we treat n as any real number between 1 and N, we will just get more points on the interpolated curve. Is this true? For example, if we wanted to double the number of points, could we not set
Z_new(n) = (1/N) * sum_{k=1}^{N} F(k)*exp(1i*2*pi*(k-1)*(n-1)/N),   with n = 1, 1.5, 2, 2.5, ..., N-1, N-0.5, N?
The new points are of course just subject to some interpolation error, but they'll be fairly accurate, right? The reason I'm asking this question is because this method is not working for me. When I try to do this, I get a garbled mess of points that makes no sense.
(By the way, I know that I could use the interpft() command, but I'd like to add points only in certain areas of the curve, for example between z_a and z_b)
The point is that when n is an integer, the exponential functions in the formula are orthogonal and can serve as a basis for the space. When n is not an integer, these exponentials are no longer orthogonal, and hence the expansion of a function in terms of this non-orthogonal set is not as meaningful as you expected.
For the orthogonality condition, consider the inner product of two such exponentials over one period (from here):
integral from 0 to 1 of exp(1i*2*pi*n_1*t) * exp(-1i*2*pi*n_2*t) dt,
which is zero exactly when n_1 - n_2 is a nonzero integer. As you can check, if you pick two non-integer n_1 and n_2 whose difference is not an integer, this integral is no longer zero, and the functions are not orthogonal.
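You can verify this numerically with MATLAB's integral function; the inner product of two complex exponentials over one period vanishes only when their frequency difference is a nonzero integer:

```matlab
% Inner product of exp(1i*2*pi*k1*t) and exp(1i*2*pi*k2*t) on [0,1]
ip = @(k1, k2) integral(@(t) exp(1i*2*pi*(k1 - k2)*t), 0, 1);
abs(ip(2, 3))     % ~0: integer frequencies are orthogonal
abs(ip(2.5, 3))   % ~2/pi (about 0.64): not orthogonal
```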
I'm building up on my previous question because there is a further issue.
I have fitted in Matlab a normal distribution to my data vector: PD = fitdist(data,'normal'). Now I have a new data point coming in (e.g. x = 0.5) and I would like to calculate its probability.
Using cdf(PD,x) will not work because it gives the probability that the point is smaller than or equal to x (but not exactly x). Using pdf(PD,x) gives just the density, not the probability, and so it can be greater than one.
How can I calculate the probability?
If the distribution is continuous then the probability of any point x is 0, almost by definition of continuous distribution. If the distribution is discrete and, furthermore, the support of the distribution is a subset of the set of integers, then for any integer x its probability is
cdf(PD,x) - cdf(PD,x-1)
More generally, for any random variable X which takes on integer values, the probability mass function f(x) and the cumulative distribution F(x) are related by
f(x) = F(x) - F(x-1)
The right hand side can be interpreted as a discrete derivative, so this is a direct analog of the fact that in the continuous case the pdf is the derivative of the cdf.
I'm not sure if matlab has a more direct way to get at the probability mass function in your situation than going through the cdf like that.
In the continuous case, your question doesn't make a lot of sense since, as I said above, the probability is 0. Non-zero probability in this case is something that attaches to intervals rather than individual points. You still might want to ask for the probability of getting a value near x -- but then you have to decide on what you mean by "near". For example, if x is an integer then you might want to know the probability of getting a value that rounds to x. That would be:
cdf(PD, x + 0.5) - cdf(PD, x - 0.5)
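For instance, with your fitted PD and x = 0.5, the probability of a value within some window around x could be computed like this (the data here and the half-width 0.05 are made-up choices for illustration):

```matlab
% Illustrative only: fit a normal, then P(x - 0.05 < X <= x + 0.05)
data = 0.4 + 0.2*randn(1000, 1);     % stand-in for your data vector
PD = fitdist(data, 'normal');
x = 0.5;
p_near = cdf(PD, x + 0.05) - cdf(PD, x - 0.05)
```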
Let's say you have a random variable X that follows the normal distribution with mean mu and standard deviation s.
Let F be the cumulative distribution function for the normal distribution with mean mu and standard deviation s. The probability that the random variable X falls between a and b is P(a < X <= b) = F(b) - F(a).
In Matlab code:
P_a_b = normcdf(b, mu, s) - normcdf(a, mu, s);
Note: observe that the probability X is exactly equal to 0.5 (or any specific value) is zero! A range of outcomes will have positive probability, but any individual outcome has probability zero.
I have the following equation:

y = exp(A*u^2 + B*a^3)

I want to do an exponential curve fitting using MATLAB for the above equation, where y = f(u,a). y is my output while (u,a) are my inputs. I want to find the coefficients A and B for a set of provided data.
I know how to do this for simple polynomials by defining states. As an example, if states = [ones(size(u)), u, u.^2], this will give me L + M*u + N*u^2, with L, M and N being regression coefficients.
However, this is not the case for the above equation. How could I do this in MATLAB?
Building on what @eigenchris said, simply take the natural logarithm (log in MATLAB) of both sides of the equation. If we do this, we are in fact linearizing the equation in log space. In other words, given your original equation:

y = exp(A*u^2 + B*a^3)

We get:

log(y) = A*u^2 + B*a^3
However, this isn't exactly polynomial regression. This is more of a least-squares fitting of your points. Specifically, given a set of y values and a set of (u,a) pairs, you would build a system of equations and solve this system via least squares. In other words, given the set y = (y_0, y_1, ..., y_N) and (u,a) = ((u_0, a_0), (u_1, a_1), ..., (u_N, a_N)), where N is the number of points that you have, you would build your system of equations like so:

log(y_i) = A*u_i^2 + B*a_i^3,   i = 0, 1, ..., N
This can be written in matrix form:

[ log(y_0) ]   [ u_0^2   a_0^3 ]
[ log(y_1) ]   [ u_1^2   a_1^3 ]   [ A ]
[   ...    ] = [  ...     ...  ] * [ B ]
[ log(y_N) ]   [ u_N^2   a_N^3 ]
To solve for A and B, you simply need to find the least-squares solution. You can see that it's in the form of:
Y = AX
To solve for X, we use what is called the pseudoinverse. As such:
X = A^{*} * Y
A^{*} is the pseudoinverse. This can be done elegantly in MATLAB using the \ or mldivide operator. All you have to do is build a vector of y values with the log taken, as well as the matrix of u and a values. Therefore, if your points (u,a) are stored in the vectors u and a respectively, and the values of y are stored in y, you would simply do this:
x = [u.^2 a.^3] \ log(y);
x(1) will contain the coefficient A, while x(2) will contain the coefficient B. As A. Donda has noted in his answer (which I embarrassingly forgot about), the values of A and B are obtained assuming that the errors with respect to the exact curve you are trying to fit are normally (Gaussian) distributed with a constant variance. The errors also need to be additive. If this is not the case, then the parameters you obtain may not represent the best fit possible.
See this Wikipedia page for more details on what assumptions least-squares fitting takes:
http://en.wikipedia.org/wiki/Least_squares#Least_squares.2C_regression_analysis_and_statistics
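As a sanity check, here is a small synthetic example (the true values A = 2 and B = -1 are made up for illustration) showing that the backslash solve recovers the parameters when the data follow the model exactly:

```matlab
u = rand(50, 1); a = rand(50, 1);
y = exp(2*u.^2 - 1*a.^3);      % noise-free data generated from the model
x = [u.^2, a.^3] \ log(y);     % x is approximately [2; -1]
```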
One approach is to use a linear regression of log(y) with respect to u² and a³:
Assuming that u, a, and y are column vectors of the same length:
AB = [u .^ 2, a .^ 3] \ log(y)
After this, AB(1) is the fit value for A and AB(2) is the fit value for B. The computation uses Matlab's mldivide operator; an alternative would be to use the pseudo-inverse.
The fit values found this way are Maximum Likelihood estimates of the parameters under the assumption that deviations from the exact equation are constant-variance normally distributed errors additive to A u² + B a³. If the actual source of deviations differs from this, these estimates may not be optimal.
Look at the following plot and ignore the solid lines please (just look at the dotted/dashed ones).
For each curve, g is between [0, 255] (thus always positive), concave, bijective.
I know from the process that lies behind the measures, that by increasing V, the corresponding curve flattens.
The different curves result when varying V. The orange curve at the top is for like V=100, the bottom curve (red/magenta) results for V=180.
I have measured data with a lot more data points in the following form:
T[1] V[1] g[1]
T[2] V[1] g[2]
T[3] V[1] g[3]
...  V[1] ...
T[N] V[1] g[N]
.......
T[1] V[N] g[1]
T[2] V[N] g[2]
T[3] V[N] g[3]
...  V[N] ...
T[N] V[N] g[N]
Now I want a regression like this:
g = g(V, T)
which would yield the curve for a fixed V-value:
g = g(T), V=Vfix
Which regression function in MATLAB do you think would work best here?
And how to assume a "model" here?
I only know (from the process itself AND obviously from the plots) that it's some sort of linear curve at the beginning, passing over into a logarithmic curve, but I don't know how the value of V interferes with it!?
Thanks a lot in advance for any advice.
@bjoern, for each fixed V, it seems that your curve is concave and takes only positive values. Therefore, my first choice is to assume that Y = A*X^r. The easiest way to estimate this is to apply log to both sides to get the linear regression log Y = log A + r log X (you will probably find 0 < r < 1). Therefore, for each value of V, I would use the function regress in MATLAB applied to the values log Y and log X in order to estimate the parameters A and r. This functional form is called Cobb-Douglas and it is very useful in economics: http://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function.
For most curves, it seems that the effect of V is well behaved, but the behavior of the blue curve is very strange. I would say that in general the effect of V is to translate the points.
If the effect of V is really linear, maybe you can estimate Y = A*V*X^r. In that case, you have to estimate log Y = log A + log V + r log X, where your dependent variable is log Y and your independent variables are log X and log V.
In both cases, I think that the function regress in MATLAB does not automatically include the constant of the regression (A for us). So remember to include a vector of ones, with the size of your sample, as an independent variable as well.
Furthermore, if you really want to test if the behavior of V is linear, just estimate
log Y = log A + s log V + r log X, which is equivalent to Y = A*V^s*X^r.
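Putting the last variant together, a sketch with regress (Statistics Toolbox), assuming Y, X and V are column vectors of your measurements:

```matlab
% log Y = log A + s*log V + r*log X; the column of ones carries log A
Xmat = [ones(size(Y)), log(V), log(X)];
coef = regress(log(Y), Xmat);
A = exp(coef(1)); s = coef(2); r = coef(3);
```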
I hope it helps.