From a computation in Mathematica I got a huge matrix with numbered variables of the form a[1], ..., a[100] in some of the entries. I would like to import this matrix as a template into MATLAB and then substitute normally distributed random numbers in place of the variables. I am completely unfamiliar with the support for symbolic variables in MATLAB and am not sure whether it supports indexed symbolic variables. I would need some function that searches for the a[k] and replaces them with a random number.
In Mathematica I have square matrices of size 2^n which get more and more sparse as n grows and depend on 5*n (still symbolic) variables a[k]. For n=2 the matrix is not yet sparse at all and looks like this (in Mathematica code):
{{a[3] + a[3], a[7] - I a[8], a[10], I a[8]},
{I a[8], +a[6], I a[5], -I a[9] - a[8]},
{a[7] + I a[8], +a[2], I a[5], -a[7]},
{I a[8], a[2], a[2] + I a[15], -a[8]}}
There exists a script ToMatlab which converts the Mathematica notation for matrices to the MATLAB notation. I have essentially complete freedom in renaming the variables to whatever is most suitable for use in MATLAB. Now I would like to create a function in MATLAB which returns this exact matrix (a fixed n would be sufficient for now, so the matrix is really fixed) and replaces the a[k] with normally distributed random numbers.
Assuming you have a cell array of strings (much bigger than this simple example):
templateCells = {'a1','a2';'a2','a4'};
Then you can replace 100 values with normally distributed random numbers as follows (obviously you will want to replace constants etc with the values you need):
newMatrix = zeros(size(templateCells));
% Generate 100 random numbers:
mu = 123.45;      % mean (avoid shadowing the built-in mean function)
stdDev = 21.0;
N = 100;
randVals = randn(1,N) * stdDev + mu;
for ii = 1:N
    indx = find(ismember(templateCells, sprintf('a%d',ii)));
    newMatrix(indx) = randVals(ii);
end
The new values will be in the matrix newMatrix.
This is not super efficient, but it may be a start, and it may get other answers flowing (if you can confirm that this does indeed do what you intend - I'm still not 100% sure I understand your question).
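Since the n = 2 template from the question is fixed, one alternative sketch is to hard-code it as an anonymous function of a coefficient vector and draw the a(k) with randn (this assumes each a(k) should be an independent standard normal value):
% Sketch: encode the fixed n = 2 template from the question as a function of
% a coefficient vector a, then substitute standard normal draws.
makeA = @(a) [ 2*a(3),         a(7) - 1i*a(8),  a(10),            1i*a(8);
               1i*a(8),        a(6),            1i*a(5),         -1i*a(9) - a(8);
               a(7) + 1i*a(8), a(2),            1i*a(5),         -a(7);
               1i*a(8),        a(2),            a(2) + 1i*a(15), -a(8) ];
a = randn(15, 1);      % one draw per symbolic a(k); indices up to 15 appear above
A = makeA(a);          % numeric 4-by-4 matrix with the a(k) substituted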
Does anybody know an efficient way to obtain such a result (a random vector in which at least X% of the entries are zero)?
I could of course use rand(n,1) and then iterate over the array, replacing values with zeros until the number of zeros is sufficient (as written above, at least X% zeros; it can be more, but not less).
I would prefer a completely random distribution, but a uniform distribution would also be fine. (I have no real clue how the distribution affects the result.)
(Currently using MATLAB 2017a)
Use A = rand(n,1); to generate a random vector, then use num = randi([round(n*proc), n]); to determine how many of the numbers should be changed to 0 (proc is the minimum fraction of numbers which should be 0). Substitute them with 0 via A(1:num) = 0; and then shuffle the vector with B = A(randperm(n));.
In total:
A = rand(n,1);                      % random vector of length n
proc = 0.3;                         % minimum fraction of zeros
num = randi([round(n*proc), n]);    % how many entries to set to zero
A(1:num) = 0;
B = A(randperm(n));                 % shuffle so the zeros land in random positions
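An equivalent sketch that zeroes a random subset of positions directly, without the separate shuffle step (same n and proc as above; randperm(n,num) picks num distinct positions at random):
A = rand(n,1);                      % random vector of length n
num = randi([round(n*proc), n]);    % number of entries to zero (at least proc*n)
A(randperm(n, num)) = 0;            % zero num randomly chosen positions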
I have a 151-by-151 matrix A. It's a correlation matrix, so there are 1s on the main diagonal and repeated values above and below the main diagonal. Each row/column represents a person.
For a given integer n I will seek to reduce the size of the matrix by kicking people out, such that I am left with a n-by-n correlation matrix that minimises the total sum of the elements. In addition to obtaining the abbreviated matrix, I also need to know the row number of the people who should be booted out of the original matrix (or their column number - they'll be the same number).
As a starting point I take A = tril(A), which zeroes out the redundant elements above the main diagonal of the correlation matrix.
So, if n = 4 and we have the hypothetical 5-by-5 matrix above, it's very clear that person 5 should be kicked out of the matrix, since that person is contributing a lot of very high correlations.
It's also clear that person 1 should not be kicked out, since that person contributes a lot of negative correlations, and thus brings down the sum of the matrix elements.
I understand that sum(A(:)) will sum everything in the matrix. However, I'm very unclear about how to search for the minimum possible answer.
I noticed a similar question, Finding sub-matrix with minimum elementwise sum, which has a brute force solution as the accepted answer. While that answer works fine there, it's impractical for a 151-by-151 matrix.
EDIT: I had thought of iterating, but I don't think that truly minimizes the sum of elements in the reduced matrix. Below I have a 4-by-4 correlation matrix in bold, with sums of rows and columns on the edges. It's apparent that with n = 2 the optimal matrix is the 2-by-2 identity matrix involving Persons 1 and 4, but according to the iterative scheme I would have kicked out Person 1 in the first phase of iteration, and so the algorithm produces a solution that is not optimal. I wrote a program that always generated optimal solutions, and it works well when n or k is small, but when trying to make an optimal 75-by-75 matrix from a 151-by-151 matrix I realised my program would take billions of years to terminate.
I vaguely recalled that sometimes these n choose k problems can be resolved with dynamic programming approaches that avoid recomputing things, but I can't work out how to solve this, nor did googling enlighten me.
I'm willing to sacrifice precision for speed if there's no other option, or if the best program would take more than a week to generate a precise solution. However, I'm happy to let a program run for up to a week if it will generate a precise solution.
If it's not possible for a program to optimise the matrix within a reasonable timeframe, then I would accept an answer that explains why n choose k tasks of this particular sort can't be resolved within reasonable timeframes.
This is an approximate solution using a genetic algorithm.
I started with your test case:
data_points = 10; % How many data points will be generated for each person, in order to create the correlation matrix.
num_people = 25; % Number of people initially.
to_keep = 13; % Number of people to be kept in the correlation matrix.
to_drop = num_people - to_keep; % Number of people to drop from the correlation matrix.
num_comparisons = 100; % Number of times to compare the iterative and optimization techniques.
for j = 1:data_points
    rand_dat(j,:) = 1 + 2.*randn(num_people,1); % Generate random data.
end
A = corr(rand_dat);
then I defined the functions you need to evolve the genetic algorithm:
function individuals = user1205901individuals(nvars, FitnessFcn, gaoptions, num_people)
    individuals = zeros(num_people, gaoptions.PopulationSize);
    for cnt = 1:gaoptions.PopulationSize
        individuals(:,cnt) = randperm(num_people);
    end
    individuals = individuals(1:nvars,:)';
end
is the individual generation function.
function fitness = user1205901fitness(ind, A)
    fitness = sum(sum(A(ind, ind)));
end
is the fitness evaluation function.
function offspring = user1205901mutations(parents, options, nvars, FitnessFcn, state, thisScore, thisPopulation, num_people)
    offspring = zeros(length(parents), nvars);
    for cnt = 1:length(parents)
        original = thisPopulation(parents(cnt),:);
        extraneus = setdiff(1:num_people, original);
        original(fix(rand()*nvars)+1) = extraneus(fix(rand()*(num_people-nvars))+1);
        offspring(cnt,:) = original;
    end
end
is the function to mutate an individual.
function children = user1205901crossover(parents, options, nvars, FitnessFcn, unused, thisPopulation)
    children = zeros(length(parents)/2, nvars);
    cnt = 1;
    for cnt1 = 1:2:length(parents)
        cnt2 = cnt1 + 1;
        male = thisPopulation(parents(cnt1),:);
        female = thisPopulation(parents(cnt2),:);
        child = union(male, female);
        child = child(randperm(length(child)));
        child = child(1:nvars);
        children(cnt,:) = child;
        cnt = cnt + 1;
    end
end
is the function to generate a new individual by coupling two parents.
At this point you can define your problem:
gaproblem2.fitnessfcn = @(idx)user1205901fitness(idx, A)
gaproblem2.nvars = to_keep
gaproblem2.options = gaoptions()
gaproblem2.options.PopulationSize = 40
gaproblem2.options.EliteCount = 10
gaproblem2.options.CrossoverFraction = 0.1
gaproblem2.options.StallGenLimit = inf
gaproblem2.options.CreationFcn = @(nvars,FitnessFcn,gaoptions)user1205901individuals(nvars,FitnessFcn,gaoptions,num_people)
gaproblem2.options.CrossoverFcn = @(parents,options,nvars,FitnessFcn,unused,thisPopulation)user1205901crossover(parents,options,nvars,FitnessFcn,unused,thisPopulation)
gaproblem2.options.MutationFcn = @(parents, options, nvars, FitnessFcn, state, thisScore, thisPopulation) user1205901mutations(parents, options, nvars, FitnessFcn, state, thisScore, thisPopulation, num_people)
gaproblem2.options.Vectorized = 'off'
Open the genetic algorithm tool:
gatool
From the File menu select Import Problem... and choose gaproblem2 in the window that opens.
Now, run the tool and wait for the iterations to stop.
The gatool enables you to change hundreds of parameters, so you can trade speed for precision in the selected output.
The resulting vector garesults.x is the list of indices that you have to keep from the original matrix, so A(garesults.x, garesults.x) is the matrix with only the desired persons.
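If the gatool GUI is not available in your release, the genetic algorithm can also be run programmatically; a sketch that reuses the gaproblem2 fields defined above (it assumes the Global Optimization Toolbox and that the same options object is accepted by ga):
% Sketch: run the GA without the GUI, reusing the fields of gaproblem2.
[best_idx, best_fval] = ga(gaproblem2.fitnessfcn, gaproblem2.nvars, ...
    [], [], [], [], [], [], [], gaproblem2.options);
kept = sort(round(best_idx));      % indices of the people to keep
A_reduced = A(kept, kept);         % reduced correlation matrix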
If I have understood your problem statement, you have an N x N matrix M (which happens to be a correlation matrix), and you wish to find, for an integer n where 2 <= n < N, an n x n matrix m which minimises the sum over all elements of m, which I denote f(m)?
In Matlab it is fairly easy and fast to obtain a sub-matrix of a matrix (see for example Removing rows and columns from matrix in Matlab), and the function f is relatively inexpensive to evaluate even for matrices of size 151. So why not implement an algorithm that solves this greedily, working backwards from the full matrix, as in the pseudocode sketched below:
function reduceM(M, n){
    m = M
    for (ii = N down to n+1) {
        for (jj = 1 to ii) {
            val(jj) = f(m with column and row jj removed), where f(X) is the summation over all elements of X
        }
        JJ(ii) = jj s.t. val(jj) is smallest
        m = m updated by removing column and row JJ(ii)
    }
}
In the end you end up with an m of dimension n, which is the solution to your problem, and a vector JJ which contains the indices removed at each iteration (you should easily be able to convert these back to indices applicable to the full matrix M).
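For concreteness, a rough MATLAB translation of that pseudocode might look like the sketch below (a greedy heuristic, not an exact solver; the variable names are illustrative):
function [m, removed] = reduceM(M, n)
    % Greedy backward elimination: repeatedly drop the row/column whose
    % removal leaves the smallest total sum, until only n people remain.
    N = size(M, 1);
    keep = 1:N;                        % indices of M still in the sub-matrix
    removed = zeros(1, N - n);
    for ii = N:-1:(n + 1)
        vals = zeros(1, ii);
        for jj = 1:ii
            idx = keep;
            idx(jj) = [];              % candidate: drop person keep(jj)
            vals(jj) = sum(sum(M(idx, idx)));
        end
        [~, jjBest] = min(vals);       % removal that minimises the remaining sum
        removed(N - ii + 1) = keep(jjBest);
        keep(jjBest) = [];
    end
    m = M(keep, keep);
end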
There are several approaches to finding an approximate solution (e.g. quadratic programming on the relaxed problem, or greedy search), but finding the exact solution is an NP-hard problem.
Disclaimer: I'm not an expert on binary quadratic programming, and you may want to consult the academic literature for more sophisticated algorithms.
Mathematically equivalent formulation:
Your problem is equivalent to:
For some symmetric, positive semi-definite matrix S:

minimize (over vector x)    x' * S * x
subject to                  0 <= x(i) <= 1 for all i
                            sum(x) == n
                            x(i) is either 1 or 0 for all i
This is a quadratic programming problem where the vector x is restricted to taking only binary values. Quadratic programming where the domain is restricted to a set of discrete values is called mixed integer quadratic programming (MIQP). The binary version is sometimes called Binary Quadratic Programming (BQP). The last restriction, that x is binary, makes the problem substantially more difficult; it destroys the problem's convexity!
Quick and dirty approach to finding an approximate answer:
If you don't need a precise solution, something to play around with might be a relaxed version of the problem: drop the binary constraint. If you drop the constraint that x(i) is either 1 or 0 for all i, then the problem becomes a trivial convex optimization problem and can be solved nearly instantaneously (eg. by Matlab's quadprog). You could try removing entries that, on the relaxed problem, quadprog assigns the lowest values in the x vector, but this does not truly solve the original problem!
Note also that the relaxed problem gives you a lower bound on the optimal value of the original problem. If your discretized version of the solution to the relaxed problem leads to a value for the objective function close to the lower bound, there may be a sense in which this ad-hoc solution can't be that far off from the true solution.
To solve the relaxed problem, you might try something like:
% k is number of observations to drop
n = size(S, 1);
Aeq = ones(1,n);
beq = n-k;
[x_relax, f_relax] = quadprog(S, zeros(n, 1), [], [], Aeq, beq, zeros(n, 1), ones(n, 1));
f_relax = f_relax * 2; % Quadprog solves .5 * x' * S * x... so mult by 2
temp = sort(x_relax);
cutoff = temp(k);
x_approx = ones(n, 1);
x_approx(x_relax <= cutoff) = 0;
f_approx = x_approx' * S * x_approx;
I'm curious how good x_approx is. This doesn't solve your problem, but it might not be horrible! Note that f_relax is a lower bound on the solution to the original problem.
Software to solve your exact problem
You should check out this link and go down to the section on Mixed Integer Quadratic Programming (MIQP). It looks to me like Gurobi can solve problems of your type. Another list of solvers is here.
Working on a suggestion from Matthew Gunn and also some advice at the Gurobi forums, I came up with the following function. It seems to work pretty well.
I will award it the answer, but if someone can come up with code that works better I'll remove the tick from this answer and place it on their answer instead.
function [ values ] = the_optimal_method( CM , num_to_keep)
%THE_OPTIMAL_METHOD Takes correlation matrix CM and the number of people to keep;
% returns a binary vector whose ones mark the people to keep (zeros mark those to kick out).
    N = size(CM,1);
    clear model;
    names = strseq('x',[1:N]);
    model.varnames = names;
    model.Q = sparse(CM); % Gurobi needs a sparse matrix as input
    model.A = sparse(ones(1,N));
    model.obj = zeros(1,N);
    model.rhs = num_to_keep;
    model.sense = '=';
    model.vtype = 'B';
    gurobi_write(model, 'qp.mps');
    results = gurobi(model);
    values = results.x;
end
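Hypothetical usage, assuming a 151-by-151 correlation matrix A and that 75 people are to be kept (results.x is a binary keep/drop vector, so it has to be converted back to indices):
keepMask  = the_optimal_method(A, 75);   % binary vector from Gurobi
keepIdx   = find(round(keepMask));       % row/column numbers to keep
dropIdx   = find(~round(keepMask));      % people to kick out of the original matrix
A_reduced = A(keepIdx, keepIdx);         % the 75-by-75 matrix with minimal sum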
In MATLAB, how can I split random data into two matrices? For example: X(i) is a random vector, where i = 1:100. Every data symbol is formed from four bits, where x(1) and x(2) are the MSBs (most significant bits) and x(3) and x(4) are the LSBs (least significant bits). I want to split them to get two new matrices, y1 (for the MSBs) and y2 (for the LSBs).
EDIT
Here is some example code, but for some reason it does not seem to work:
M=16;
N=10;
c=randi([0 M-1],1,N);
xx=dec2bin(c);
for k = 1:N-1
for j= 1:4
y1(k)=xx(k);
y1(k+1)=xx(k+1);
y2(k+2)= xx(k+2);
y2(k+3)= xx(k+3);
end
end
The code you wrote seems to have some issues. I think some of the problems are based on a misunderstanding of MATLAB. I will write a short list of some of the issues here:
1) There is no string class in MATLAB; instead there are char arrays. Further, MATLAB does not use pointers or references in the same way as Java or C++. This means that you cannot have a vector of char arrays the way you have there. This also means y1 and y2 must be a MATLAB cell array or a matrix to store the data.
2) If c is a vector, then xx will be a matrix with size(xx) == [length(c), length(dec2bin(max(c)))]. That is, each string of binary values is a row, and every row is exactly wide enough for the largest string to fit, e.g.
a = [13,257];
b = dec2bin(a);
where b is a 2x9 matrix, since b needs at least nine bits. So, to your problem: I will vectorize the solution and also use the extra argument in dec2bin to lock the number of bits to 4. Try this:
function [msBit, lsBit] = test()
    M = 16;
    N = 10;
    if M > 16
        error('M must not be greater than 4 bits');
    end
    c = randi([0 M-1],1,N);
    xx = dec2bin(c,4);
    disp xx
    disp(xx)
    disp ' '
    % Take the 2 most significant bits from every row
    msBit = xx(:,1:2);
    % Take the 2 least significant bits from every row
    lsBit = xx(:,3:4);
    disp msb
    disp(msBit);
    disp ' '
    disp lsb
    disp(lsBit);
end
It is of course possible to work with int8 as well; then we would need to use the bitwise operation functions. This is more difficult, and the result will of course be an integer, so 1100 would be represented by 12. This does not seem to be what you are after, though, so I will not pursue it in detail here.
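For reference, a minimal sketch of what that bitwise variant might look like (msb and lsb then come back as integers 0-3 rather than as char arrays):
M = 16;
N = 10;
c = randi([0 M-1], 1, N);      % symbols 0..15, i.e. 4 bits each
msb = bitshift(c, -2);         % top two bits of each symbol (values 0..3)
lsb = bitand(c, 3);            % bottom two bits of each symbol (values 0..3)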
Hope it works and good luck!
I have the following function:
I have to generate 2000 random numbers from this function and then make a histogram.
Then I have to determine how many of them are greater than 2, i.e. P(X > 2).
This is my function:
%function [ output_args ] = Weibullverdeling( X )
%UNTITLED Summary of this function goes here
% Detailed explanation goes here
for i=1:2000
% x= rand*1000;
%x=ceil(x);
x=i;
Y(i) = 3*(log(x))^(6/5);
X(i)=x;
end
plot(X,Y)
and it gives me the following image:
How can I make it tell me how many values I have that are greater than 2?
Very simple:
>> Y_greater_than_2 = Y(Y>2);
>> size(Y_greater_than_2)
ans =
1 1998
So that's 1998 values out of 2000 that are greater than 2.
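If only the count is needed rather than the values themselves, a one-line sketch:
count = nnz(Y > 2);    % or equivalently sum(Y > 2)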
EDIT
If you want to find the values between two other values, say between 1 and 4, you need to do something like:
>> Y_between = Y(Y>=1 & Y<=4);
>> size(Y_between)
ans =
1 2
This is what I think:
for i=1:2000
x=rand(1);
Y(i) = 3*(log(x))^(6/5);
X(i)=x;
end
plot(X,Y)
U is a uniform random variable from which you can get X, so you need to use the rand function in MATLAB.
After which you implement:
size(Y(Y>2),2);
You can implement the code directly (here k is your root, n is the number of data points, y is the largest number of the distribution, x is the smallest number of the distribution, and lambda is the lambda in your equation):
X=(log(x+rand(1,n).*(y-x)).*lambda).^(1/k);
result=numel(X(X>2));
Let's split it up and explain it in detail:
You want the k-th root of a number:
number.^(1/k)
You want the natural logarithm of a number:
log(number)
You want to multiply something:
numberA.*numberB
You want to get, let's say, 1000 random numbers between x and y:
(x+rand(1,1000).*(y-x))
You want to combine all of that:
x= lower_bound;
y= upper_bound;
n= No_Of_data;
lambda=wavelength; %my guess
k= No_of_the_root;
X=(log(x+rand(1,n).*(y-x)).*lambda).^(1/k);
So you just have to insert your x, y, n, lambda and k,
and then check
bigger_2 = X(X>2);
which would return only the values bigger than 2 and if you want the number of elements bigger than 2
No_bigger_2=numel(bigger_2);
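For instance, plugging in some purely illustrative values (x = 1 keeps the logarithm non-negative so the root stays real; these numbers are only an example, not values taken from the question):
x = 1;           % lower bound (illustrative)
y = 10;          % upper bound (illustrative)
n = 2000;        % number of data points
lambda = 3;      % the lambda from the equation
k = 5/6;         % chosen so that 1/k = 6/5, the exponent in the question
X = (log(x + rand(1,n).*(y - x)).*lambda).^(1/k);
No_bigger_2 = numel(X(X > 2))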
I'm going to go with the assumption that what you've presented is supposed to be a random variate generation algorithm based on inversion, and that you want real-valued (not complex) solutions so you've omitted a negative sign on the logarithm. If those assumptions are correct, there's no need to simulate to get your answer.
Under the stated assumptions, your formula is the inverse of the complementary cumulative distribution function (CCDF). It's complementary because smaller values of U give larger values of X, and vice-versa. Solve the (corrected) formula for U. Using the values from your Matlab implementation:
X = 3 * (-log(U))^(6/5)
X / 3 = (-log(U))^(6/5)
-log(U) = (X / 3)^(5/6)
U = exp(-((X / 3)^(5/6)))
Since this is the CCDF, plugging in a value for X gives the probability (or proportion) of outcomes greater than X. Solving for X=2 yields 0.49, i.e., 49% of your outcomes should be greater than 2.
Make suitable adjustments if lambda is inside the radical, but the algebra leading to the solution is similar. Unless I messed up my arithmetic, the proportion would then be 55.22%.
If you still are required to simulate this, knowing the analytical answer should help you confirm the correctness of your simulation.
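If a simulation check is still required, a brief sketch under the same assumptions (negated logarithm, U uniform on (0,1)):
U = rand(1, 2000);                  % 2000 uniform draws
X = 3 .* (-log(U)).^(6/5);          % corrected inversion formula
histogram(X);                       % the requested histogram
propGreaterThan2 = mean(X > 2)      % should come out close to 0.49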
Using MATLAB,
Imagine an N-by-6 array of numbers which represents N segments, each with 3+3=6 initial and end point coordinates.
Assume I have a function Calc_Dist( Segment_1, Segment_2 ) that takes as input two 1x6 arrays, and that after some operations returns a scalar, namely the minimal Euclidean distance between these two segments.
I want to calculate the pairwise minimal distance between all N segments of my list, but would like to avoid a double loop to do so.
I cannot wrap my head around the documentation of the bsxfun function of MATLAB, so I cannot make this work. For the sake of a minimal example (the distance calculation is obviously not correct):
function scalar = calc_dist( segment_1, segment_2 )
scalar = sum( segment_1 + segment_2 )
end
and the main
Segments = rand( 1500, 6 )
Pairwise_Distance_Matrix = bsxfun( @calc_dist, segments, segments' )
Is there any way to do this, or am I forced to use double loops?
Thank you for any suggestion
I think you need pdist rather than bsxfun. pdist can be used in two different ways, the second of which is applicable to your problem:
With built-in distance functions, supplied as strings, such as 'euclidean', 'hamming' etc.
With a custom distance function, a handle to which you supply.
In the second case, the distance function
must be of the form
function D2 = distfun(XI, XJ),
taking as arguments a 1-by-N vector XI containing a single row of X, an
M2-by-N matrix XJ containing multiple rows of X, and returning an
M2-by-1 vector of distances D2, whose Jth element is the distance
between the observations XI and XJ(J,:).
Although the documentation doesn't say, it's very likely that the second way is not as efficient as the first (a double loop might even be faster, who knows), but you can use it. You would need to define your function so that it fulfills the stated condition. With your example function it's easy; for this part you'd use bsxfun:
function scalar = calc_dist( segment_1, segment_2 )
scalar = sum(bsxfun(@plus, segment_1, segment_2), 2);
end
Note also that
pdist works with rows (not columns), which is what you need.
pdist reduces operations by exploiting the properties that any distance function must have. Namely, the distance of an element to itself is known to be zero; and the distance for each pair can be computed just once thanks to symmetry. If you want to arrange the output in the form of a matrix, use squareform.
So, after your actual distance function has been modified appropriately (which may be the hard part), use:
distances = squareform(pdist(segments, @calc_dist));
For example:
N = 4;
segments = rand(N,6);
distances = squareform(pdist(segments, @calc_dist));
produces
distances =
0 6.1492 7.0886 5.5016
6.1492 0 6.8559 5.2688
7.0886 6.8559 0 6.2082
5.5016 5.2688 6.2082 0
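If what is ultimately wanted is the single smallest pairwise distance and the pair of segments achieving it, a small follow-up sketch (the diagonal is masked because each segment's distance to itself is zero):
D = distances + diag(inf(size(distances, 1), 1));   % mask the zero self-distances
[minDist, linIdx] = min(D(:));
[i, j] = ind2sub(size(D), linIdx);                  % segments i and j are the closest pair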
Unfortunately I don't see any "smarter" (read: faster) solution than the double loop. For speed considerations I'd organize the points as a 6×N array, not the other way around, because column access is much faster than row access in MATLAB.
So:
N = 1500;                      % number of segments (as in the question)
Segments = rand(6, N);
Pairwise_Distance_Matrix = Inf(N, N);
for i = 1:(N-1)
    for j = (i+1):N
        Pairwise_Distance_Matrix(i,j) = calc_dist(Segments(:,i), Segments(:,j));
    end
end
Minimum_Pairwise_Distance = min(min(Pairwise_Distance_Matrix));
Contrary to common wisdom, explicit loops are faster now in MATLAB compared to the likes of arrayfun, cellfun or structfun; bsxfun beats everything else in terms of speed, but it doesn't apply to your case.