Is there any function in MATLAB/Octave that randomly picks a value from a list according to a given probability distribution?
For example: we have the vector [1 3 7]. The function I am looking for should pick one of those numbers with probability .25 for 1, .35 for 3 and .4 for 7.
I am trying to implement it myself, but I'd like to know if there is some built-in function for the next time I need something like this.
You are looking for a Statistics Toolbox function called randsample. It samples k values out of n; note that weighted sampling is supported only with replacement. You want to select one value, which can be done as follows:
nSamplesToChoose=1;
weightVector=[0.2 0.5 0.3]; % weights sum to one so as to represent a probability distribution
yourArray=[5 6 7]; % the array must be the same length as weightVector
chosenSample=randsample(yourArray,nSamplesToChoose,true,weightVector)
P.S. I encourage you to implement this yourself. You may refer to this question.
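For reference, a minimal self-contained sketch of the standard approach (my own illustration, not necessarily the linked question's exact method): draw a uniform random number and find where it falls in the cumulative distribution.

values = [1 3 7];
weights = [0.25 0.35 0.4]; % must sum to one
u = rand(); % uniform draw on (0,1)
chosen = values(find(cumsum(weights) >= u, 1)) % first bin whose cumulative weight reaches u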
What you described is like a Generalized Bernoulli Distribution. So, you can use the Multinomial Distribution to generate this data.
The MATLAB help page is here.
In your case, n=1 and p=[.25 .35 .4].
mnrnd(n,p)
will return a 1 x 3 vector with exactly one non-zero element, whose position indicates which of the values should be chosen.
TL;DR Version:
To generate the required output, you can simply do dot([1 3 7], mnrnd(1,[.25 .35 .4]))
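As a quick sanity check (assuming the Statistics Toolbox's mnrnd is available), you can draw many samples at once and compare the empirical frequencies with the target probabilities:

vals = [1 3 7];
p = [.25 .35 .4];
draws = mnrnd(1, p, 10000) * vals'; % one sampled value per row
histc(draws, vals)' / numel(draws) % empirical frequencies, should be close to p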
I have got a matrix of AirFuelRatio values at certain engine speeds and throttle positions (e.g. the AFR is 14 at 2500 rpm and 60% throttle).
The matrix is currently 25x10; the engine speed ranges from 1200-6000 rpm in steps of 200 rpm, and the throttle from 0.1-1 in steps of 0.1.
Say I have measured new values, e.g. an AFR of 13.5 at 2138 rpm and 74.3% throttle; how do I merge that into the matrix? The closest grid points are 2000 or 2200 rpm and 70 or 80% throttle. Also, I don't want the new data to simply replace the old data. How can I make the matrix take this value in and adjust its values to take the new measurement into account?
Simplified, I have the following x-axis values (top row) and 1x4 matrix (below):
2 4 6 8
14 16 18 20
I just measured an AFR value of 15.5 at 3 rpm. If you interpolate the AFR matrix you would have gotten 15, so this value is out of the ordinary.
I want the matrix to take this data and adjust the other values to it, i.e. average everything so that the more data I put in, the more reliable and accurate the matrix becomes. So in the simplified case the matrix would become something like:
2 4 6 8
14.3 16.3 18.2 20.1
So it averages between old and new data. I've read the documentation about concatenation, but I believe my problem can't be solved with that function.
EDIT: To clarify my question, a visual illustration:
The 'matrix' keeps the same size of 5 points while a new data point is added. It takes the new data into account and adjusts the matrix accordingly. This is what I'm trying to achieve. The more scattered data I get, the more accurate the matrix becomes. (And yes, the green dot in this case would be an outlier, but it illustrates my case.)
Cheers
This is not a matter of a simple merge/average, and I don't think there is a quick method unless you make simplifying assumptions. What you want is a statistical inference of the underlying trend. I suggest using Gaussian process regression to solve this problem. There's a great MATLAB toolbox by Rasmussen and Williams called GPML: http://www.gaussianprocess.org/gpml/
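If you have the Statistics and Machine Learning Toolbox, a minimal sketch along these lines can use the built-in fitrgp instead (GPML offers similar functionality with a different interface); the data below are the simplified values from the question:

x = [2; 3; 4; 6; 8]; % measurement locations, including the new off-grid point
y = [14; 15.5; 16; 18; 20]; % measured AFR values
gprMdl = fitrgp(x, y); % GP regression with the default kernel
yq = predict(gprMdl, (2:2:8)') % smoothed table at the grid points; new data refine the fit rather than overwrite old values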
This sounds more like a data fitting task to me. You have a set of measurements for which you wish to get the best fit; rather than storing the raw data in a table, you find the curve that best fits all recorded values and tabulate that. So, for example, I could create a matrix, A, which holds all of the recorded values. Let's start with:
A=[2,14;3,15.5;4,16;6,18;8,20];
I now need a matrix of points for the inputs to my fitting curve (which, in this instance, let's assume is linear, so the basis is the set of values 1 and x):
B=[ones(size(A,1),1), A(:,1)];
We can find the linear fit parameters (where it cuts the y-axis and the gradient) using:
B\A(:,2)
Or, if you want the points that the line goes through for the values of x:
B*(B\A(:,2))
This results in the points:
2  14.1897
3  15.1552
4  16.1207
6  18.0517
8  19.9828
which represents the best fit line through these points.
You can manually extend this to polynomial fitting if you want, or you can use the MATLAB function polyfit. To extend the process manually, you add columns to the B matrix. You can also generate only a specified set of points in the last line. The complete code would then be:
% Original measurements - could be read in from a file,
% but for this example we will set it to a matrix
% Note that not all tabulated values need to be present
A=[2,14; 3,15.5; 4,16; 5,17; 8,20];
% Now create the polynomial values of x corresponding to
% the data points. Choosing a second order polynomial...
B=[ones(size(A,1),1), A(:,1), A(:,1).^2];
% Find the polynomial coefficients for the best fit curve
coeffs=B\A(:,2);
% Now generate a table of values at specific points
% First define the x-values
tabinds = 2:2:8;
% Then generate the polynomial values of x
tabpolys=[ones(length(tabinds),1), tabinds', (tabinds').^2];
% Finally, multiply by the coefficients found
curve_table = [tabinds', tabpolys*coeffs];
% and display the results
disp(curve_table);
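For comparison, the same second-order fit can be obtained with the built-in polyfit and polyval (note that polyfit returns the coefficients in the opposite order, highest power first):

A = [2,14; 3,15.5; 4,16; 5,17; 8,20];
p = polyfit(A(:,1), A(:,2), 2); % second-order polynomial coefficients
tabinds = 2:2:8;
curve_table = [tabinds', polyval(p, tabinds')]; % same table as above
disp(curve_table);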
A proof of concept prototype I have to do for my final year project is to implement K-Means Clustering on a big data set and display the results on a graph. I only know object-oriented languages like Java and C# and decided to give MATLAB a try. I notice that with a functional language the approach to solving problems is very different, so I would like some insight on a few things if possible.
Suppose I have the following data set:
raw_data
400.39 513.29 499.99 466.62 396.67
234.78 231.92 215.82 203.93 290.43
15.07 14.08 12.27 13.21 13.15
334.02 328.79 272.2 306.99 347.79
49.88 52.2 66.35 47.69 47.86
732.88 744.62 687.53 699.63 694.98
And I picked row 2 and 4 to be the 2 centroids:
centroids
234.78 231.92 215.82 203.93 290.43 % Centroid 1
334.02 328.79 272.2 306.99 347.79 % Centroid 2
I want to now compute the Euclidean distances of each point to each centroid, then assign each point to its closest centroid and display this on a graph. Let's say I want to classify the centroids as blue and green. How can I do this in MATLAB? If this were Java, I would initialise each row as an object and add them to separate ArrayLists (representing the clusters).
If rows 1, 2 and 3 all belong to the first centroid / cluster, and rows 4, 5 and 6 belong to the second centroid / cluster - how can I classify these to display them as blue or green points on a graph? I am new to MATLAB and really curious about this. Thanks for any help.
(To begin with, MATLAB has a flexible distance-measuring function, pdist2, and also a kmeans implementation, but I'm assuming that you want to build your code from scratch.)
In MATLAB, you try to implement everything as matrix algebra, without loops over elements.
In your case, if R is the raw_data matrix and C is the centroids matrix,
you can shift the dimension that represents centroid number to the 3rd place by
permC=permute(C,[3 2 1]); Then the bsxfun function allows you to subtract C from R while expanding R's third dimension as necessary: D=bsxfun(@minus,R,permC). Element-wise squaring followed by summation across columns, SqD=sum(D.^2,2), will give you the squared distances of each observation from each centroid. Performing all these operations within a single statement and shifting the third (centroid) dimension back to the 2nd place looks like this:
SqD=permute(sum(bsxfun(@minus,R,permute(C,[3 2 1])).^2,2),[1 3 2])
Picking the centroid of minimal distance is now straightforward: [minDist,minCentroid]=min(SqD,[],2)
If this looks complex, I recommend inspecting the product of each sub-step and reading the help of each command.
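Tying it together with the data from the question (my own sketch; the plot uses only the first two columns, since the observations are 5-dimensional):

R = [400.39 513.29 499.99 466.62 396.67;
     234.78 231.92 215.82 203.93 290.43;
     15.07  14.08  12.27  13.21  13.15;
     334.02 328.79 272.2  306.99 347.79;
     49.88  52.2   66.35  47.69  47.86;
     732.88 744.62 687.53 699.63 694.98];
C = R([2 4],:); % centroids picked from rows 2 and 4
SqD = permute(sum(bsxfun(@minus, R, permute(C,[3 2 1])).^2, 2), [1 3 2]);
[~, minCentroid] = min(SqD, [], 2); % cluster index for each observation
hold on % colour the points by assigned centroid: blue = cluster 1, green = cluster 2
plot(R(minCentroid==1,1), R(minCentroid==1,2), 'bo');
plot(R(minCentroid==2,1), R(minCentroid==2,2), 'go');
hold off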
Suppose I have a weight matrix W of size n x m, where m is the number of variables and n is the number of instances. Also I have a data matrix X of the same size. I try to find the closest weight vector to each instance in X. However, both matrices are so dimensional that plain methods are not sufficient. I have tried a GPU trick in MATLAB, but it does not work well, since it is a sequential approach that calculates the closest weight for each instance one at a time. I am now looking for efficient one-shot code that takes all of W and X and finds the winners with some MATLAB tricks, possibly with some GPU help. Can anyone suggest a code snippet in MATLAB?
This is what I wrote for the sequential version:
x_in_d = gpuArray(x_in); % take input instance to device
W_d = gpuArray(W); % take weight matrix to device
Dx = W_d - x_in_d(ones(size(W_d,1),1),logical(ones(1,length(x_in_d)))); % replicate x_in across the rows of W and subtract
[d_min,winner] = min(sum((Dx.^2)')); % squared distance per weight row; winner is the index of the minimum
d_min = gather(d_min); %gather results
winner = gather(winner);
What do you mean by "so dimensional"? It's just an m x n matrix, right?
It would be really helpful if you could provide some sample data. Based on your description (which isn't the clearest), here is what I think your data looks like:
weights=
[1 4 2
5 3 1]
data=
[2 5 1
1 2 2]
And you want to figure out which row of weights is closest to the row of data? Which in this case would be the first row of weights for both rows of data.
Please edit your question to clarify what you're asking for, and consider adding some examples.
EDIT:
I like Rody's duplicate comment; if I am correct, check out: Link Here
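If the Statistics Toolbox is available, a hedged one-shot sketch (using the sample data above) is pdist2 with the 'Smallest' option, which computes all pairwise distances at once and keeps only the winner for each instance:

weights = [1 4 2; 5 3 1];
data = [2 5 1; 1 2 2];
[d_min, winner] = pdist2(weights, data, 'euclidean', 'Smallest', 1) % winner = [1 1]: the first weight row is closest to both data rows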
I am using PCA to find out which variables in my dataset are redundant due to being highly correlated with other variables. I am using the MATLAB function princomp on the data, previously normalized using zscore:
[coeff, PC, eigenvalues] = princomp(zscore(x))
I know that the eigenvalues tell me how much of the dataset's variation each principal component covers, and that coeff tells me how much of the i-th original variable is in the j-th principal component (where i - rows, j - columns).
So I assumed that to find out which variables of the original dataset are the most important and which are the least, I should multiply the coeff matrix by the eigenvalues - the coeff values represent how much of each variable every component contains, and the eigenvalues tell how important each component is.
So this is my full code:
[coeff, PC, eigenvalues] = princomp(zscore(x));
e = eigenvalues./sum(eigenvalues);
abs(coeff)/e
But this does not really show anything - I tried it on the following set, where variable 1 is fully correlated with variable 2 (v2 = v1 + 2):
v1 v2 v3
1 3 4
2 4 -1
4 6 9
3 5 -2
but the results of my calculations were the following:
v1 0.5525
v2 0.5525
v3 0.5264
and this does not really show anything. I would expect the result for variable 2 to show that it is far less important than v1 or v3.
Which of my assumptions is wrong?
EDIT I have completely reworked the answer now that I understand which assumptions were wrong.
Before explaining what doesn't work in the OP, let me make sure we'll have the same terminology. In principal component analysis, the goal is to obtain a coordinate transformation that separates the observations well, and that may make it easy to describe the data, i.e. the different multi-dimensional observations, in a lower-dimensional space. Observations are multidimensional when they're made up from multiple measurements. If there are fewer linearly independent observations than there are measurements, we expect at least one of the eigenvalues to be zero, because e.g. two linearly independent observation vectors in a 3D space can be described by a 2D plane.
If we have an array
x = [ 1 3 4
2 4 -1
4 6 9
3 5 -2];
that consists of four observations with three measurements each, princomp(x) will find the lower-dimensional space spanned by the four observations. Since there are two co-dependent measurements, one of the eigenvalues will be near zero, since the space of measurements is only 2D and not 3D, which is probably the result you wanted to find. Indeed, if you inspect the eigenvectors (coeff), you find that the first two components are quite obviously collinear:
coeff = princomp(x)
coeff =
0.10124 0.69982 0.70711
0.10124 0.69982 -0.70711
0.9897 -0.14317 1.1102e-16
Since the first two components are, in fact, pointing in opposite directions, the values of the first two components of the transformed observations are, on their own, meaningless: [1 1 25] is equivalent to [1000 1000 25].
Now, if we want to find out whether any measurements are linearly dependent, and if we really want to use principal components for this, because in real life measurements may not be perfectly collinear and we are interested in finding good vectors of descriptors for a machine-learning application, it makes a lot more sense to consider the three measurements as "observations" and run princomp(x'). Since there are thus only three "observations" but four "measurements", the fourth eigenvalue will be zero. However, since there are two linearly dependent observations, we're left with only two non-zero eigenvalues:
eigenvalues =
24.263
3.7368
0
0
To find out which of the measurements are so highly correlated (not actually necessary if you use the eigenvector-transformed measurements as input for e.g. machine learning), the best way would be to look at the correlation between the measurements:
corr(x)
ans =
1 1 0.35675
1 1 0.35675
0.35675 0.35675 1
Unsurprisingly, each measurement is perfectly correlated with itself, and v1 is perfectly correlated with v2.
EDIT2
but the eigenvalues tell us which vectors in the new space are most important (cover the most of variation) and also coefficients tell us how much of each variable is in each component. so I assume we can use this data to find out which of the original variables hold the most of variance and thus are most important (and get rid of those that represent small amount)
This works if your observations show very little variance in one measurement variable (e.g. where x = [1 2 3;1 4 22;1 25 -25;1 11 100];, and thus the first variable contributes nothing to the variance). However, with collinear measurements, both vectors hold equivalent information, and contribute equally to the variance. Thus, the eigenvectors (coefficients) are likely to be similar to one another.
In order for @agnieszka's comments to keep making sense, I have left the original points 1-4 of my answer below. Note that #3 was in response to the division of the eigenvectors by the eigenvalues, which to me didn't make a lot of sense.
1. The vectors should be in rows, not columns (each vector is an observation).
2. coeff returns the basis vectors of the principal components, and its order has little to do with the original input.
3. To see the importance of the principal components, you use eigenvalues/sum(eigenvalues).
4. If you have two collinear vectors, you can't say that the first is important and the second isn't. How do you know that it shouldn't be the other way around? If you want to test for collinearity, you should check the rank of the array instead, or call unique on normalized (i.e. norm equal to 1) vectors (see the sketch below).
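A minimal sketch of those suggested checks, using the data from the question (v2 = v1 + 2, so columns 1 and 2 become identical after normalization):

x = [1 3 4; 2 4 -1; 4 6 9; 3 5 -2];
rank(zscore(x)) % returns 2: one of the three columns is linearly dependent
corr(x) % off-diagonal 1's show which pair of variables is collinear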
I have a question that may be trivial, but it's not described anywhere I've looked. I'm studying neural networks, and everywhere I look there's some theory and some trivial example with some 0s and 1s as an input. I'm wondering: do I have to put only one value as an input for one neuron, or can it be a vector of, let's say, 3 values (an RGB colour for example)?
The above answers are technically correct, but don't explain the simple truth: there is never a situation where you'd need to give a vector of numbers to a single neuron.
From a practical standpoint this is because (as one of the earlier solutions has shown) you can just have a neuron for each number in a vector and then have all of those be the input to a single neuron. This should get you your desired behavior after training, as the second layer neuron can effectively make use of the entire vector.
From a mathematical standpoint, there is a fundamental theorem of coding theory that states that any vector of numbers can be represented as a single number. Thus, if you really don't want an extra layer of neurons, you could simply encode the RGB values into a single number and input that to the neuron. Though, this coding function would probably make most learning problems more difficult, so I doubt this solution would be worth it in most cases.
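For concreteness, one such encoding (an illustration of the idea, not a recommendation) packs three 8-bit channel values into a single integer:

R = 200; G = 55; B = 10; % example 8-bit channel values
encoded = R*256^2 + G*256 + B % a single number representing the RGB triple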
To summarize: artificial neural networks are used without giving a vector to an input unit, but lose no computational power because of this.
When dealing with multi-dimensional data, I believe a two-layer neural network is said to give better results.
In your case:
R[0..1] => (N1)----\
                    \
G[0..1] => (N2)-----(N4) => Result[0..1]
                    /
B[0..1] => (N3)----/
As you can see, the N4 neuron can handle 3 entries.
The [0..1] interval is a convention, but a good one imo. That way, you can easily code a set of generic neuron classes that can take an arbitrary number of entries (personally, I had template C++ classes with the number of entries as a template parameter). So you code the logic of your neurons once, then you toy with the structure of the network and/or combinations of functions within your neurons.
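A rough MATLAB sketch of the structure in the diagram above (all weights here are hypothetical placeholders that would normally come from training):

sigmoid = @(z) 1 ./ (1 + exp(-z));
rgb = [0.8; 0.2; 0.4]; % one normalized value per channel
w1 = [0.5; -0.3; 0.9]; % first-layer weights: N1..N3 take one input each
h = sigmoid(w1 .* rgb); % outputs of N1..N3
w2 = [0.7; 0.1; -0.4]; % second-layer weights into N4
result = sigmoid(w2' * h) % output of N4, in [0..1]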
Generally, the input for a single neuron is a value between 0 and 1. That convention is not just for ease of implementation, but because normalizing the input values to the same range ensures that each input carries similar weighting. (If you have some images with 8-bit color, with pixel values between 0 and 255, and some images with 16-bit color, with pixel values between 0 and 65535, you probably wouldn't want to favor the 16-bit images just because their numerical values are higher. Similarly, you will probably want your images to be the same dimensions.)
As far as using pixel values as inputs, it is very common to try to gather a higher level representation of the image than its pixels (more info). For example, given a 5 x 5 (normalized) gray scale image:
[1 1 1 1 1]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
We could use the following feature matrices to help discover horizontal, vertical, and diagonal features of the image. See python haar face detection for more information.
[1 1]  [0 0]  [1 0]  [0 1]  [1 0]  [0 1]
[0 0]  [1 1]  [1 0]  [0 1]  [0 1]  [1 0]
To build the input vector, v, for this image, take the first 2x2 feature matrix and "apply" it with element-wise multiplication to the first position in the image. Applying,
the first feature matrix, [1 1; 0 0], to the first position in the image, [1 1; 0 0],
will result in 2 because 1*1 + 1*1 + 0*0 + 0*0 = 2. Append 2 to the back of your input vector for this image. Then move this feature matrix to the next position, one to the right, and apply it again, adding the result to the input vector. Do this repeatedly for each position of the feature matrix and for each of the feature matrices. This will build your input vector for a single image. Be sure that you build the vectors in the same order for each image.
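A sketch of this loop in MATLAB (my own illustration): filter2 performs exactly this "apply at every position" operation, i.e. 2-D correlation, so each feature's responses can be computed at once and flattened into the input vector.

img = [1 1 1 1 1; 0 0 1 0 0; 0 0 1 0 0; 0 0 1 0 0; 0 0 1 0 0];
feats = {[1 1; 0 0], [0 0; 1 1], [1 0; 1 0], [0 1; 0 1], [1 0; 0 1], [0 1; 1 0]};
v = []; % input vector for this image
for k = 1:numel(feats)
    R = filter2(feats{k}, img, 'valid'); % sum of element-wise products at each 2x2 position
    v = [v; R(:)]; % append the responses in a fixed order
end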
In this case the image is black and white, but with RGB values you could extend the algorithm to do the same computation but add 3 values to the input vector for each pixel--one for each color. This should provide you with one input vector per image and a single input to each neuron. The vectors will then need to be normalized before running through the network.
Normally a single neuron takes as its input multiple real numbers and outputs a real number, which is typically calculated by applying the sigmoid function to a weighted sum of those numbers plus a constant offset.
If you want to put in, say, two RGB vectors (2 x 3 reals), you need to decide how you want to combine the values. If you add all the elements together and apply the sigmoid function, it is equivalent to feeding in six reals "flat". On the other hand, if you process the R elements, then the G elements, and the B elements, all individually (e.g. sum or subtract the pairs), you have in practice three independent neurons.
So in short, no, a single neuron does not take in vector values.
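For reference, a minimal sketch of the single neuron just described (the weights and bias are hypothetical values standing in for trained parameters):

x = [0.2; 0.5; 0.8]; % scalar inputs, e.g. one per channel
w = [0.4; -0.1; 0.7]; % one weight per input
b = 0.1; % constant offset (bias)
y = 1 / (1 + exp(-(w' * x + b))) % sigmoid of the weighted sum: a scalar in (0, 1)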
Use light wavelength normalized to visible spectrum as the input.
There are some approximate equations on the net.
Search for "RGB to wavelength conversion", or use the HSL color model and extract the Hue component, possibly using Saturation and Lightness as well.
It can be whatever you want, as long as you write your inner function accordingly.
The examples you mention use [0;1] as their domain, but you can use R, R², or whatever you want, as long as the function you use in your neurons is defined on this domain.
In your case, you can define your functions on R³ to allow RGB values to be handled.
A trivial example: use (x1,y1,z1),(x2,y2,z2) -> (a*x1+x2, b*y1+y2, c*z1+z2) as your function to transform two colors into one, with a, b and c being your learning coefficients, which you will determine during the learning phase.
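That trivial example as a hedged MATLAB sketch (coefficient values are placeholders to be learned):

a = 0.5; b = 0.5; c = 0.5; % hypothetical learning coefficients
combine = @(p, q) [a*p(1)+q(1), b*p(2)+q(2), c*p(3)+q(3)]; % maps two colors to one
combine([0.2 0.4 0.6], [0.1 0.1 0.1]) % example call with two RGB triples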
Very detailed information (including the answer to your question) is available on Wikipedia.