I guess my question is very simple, but anyway...
I've created neural network using
net = newff(entry_borders, [20, 10], {'logsig', 'logsig'}, 'traingdx');
where entry_borders is a 50x2 array: [(0,1), (0,1), ...]
That should be a network with 50 inputs, a hidden layer, and 10 outputs, shouldn't it?
But when I run this:
test_result = sim(net, zeros(50));
disp(test_result);
I get a 10x50 matrix in test_result (instead of 10 scalar values) - what is that? I'm not talking about the training process here, which is why the code is so silly...
zeros(50) gives you a 50x50 matrix, so sim treats it as 50 samples (one 50-dimensional input per column), which gives 50 predictions (each of size 10).
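If you want a single set of 10 outputs, pass one sample as a 50x1 column instead. A minimal sketch, reusing the net from the question:
x = zeros(50, 1);            % one input sample with 50 entries
y = sim(net, x);             % y is 10x1: a single prediction
Y = sim(net, zeros(50, 3));  % several samples go in as columns: 50x3 input gives a 10x3 output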
In the snippet:
criterion = nn.CrossEntropyLoss()
raw_loss = criterion(output.view(-1, ntokens), targets)
output size is torch.Size([5, 5, 8967]), targets size is torch.Size([25]), and ntokens is 8967
After modifying the code, my
output size is torch.Size([5, 8967]) and targets size is torch.Size([25])
which raises dimensionality issues when computing the loss.
Is it sensible to increase the size of my Linear activation that produces the output by 5, so that I can resize the output later to be of the size torch.Size([5, 5, 8967])?
The problem with increasing the size of the tensor is that ntokens can become quite large and I can easily run out of memory because of that. Is there an alternative approach?
You should do something like this:
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

ntokens = 8000
output = Variable(torch.randn(5, 5, ntokens))
targets = Variable(torch.from_numpy(np.random.randint(0, ntokens, size=25)))
criterion = nn.CrossEntropyLoss()
loss = criterion(output.view(-1, ntokens), targets)
print(loss)
This prints:
Variable containing:
9.4613
[torch.FloatTensor of size 1]
Here, I am assuming output contains predictions of next word for 5 sentences (minibatch size is 5) and each sentence is of length 5 (sequence length is 5). 8000 is the vocabulary size, so your model is predicting a probability distribution over the entire vocabulary.
Now you can compute the loss for predicting each of the 25 words, since the target has shape 25 as required.
Please note that CrossEntropyLoss expects the input to contain scores for each class: the input has to be a 2D tensor of size (minibatch, C), and the target has to be a 1D tensor of size minibatch containing a class index (0 to C-1) for each value.
I have data in .txt format and successfully imported it into a variable V, which is an 8200x1 vector. Now I need to get the average of every 10 values. Can anyone help me with the code?
I think you are looking for colfilt. You can take the average of every 10 values as [1,...,10], then [2,...,11], then [3,...,12], etc. as follows:
a = randi(10, [8200 1]);
b = colfilt(a, [10 1], 'sliding', @(x) mean(x))
If you want to average over distinct blocks of 10 values as: [1,...,10],[11,...,20] etc., then just replace 'sliding' with 'distinct'.
You can do the same operation with blockproc and nlfilter but colfilt executes faster as stated in Mathworks colfilt documentation.
If you want the average of each separate block of size 10: reshape into a 10-row matrix and then average each column:
n = 10;
result = mean(reshape(V, n, []), 1);
If you want the average on a sliding window of length 10: use convolution:
result = conv(V, ones(1,n)/n, 'valid');
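For reference, a quick size check of the two approaches on an 8200x1 vector (a sketch; the sliding version below uses a column-shaped kernel so the result is a column like V):
V = rand(8200, 1);                            % example data of the stated size
n = 10;
block_avg   = mean(reshape(V, n, []), 1);     % 1x820: one mean per non-overlapping block of 10
sliding_avg = conv(V, ones(n, 1)/n, 'valid'); % 8191x1: mean of V(k:k+9) for each k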
Hi, I would like to make an MLE estimate of my parameters using the built-in functions in MATLAB. Here is what MATLAB says:
phat = mle(data,'distribution',dist)
I don't know how to use the vector "data". Suppose I have 340 observations giving 0, 120 observations at 2, and 90 observations at 10,
so what should the vector look like? [340,0,120,0,0,0,0,0,0,0,90]? I doubt it. I just want to know the "structure" of the vector.
It seems that the mle() function can only handle scalar (1-D) data.
So if you want to estimate the class conditional distribution Pr[X = x|Y = 0], Pr[X = x|Y = 2] and Pr[X = x|Y = 10], then you need to split the sample data into three groups and call mle() three times. And for each call, you put all data points into one vector as the first argument.
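Either way, the vector you pass to mle() should contain one element per observation, not counts. For the numbers in the question, that could look like this (a sketch; dist is whatever distribution name you intend to fit, as in the question's snippet):
% 340 observations of value 0, 120 of value 2, 90 of value 10:
data = repelem([0 2 10], [340 120 90]);                 % 1x550, one element per observation
% equivalently: data = [zeros(1,340), 2*ones(1,120), 10*ones(1,90)];
phat = mle(data, 'distribution', dist);                 % dist = the distribution name you want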
I have a structure named 'data' with 100 entries, each corresponding to a participant from an experiment. Each of the 100 entries contains multiple 6x6 matrices giving different values.
For instance, an example of a matrix from my first participant is:
data.p001.matrixCB
18.9737 17.0000 14.2829 12.4499 11.7898 10.0995
18.1384 16.0000 13.4907 11.7898 11.2250 10.3441
14.7986 12.5300 11.7898 11.7473 12.2066 9.3808
14.3527 13.4536 12.9615 13.3417 12.7279 11.7047
18.0278 17.8885 17.6068 17.4642 17.1464 16.6132
24.1661 24.7790 23.7697 23.3880 22.6495 23.8537
...and this is one of 100 entries in the structure with a similar setup.
I'd like to get the mean average value for each cell in the matrix across my 100 participants. So I would have a mean value for the 100 values in position matrixCB(1,1), and all other positions in the matrix. Unfortunately I can't see how this is done, and the help functions are less than helpful. Any assistance would be greatly appreciated!
You can sum all your 100 matrices into Sum and then divide by 100 - Sum./100 - and then each cell will hold the average of the 100 values at that index.
For example -
Sum = A + B ;
Sum./2 ;
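Applied to the structure in the question, that could look like this (a sketch, assuming the fields are named p001 ... p100 as data.p001 suggests):
Sum = zeros(6, 6);
for k = 1:100
    name = sprintf('p%03d', k);            % 'p001', 'p002', ..., 'p100'
    Sum  = Sum + data.(name).matrixCB;     % accumulate the 6x6 matrices
end
avgCB = Sum ./ 100;                        % element-wise mean across participants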
Structures can be a pain. To avoid typing out a bunch of code, you could take the following approach:
Convert required matrices to cell array
Reshape the cell array into 3D matrix
Compute means across 3rd dimension
Code for this:
Mcell = arrayfun(@(x) data.(sprintf('p%03d',x)).matrixCB, 1:100, 'uni', 0);
M = mean( reshape(cell2mat(Mcell), 6, 6, []), 3 );
I have a fairly large vector named blender. I have extracted n elements for which blender is greater than x (irrelevant). Now my difficulty is the following:
I am trying to create a (21 x n) matrix containing, for each such element of blender, the 10 elements before it, the element itself, and the 10 elements after it.
element=find(blender >= 120);
I have been trying variations of the following:
for i=element(1:end)
Matrix(i)= Matrix(blender(i-10:i+10));
end
then I want to plot one column of the matrix at a time, advancing each time I hit Enter.
This second part I can figure out later, but I would appreciate some help making the Matrix
Thanks
First, you can use "logical indexing" of your array, which uses a logical expression to address your vector. With blender = [2, 302, 35, 199, 781, 312, 8], it could look like this:
>> b_hi = blender(blender>=120)
b_hi =
302 199 781 312
Second, you can concatenate arrays like in b_padded = [1, 2, b_hi, 3, 4]. If b_hi was a column vector, you'd use semicolons instead of commas.
Third, there is a function reshape that allows you to turn the resulting vector into a matrix. doc reshape will tell you details. For example, to turn b_padded into a 2-by-4 matrix,
>> b_matrix = reshape(b_padded, 2, 4)
b_matrix =
1 302 781 3
2 199 312 4
will do. This means you can do the whole job without any for-loop. Note that transposing the result of reshape(b_padded, 4, 2) will give you the other possible 2-by-4 matrix. You obtain the transpose of a matrix A with A'. You will find out which one you want.
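Putting those three pieces together for the 21-by-n matrix from the question, a sketch (assuming blender is a vector, only keeping hits with at least 10 neighbours on each side, and using implicit expansion, which needs R2016b or newer):
idx = find(blender >= 120);
idx = idx(idx > 10 & idx <= numel(blender) - 10);   % drop hits too close to the ends
Matrix = blender(idx(:).' + (-10:10).');            % 21-by-n: column j is blender(idx(j)-10 : idx(j)+10)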
You need to create a new matrix, and use two indices so that Matlab knows it is assigning to a column in a 2D matrix.
NewMatrix = zeros(21, length(element));
for i = 1:length(element)
k = element(i);
NewMatrix(:,i) = blender(k-10:k+10);   % the 10 values before, the element itself, and the 10 after
end
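For the plotting part mentioned in the question, a minimal sketch that shows one column at a time and waits for a key press before moving on:
for i = 1:size(NewMatrix, 2)
    plot(NewMatrix(:, i));
    title(sprintf('Column %d of %d', i, size(NewMatrix, 2)));
    pause;                     % waits for a key press before showing the next column
end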