I checked the Keras documentation but could not find any specific explanation. Does it simply repeat the values along a certain axis?
Yes, the values are just repeated.
There are UpSampling1D/2D/3D layers: the 1D version repeats each step size times, and the 2D version repeats each row and column.
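For example, a minimal sketch (assuming the tf.keras API; the shapes and values are only for illustration) makes the repetition visible:

import numpy as np
from tensorflow.keras.layers import UpSampling2D

# UpSampling2D with the default "nearest" interpolation simply repeats
# each row and column of the input.
x = np.arange(4, dtype="float32").reshape(1, 2, 2, 1)  # (batch, height, width, channels)
y = UpSampling2D(size=(2, 2))(x)

print(x[0, :, :, 0])
# [[0. 1.]
#  [2. 3.]]
print(np.asarray(y)[0, :, :, 0])
# [[0. 0. 1. 1.]
#  [0. 0. 1. 1.]
#  [2. 2. 3. 3.]
#  [2. 2. 3. 3.]]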
Given a saliency map of an image (with values between 0 and 1), my aim is to compute a global saliency score. I'm a bit confused: I don't know whether I should use the mean or the median.
The problem with the mean is that low saliency values will pull down the global saliency score.
What kind of summary statistic could I use for this?
Thanks in advance.
I think you want to compare whether one image is more salient than another. Is that right?
First, please check whether the saliency map is already normalized. Many algorithms do this so that all images have the same mean (e.g., 0.5).
If there is no per-image normalization, and you do not want to use the mean or median, perhaps you can use the mode.
Two sample outputs would be helpful. :)
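In case it helps, here is a small sketch of the three summary statistics in Python/NumPy; the coarse histogram used for the mode (continuous values rarely repeat exactly) and the random placeholder map are my own assumptions:

import numpy as np

saliency = np.random.rand(240, 320)      # placeholder for a real saliency map in [0, 1]

mean_score = saliency.mean()
median_score = np.median(saliency)

# Approximate mode via a coarse histogram over [0, 1].
counts, edges = np.histogram(saliency, bins=50, range=(0.0, 1.0))
i = counts.argmax()
mode_score = 0.5 * (edges[i] + edges[i + 1])

print(mean_score, median_score, mode_score)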
I am working on a project with lasagne and theano and need to create a custom layer.
However, the output of this layer does not depend on the size of the input, but on the values of the input...
I know that keras (only with the tensorflow backend) offers Lambda layers, and I managed to write an expression which allowed me to make the output depend on the values of the input. But I don't know how, or even whether, it is possible to do so using lasagne and theano.
For example: my input tensor has a fixed size of 100 values, but I know that there can be some 0 values at the end which do not influence the output of the network at all. How can I remove those values and pass only the informative values on to the next layer?
I would like to minimize the space requirements of the network :)
Is it possible to have a layer like that in lasagne? If so, how should I write the get_output_shape_for() method?
If not, I'll switch to keras and tensorflow :D
Thanks in advance!
Thanks to Jan Schlüter for providing me with the answer here:
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/lasagne-users/ucjNayfhSu0
To summarize:
1) Yes, it is possible to have a lasagne layer whose output shape depends on the input values (instead of the input shape) and
2) You must write None for the dimensions that do not have a fixed compile-time shape (i.e. the dimensions that depend on the input values).
Regarding the example:
You can compute the number of non-zero entries first, then create a new tensor of that length, and then fill the new tensor with the non-zero values (e.g. using the theano.tensor.set_subtensor function). However, I don't know if this is the optimal way to achieve this result...
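A minimal sketch of such a layer, assuming Lasagne's custom-layer API and (for simplicity) a 1-D input; the layer name and the "drop the zeros" behaviour are only illustrative:

import theano.tensor as T
from lasagne.layers import Layer

class DropZerosLayer(Layer):
    def get_output_shape_for(self, input_shape):
        # The number of non-zero entries is unknown at compile time,
        # so that dimension must be declared as None.
        return (None,)

    def get_output_for(self, input, **kwargs):
        # Keep only the non-zero values; the length of the symbolic
        # result depends on the input values, not on the input shape.
        return input[T.neq(input, 0).nonzero()]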
I have 50 images and created a database from the green channel of each image by separating the pixels into two classes (skin and wound) and storing their respective green-channel values.
Also, I have 1600 wound pixel values and 3000 skin pixel values.
Now I have to use Bayes classification in MATLAB to classify the skin and wound pixels in a new (test) image using the database that I have. I have tried the built-in 'diaglinear' option, but the results are poor, with a lot of misclassification.
Also, I don't know whether the data follow a normal distribution, so I can't simply assume a Gaussian when estimating the conditional probability density functions for the data.
Is there any way to perform pixel wise classification?
If there is any part of the question that is unclear, please ask.
I'm looking for help. Thanks in advance.
If you really want to use pixel-wise classification (quite simple, but why not?), try exploring the pixel value distributions with hist()/imhist(). It might give you a clue about Gaussianity...
Second, you might fit your values to some appropriate curves (Gaussians?) with fit() if you have the Curve Fitting Toolbox (or again do it manually). Then multiply the curves by the prior probabilities of wound/skin if you want it to be a MAP classifier, and finally find their intersection. Voilà! You have your decision value V.
if Xi < V -> skin
else -> wound
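If it helps, here is a rough sketch of the same MAP idea in Python/NumPy/SciPy (the MATLAB hist()/fit() steps above map onto it); the Gaussian class models, the placeholder training values and all variable names are assumptions of mine:

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

skin_green = np.random.normal(120, 15, 3000)   # placeholders for the stored
wound_green = np.random.normal(80, 20, 1600)   # green-channel training values

# Fit one Gaussian per class and take the priors from the class sizes.
mu_s, sd_s = norm.fit(skin_green)
mu_w, sd_w = norm.fit(wound_green)
p_s = len(skin_green) / (len(skin_green) + len(wound_green))
p_w = 1.0 - p_s

# Decision value V: where the prior-weighted densities intersect.
def diff(x):
    return p_s * norm.pdf(x, mu_s, sd_s) - p_w * norm.pdf(x, mu_w, sd_w)

V = brentq(diff, min(mu_s, mu_w), max(mu_s, mu_w))

def classify(pixel_green):
    # Pick the class whose prior-weighted density is larger at this value.
    return "skin" if diff(pixel_green) > 0 else "wound"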
I am currently working on an indoor navigation system using a Zigbee WSN in star topology.
I currently have signal-strength data for 60 positions in an area of approximately 15 m by 10 m. I want to use an ANN to help predict the coordinates for other positions. After going through a number of threads, I realized that normalizing the data would give me better results.
I tried that and re-trained my network a few times. I managed to get the goal parameter in MATLAB's nntool down to 0.000745, but when I feed a training sample back in as a test input and scale the output back, the predicted value is way off.
A value of 0.000745 means that my data has been fit very closely, right? If so, why this anomaly? I am dividing by the maximum value to normalize and multiplying by it to scale back.
Can someone please explain to me where I might be going wrong? Am I using the wrong training parameters? (I am using TRAINRP, 4 layers with 15 neurons in each layer, and giving a goal of 1e-8, a gradient of 1e-6 and 100000 epochs.)
Should I consider methods other than ANN for this purpose?
Please help.
For spatial data you can always use Gaussian Process Regression. With a proper kernel you can predict pretty well, and GP regression is a pretty simple thing to do (just matrix inversion and matrix-vector multiplication). You don't have much data, so exact GP regression can easily be done. For a nice source on GP regression, check this.
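A small sketch of what that looks like in practice, assuming scikit-learn is available; the kernel choice, the number of anchor nodes and the array names are placeholders of mine:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rss = np.random.rand(60, 4)                    # placeholder: signal strengths at 60 positions
coords = np.random.rand(60, 2) * [15.0, 10.0]  # placeholder: their known (x, y) coordinates

# Exact GP regression: fit a smooth kernel plus a noise term.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(rss, coords)

new_rss = np.random.rand(1, 4)                 # measurement taken at an unknown position
print(gp.predict(new_rss))                     # predicted (x, y)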
What did you scale? Inputs or outputs? Did you scale input+output for your training set and only the output while testing? (See the scaling sketch after this answer.)
What kind of error measure do you use? I assume your "goal parameter" is an error measure. Is it SSE (sum of squared errors) or MSE (mean squared error)? 0.000745 seems very small, and usually you should then have almost no error on your training data.
Your ANN architecture might be too deep with too few hidden units for an initial test. Try different architectures like 40-20 hidden units, 60 HU, 30-20-10 HU, ...
You should generate a test set to verify your ANN's generalization. Otherwise overfitting might be a problem.
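To make the scaling point concrete, here is a hedged sketch in Python/NumPy (the same idea applies in MATLAB); the placeholder arrays and the stand-in for the trained network are assumptions of mine:

import numpy as np

X_train = np.random.rand(60, 4) * 100        # placeholder signal strengths
y_train = np.random.rand(60, 2) * [15, 10]   # placeholder coordinates

x_max = X_train.max(axis=0)                  # scale factors from the TRAINING set only
y_max = y_train.max(axis=0)

Xn_train = X_train / x_max                   # train the network on these
yn_train = y_train / y_max

X_test = np.random.rand(5, 4) * 100
Xn_test = X_test / x_max                     # reuse the same factors, never recompute them

# yn_pred = net(Xn_test)                     # hypothetical trained network
yn_pred = np.random.rand(5, 2)               # stand-in for the network output
y_pred = yn_pred * y_max                     # scale the predictions back to metres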
I have two classes of data which are plotted in 2D and I wish to plot the nearest-neighbours decision boundary for a given value of k.
I realise that there is a similar example provided in MATLAB's 'classify' help doc; however, I do not see how I can use it in the context of k-nearest-neighbours.
Thanks,
Josh
I think, since you are in 2D space, the easiest approach would be brute force: iterate over all (x, y) at a fixed resolution, determine the class (or likelihood) for each point, and plot the values as an image.
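A minimal sketch of that brute-force grid approach, assuming scikit-learn and matplotlib; X, y and the grid resolution are placeholders of mine:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(100, 2)                  # placeholder 2-D training points
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # placeholder two-class labels

k = 5
clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)

# Classify every point of a dense grid and show the result as an image.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.imshow(zz, origin="lower", aspect="auto", alpha=0.3,
           extent=(xx.min(), xx.max(), yy.min(), yy.max()))
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors="k")
plt.show()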