best result for salt and pepper [closed] - matlab

I have a picture with salt-and-pepper noise added to it. My task is to remove the noise using these functions:
You may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift, etc.
You may also use median, mean, max, min, sort, etc.
As far as I can tell, I can use a min-max filter or a median filter.
My task is to get the best result, and so far I think the median filter will get me that.
Is a median filter really better than repeatedly applying min-max? Is there a better way to get better results?

A median filter is a great way and the textbook approach to address salt-and-pepper noise. MATLAB offers the function medfilt2 for this purpose, but of course you can also code your own by setting the center intensity of each 3x3 window w to median(w(:)).
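A minimal sketch of both options (assuming the Image Processing Toolbox for imnoise/padarray/medfilt2, a stock test image, and an arbitrary noise density; the hand-rolled loop only relies on median, which matches the functions your assignment allows):

% Minimal sketch, assuming the Image Processing Toolbox and a stock test image.
I = imread('cameraman.tif');
J = imnoise(I, 'salt & pepper', 0.05);     % add salt-and-pepper noise

% Option 1: the built-in 3x3 median filter.
K1 = medfilt2(J, [3 3]);

% Option 2: hand-rolled, using only median as the assignment allows.
Jp = padarray(J, [1 1], 'replicate');      % pad so the window fits at the edges
K2 = zeros(size(J), 'like', J);
for r = 1:size(J,1)
    for c = 1:size(J,2)
        w = Jp(r:r+2, c:c+2);              % 3x3 neighbourhood centred on (r,c)
        K2(r,c) = median(w(:));            % replace the centre pixel by the median
    end
end

With heavier noise it can be worth comparing a second pass or a 5x5 window; since the task asks for the best result, trying a couple of window sizes is a reasonable experiment.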


How to choose the number of filters in each Convolutional Layer? [closed]

When building a convolutional neural network, how do you determine the number of filters used in each convolutional layer? I know there is no hard rule about the number of filters, but from your experience, papers you have read, etc., is there an intuition or observation about the number of filters used?
For instance (I'm just making this up as an example):
use more/less filters as the network gets deeper;
use larger/smaller filters with a large/small kernel size;
if the object of interest in the image is large/small, use ...
As you said, there are no hard rules for this.
But you can get inspiration from VGG16, for example.
It doubles the number of filters between successive conv blocks.
For the kernel size, I usually keep 3x3 or 5x5.
But you can also take a look at Inception by Google.
They use varying kernel sizes in parallel and then concatenate the results. Very interesting.
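A minimal sketch of that doubling pattern, assuming MATLAB's Deep Learning Toolbox; the input size, filter counts, and class count below are illustrative, not VGG16's actual values:

% Minimal sketch, assuming the Deep Learning Toolbox; 3x3 filters, with the
% filter count doubling after each pooling stage. All numbers are made up.
layers = [
    imageInputLayer([64 64 3])

    convolution2dLayer(3, 32, 'Padding', 'same')   % 3x3 filters
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 64, 'Padding', 'same')   % filter count doubled
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 128, 'Padding', 'same')  % doubled again
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];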
As far as I am concerned, there is no fixed depth for the convolutional layers. Just a few suggestions:
In CS231n they mention that using 3x3 or 5x5 filters with a stride of 1 or 2 is a widely used practice.
How many of them: it depends on the dataset. Also, consider using fine-tuning if the data is suitable.
How will the dataset reflect the choice? A matter of experiment.
What are the alternatives? Have a look at the Inception and ResNet papers for approaches which are close to the state of the art.

Sentiment Analysis for product rating [closed]

Hi, I am working on a project based on sentiment analysis for product rating.
I have a data set of good words and negative words. When any user comments on the website about a product, it will be rated automatically out of 10.
So I am confused about which clustering technique or algorithm would solve my problem. Please help.
Thanks in advance.
You are basically asking us what would be best for you to use as a classifier for your program, while we have no idea how your data is stored.
However, it seems you only have two classes, positive and negative, and you want to classify new data based on word analysis of the data.
I have worked on such a problem before; I used Rocchio's TF-IDF algorithm for the classification. You give it a set of training data (negative and positive words) and it classifies whatever later comes into the system.
It is based on vector classification and the cosine similarity distance measure.
For more information you can read this paper.
You can find an example of how the method works (on very small data) here.
Note: the provided example is a section of a project I worked on.
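A minimal sketch of that idea in MATLAB, with a made-up vocabulary, toy documents, and an arbitrary 0-10 mapping (none of this comes from the linked paper or example): build TF-IDF vectors, form one Rocchio centroid per class, and score a new comment by cosine similarity.

% Minimal sketch of a Rocchio-style TF-IDF classifier; vocabulary, documents,
% and the final 0-10 mapping are all made up.
vocab   = {'good','great','love','bad','awful','hate'};
posDocs = [1 1 1 0 0 0;          % toy positive comments (bag-of-words counts)
           2 0 1 0 0 0];
negDocs = [0 0 0 1 1 1;          % toy negative comments
           0 0 0 2 0 1];
docs = [posDocs; negDocs];

df   = sum(docs > 0, 1);                      % document frequency per word
idf  = log(size(docs,1) ./ max(df, 1));       % inverse document frequency
unit = @(v) v / max(norm(v), eps);            % normalise for cosine similarity

% Rocchio centroids: mean TF-IDF vector of each class
% (the .* idf lines rely on implicit expansion, R2016b or newer).
posCentroid = unit(mean(posDocs .* idf, 1));
negCentroid = unit(mean(negDocs .* idf, 1));

newDoc = [1 0 0 0 0 1];                       % word counts for a new comment
v      = unit(newDoc .* idf);
simPos = dot(v, posCentroid);                 % cosine similarity to each centroid
simNeg = dot(v, negCentroid);

rating = 10 * (simPos - simNeg + 1) / 2;      % one possible mapping to a 0-10 score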

what are the conclusions obtained from this box plot? [closed]

I have plotted the standard deviation of different regions. Can anyone help me draw conclusions from this boxplot? I just want to conclude the properties of the regions. In this figure, the eighth object is the odd one out. What is the significance of the whiskers?
How do I change the x labels to region1, region2, etc.?
Conclusions: a wide part of your data does not follow a normal distribution. You need something like violin plots to see what is really happening in your data.
Especially for 3-7, as it seems that the number of outliers is too big.
But remember: conclusions are obtained from data, not from the plotting option you chose for your data!
About changing the x labels: have you tried the function xlabel?
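A minimal sketch with made-up data, assuming the Statistics and Machine Learning Toolbox boxplot: xlabel sets the overall axis label, while the 'Labels' option names the individual boxes region1, region2, and so on.

% Minimal sketch with made-up data: 50 standard-deviation values per region.
stdevs = rand(50, 8);
labels = arrayfun(@(k) sprintf('region%d', k), 1:8, 'UniformOutput', false);

boxplot(stdevs, 'Labels', labels);   % names each box region1 ... region8
xlabel('Region');                    % overall x-axis label
ylabel('Standard deviation');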

matlab indices not in range? [closed]

I'm trying to write the optimal quantization for image processing.
I'm new to MATLAB, and in this code I'm trying to go over every pixel value in every interval of Z, multiply it by its histogram count, and sum it up, so I can calculate the optimal Q.
Problem: Attempted to access hist(257); index out of bounds because numel(hist)=256.
for i = 1:K
    for j = Z(i):Z(i+1)
        sum1  = j*hist(j+1) + sum1;
        count = count + hist(j+1);
    end
end
The error is telling you that you cannot access hist(257) because the array hist only has 256 elements in it. Note that hist is also a built-in function name, so you really ought to consider giving your variable a different name.
How to solve:
Think carefully about your code and what you are trying to achieve. What are Z, hist and K? What is the largest value that j can reach (=Z(i+1), so ultimately Z(K+1))? You then index hist at j+1, and apparently hist is not that big. What, then, is the shape of each variable?
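A minimal sketch of one way to restructure the loop so the index never leaves the 256-bin histogram: treat each interval as half-open, so j runs only up to Z(i+1)-1, and rename hist. The image, K, and the evenly spaced decision levels Z are a made-up setup, assuming 8-bit intensities and the Image Processing Toolbox for imhist.

% Hypothetical setup, assuming a uint8 image and the Image Processing Toolbox.
img = imread('cameraman.tif');
h   = imhist(img);                       % 256-bin histogram; renamed from "hist"
K   = 4;
Z   = round(linspace(0, 256, K+1));      % decision levels Z(1)=0 ... Z(K+1)=256

Q = zeros(1, K);
for i = 1:K
    sum1  = 0;
    count = 0;
    for j = Z(i):Z(i+1)-1                % half-open interval, so j never exceeds 255
        sum1  = sum1 + j*h(j+1);         % h(j+1) is the count of intensity j
        count = count + h(j+1);
    end
    Q(i) = sum1 / max(count, 1);         % centroid of the interval (reconstruction level)
end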

Image processing: Minimizing laplacian norm with boundary conditions [closed]

I'm trying to minimize this function with respect to A:
argmin_A ( ||L(A)||^2 + a*||A - B||^2* )
where:
A is an MxN image,
L is the Laplacian operator,
||.|| is the usual (Frobenius) norm,
a is a weight parameter,
B is a matrix of size (M+2*k)xN, where k is an integer parameter.
(*) indicates that we only consider the pixels on the boundary (we want A to preserve the boundary pixels of B).
Maybe the problem has a trivial solution, but I'm absolutely blocked.
If you need more details, it's equation (4) in this paper.
I will be very grateful for any help provided.
Without having looked carefully at that paper: gridfit does essentially that, although it does not apply the boundary conditions you ask for. You could certainly write what you want, however, or you could simply apply a weight to the boundary points. This is not a difficult problem, but in order to solve it efficiently you would want to use the sparse matrix capabilities of MATLAB.
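A minimal sketch of that sparse-matrix route (my own formulation, not taken from the paper, and simplified so that B has the same size as A): vectorize A, build the Laplacian as a sparse matrix, put the weight a only on the boundary pixels, and solve the normal equations of the quadratic objective with backslash.

% Minimal sketch; sizes, the weight, and B are made up, and B is taken the
% same size as A for simplicity (the paper's B is (M+2k)xN).
M = 64; N = 64; a = 10;
B = rand(M, N);

% 1-D second-difference operators, combined into a 2-D Laplacian acting on A(:).
em = ones(M, 1);  Dm = spdiags([em -2*em em], -1:1, M, M);
en = ones(N, 1);  Dn = spdiags([en -2*en en], -1:1, N, N);
L  = kron(speye(N), Dm) + kron(Dn, speye(M));          % sparse (M*N) x (M*N)

% Diagonal weight: a on the boundary pixels, 0 elsewhere.
mask = false(M, N);
mask([1 end], :) = true;  mask(:, [1 end]) = true;
W = spdiags(a * double(mask(:)), 0, M*N, M*N);

% Setting the gradient of ||L*x||^2 + (x-B(:))'*W*(x-B(:)) to zero gives
% the sparse symmetric system (L'L + W) x = W*B(:), solved by backslash.
x = (L'*L + W) \ (W * B(:));
A = reshape(x, M, N);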