Primal form of Soft margin SVM implementation in matlab [closed] - matlab

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I am implementing the primal form of a soft-margin SVM. After computing the weight and bias parameters, I need to test the classifier on test data. How can I achieve that? I need to calculate the classification accuracy on the test data. Thank you.

The classification rule of your SVM is the same regardless of whether you trained it with a soft or hard margin:
cl(x) = sign(<w, x> - b) = sign( SUM_i w_i x_i - b )
where the w_i are your learned coefficients and b is the bias.
Just take your test set, pass each sample through this rule, and compute the fraction of correct predictions (the accuracy).
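With the test samples stored as rows of a matrix, this is two lines of MATLAB. A minimal sketch, where Xtest (m-by-n test samples), ytest (m-by-1 labels in {-1,+1}), w, and b are placeholder names for your own variables:

```
% Xtest: m-by-n test samples (one per row); ytest: m-by-1 labels in {-1,+1}
pred = sign(Xtest * w - b);       % apply the decision rule to every sample
accuracy = mean(pred == ytest);   % fraction of correct predictions
```

Note that sign(0) is 0 in MATLAB, so a sample lying exactly on the hyperplane counts as misclassified here.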

Related

Regression Matlab [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 days ago.
Identification of brain areas after language stimuli
Hi guys, I'm trying to solve a problem with my MATLAB code. My goal is to identify the brain areas associated with language stimuli. After performing some preliminary tasks, I managed to plot the convolution between the BOLD signal and the square (stimulus) signal. Now I want to use regress to fit the convolution result (a 1x481 vector) against a matrix of brain-image voxel data (92614x39), where the rows are voxels and the 39 columns are the MRI scans. The two matrices have different sizes, so I have no idea how to set up the regression. Could anyone help me?

What is the difference between entropy, energy, mean, skewness, variance, inertia and kurtosis in image processing? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
I am reading about feature extraction in medical imaging, especially brain tumor detection, and I came across the features above; I don't understand the difference between them.
The various features that can be calculated from a co-occurrence matrix C include inertia (contrast), absolute value, inverse difference, energy, and entropy. Contrast is the element difference moment of order 2, which has a relatively low value when the large entries of C lie near the main diagonal. Energy, SUM_i SUM_j c_ij^2, is highest when the co-occurrence matrix is concentrated in a few entries (and lowest when all entries are equal). Entropy, -SUM_i SUM_j c_ij * log2(c_ij), measures the randomness of the image's gray levels.
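These definitions translate directly into MATLAB. A minimal sketch, assuming C is a co-occurrence matrix (e.g. from graycomatrix) that has been normalized so its entries sum to 1:

```
C = C / sum(C(:));                         % normalize counts to probabilities
[i, j] = ndgrid(1:size(C,1), 1:size(C,2)); % row/column index grids
contrast = sum(sum((i - j).^2 .* C));      % element difference moment of order 2
energy   = sum(sum(C.^2));                 % angular second moment
p = C(C > 0);                              % drop zeros to avoid log2(0)
entropy  = -sum(p .* log2(p));             % randomness of the gray levels
```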

What does linsolve(A,B) return when the number of equations is larger than the number of variables? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I accidentally passed a matrix A with far more rows than columns into linsolve(A,B), so it should be an inconsistent system of equations. However, what I got back was a 'solution' that fits my task quite well. So what exactly does linsolve return when you have more rows than columns?
What you have is an overdetermined linear system, which can be solved by the least-squares method.
If your matrix A has more rows than columns (m > n), you have more equations than unknowns, so an exact solution generally does not exist. What you obtain instead is the solution that minimizes the residual error.
You can refer to the page Overdetermined system for more insight.
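A small illustration (the numbers are made up): for a tall, full-rank A, linsolve returns the least-squares solution, the same result as the backslash operator:

```
A = [1 1; 1 2; 1 3; 1 4];     % 4 equations, 2 unknowns
b = [2.1; 3.9; 6.2; 7.8];
x = linsolve(A, b);           % least-squares solution (via QR), same as A \ b
r = norm(A*x - b);            % residual is generally nonzero: no exact solution
```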

Single Neuron Neural Network - Types of Questions? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Can anybody think of a real(ish) world example of a problem that can be solved by a single neuron neural network? I'm trying to think of a trivial example to help introduce the concepts.
Using a single neuron for classification is basically logistic regression, as Gordon pointed out.
Logistic regression is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more metric (interval or ratio scale) independent variables. (statisticssolutions)
This is a good case to apply logistic regression:
Suppose that we are interested in the factors that influence whether a political candidate wins an election. The outcome (response) variable is binary (0/1); win or lose. The predictor variables of interest are the amount of money spent on the campaign, the amount of time spent campaigning negatively and whether or not the candidate is an incumbent. (ats)
For a single-neuron network, I find logic functions a good example. Assuming, say, a sigmoid neuron, you can demonstrate how the network solves the AND and OR functions, which are linearly separable, and how it fails to solve the XOR function, which is not.
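To make that concrete, here is a rough MATLAB sketch of a single sigmoid neuron learning AND by gradient descent (the learning rate and iteration count are arbitrary choices):

```
X = [0 0; 0 1; 1 0; 1 1];            % all four inputs of the AND function
t = [0; 0; 0; 1];                    % AND targets
sigmoid = @(z) 1 ./ (1 + exp(-z));
w = zeros(2,1); b = 0; lr = 0.5;     % weights, bias, learning rate
for epoch = 1:5000
    y = sigmoid(X*w + b);            % forward pass
    g = y - t;                       % gradient of the cross-entropy loss
    w = w - lr * (X' * g) / 4;       % average gradient over the 4 samples
    b = b - lr * mean(g);
end
round(sigmoid(X*w + b))              % should recover [0; 0; 0; 1]
```

Swapping in the XOR targets [0; 1; 1; 0] and rerunning leaves all four outputs stuck near 0.5, since no single linear boundary separates XOR.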

Image processing: Minimizing laplacian norm with boundary conditions [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm trying to minimize this function (over A):
argmin_A ( ||L(A)||^2 + a*||A - B||^2* )
where:
A is an MxN image
L is the Laplacian operator
||.|| is the usual (Frobenius) norm
a is a weight parameter
B is a matrix of size (M+2*k)xN, where k is an integer parameter
(*) indicates that we only consider the pixels on the boundary (we want A to preserve the boundary pixels of B).
Maybe the problem has a trivial solution, but I'm absolutely blocked.
If you need more details, it's equation (4) in this paper.
I will be very grateful for any help provided.
Without looking carefully at that paper: gridfit does essentially this, although it does not apply the boundary conditions you ask for. You could certainly add them yourself, or simply apply a weight to the boundary points. This is not a difficult problem, but to solve it efficiently you will want to use MATLAB's sparse matrix capabilities.
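As a rough illustration of the sparse approach (a 1-D toy problem, not the paper's exact formulation): minimizing ||L*A||^2 + a*||A - B||^2 leads to the normal equations (L'*L + a*I)*A = a*B, which MATLAB solves efficiently when L is sparse:

```
n = 64;                                 % illustrative 1-D signal length
e = ones(n, 1);
L = spdiags([e -2*e e], -1:1, n, n);    % sparse 1-D Laplacian operator
a = 0.1;                                % weight of the data term
B = rand(n, 1);                         % illustrative data to stay close to
A = (L'*L + a*speye(n)) \ (a*B);        % sparse normal-equations solve
```

For the boundary-only data term in the question, you would replace a*I (and a*B) with a diagonal weight matrix that is nonzero only at the boundary pixels.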