Texture feature extraction using Gray-Level Co-occurrence Matrix - MATLAB

I'm doing a project on liver tumor classification. I used this code and it gave some output, but I don't know whether it is correct.
I initially used a region-growing method for liver segmentation, and from the segmented liver I segmented the tumor using FCM. So I gave the tumor-segmented image as input to this GLCM program. Was that correct? If so, then I think my output should also be correct.
I gave the parameters exactly as in the example. What do they actually mean? Do I need to change them for different images? If so, how do I choose the parameters? I'm completely new to this, so kindly guide me.
I got this output. Am I correct?
stats =
autoc: [1.857855266614132e+000 1.857955341199538e+000]
contr: [5.103143332457753e-002 5.030548650257343e-002]
corrm: [9.512661919561399e-001 9.519459060378332e-001]
corrp: [9.512661919561385e-001 9.519459060378338e-001]
cprom: [7.885631654779597e+001 7.905268525471267e+001]
cshad: [1.219440700252286e+001 1.220659371449108e+001]
dissi: [2.037387269065756e-002 1.935418927908687e-002]
energ: [8.987753042491253e-001 8.988459843719526e-001]
entro: [2.759187341212805e-001 2.743152140681436e-001]
homom: [9.930016927881388e-001 9.935307908219834e-001]
homop: [9.925660617240367e-001 9.930960070222014e-001]
maxpr: [9.474275457490587e-001 9.474466930429607e-001]
sosvh: [1.847174384255155e+000 1.846913030238459e+000]
savgh: [2.332207337361002e+000 2.332108469591401e+000]
svarh: [6.311174784234007e+000 6.314794324825067e+000]
senth: [2.663144677055123e-001 2.653725436772341e-001]
dvarh: [5.103143332457753e-002 5.030548650257344e-002]
denth: [7.573115918713391e-002 7.073380266499811e-002]
inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
inf2h: [5.643539051044213e-001 5.661543271625117e-001]
indnc: [9.980238521073823e-001 9.981394883569174e-001]
idmnc: [9.993275086521848e-001 9.993404634013308e-001]
Kindly guide me. Thank you.

It's OK, but I don't think you usually need all of this extra information. I usually prefer to use the following code:
% GLCM for a single offset: one pixel down and one to the right
GLCM2 = graycomatrix(img, 'Offset', [1 1]);
% Contrast, correlation, energy and homogeneity of that GLCM
stats = graycoprops(GLCM2);
I hope it will help you.
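For context on the parameters: in graycomatrix, 'Offset' is a [row col] displacement that defines which pixel pairs are counted, 'NumLevels' sets how many gray levels the image is quantized into, and 'Symmetric' makes the pairing order-independent. Passing several offsets returns one GLCM per direction, which is presumably why each statistic in your output is a pair of values. A minimal sketch, assuming img is your segmented grayscale tumor image:
% Distance-1 offsets in the four standard directions: 0, 45, 90, 135 degrees.
offsets = [0 1; -1 1; -1 0; -1 -1];
% One symmetric GLCM per direction, with the image quantized to 8 gray levels.
glcms = graycomatrix(img, 'Offset', offsets, 'NumLevels', 8, 'Symmetric', true);
% Each statistic comes back with one value per direction; averaging over
% directions gives features that are less orientation-dependent.
stats = graycoprops(glcms, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
meanContrast = mean(stats.Contrast);
Whether you need to change these per image depends on the texture scale you expect; distance 1 in the four standard directions is a common default.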


Why is my "waterproof" polyhedron causing "WARNING: Object may not be a valid 2-manifold and may need repair!"?

In the script
difference() {
    polyhedron(
        points = [[0,0,0],
                  [2,0,0],
                  [2,1,0],
                  [0,1,0],
                  [0,0,2],
                  [0,1,2]],
        faces = [[0,1,2,3],
                 [5,4,1,2],
                 [5,4,0,3],
                 [0,1,4],
                 [2,3,5]]);
    cube([1,1,1]);
}
the polyhedron alone works fine (it is rendered without warnings), but adding the cube as above causes the warning WARNING: Object may not be a valid 2-manifold and may need repair! to be logged, and the output only renders some parts of some surfaces.
I'm using OpenSCAD 2015.03-1 on Ubuntu 16.04.
This is because some faces of your polyhedron point in the wrong direction, which causes issues when calculating the difference().
See the Manual and FAQ for details.
Changing the winding order of the affected polygons fixes the polyhedron:
difference() {
    polyhedron(
        points = [[0,0,0],
                  [2,0,0],
                  [2,1,0],
                  [0,1,0],
                  [0,0,2],
                  [0,1,2]],
        faces = [[0,1,2,3],
                 [2,1,4,5],
                 [5,4,0,3],
                 [0,4,1],
                 [2,5,3]]);
    cube([1,1,1]);
}
The difference is still non-manifold, as cutting away the cube leaves two prism-shaped objects touching at just one edge. That is by definition not 2-manifold either, so the warning remains.
Depending on how the exported model is supposed to be used, you could choose to ignore this warning and hope that the tool processing the 3D model can handle it.
To remove the issue, the cube could, for example, be made a bit smaller, like cube([1, 1, 0.999]).
An unrelated but still useful strategy for preventing issues later on is to always make the cutting object a bit larger, to ensure that no very thin planes remain, e.g. cube([2, 3, 1.999], center = true). That will also remove the display artifacts in preview mode (see the sketch below).
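Putting that strategy together with the corrected faces, a sketch (the cutter dimensions are the ones suggested above):
difference() {
    polyhedron(
        points = [[0,0,0], [2,0,0], [2,1,0], [0,1,0], [0,0,2], [0,1,2]],
        faces = [[0,1,2,3], [2,1,4,5], [5,4,0,3], [0,4,1], [2,5,3]]);
    // Centered cutter: oversized in x and y so it pokes well past the outer
    // faces, and slightly shorter than 1 in z so the two remaining prisms
    // no longer touch along an edge.
    cube([2, 3, 1.999], center = true);
}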

Any reason why these instances could be misclassified?

I started off with two files, training and testing.
Then, using libsvm, I scaled both of those files to training.scale and testing.scale.
Then, using grid.py (part of libsvm), I ran training.scale and received some cross-validation values:
C = 512
gamma = 0.03125
5-fold validation = 66.8421
Then, running svm-train with the parameters found by grid.py on training.scale, I got a new file called training.scale.model.
I then ran svm-predict, which produced a new file called testing.predict, and got an accuracy of 60.8333%.
Finally, comparing testing and testing.predict showed that there were 47/120 misclassifications.
Link to code: https://drive.google.com/folderview?id=0BxzgP5V6RPQHekRjZXdFYW9GX0U&usp=sharing
The real question: is there any reason why these misclassifications occur?
P.S. I apologise for the bad format of this question; I've been up for too long.
I am guessing you are new to machine learning. The results you've got are perfectly reasonable.
Why do these misclassifications occur? The features you've used don't separate your classes well. The 66% cross-validation score should have given you the hint: on a balanced two-class problem, plain guessing already gets you 50% accuracy, and your feature set only improved on that by about 16 percentage points. Try exploring new features.
I'm assuming your data set is clean.
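For reference, the same workflow can be reproduced with libsvm's Python bindings (a sketch; the file names and parameters are the ones from the question):
from svmutil import svm_read_problem, svm_train, svm_predict

# Load the pre-scaled data sets (libsvm sparse format).
y_train, x_train = svm_read_problem('training.scale')
y_test, x_test = svm_read_problem('testing.scale')

# 5-fold cross-validation with the parameters grid.py found.
cv_acc = svm_train(y_train, x_train, '-c 512 -g 0.03125 -v 5')

# Train the final model and evaluate it on the held-out test set.
model = svm_train(y_train, x_train, '-c 512 -g 0.03125')
labels, (acc, mse, scc), vals = svm_predict(y_test, x_test, model)
print('test accuracy: %.4f%%' % acc)
Counting the element-wise disagreements between labels and y_test reproduces the error count: 60.8333% of 120 instances is 73 correct, i.e. 47 misclassified.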

Fixed blocks in a Simulink diagram

Is there any way to fix a block in a Simulink diagram, i.e. to disable moving/resizing of that block?
Is there any way to draw a kind of shape in Simulink (e.g. empty rectangles)?
My aim is to fix an area in the model so that the user is not allowed to design the model outside this area.
I tried using the callback functions with no success.
Thanks for any help.
As far as I know, there is just a compromise.
As mentioned in the other answer, you need to create a subsystem. In the block parameters you can set its permissions to ReadOnly, so everything is fixed and greyed out as you desired, or to NoReadOrWrite, so it is completely blocked. This solution only works for "naive" users, as they can still change the properties to get access again. Maybe you can find a way to prevent the user from entering the properties menu.
The secure way is much more complicated: protected models.
Regarding your question about the rectangular shape: I tried to find a solution for a long time, and I'd say there is no way to "draw" something, even though the background is actually called a "canvas" ;)
To your other comment: what is wrong with a subsystem? You can just block everything except the block you want the user to play around with. It opens in a new tab/window, and it doesn't matter how big everything is. What you want is probably not possible beyond that.
You can achieve that to some extent using callback functions. For example, set the block's LoadFcn to:
A = get_param(gcb, 'Position');
and its MoveFcn to:
try
    set_param(gcb, 'Position', A);
catch
end
This will prohibit moving and resizing, but not cutting or deleting. Obviously, this pollutes the base workspace, so you need to think of a way to manage that. If you want this for many blocks, you can store the position in the UserData property of a block currBlock with
set_param(currBlock, 'UserData', get_param(currBlock, 'Position'));
and then just add this to the block's MoveFcn callback:
try
    set_param(gcb, 'Position', get_param(gcb, 'UserData'));
catch
end
You can even do this programmatically
moveFcn = sprintf([ ...
    'try\n' ...
    '    set_param(gcb, ''Position'', get_param(gcb, ''UserData''));\n' ...
    'catch\n' ...
    'end\n']);
set_param(currBlock, ...
    'UserData', get_param(currBlock, 'Position'), ...
    'MoveFcn', moveFcn);
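To freeze every block in a system at once, a loop like the following should work (a sketch, assuming the model is already open):
% Collect all blocks at the top level of the current system.
blocks = find_system(gcs, 'SearchDepth', 1, 'Type', 'Block');
moveFcn = sprintf([ ...
    'try\n' ...
    '    set_param(gcb, ''Position'', get_param(gcb, ''UserData''));\n' ...
    'catch\n' ...
    'end\n']);
for k = 1:numel(blocks)
    % Remember each block's position and snap it back on any move attempt.
    set_param(blocks{k}, ...
        'UserData', get_param(blocks{k}, 'Position'), ...
        'MoveFcn', moveFcn);
end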
Did you try using blocks? See this example: http://blogs.mathworks.com/seth/2008/07/27/how-to-make-your-own-simulink-block/

CCBezierTo easeout

I'm working in Objective-C at the moment.
I am drawing a path for my sprite to follow, and it all seems to be working fine, but I have one question that doesn't seem to be answered anywhere.
The first two points in my Bézier curve are rather close together relative to the third point, and when my sprite animates along this path it seems to be eased into the animation, with an abrupt stop at the end.
Is there a way to control this? I'd like the animation to run at one consistent speed, or possibly be eased out.
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
[turkey runAction:bezierForward];
Give this a try:
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id easeBezierForward = [CCEaseOut actionWithAction:bezierForward rate:2.0f];
[turkey runAction:easeBezierForward];
You will want to play with the rate value to see what ends up looking best to you. You may also want to try some of the other ease-out actions, like CCEaseSineOut.
Link: Cocos2d Ease Actions Guide
Should probably be something like this, according to the docs (CCEaseOut wraps another action rather than taking a duration and a bezier configuration itself):
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
[turkey runAction:[CCEaseOut actionWithAction:bezierForward rate:2.0f]];
As stated in the docs:
Variations
CCEaseIn: acceleration at the beginning
CCEaseOut: deceleration at the end
CCEaseInOut: acceleration at the beginning, deceleration at the end
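Each of these variations wraps the inner action the same way; for example, the in/out variant (the rate value is just a starting point to tweak):
// Ease both into and out of the same bezier action.
id easedInOut = [CCEaseInOut actionWithAction:bezierForward rate:2.0f];
[turkey runAction:easedInOut];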

iRobot Create not returning sensor data

I am trying to stream sensor data from the iRobot Create. I get "tuple index out of range" errors when I try
bot.stream_sensors(somenumber) and bot.poll_sensors(somenumbers). Whenever I inspect bot.sensors, I just get an empty dictionary {}. I have even tried checking bot.sensors while pushing in the bump sensor, and it is still empty. I am connected to the bot through the serial port with a serial-to-USB converter on my side. The only code before trying to get the sensor data is
import openinterface
bot = openinterface.CreateBot(com_port="/dev/ttyUSB0", mode="full")
Does anyone have an idea of how to solve this issue? Everywhere else just uses stream_sensors(6) and it seems to work fine.
P.S. I posted a similar question not too long ago, but no one responded. I'm not trying to spam, but now I have a clearer question and a better idea of the apparent problem, so I thought I would try again.
I downloaded openinterface.py from this site, which included some sample programs. I'd suggest you take a step back: try the sample code, then find other, more sophisticated sample code and play with that first before moving on to your real code. You may be missing a step somewhere.
I may be a bit late to answer this, but for reference purposes: directly controlling the iRobot is greatly simplified by using Pyrobot.