I have calculated texture, color, and shape features on an image, but those only add up to 12 features. I read that people extract 1000 features and more. Could someone please explain how I can increase the number of features? And then how do I save them to form a feature vector?
Features are the most significant or interesting points of an image. In general, let's say I am interested in the edge information of an image. As we know, edges are found by a Laplacian filter in the spatial domain, so the only points remaining in the image will be edge points. Each edge point has its x, y coordinates followed by an intensity value. These three pieces of information for all the interest points form a multi-dimensional feature vector, whose size depends on the type of image you are taking. This multi-dimensional information is my feature vector.
Similarly, the histogram of an image can also be your feature; for a grayscale image it spans the intensity values 0 to 255, so in that case you can store these 256 bin counts as features for every image.
Hope you got the idea. Images are subjective, so depending on the application and the given data set we extract features and form feature vectors.
Apart from color, texture, and shape, you can even work with the signatures, edges, histogram, and other properties of an image.
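To make the "form a feature vector" step concrete, here is a minimal MATLAB sketch (it assumes the Image Processing Toolbox; the image and the choice of features are purely illustrative):
img = imread('cameraman.tif');          % any grayscale image
h = imhist(img);                        % 256-bin intensity histogram
e = edge(img, 'canny');                 % binary edge map
edgeDensity = nnz(e) / numel(e);        % fraction of edge pixels
featureVector = [h(:)', edgeDensity];   % concatenate into one row vector
Stacking one such row per image gives a feature matrix you can feed to a classifier; adding more descriptors (more histogram bins, texture statistics, shape moments, ...) is how the feature count grows into the hundreds or thousands.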
Sorry, this is all one question, but it breaks down into many small sub-questions; I can't split them into separate questions.
1. For example, the input picture size is 960x640.
2. Through VGG16 up to layer 13 (conv5_3), get a feature_map of 60x40x512.
3. Do a 3x3 convolution.
3.1 How does the 3x3 convolution compress the output above to 1x512? (See the shape sketch after this list.)
3.2 I read in some articles that the RPN randomly selects 512 samples from 2000 anchors. If that is what the 1x512 matrix means, what is the 3x3 convolution doing?
4. Loop over the feature_map with stride 16 and scale 16 to find, for each feature_map point, the corresponding center in the original image; cut out 9 anchors there, and label anchors with IoU < 0.3 as negative samples and IoU > 0.7 as positive samples.
4.1 If there are several points on the feature_map, how do they cover the GT? I mean, since IoU > 0.7 is needed to label a positive sample, does IoU here refer to [the intersection of the area (mapped from this point to the original image) and the GT], or to [the whole area of the GT]? I think it should be the former.
5. After all the loops are over, filter the positive and negative samples by NMS. Is it possible to have multiple anchors at a single point, or is NMS sure to filter this out?
6. Pass to softmax.
6.1 My problem is that in many cases the positions of the points on the feature_map labeled positive and negative are different each time. Since each parameter is also fixed to a specific position on the feature_map, how are proposals found from an image in the detection phase?
6.2 Is the random selection of anchors done here???
7. RoI pooling merges the feature_map and the proposals. (1. Each RoI (is an RoI a group of anchors?) is located in the feature map to get a patch of the feature map. 2. Something like an SPP layer (7x7 downsampling) is applied to that feature-map patch to transform it into fixed-size features, to fit the fully connected layer.)
8. Another softmax (in the training phase, BP tunes the parameters). My problem is the same as in 6.1: in many cases the positions of the points on the feature_map labeled positive and negative are different each time. Since each parameter is fixed to a specific position on the feature_map, how are proposals found from the image in the detection phase?
9. Compare the RoIs to the GT and do regression.
After finishing the questions above and re-thinking, I found my understanding of anchors and proposals a bit confused. Do many anchors together compose a proposal?
If so, then item 6 above becomes:
6. Select 512 anchors and pass their parameters into softmax; the output shows whether each one is part of the target object. So this layer is the detection phase: when detecting, just loop over all the anchors to get the possible ones.
6.1 But in this case, how does the RPN output a bbox (x, y, w, h)? I think it needs to merge the selected anchors and then scale to the size of the original image to get the bbox.
6.2 If the operation is a merge, then randomly selecting 512 out of 2000 is likely to miss some areas, isn't it?
My questions are mainly 3 and 6, and I think all of them are highly related and cannot be separated. Some just need a yes or no confirmation. Thanks.
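For reference, here is a shape-only MATLAB sketch of how I currently understand questions 3.1/3.2 (random stand-in values, hypothetical sizes from step 2 above; this is not the actual Faster R-CNN code):
H = 40; W = 60; C = 512; k = 9;            % feature map 60x40x512, 9 anchors/point
fmap = rand(H, W, C);                      % stand-in for the conv5_3 output
% The 3x3 conv keeps the spatial size: it yields a 512-d vector at EVERY
% position, i.e. a 40x60x512 map, not a single 1x512 vector.
inter = rand(H, W, C);                     % stand-in for relu(conv3x3(fmap))
% The two sibling 1x1 convs are per-position matrix multiplies:
Wcls = randn(C, 2*k) * 0.01;               % objectness weights (stand-in)
Wreg = randn(C, 4*k) * 0.01;               % box-regression weights (stand-in)
cls = reshape(reshape(inter, H*W, C) * Wcls, H, W, 2*k);
reg = reshape(reshape(inter, H*W, C) * Wreg, H, W, 4*k);
fprintf('anchors per image: %d\n', H*W*k); % the sampled anchors of 3.2 are
                                           % drawn from these for the loss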
I am trying to erode objects in a binary image such that they do not become smaller than some fixed size. Consider, for instance, a binary map composed of connected components (blobs), wherein one defines blob size by either the minimal or maximal antipolar (anti-perimetric) distance (i.e., the distance between two points that are as far from one another as they can be on the perimeter or contour of the blob; if the contour consists of N consecutively numbered points, then the distances evaluated would be those between points 1 and N/2+1, points 2 and N/2+2, etc.). Given such an arrangement, I seek to erode these blobs until the distance metric reaches a specified limit. If the blobs were simple circles, then the effect could be realized by ultimate erosion followed by dilation to a fixed size; however, the contour of an irregular object would be lost by such a procedure. Is there a way to achieve such an effect for connected, irregular components using built-in functions in MATLAB?
Without an image or the code you have already tried, I may be misunderstanding you, but perhaps applying bwmorph iteratively with 'thin', 'skel', or 'shrink' will help you:
cond = calc_cond(bw);             % calc_cond is a placeholder for your size metric
while cond > cond_threshold       % keep thinning while the blobs are still too big
    bw = bwmorph(bw, 'thin', 1);  % or 'skel' / 'shrink', as mentioned above
    cond = calc_cond(bw);
end
I was wondering if anyone knows what kind of filter is applied by SPCImage from the Becker & Hickl system.
I am taking FLIM data with my system and I want to create the lifetime images. To do so, I want to bin my images the same way SPCImage does, so I can increase my S/N ratio. The binning goes 1x1, 3x3, 5x5, etc. I have written a function for 3x3 binning, but each size gets more complicated...
I want to do it in MATLAB, and maybe there is already a function that can help me with this.
Many thanks for your help.
This question is old, but for anyone else wondering: you want to sum the pixels in a (2M+1) x (2M+1) neighborhood of each pixel, for each plane (M an integer), so I am pretty sure you can go about the problem by treating it as a convolution.
% This is your original 3D SDT image. I assume that you have ordered the
% image with the spatial dimensions along the first and second dimensions
% and the time channels along the third.
img = ... % <- your 3D image goes here
% This describes your filter. M=1 means take a one-pixel rectangle around
% your center pixel and add those values to the center, etc. (i.e. M=1
% equals a total of 3x3 pixels accumulated).
M = 2;
% This is the (2D) filter for your convolution; convn applies it to each
% time plane independently.
filtr = ones(2*M+1, 2*M+1);
% The resulting binned image (3D); 'same' keeps the original size.
img_binned = convn(img, filtr, 'same');
You should definitely check the result against your calculation, but it should do the trick.
I think you need to test/investigate image filter functions to apply to this kind of image (fluorescence-lifetime imaging microscopy).
A median filter, as shown here, is good for smoothing things, and a weighted moving-average filter applied to the image erases the bright spots so that only the broad features are maintained.
So you should review digital image processing in MATLAB.
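If it helps, a minimal MATLAB sketch of both suggestions (assuming img is a 2-D grayscale image; medfilt2, fspecial, and imfilter are in the Image Processing Toolbox):
img_med = medfilt2(img, [3 3]);            % 3x3 median filter, good for speckle
h = fspecial('gaussian', [5 5], 1);        % a weighted moving-average kernel
img_avg = imfilter(img, h, 'replicate');   % smooths out the bright spots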
Currently I hope to use a scale-space representation to filter an image. Features in an image can be filtered with a Gaussian smoothing filter at one optimal sigma; that is, different features in an image are expressed best at different scales of the scale-space representation.
For example, I have an image with a tree in it. In the scale-space representation, three sigma values are used: sigma0, sigma1, and sigma2. The ground is expressed best in the image smoothed with sigma0 because it mainly contains texture. The branches are expressed best in the image smoothed with sigma1, and the trunk in the image smoothed with sigma2. If I filter the image, I want the filtered pixels for the ground to come from the image smoothed with sigma0.
The filtered pixels for the branches should come from the image smoothed with sigma1, and the filtered pixels for the trunk from the image smoothed with sigma2.
This requires determining in which smoothed image each pixel is expressed best. Is this idea plausible?
I am trying to use the difference of Gaussians of two successive smoothed images to perform the above task. Is there any other way to combine the three smoothed images?
I use MATLAB to implement the idea. The values of the three sigmas are 1.0, 2.0, and 3.0, with corresponding Gaussian kernel sizes of 3, 5, and 7, generated with the function fspecial. Are these parameters reasonable? Please share your experience with scale-space representations; links to useful papers are welcome.
Your idea is very much plausible! You are just one step away from it. I did something very similar once, and it looked like this:
After smoothing your images and extracting the edges for each smoothing step (I used a weighted Sobel filter for this [weighted to compensate for maxima suppression after Gaussian filtering], since DoG was not quite stable for my application), you can project (and normalize) your whole stack of edge images into a single image ("cumulative edges") which will contain the characteristic edges. You can then compare the cumulative-edges image (using cross-correlation or whatever you wish) with every single image in your edge stack; the biggest value of this comparison then gives the smoothing scale at which the pixel is expressed best.
Hope that makes sense for you after reading it a couple of times.
Also, don't be afraid of using much bigger kernel sizes; while it all depends on your application, I ended up using kernels of 51 and bigger!!! (I was working with 40 MP images, though...)
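In case it helps, a minimal MATLAB sketch of the stack-projection idea (I use plain gradient magnitude here instead of the weighted Sobel; img is a grayscale image, and the sigma values are just examples):
sigmas = [1 2 4];
edges = zeros([size(img), numel(sigmas)]);
for i = 1:numel(sigmas)
    s = sigmas(i);
    ksize = 2*ceil(3*s) + 1;                      % rule of thumb: ~6*sigma wide
    g = fspecial('gaussian', ksize, s);
    smoothed = imfilter(double(img), g, 'replicate');
    [gx, gy] = gradient(smoothed);
    edges(:,:,i) = s * hypot(gx, gy);             % scale-normalized edge strength
end
cumEdges = sum(edges, 3);                         % "cumulative edges" projection
cumEdges = cumEdges / max(cumEdges(:));           % normalize
[~, bestScale] = max(edges, [], 3);               % per-pixel best-scale index
Here the per-pixel maximum over the stack is a simplification of the cross-correlation comparison described above.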
T. Lindeberg has literally dozens of papers related to this problem. I found this one the most useful, but since you are already on the right track, I don't think reading the 50 pages will make you that much smarter. The most important part of it is maybe this one:
Principle for scale selection:
"In the absence of other evidence, assume that a scale level, at which some (possibly non-linear) combination of normalized derivatives assumes a local maximum over scales, can be treated as reflecting a characteristic length of a corresponding structure in the data."
I want to create a 5-dimensional plot in MATLAB. I have two files in my workspace. One is data (150x4): 150 data points, each with 4 features. Since I want to classify them, I have another file called "labels" (150x1) that contains a label for each data point. In other words, the labels are the classes of the data, and I have 3 classes: 1, 2, 3.
I want to plot this classification, but I can't...
Naris
You need to think about what kind of plot you want to see. 5 dimensions are difficult to visualize, unless, of course, your hyper-dimensional monitor is working. Mine never came back from the repair shop. (That should teach me for sending it out.)
Seriously, 5-dimensional data really can be difficult to visualize. The usual solution is to plot points in a 2-D space (the screen coordinates of a figure, for example; this is essentially what plot does), then use various attributes of the plotted points to show the other three dimensions. This is what Chernoff faces do for you. If you have the Statistics Toolbox, then it looks like glyphplot will help you out. Or you can plot in 3-D, then use two attributes to show the other two dimensions.
Another idea is to plot points in 2-D to show two of the dimensions, then use color to indicate the other three dimensions: the RGB color assigned to each marker is defined by the remaining three dimensions. Of course, that means you must be able to visualize what the RGB coordinates of a color represent, so you need to understand how color is represented in an RGB space.
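A rough MATLAB sketch of that RGB idea, assuming the data (150x4) and labels (150x1) from the question (the mapping of dimensions to color channels is arbitrary here):
rgb = [data(:,3), data(:,4), double(labels)];     % dims 3-5 -> R, G, B
rgb = (rgb - min(rgb)) ./ (max(rgb) - min(rgb));  % scale each channel to [0,1]
                                                  % (implicit expansion, R2016b+)
scatter(data(:,1), data(:,2), 36, rgb, 'filled')  % dims 1-2 as position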
You can use scatter3 to plot your data, using three of the features as the spatial dimensions, the fourth as color, and the class as different markers:
figure, hold on
markerList = 'o*+';
nClasses = 3;                            % classes are labeled 1, 2, 3
for iClass = 1:nClasses
    classIdx = (labels == iClass);       % rows belonging to this class
    scatter3(data(classIdx,1), data(classIdx,2), data(classIdx,3), [], ...
        data(classIdx,4), 'marker', markerList(iClass));
end
view(3)                                  % hold on keeps the default 2-D view
When you use color to represent one of the features, I suggest using a good colormap, such as pmkmp from the MATLAB File Exchange, instead of the default jet.
Alternatively, you can use e.g. mdscale to transform your higher-dimensional data to 2D for standard plotting.
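For example, a minimal sketch with the data and labels from the question (pdist, mdscale, and gscatter are in the Statistics Toolbox):
D = pdist(data);                  % pairwise distances between the 150 points
Y = mdscale(D, 2);                % multidimensional scaling into 2-D
gscatter(Y(:,1), Y(:,2), labels)  % scatter colored by class 1/2/3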
There's a model called a SOM (Self-Organizing Map) which builds a 2-D image of a multidimensional space.
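A minimal sketch, assuming the Neural Network (now Deep Learning) Toolbox and the 150x4 data matrix from the question:
net = selforgmap([8 8]);          % an 8x8 self-organizing map
net = train(net, data');          % selforgmap expects observations as columns
plotsomhits(net, data');          % show how the samples distribute over the map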