I have a vector, let's say its size is 1x100. If I want to remove the last 10 elements, I can do the following:
a(91:100)=[];
However, how do I crop out two different sections simultaneously? For example, what if I also want to remove the elements from 50 to 60?
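One way to do this (a minimal sketch, assuming a is the 1x100 vector) is to delete both index ranges in a single assignment, so that the indices of the second range are not shifted by the first deletion:
a = rand(1,100);            % example vector
a([50:60, 91:100]) = [];    % remove both sections at once
size(a)                     % 1x79 (11 + 10 = 21 elements removed)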
I have a vector of the size 10000x1, which I reduced to 4400x1. I want to make a stacked bar with the single entries of this vector. Each segment of the stacked bar should be colored depending on the distribution frequency. For example: if a value appears 10 times, its segment should be almost black; if it appears only once, it should be light grey.
1st problem: When I use a small vector, I get a stacked bar. If I use the one I actually need, I get multiple bars instead of one stacked bar.
2nd problem: How can I make the bar use a gray scale to color them?
I tried to transpose my vector to 1x4400 which didn't change anything.
I used bar(data,'stacked'), which works for the small vector (at least I get a stacked bar). To get a single bar I used:
test = [1 2 3 4 5 1 2 3];
bar([test; nan(size(test))],'stacked');
For my actual data, whose size is 4400x1, I used:
bar([data; nan(size(data))],'stacked');
When I enter your first example, with a 1x8 matrix:
test = [1 2 3 4 5 1 2 3];
bar([test; nan(size(test))],'stacked');
I see the same plot you see. Then I transpose test to make it an 8x1 matrix:
test = test.';
bar([test; nan(size(test))],'stacked');
and I see something similar to your second plot:
Note how the NaNs cause the right half of the x-axis to have no bars.
Thus, to get stacked bars you need to have a row vector. However, your data is a column vector, a 4400x1 matrix. data = data.' will change that.
Nonetheless, it is likely that MATLAB is not able to make a stacked bar plot with 4400 elements. I'm trying this out on MATLAB Online, and I've been waiting for several minutes and still don't have a plot. In any case, there is no way you would become any wiser with such a plot. Don't do it!
Regarding your second question: you can change the plot color order to gray:
set(gca,'colororder',flipud(gray(256)))
However, the stacked bars are not colored according to their value, they just use the colors of the list in turn.
A better solution would be to plot your data as an image:
test = [1 2 3 4 5 1 2 3];
imagesc(test.')
colormap(flipud(gray(255)))
set(gca,'xlim',[0,2],'xtick',1)
[I just realized you might want to straighten the y-axis, imagesc inverts it. Simply: set(gca,'ydir','normal').]
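Putting this together for the actual data, no transpose is needed because it is already a column vector; a minimal sketch (assuming data is the 4400x1 vector):
imagesc(data)                                      % one column, 4400 rows
colormap(flipud(gray(256)))                        % larger values become darker
set(gca,'xlim',[0,2],'xtick',1,'ydir','normal')
colorbar                                           % optional: shows the value-to-grey mapping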
I want to display an image (e.g. with imshow) and use a colormap to represent the values of my data points.
However, colormap only gives the option to be dependent on a single variable, but I want a "2D colormap" which depends on two variables.
For example, I have a simple 2x2-pixel image:
img = [
1 1 5 6;
1 2 8 7;
2 1 4 3;
2 2 15 3]
Here the first two values of each row are the coordinates, the other two are the values describing the pixel (call them x and y).
When displaying the image I want to use a 2D colormap. For example something like this, which picks a colour depending on both variables (x and y):
Is there an option in MATLAB to do this, possibly in one of the extra toolboxes?
If not, can this be done manually? I was thinking that by overlaying a grey-scale image given by the first value over a colormap image given by the second value, a similar effect could be achieved.
In your 2D colormap you are actually using the HSV color space.
Basically, your x axis is Hue and your y axis is Saturation. You can convert any value into this space if it is properly scaled. If you make sure that you scale your 3rd and 4th columns to the [0,1] interval, you can easily do
colorRGB=hsv2rgb([val3,val4,0.5]);
If you perform this operation for each pixel, you'll get the image you want.
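For example, a minimal sketch using the 2x2 image from the question (scaling the 3rd and 4th columns to [0,1] and fixing the Value channel at 0.5, as above, are assumptions):
img = [1 1 5  6;
       1 2 8  7;
       2 1 4  3;
       2 2 15 3];
h = img(:,3); s = img(:,4);
h = (h - min(h)) / (max(h) - min(h));   % scale 3rd column (Hue) to [0,1]
s = (s - min(s)) / (max(s) - min(s));   % scale 4th column (Saturation) to [0,1]
hsvImg = zeros(2,2,3);
for k = 1:size(img,1)
    r = img(k,1); c = img(k,2);         % columns 1-2 are the pixel coordinates
    hsvImg(r,c,:) = [h(k), s(k), 0.5];  % fixed Value channel
end
rgbImg = hsv2rgb(hsvImg);
imshow(rgbImg,'InitialMagnification','fit')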
I gave an extended explanation of how HSV works here.
I have an image that was read in using the imread function. My goal is to collect pairs of pixels in an image in MATLAB. Specifically, I have read a paper, and I am trying to recreate the following scenario:
First, the original image is grouped into pairs of pixel values. A pair consists of two neighboring pixel values or two with a small difference value. The pairing could be done horizontally by pairing the pixels on the same row and consecutive columns, or vertically, or by a key-based specific pattern. The pairing could be through all pixels of the image or just a portion of it.
I am looking to recreate the horizontal pairing scenario. I'm not quite sure how I would do this in MATLAB.
Assuming your image is grayscale, we can easily generate a 2D grid of co-ordinates using ndgrid. We can use these to create one grid, then shift the horizontal co-ordinates to the right to make another grid and then use sub2ind to convert the 2D grid into linear indices. We can finally use these linear indices to create our pixel pairings that you have described in your comments (you should really add that to your post BTW). What's important is that you need to skip over every other column in a row to ensure unique pixel pairings.
I'm also going to assume that your image is grayscale. If we go to colour, this will be slightly more complicated, and I'll leave that to you as a learning exercise. Therefore, assuming your image was read in through imread and is stored in im, do something like this:
[rows,cols] = size(im);
[X,Y] = ndgrid(1:rows,1:2:cols);          % every row, every other column (left pixel of each pair)
ind = sub2ind(size(im), X, Y);            % linear indices of the left pixels
ind_shift = sub2ind(size(im), X, Y+1);    % linear indices of their right-hand neighbours
pixels1 = im(ind);
pixels2 = im(ind_shift);
pixels = [pixels1(:) pixels2(:)];         % each row is one horizontal pixel pair
pixels will be a 2D array, where each row gives you the pixel intensities of a particular pairing in the image. Bear in mind that I processed each row independently. As such, as soon as we are done with one row, we simply move on to the next row and continue the procedure. This also assumes that your image has an even number of columns. Should it not, you have a decision to make. You need to either pad the image with one column at the end, and this column can be anything you want, or you can remove this column from the image before processing. If you want to fill in this column, you can either make it all zeroes, or perhaps replicate the last column and place this beside the last column in the original image. Therefore, an appropriate pre-processing step may look something like this:
cols = size(im,2);
if mod(cols,2) ~= 0
    im = im(:,1:end-1);   % drop the last column so the width is even
end
The above code simply removes the last column in the image if the number of columns is odd. Once you run through this code, you can run the first bit of code that I had above.
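Alternatively, if you would rather pad than crop, a sketch of the replicate-the-last-column option mentioned above:
cols = size(im,2);
if mod(cols,2) ~= 0
    im = [im, im(:,end)];   % duplicate the last column so the width becomes even
end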
Good luck!
I have a (probably naïve) question, I just wanted to clarify this part. So, when I take a dsift on one image, I generally get a 128xn matrix. The thing is, that n value is not always the same across different images. Say image 1 gets a 128x10 matrix, while image 2 gets a 128x18 matrix. I am not quite sure why this is happening.
I think that each 128-dimensional column represents a single image feature or a single patch detected in the image. So in the case of 128x18, we have extracted 18 patches and described each of them with 128 values. If this is correct, why can't we have a fixed number of patches per image, say 20 patches, so that every time our matrices would be 128x20?
Cheers!
This is because the number of reliable features that are detected per image changes. Just because you detect 10 features in one image does not mean that you will be able to detect the same number of features in the other image. What matters is how closely a feature from one image matches a feature from the other.
What you can do (if you like) is extract the, say, 10 most reliable features that are matched the best between the two images, if you want to have something constant. Choose a number that is less than or equal to the minimum of the number of patches detected between the two. For example, supposing you detect 50 features in one image, and 35 features in another image. After, when you try and match the features together, this results in... say... 20 best matched points. You can choose the best 10, or 15, or even all of the points (20) and proceed from there.
I'm going to show you some example code to illustrate my point above, but bear in mind that I will be using vl_sift and not vl_dsift. The reason is that I want to show you visual results with minimal pre- and post-processing. Should you choose to use vl_dsift, you'll need to do a bit of work before and after you compute the features with dsift if you want to visualize the same results. If you want to see the code to do that, you can check out the vl_dsift help page here: http://www.vlfeat.org/matlab/vl_dsift.html. Either way, the idea of choosing the most reliable features applies to both sift and dsift.
For example, supposing that Ia and Ib are uint8 grayscale images of the same object or scene. You can first detect features via SIFT, then match the keypoints.
[fa, da] = vl_sift(im2single(Ia));
[fb, db] = vl_sift(im2single(Ib));
[matches, scores] = vl_ubcmatch(da, db);
matches is a 2 x N matrix: for each column, the first row is the index of a feature in the first image and the second row is the index of the feature in the second image that it matched best with.
Once you do this, sort the scores in ascending order. Lower scores mean better matches as the default matching method between two features is the Euclidean / L2 norm. As such:
numBestPoints = 10;
[~,indices] = sort(scores);
%// Get the numBestPoints best matched features
bestMatches = matches(:,indices(1:numBestPoints));
This should then return the 10 best matches between the two images. FWIW, your understanding about how the features are represented in vl_feat is spot on. These are stored in da and db. Each column represents a descriptor of a particular patch in the image, and it is a histogram of 128 entries, so there are 128 rows per feature.
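So, continuing from the code above, if you also wanted the 128-entry descriptors of those best matched features (and not just their indices), a small sketch would be (bestDescA and bestDescB are just placeholder names):
%// 128 x numBestPoints matrices, one descriptor column per best matched feature
bestDescA = da(:, bestMatches(1,:));
bestDescB = db(:, bestMatches(2,:));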
Now, as an added bonus, if you want to display how each feature from one image matches to another image, you can do the following:
%// Spawn a new figure and show the two images side by side
figure;
imagesc(cat(2, Ia, Ib));
%// Extract the (x,y) co-ordinates of each best matched feature
xa = fa(1,bestMatches(1,:));
%// CAUTION - Note that we offset the x co-ordinates of the
%// second image by the width of the first image, as the second
%// image is now beside the first image.
xb = fb(1,bestMatches(2,:)) + size(Ia,2);
ya = fa(2,bestMatches(1,:));
yb = fb(2,bestMatches(2,:));
%// Draw lines between each feature
hold on;
h = line([xa; xb], [ya; yb]);
set(h,'linewidth', 1, 'color', 'b');
%// Use VL_FEAT method to show the actual features
%// themselves on top of the lines
vl_plotframe(fa(:,bestMatches(1,:)));
fb2 = fb; %// Make a copy so we don't mutate the original
fb2(1,:) = fb2(1,:) + size(Ia,2); %// Remember to offset like we did before
vl_plotframe(fb2(:,bestMatches(2,:)));
axis image off; %// Take out the axes for better display
I have a large image matrix, 125x200x3. The image has many large areas of black, so there are many rows of all zeros. I want to remove all these black areas completely. I know that I should be using all(m==0,3), but it seems like I don't quite understand how to use it with a 3D matrix.
m(all(m==0,3),:,:)=[]
but this just gives an error ("exceeds matrix...").
Any help is appreciated!
If you want to remove rows containing all black, do this:
m(all(all(m == 0,3),2),:,:) = [];
The inner call to ALL (what you were doing) will give you a 125-by-200 logical matrix with ones for every black pixel. The outer call to ALL operates across dimension 2 (the columns), giving you a logical vector with ones for rows that contain all black. This is what you then use as your index to remove rows.
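As a quick sanity check, here is a small sketch with a toy 5x4x3 array in which rows 2 and 4 are made entirely black:
m = rand(5,4,3);                    % toy RGB image (values in (0,1), so no accidental zeros)
m(2,:,:) = 0;                       % black out row 2
m(4,:,:) = 0;                       % black out row 4
m(all(all(m == 0,3),2),:,:) = [];   % remove rows that are all black across all 3 channels
size(m)                             % returns 3 4 3 -- the two black rows are gone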