I would like to add text to an image using Scilab; at first I wanted to use SIVP's imshow, but it turns out this function does not return a handle. IPD's ShowImage on the other hand does return a handle, so I thought I could just do:
sceneImgFigure = ShowColorImage(sceneImg, "Scene");
for k = 1:size(inspectedScene)
    uicontrol(sceneImgFigure, ...
        "style", "text", ...
        "string", mtlb_num2str(inspectedScene(k).alocated_label), ...
        "position", [inspectionModel(k).centroid(1) inspectionModel(k).centroid(2) 20 20], ...
        "fontsize", 15, ...
        "BackgroundColor", [0.9, 0.9, 0.9]);
end
But uicontrol uses graphic coordinates, not image coordinates, which results in the text being displayed in the wrong place. Besides, ShowImage crops the image. Here is what I get:
I can't find any relevant answer in Scilab's help, so I'm kind of stuck here. There is a way to do what I want in MATLAB, but the code seems to be impossible to translate to Scilab (no text nor getframe function in Scilab, to begin with...).
Any idea?
I use xstring to put annotations onto plots according to the plot coordinate system. Depending on the format of your base image you might be able to use imageplot (from SIVP, I think) to draw the image, in which case I believe image pixels map to plot coordinates.
xstring(inspectionModel(k).centroid(1), inspectionModel(k).centroid(2), mtlb_num2str(inspectedScene(k).alocated_label))
If you can't use imageplot you may have to manually scale all the coordinates. It isn't as bad as it sounds: if you know the size of your image you can work out scale factors for your coordinate system. I did something like this so I could put axes onto an imageplot when doing spectrograms with the wavelet toolbox.
Here is what I did to solve this (just in case it is someday useful to someone):
ShowColorImage(sceneImg, "Scene");
for i = 1:size(inspectedScene)
    // flip the y coordinate: xstring uses plot coordinates (origin at the bottom left),
    // so subtract from the image height
    xstring(inspectedScene(i).centroid(1) - 5, ...
        size(sceneImg, 1) - inspectedScene(i).centroid(2) - 5, ...
        mtlb_num2str(inspectedScene(i).alocated_label));
    e = gce();
    e.font_size = 1;
    e.font_foreground = color(0, 0, 0);
end
Which gives:
I got part of the solution from the Scilab users mailing list. As xenoclast said, I figured I had to scale the y coordinate using the image height.
Related
I have a binary image that has two connected components. Both are fairly horizontal; one is at the top of the image and the other at the bottom. What I need to do is extract only the top component, which I plan to do (or at least what I think is a good method) by taking the component whose centroid has the lowest y value (since the image origin is at the top left) and erasing the other component. So far I've been able to use regionprops to find which region has the lowest y value for the centroid, but from there I'm not sure how to get a binary image back with only the component I want.
I've read in the documentation that bwconncomp, labelmatrix, and ismember are useful, but I'm not sure how to use them.
This is the solution I just made up, but if there's a better or more elegant one I'd love to know about it!
P.S. filtered is my image
connComp = bwconncomp(filtered);
props = regionprops(connComp, 'Centroid');   % same ordering as connComp.PixelIdxList
justTop = false(size(filtered));
% a larger centroid y means the component sits lower in the image, so keep the other one
if props(1).Centroid(2) > props(2).Centroid(2)
    justTop(connComp.PixelIdxList{2}) = 1;
else
    justTop(connComp.PixelIdxList{1}) = 1;
end
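A more compact alternative using the functions mentioned in the question (a sketch, assuming exactly two components as described above):
% keep only the component whose centroid has the smallest y value (closest to the top)
connComp = bwconncomp(filtered);
props = regionprops(connComp, 'Centroid');
centroids = vertcat(props.Centroid);          % one row per component: [x y]
[~, topIdx] = min(centroids(:, 2));
justTop = ismember(labelmatrix(connComp), topIdx);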
I need to count the number of chalk pieces in an image with MATLAB. I tried converting my image to grayscale and then extracting the borders. I also tried converting the image to a binary image and applying different morphological operations to it, but I didn't get the desired result. Maybe I did something wrong. Please help me!
My image:
You can use the fact that chalk is colorful and the separators are gray. Use rgb2hsv to convert the image to HSV color space, and take the saturation component. Threshold that, and then try using morphology to separate the chalk pieces.
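To illustrate that suggestion (this sketch is not from the original answer; the 0.3 saturation threshold is a guess that needs tuning):
rgb = imread('http://i.stack.imgur.com/RWBDS.jpg');
hsv = rgb2hsv(im2double(rgb));
sat = hsv(:,:,2);                                           % chalk is saturated, dividers are gray
chalkMask = sat > 0.3;                                      % assumed threshold
chalkMask = imopen(chalkMask, strel('rectangle', [15 5]));  % morphological cleanup
figure, imshow(chalkMask);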
This is also not a full solution, but hopefully it can provide a starting point for you or someone else.
Like Dima I noticed the chalk is brightly colored while the dividers are almost gray. I thought you could try to isolate the gray pixels (where a gray pixel has red ≈ green ≈ blue) and go from there. I tried applying filters and doing morphological operations but couldn't find anything satisfactory. Still, I hope this helps.
mim = imread('http://i.stack.imgur.com/RWBDS.jpg');
%we average all 3 color channels (note this isn't exactly equivalent to
%rgb2gray); keep it in double to avoid uint8 saturation when subtracting
grayscale = mean(double(mim), 3);
%now we say a pixel is gray if all channels (r,g,b) are within some threshold
%of that average (there's probably a better way to do this)
my_gray_thresh = 25;
graymask = (abs(double(mim(:,:,1)) - grayscale) < my_gray_thresh)...
    & (abs(double(mim(:,:,2)) - grayscale) < my_gray_thresh)...
    & (abs(double(mim(:,:,3)) - grayscale) < my_gray_thresh);
figure(1)
imshow(graymask);
OK, so I spent a little time working on this, but unfortunately I'm out of time today. I apologize for the incomplete answer, but maybe this will get you started. (If you need more help, I'll edit this post over the weekend to give you a more complete answer :))
Here's the code:
% RWBDS is the RGB chalk image (e.g. read with imread)
for i = 1:3
    I = RWBDS(:,:,i);
    % structuring element roughly matching the shape and ratio of a chalk piece
    se = strel('rectangle', [265,50]);
    Io = imopen(I, se);
    % opening-by-reconstruction
    Ie = imerode(I, se);
    Iobr = imreconstruct(Ie, I);
    % closing-by-reconstruction
    Iobrd = imdilate(Iobr, se);
    Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
    Iobrcbr = imcomplement(Iobrcbr);
    % regional maxima give foreground markers for this channel
    Iobrcbrm = imregionalmax(Iobrcbr);
    % clean up the markers with a smaller structuring element
    se2 = strel('rectangle', [150,50]);
    Io2 = imerode(Iobrcbrm, se2);
    Ie2 = imdilate(Io2, se2);
    fgm{i} = Ie2;
end
fgm_final = fgm{1}+fgm{2}+fgm{3};
figure, imagesc(fgm_final);
It does still pick up the edges on the sides of the image, but from here you're going to use bwconncomp (connected components) and regionprops to get the lengths of the major and minor axes, and by looking at the axis ratios of the objects you can get rid of those edges (see the sketch below).
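A rough sketch of that filtering step (not from the original answer; the 3:1 cutoff is an assumption and needs tuning against the real mask):
cc = bwconncomp(fgm_final > 0);
stats = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratios = [stats.MajorAxisLength] ./ [stats.MinorAxisLength];
keep = ratios > 3;                                % chalk pieces are long and thin
chalkOnly = ismember(labelmatrix(cc), find(keep));
numChalks = nnz(keep)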
Anyways good luck!
EDIT:
I played with the code a tiny bit more and updated the code above with the new results. In the cases where I was able to get rid of the side "noise", it also got rid of the chalks at the sides, so I figured I'd just leave both in.
What I did: in most cases a conversion to HSV color space is the way to go, but as shown by @rayryeng this is not the way to go here. Hue works really well when there is one type of color, for example if all the chalks were red. (Logically you would think that going with the color channel would be better, but this is not the case.) In this case, however, the only thing all chalks have in common is their relative shape. My solution basically uses this idea by setting the structuring element se to something with the basic shape and ratio of a chalk piece and performing morphological operations, as you originally guessed was the way to go.
For more details, I suggest you read MATLAB's documentation on these specific functions.
And I'll let you figure out how to get the last chalk based on what I've given you :)
Given a Simulink block diagram (model), I would like to produce a 'screenshot' to be used later in a LaTeX document. I want this screenshot to be a PDF (vector graphic, -> pdflatex) with a tight bounding box; by that I mean no unnecessary white space around the diagram.
I have searched the net, searched Stack Exchange, and searched the MATLAB documentation, but no success so far. Some notes:
For figures, there are solutions to this question. I have a Simulink block diagram, which is different (see below).
I am aware of solutions using additional software like pdfcrop.
PDF seems to be the only driver that really produces vector graphics (R2013b on Win7 here). The EPS and PS output seems to have bitmaps inside: zoom in and you see it.
What I have tried:
1.
The default behaviour of print
modelName = 'vdp'; % example system
load_system(modelName); % load in background
% print to file as pdf and as jpeg
print(['-s',modelName],'-dpdf','pdfOutput1')
print(['-s',modelName],'-djpeg','jpegOutput1')
The JPEG looks good, with a tight bounding box. The PDF is centered on a page that looks like A4 or US Letter. Not what I want.
2.
There are several parameters for printing block diagrams. See the Simulink reference page http://www.mathworks.com/help/simulink/slref/model-parameters.html. Let's extract some:
modelName = 'vdp'; % example system
load_system(modelName); % load in background
PaperPositionMode = get_param(modelName,'PaperPositionMode');
PaperUnits = get_param(modelName,'PaperUnits');
PaperPosition = get_param(modelName,'PaperPosition');
PaperSize = get_param(modelName,'PaperSize');
According to the documentation, PaperPosition contains a four element vector [left, bottom, width, height]. The last two elements specify the bounding box, the first two specify the distance of the lower left corner of the bounding box from the lower left corner of the paper.
Now when I print the PDF output and measure with a ruler, I find that the values of both the bounding box and the position of its lower left corner are totally wrong (yes, I measured in PaperUnits). That's a real bummer: I could have calculated the margins to trim off the paper and used them later in \includegraphics[clip=true,trim=...]{pdfpage}.
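For reference, this is what that trim calculation would look like if the reported values were correct (which, as measured above, they are not in R2013b):
pos = get_param(modelName, 'PaperPosition');   % [left bottom width height]
sz  = get_param(modelName, 'PaperSize');       % [paperWidth paperHeight]
trimLeft   = pos(1);
trimBottom = pos(2);
trimRight  = sz(1) - pos(1) - pos(3);
trimTop    = sz(2) - pos(2) - pos(4);
% use as \includegraphics[clip=true,trim=left bottom right top]{pdfpage}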
3.
Of course, what I initially wanted is a PDF that is already cropped. There is a solution for figures; it goes like this: you move the bounding box to the lower left corner of the paper and then change the paper size to the size of the bounding box.
oldPaperPosition = get_param(modelName,'PaperPosition');
set_param(modelName,'PaperPositionMode','manual');
set_param(modelName,'PaperPosition',[0 0 oldPaperPosition(3:4)]);
set_param(modelName,'PaperSize',oldPaperPosition(3:4));
For Simulink models, there are two problems with this: PaperSize is a read-only parameter for models, and changing PaperPosition has no effect at all on the output.
I'm running out of ideas, really.
EDIT ----------------------------------
Alright, to keep you updated: I talked to MATLAB support about this.
In R2013b, there are bugs that cause wrong behaviour of PaperPositionMode and make the bounding box obtained from PaperPosition wrong.
There is no known way to extract the scale factor from print.
They suggested going this way: Simulink --(print)--> SVG --(Inkscape)--> PDF. It works really well. The (correct) bounding box is an attribute of the svg node, and the scale factor when exporting to SVG is always the same. Furthermore, Inkscape produces an already cropped PDF. So this approach solves all my problems; you just need Inkscape.
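A minimal sketch of that route (the Inkscape call is an assumption and its flags depend on the Inkscape version; 1.x uses --export-type=pdf instead of --export-pdf):
modelName = 'vdp';                                  % example system
load_system(modelName);
print(['-s', modelName], '-dsvg', 'modelOutput');   % SVG export of the diagram
system('inkscape modelOutput.svg --export-area-drawing --export-pdf=modelOutput.pdf');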
You can try export_fig to export your figures. WYSIWYG! This function is especially suited to exporting figures for use in publications and presentations because of the high quality and portability of the media it produces.
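A minimal usage sketch, assuming export_fig (a File Exchange submission) is on your path; note that it operates on figures, not Simulink diagrams:
figure; plot(rand(10, 1));
export_fig('myPlot.pdf', '-pdf', '-painters');   % vector PDF with a tight bounding box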
Why don't you like to use pdfcrop?
My code works perfectly, and everything stays inside MATLAB:
function prints(name)
%PRINTS Print the current Simulink model and save it as EPS and PDF
print('-s', '-depsc', '-tiff', name)
print('-s', '-dpdf', '-tiff', name)
% crop the PDF in place using pdfcrop
dos(['pdfcrop ' name '.pdf ' name '.pdf &']);
end
You just have to invoke pdfcrop using the "dos" command, and it works fine!
On R2021a you have exportgraphics.
Beautiful PDF images.
figure(3);
plot(Time.Data,wSOHO_KpKi.Data,'-',Time.Data,Demanded_Speed.Data,'--');
grid;
xlh = xlabel('$\mathrm{t\left [ s \right ]}$','interpreter','latex',"FontSize",15);
ylh = ylabel('$\mathrm{\omega _{m}\left [ rads/s \right ]}$','interpreter','latex',"FontSize",15);
xlh.Position(2) = xlh.Position(2) - abs(xlh.Position(2) * 0.05);
ylh.Position(1) = ylh.Position(1) - abs(ylh.Position(1) * 0.01);
exportgraphics(figure(3),'Grafico de Escalon Inicial velocidad estimada por algoritmo SOHO-KpKi.pdf');
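If the resulting PDF does not come out as a vector graphic, exportgraphics also accepts a 'ContentType' name-value pair; a short sketch:
% force vector output explicitly so the PDF stays scalable
exportgraphics(gcf, 'myPlot.pdf', 'ContentType', 'vector');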
I have an image containing several specific objects. I would like to detect the positions of those objects in this image. To do that I have some model images containing the objects I would like to detect. These images are well cropped around the object instance I want to detect.
Here is an example:
In this big image,
I would like to detect the object represented in this model image:
Since you originally posted this as a 'gimme-da-codez' question, showing absolutely no effort, I'm not going to give you the code. I will describe the approach in general terms, with hints along the way, and it's up to you to figure out the exact code to do it.
Firstly, if you have a template and a larger image, and you want to find instances of that template in the image, always think of cross-correlation. The theory is the same whether you're processing 1D signals (called a matched filter in signal processing) or 2D images.
Cross-correlating an image with a known template gives you a peak wherever the template is an exact match. Look up the function normxcorr2 and understand the example in the documentation.
Once you find the peak, you'll have to account for the offset from the actual location in the original image. The offset is related to the fact that cross-correlating an N-point signal with an M-point signal results in an (N + M - 1)-point output. This should be clear once you read up on cross-correlation, but you should also look at the example in the doc I mentioned above to get an idea.
Once you do these two, then the rest is trivial and just involves cosmetic dressing up of your result. Here's my result after detecting the object following the above.
Here are a few code hints to get you going. Fill in the rest wherever I have ...
%#read & convert the image
imgCol = imread('http://i.stack.imgur.com/tbnV9.jpg');
imgGray = rgb2gray(imgCol);
obj = rgb2gray(imread('http://i.stack.imgur.com/GkYii.jpg'));
%# cross-correlate and find the offset
corr = normxcorr2(...);
[~,indx] = max(abs(corr(:))); %# Modify for multiple instances (generalize)
[yPeak, xPeak] = ind2sub(...);
corrOffset = [yPeak - ..., xPeak - ...];
%# create a mask
mask = zeros(size(...));
mask(...) = 1;
mask = imdilate(mask,ones(size(...)));
%# plot the above result
h1 = imshow(imgGray);
set(h1,'AlphaData',0.4)
hold on
h2 = imshow(imgCol);
set(h2,'AlphaData',mask)
Here is the answer that I was about to post when the question was closed. I guess it's similar to yoda's answer.
You can try to use normalized cross-correlation:
im = rgb2gray(imread('di-5Y01.jpg'));       % the big scene image
imObj = rgb2gray(imread('di-FNMJ.jpg'));    % the cropped object (template)
score = normxcorr2(imObj, im);
imagesc(score)
The result is: (As you can see, the whitest point corresponds to the position of your object.)
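To turn that peak into the object's location in the scene (a sketch, not part of the original answer, using im, imObj and score from above):
[~, idx] = max(abs(score(:)));
[yPeak, xPeak] = ind2sub(size(score), idx);
% normxcorr2 pads the output, so subtract the template size to get the
% top-left corner of the match in the scene image
yTopLeft = yPeak - size(imObj, 1) + 1;
xTopLeft = xPeak - size(imObj, 2) + 1;
figure, imshow(im), hold on
rectangle('Position', [xTopLeft, yTopLeft, size(imObj, 2), size(imObj, 1)], ...
          'EdgeColor', 'r', 'LineWidth', 2);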
MathWorks has a classic demo of image registration using the same technique as in @yoda's answer:
Registering an Image Using Normalized Cross-Correlation
I have a question about the values returned by getPosition. Below is my code. It lets the user set 10 points on a given image:
figure, imshow(im);
colorArray=['y','m','c','r','g','b','w','k','y','m','c'];
pointArray = cell(1,10);
% Construct boundary constraint function
fcn = makeConstrainToRectFcn('impoint',get(gca,'XLim'),get(gca,'YLim'));
for i = 1:10
p = impoint(gca);
% Enforce boundary constraint function using setPositionConstraintFcn
setPositionConstraintFcn(p,fcn);
setColor(p,colorArray(1,i));
pointArray{i}=p;
getPosition(p)
end
When I start to set points on the image I get results like [675.000 538.000], which should mean that the x part of the coordinate is 675 and the y part is 538, right? This is what the MATLAB documentation says, but since the image is 576 by 120 (as displayed in the window) this is not logical.
It seemed to me like, somehow, getPosition returns the y coordinate first. I need some clarification on this.
Thanks for the help.
I just tried running your code in MATLAB 7.8.0 (R2009a) and had no problems with image sizes of either 576-by-120 or 120-by-576 (I was unsure which orientation you were using). If I left click inside the image, it places a new movable point. It did not allow me to place any points outside the image.
One small bug I found was that if you left-click in the image, then drag the mouse pointer outside the image while still holding the left button down, it will place the movable point outside the image without displaying it, and it will report a set of coordinates that are not clipped to the axes rectangle.
I'm not sure of what could be your problem. Perhaps it's a bug with whatever MATLAB version you are using. I would suggest either restarting MATLAB, or clearing all variables from the workspace (except for the image data im).
It might be worth checking which renderer you are using (painters or OpenGL); a colleague showed me some weird behaviour with point picking when using the OpenGL renderer that went away when using the painters renderer.
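A quick way to check and switch the renderer for the current figure (the renderer names are 'painters' and 'opengl'):
get(gcf, 'Renderer')                % see which renderer is active
set(gcf, 'Renderer', 'painters');   % switch to the painters renderer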
Your code uses the Image Processing Toolbox, which I don't have, so this is speculation. The coordinate system is probably set to the figure window (or maybe even the screen), not the image.
To test this, try clicking points outside the image to see if you can find the origin.
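Another quick check of the coordinate order (a sketch, assuming the Image Processing Toolbox from the question): place a point programmatically at a known pixel and see what getPosition reports; it should come back as [x y], i.e. [column row].
figure, imshow(im);
p = impoint(gca, 50, 10);   % x = 50 (column), y = 10 (row)
getPosition(p)              % expect [50 10]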