I am trying to split a region in an image into left and right halves, but I want to exclude a certain percentage of the columns near the center on each side.
So I need the indices of the columns to keep for both the left and the right side.
For the right side I currently use fliplr to reverse the index array, take the first n_indices entries, and then fliplr again to restore the original order.
Can I avoid fliplr in the code below?
img1 = imread('sample4.png');
keepPercent = 0.9; % keep 90% of the columns on each side
columnsWithAllZeros = all(img1 == 0);
left_idx = find(~columnsWithAllZeros,1,'first');
right_idx = find(~columnsWithAllZeros,1,'last');
cent_idx = floor(mean([left_idx,right_idx]));
left_to_cent_idxs = left_idx:cent_idx;
cent_to_right_idxs = cent_idx+1:right_idx;
cent_to_right_idxs = fliplr(cent_to_right_idxs); % flip
num_leftKeep_idxs = floor(keepPercent *length(left_to_cent_idxs));
num_rightKeep_idxs = floor(keepPercent *length(cent_to_right_idxs));
right_keepImg_idxs = left_to_cent_idxs(1:num_leftKeep_idxs);
left_keepImg_idxs = cent_to_right_idxs(1:num_rightKeep_idxs);
left_keepImg_idxs = fliplr(left_keepImg_idxs); % flip back -- I know this is not needed
[nrow, ncol, ~] = size(img1); % nrow and ncol were not defined in the original snippet
[leftBrain_img, rightBrain_img] = deal(zeros(nrow, ncol, 'logical'));
leftBrain_img(:,left_keepImg_idxs) = img1(:,left_keepImg_idxs);
rightBrain_img(:,right_keepImg_idxs) = img1(:,right_keepImg_idxs);
rightBrain_img = cast(rightBrain_img,'uint16') .*img1;
leftBrain_img = cast(leftBrain_img,'uint16') .*img1;
figure,
subplot(131), imshow(img1,[])
subplot(132), imshow(rightBrain_img,[])
subplot(133), imshow(leftBrain_img,[])
The sample image is available here
Thanks,
Gopi
That could be done, just as #rahnema1 said. But why do it that way when it can be done in a much faster and simpler way?
Have a look at this code:
img1 = imread('sample4.png');
keepPercent = 0.9; % keep 90% of the columns on each side
columnsWithAllZeros = all(img1 == 0);
leavePercent = 1 - keepPercent;
idx = minmax(find(columnsWithAllZeros == 0)); % minmax requires the Deep Learning Toolbox
cent_idx = floor(mean(idx));
left_keepImg_idxs1 = idx(1) : cent_idx - floor(leavePercent*(cent_idx - idx(1) + 1));
right_keepImg_idxs1 = cent_idx + 1 + floor(leavePercent*(idx(2) - cent_idx + 1)) : idx(2);
[leftBrain_img, rightBrain_img] = deal(zeros(512, 512, 'logical')); % the image is assumed to be 512x512
leftBrain_img(:,left_keepImg_idxs1) = img1(:,left_keepImg_idxs1);
rightBrain_img(:,right_keepImg_idxs1) = img1(:,right_keepImg_idxs1);
rightBrain_img = cast(rightBrain_img,'uint16') .*img1;
leftBrain_img = cast(leftBrain_img,'uint16') .*img1;
figure,
subplot(131), imshow(img1,[])
subplot(132), imshow(rightBrain_img,[])
subplot(133), imshow(leftBrain_img,[])
Related
I have a vector shapefile, in units of meters, representing the boundary of Germany. I am converting it into raster format, with each pixel representing 300 meters. After the conversion I checked the image information using imfinfo() in MATLAB, but the result says the unit is "Inch". I am quite confused at the moment and do not know how to convert inches to meters as the pixel size unit. Would you please give me some idea?
% Code
R6 = shaperead('B6c.shp');
%Nord
XN6 = double(R6(4).X); YN6 = double(R6(4).Y);
XN6min = min(XN6(XN6>0)); XNmax = max(XN6);
YN6min = min(YN6(YN6>0)); YNmax = max(YN6);
%Bayern
XB6 = double(R6(7).X); YB6 = double(R6(7).Y);
XB6min = min(XB6(XB6>0)); XB6max = max(XB6);
YB6min = min(YB6(YB6>0)); YB6max = max(YB6);
%Schleswig-Holstein
XSH6 = double(R6(9).X); YSH6 = double(R6(9).Y);
XSH6min = min(XSH6(XSH6>0)); XSH6max = max(XSH6);
YSH6min = min(YSH6(YSH6>0)); YSH6max = max(YSH6);
%Sachsen
XS6 = double(R6(6).X); YS6 = double(R6(6).Y);
XS6min = min(XS6(XS6>0)); XS6max = max(XS6);
YS6min = min(YS6(YS6>0)); YS6max = max(YS6);
dx = round(XS6max-XN6min);
dy = round(YSH6max-YB6min);
M = round(dx/300); N = round(dy/300);
A6 = zeros(M,N); %initiating image matrix based on 4 limiting States
%transformation from world to pixel coordinates (XBW and YBW are not defined in this snippet)
xpix_bw = (((XBW - XN6min)*M)/dx)';
ypix_bw = (((YBW - YB6min)*N)/dy)';
xbw6=round(xpix_bw); xbw6=xbw6(~isnan(xbw6));
ybw6=round(ypix_bw); ybw6=ybw6(~isnan(ybw6));
%line drawing
for i=1:1:length(xbw6)-1
j=i+1;
x1=xbw6(i); x2=xbw6(j); y1=ybw6(i); y2=ybw6(j);
nn=atan2((y2-y1),(x2-x1)); % azimuthal angle
if x2==x1
l=abs(y2-y1);
else
l = round((x2-x1)/cos(nn)); % horizontal distance
end
xx=zeros(l,1); %empty column
yy=zeros(l,1); %empty column
% creating line along slope distance
for k=1:1:l % use a separate loop variable so the outer index i is not reused
xx(k)=round(x1+cos(nn)*k);
yy(k)=round(y1+sin(nn)*k);
A6(xx(k)+1,yy(k)+1) = 256;
end
end
imwrite(A6, 'Untitled_0506_300.tif','Resolution', 300);
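As a side note on the units question: the 'Resolution' option of imwrite only writes the TIFF print-resolution metadata, which is interpreted in dots per inch by default; that is why imfinfo reports inches, and it is unrelated to the 300 m ground pixel size. To carry the pixel size in meters you would need a georeferenced format. Below is a minimal sketch, assuming the Mapping Toolbox is available; the EPSG code 32632 (UTM zone 32N), the world limits and the raster orientation are assumptions you would have to adapt to your data.
% Sketch only: assumes the Mapping Toolbox and a projected CRS in meters,
% e.g. UTM zone 32N (EPSG:32632) -- use the CRS of your shapefile instead.
raster = A6'; % transpose so that rows run along y and columns along x
xLimits = [XN6min, XS6max]; % world x-limits used for the raster above
yLimits = [YB6min, YSH6max]; % world y-limits used for the raster above
R = maprefcells(xLimits, yLimits, size(raster)); % cell size follows from the limits and raster size (~300 m)
geotiffwrite('Untitled_0506_300m.tif', uint8(raster), R, 'CoordRefSysCode', 32632);
The GeoTIFF written this way carries the cell size in map units (meters) instead of a print resolution in inches.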
I have the following image and I would like to segment the rectangular object in the middle. I implemented the following code to segment it, but I cannot isolate the object. What functions or approaches can I use to isolate the rectangular object in the image?
im = imread('image.jpg');
% convert image to HSV and grayscale
imHSV = rgb2hsv(im);
imGray = rgb2gray(im);
imSat = imHSV(:,:,2);
imHue = imHSV(:,:,1);
imVal = imHSV(:,:,3);
background = imopen(im,strel('disk',15));
I2 = im - background;
% detect edge using sobel algorithm
[~, threshold] = edge(imGray, 'sobel');
fudgeFactor = .5;
imEdge = edge(imGray,'sobel', threshold * fudgeFactor);
%figure, imshow(imEdge);
% split image into colour channels
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
% convert image to binary image (using thresholding)
imBlobs = and((imSat < 0.6),(imHue < 0.6));
imBlobs = and(imBlobs, ((redIM + greenIM + blueIM) > 150));
imBlobs = imfill(~imBlobs,4);
imBlobs = bwareaopen(imBlobs,50);
figure,imshow(imBlobs);
In this example, you can leverage the fact that the rectangle contains blue in all of its corners in order to build a good initial mask.
Use thresholding to locate the blue pixels in the image and create an initial mask.
Given this initial mask, find its corners using min and max operations.
Connect the corners with lines to obtain a rectangle outline.
Fill the rectangle using imfill.
Code example:
% convert image to binary image (using thresholding)
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
mask = blueIM > redIM*2 & blueIM > greenIM*2;
%noise cleaning
mask = imopen(mask,strel('disk',3));
%find the corners of the rectangle
[Y, X] = ind2sub(size(mask),find(mask));
minYCoords = find(Y==min(Y));
maxYCoords = find(Y==max(Y));
minXCoords = find(X==min(X));
maxXCoords = find(X==max(X));
%top corners
topRightInd = find(X(minYCoords)==max(X(minYCoords)),1,'last');
topLeftInd = find(Y(minXCoords)==min(Y(minXCoords)),1,'last');
p1 = [Y(minYCoords(topRightInd)) X((minYCoords(topRightInd)))];
p2 = [Y(minXCoords(topLeftInd)) X((minXCoords(topLeftInd)))];
%bottom corners
bottomRightInd = find(Y(maxXCoords)==max(Y(maxXCoords)),1,'last');
bottomLeftInd = find(X(minYCoords)==min(X(minYCoords)),1,'last');
p3 = [Y(maxXCoords(bottomRightInd)) X((maxXCoords(bottomRightInd)))];
p4 = [Y(maxYCoords(bottomLeftInd)) X((maxYCoords(bottomLeftInd)))];
%connect between the corners with lines
l1Inds = drawline(p1,p2,size(mask));
l2Inds = drawline(p3,p4,size(mask));
maskOut = mask;
maskOut([l1Inds,l2Inds]) = 1;
%fill the rectangle which was created
midP = ceil((p1+p2+p3+p4)./4);
maskOut = imfill(maskOut,midP);
%present the final result
figure,imshow(maskOut);
Final Result:
Intermediate results (1 - after thresholding, 2 - after adding the lines):
*drawline function is taken from drawline webpage
I have a problem with optical flow: if the frame size has been manipulated in any way, it gives me an error. There are two options: either change the resolution of the video at the beginning, or somehow change the frame size in a way that optical flow will still work. In further development I want to add a cascade object detector for the nose, mouth and eyes, so I need a solution that works for individual regions without setting up optical flow separately for each region, especially since a bounding box does not have a fixed size and displaces itself slightly from frame to frame. Here is my code so far; the error is that it exceeds matrix dimensions.
faceDetector = vision.CascadeObjectDetector();
vidObj = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
converter = vision.ImageDataTypeConverter;
opticalFlow = vision.OpticalFlow('ReferenceFrameDelay', 1);
opticalFlow.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom','CustomBorderColor', 255);
vidPlayer = vision.VideoPlayer('Name','Motion Vector');
while ~isDone(vidObj);
frame = step(vidObj);
fraRes = imresize(frame,0.5);
fbbox = step(faceDetector,fraRes);
I = imcrop(fraRes,fbbox);
im = step(converter,I);
of = step(opticalFlow,im);
lines = videooptflowlines(of, 20);
if ~isempty(lines)
out = step(shapeInserter,im,lines);
step(vidPlayer,out);
end
end
release(vidPlayer);
release(vidObj);
UPDATE: I went and edited the function that creates the optical-flow lines, and this sorts out some of the size issues; however, it is necessary to input this manually for each object (so if there is any other way, let me know). I think the best solution would be to set a fixed size on the CascadeObjectDetector; does anyone know how to do this, or have any other idea?
faceDetector = vision.CascadeObjectDetector(); %I need fixed size for this
faceDetector.MinSize = [150 150];
vidRead = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
convert = vision.ImageDataTypeConverter;
optFlo = vision.OpticalFlow('ReferenceFrameDelay', 1);
optFlo.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', 'CustomBorderColor', 255);
while ~isDone(vidRead)
frame = step(vidRead);
fraRes = imresize(frame,0.3);
fraSin = im2single(fraRes);
bbox = step(faceDetector,fraSin);
I = imcrop(fraSin, bbox);
im = step(convert, I);
release(optFlo);
of = step(optFlo, im);
lines = optfloo(of, 50); % optfloo is my edited copy of the line-drawing function; use videooptflowlines instead if you don't have it
out = step(shapeInserter, im, lines);
imshow(out);
end
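One possible workaround, assuming the "exceeds matrix dimensions" error comes from the cropped face region changing size from frame to frame (vision.OpticalFlow expects every input to have the same dimensions), is to resize the crop to a fixed size before the optical-flow step. A minimal sketch of that idea; the 128x128 size is an arbitrary assumption:
faceDetector = vision.CascadeObjectDetector('MinSize', [150 150]);
vidRead = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
optFlo = vision.OpticalFlow('ReferenceFrameDelay', 1, 'OutputValue', 'Horizontal and vertical components in complex form');
fixedSize = [128 128]; % every crop is resized to this, so OpticalFlow always sees the same dimensions
while ~isDone(vidRead)
    frame = step(vidRead);
    bbox = step(faceDetector, frame);
    if ~isempty(bbox)
        I = imcrop(frame, bbox(1,:)); % use the first detection only
        I = imresize(I, fixedSize);   % fixed size, independent of the bounding box
        of = step(optFlo, im2single(I));
        lines = videooptflowlines(of, 20);
        % insert the lines / display as in the original loop
    end
end
release(vidRead);
The same idea would apply to nose, mouth and eye regions from other cascade detectors, since each cropped region is normalized to the same fixed size before the optical-flow object sees it.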
Does bokeh have a simple way to plot the colorbar for a heatmap?
In this example it would be a strip illustrating how colors correspond to values.
In MATLAB, it's called a 'colorbar' and looks like this:
UPDATE: This is now much easier: see
http://docs.bokeh.org/en/latest/docs/user_guide/annotations.html#color-bars
I'm afraid I don't have a great answer; this should be easier in Bokeh. But I have done something like this manually before.
Because I often want these off my plot, I make a new plot, and then assemble it together with something like hplot or gridplot.
There is an example of this here: https://github.com/birdsarah/pycon_2015_bokeh_talk/blob/master/washmap/washmap/water_map.py#L179
In your case, the plot should be pretty straightforward. If you made a data source like this:
| value | color
|   1   | blue
|  ...  | ...
|   9   | red
Then you could do something like:
legend = figure(tools=None)
legend.toolbar_location=None
legend.rect(x=0.5, y='value', fill_color='color', width=1, height=1, source=source)
layout = hplot(main, legend)
show(legend)
However, this does rely on you knowing the colors that your values correspond to. You can pass a palette to your heatmap chart call, as shown here: http://docs.bokeh.org/en/latest/docs/gallery/cat_heatmap_chart.html, so you would then be able to use that palette to construct the new data source.
I'm pretty sure there's at least one open issue around color maps. I know I just added one for off-plot legends.
Since other answers here seem very complicated, here is an easily understandable piece of code that generates a colorbar on a Bokeh heatmap.
import numpy as np
from bokeh.plotting import figure, show
from bokeh.models import LinearColorMapper, BasicTicker, ColorBar
data = np.random.rand(10,10)
color_mapper = LinearColorMapper(palette="Viridis256", low=0, high=1)
plot = figure(x_range=(0,1), y_range=(0,1))
plot.image(image=[data], color_mapper=color_mapper,
dh=[1.0], dw=[1.0], x=[0], y=[0])
color_bar = ColorBar(color_mapper=color_mapper, ticker= BasicTicker(),
location=(0,0))
plot.add_layout(color_bar, 'right')
show(plot)
Since version 0.12.3, Bokeh has had the ColorBar.
This documentation was very useful to me:
http://docs.bokeh.org/en/dev/docs/user_guide/annotations.html#color-bars
To do this I did the same as #birdsarah. As an extra tip, though: if you use the rect method for your colour map, then use the rect method once again in the colour bar with the same source. The end result is that you can select sections of the colour bar and the corresponding glyphs are selected in your plot as well.
Try it out:
http://simonbiggs.github.io/electronfactors
Here is some code loosely based on birdsarah's response for generating a colorbar:
import numpy as np
import bokeh.plotting as bp

def generate_colorbar(palette, low=0, high=15, plot_height=100, plot_width=500, orientation='h'):
    y = np.linspace(low, high, len(palette))
    dy = y[1] - y[0]
    if orientation.lower() == 'v':
        fig = bp.figure(tools="", x_range=[0, 1], y_range=[low, high],
                        plot_width=plot_width, plot_height=plot_height)
        fig.toolbar_location = None
        fig.xaxis.visible = None
        fig.rect(x=0.5, y=y, color=palette, width=1, height=dy)
    elif orientation.lower() == 'h':
        fig = bp.figure(tools="", y_range=[0, 1], x_range=[low, high],
                        plot_width=plot_width, plot_height=plot_height)
        fig.toolbar_location = None
        fig.yaxis.visible = None
        fig.rect(x=y, y=0.5, color=palette, width=dy, height=1)
    return fig
Also, if you are interested in emulating matplotlib colormaps, try using this:
import numpy as np
import matplotlib as mpl

def return_bokeh_colormap(name):
    cm = mpl.cm.get_cmap(name)
    colormap = [rgb_to_hex(tuple((np.array(cm(x))*255).astype(np.int))) for x in range(0, cm.N)]
    return colormap

def rgb_to_hex(rgb):
    return '#%02x%02x%02x' % rgb[0:3]
This is high on my wish list as well. It would also need to automatically adjust the range if the plotted data changed (e.g. moving through one dimension of a 3D data set). The code below does something which people might find useful. The trick is to add an extra axis to the colourbar which you can control through a data source when the data changes.
import numpy
from bokeh.plotting import Figure
from bokeh.models import ColumnDataSource, Plot, LinearAxis
from bokeh.models.mappers import LinearColorMapper
from bokeh.models.ranges import Range1d
from bokeh.models.widgets import Slider
from bokeh.models.widgets.layouts import VBox
from bokeh.core.properties import Instance
from bokeh.palettes import RdYlBu11
from bokeh.io import curdoc
class Colourbar(VBox):

    plot = Instance(Plot)
    cbar = Instance(Plot)
    power = Instance(Slider)
    datasrc = Instance(ColumnDataSource)
    cbarrange = Instance(ColumnDataSource)
    cmap = Instance(LinearColorMapper)

    def __init__(self):
        self.__view_model__ = "VBox"
        self.__subtype__ = "MyApp"
        super(Colourbar, self).__init__()

        numslices = 6
        x = numpy.linspace(1, 2, 11)
        y = numpy.linspace(2, 4, 21)
        Z = numpy.ndarray([numslices, y.size, x.size])
        for i in range(numslices):
            for j in range(y.size):
                for k in range(x.size):
                    Z[i, j, k] = (y[j]*x[k])**(i + 1) + y[j]*x[k]

        self.power = Slider(title='Power', name='Power', start=1, end=numslices, step=1,
                            value=round(numslices/2))
        self.power.on_change('value', self.inputchange)

        z = Z[self.power.value]
        self.datasrc = ColumnDataSource(data={'x': x, 'y': y, 'z': [z], 'Z': Z})

        self.cmap = LinearColorMapper(palette=RdYlBu11)

        r = Range1d(start=z.min(), end=z.max())
        self.cbarrange = ColumnDataSource(data={'range': [r]})

        self.plot = Figure(title="Colourmap plot", x_axis_label='x', y_axis_label='y',
                           x_range=[x[0], x[-1]], y_range=[y[0], y[-1]],
                           plot_height=500, plot_width=500)
        dx = x[1] - x[0]
        dy = y[1] - y[0]
        self.plot.image('z', source=self.datasrc, x=x[0] - dx/2, y=y[0] - dy/2,
                        dw=[x[-1] - x[0] + dx], dh=[y[-1] - y[0] + dy],
                        color_mapper=self.cmap)

        self.generate_colorbar()

        self.children.append(self.power)
        self.children.append(self.plot)
        self.children.append(self.cbar)

    def generate_colorbar(self, cbarlength=500, cbarwidth=50):
        pal = RdYlBu11
        minVal = self.datasrc.data['z'][0].min()
        maxVal = self.datasrc.data['z'][0].max()
        vals = numpy.linspace(minVal, maxVal, len(pal))

        self.cbar = Figure(tools="", x_range=[minVal, maxVal], y_range=[0, 1],
                           plot_width=cbarlength, plot_height=cbarwidth)
        self.cbar.toolbar_location = None
        self.cbar.min_border_left = 10
        self.cbar.min_border_right = 10
        self.cbar.min_border_top = 0
        self.cbar.min_border_bottom = 0
        self.cbar.xaxis.visible = None
        self.cbar.yaxis.visible = None

        self.cbar.extra_x_ranges = {'xrange': self.cbarrange.data['range'][0]}
        self.cbar.add_layout(LinearAxis(x_range_name='xrange'), 'below')

        for r in self.cbar.renderers:
            if type(r).__name__ == 'Grid':
                r.grid_line_color = None

        self.cbar.rect(x=vals, y=0.5, color=pal, width=vals[1] - vals[0], height=1)

    def updatez(self):
        data = self.datasrc.data
        newdata = data
        z = data['z']
        z[0] = data['Z'][self.power.value - 1]
        newdata['z'] = z
        self.datasrc.trigger('data', data, newdata)

    def updatecbar(self):
        minVal = self.datasrc.data['z'][0].min()
        maxVal = self.datasrc.data['z'][0].max()
        self.cbarrange.data['range'][0].start = minVal
        self.cbarrange.data['range'][0].end = maxVal

    def inputchange(self, attrname, old, new):
        self.updatez()
        self.updatecbar()
curdoc().add_root(Colourbar())
I have the following HW assignment:
Go to the "saw" image. Do edge detection. Now, by convolution,
replace each edge point by a small circle or with a small Gaussian.
Which filter can I use to perform this operation?
Thank you!
saw_image = imread('saw.jpg');
I = rgb2gray(saw_image);
BW = edge(I,'canny');
[row, col] = find (BW);
a = sub2ind(size(I), row, col)';
WindowSize = 9;
newI=imfilter(I(a),fspecial('???',WindowSize));
Not exactly sure what is required.
I assume you should do something like:
saw_image = uint8(randi(255,30,30,3)); % random stand-in for the 'saw' image
I = rgb2gray(saw_image);
BW = edge(I,'canny');
WindowSize = 3;
newI=imfilter(BW*255,fspecial('gaussian',WindowSize));
result = saw_image;
result(newI>0) = newI(newI>0);
This creates an edge image, convolves it with a Gaussian, and replaces all areas in the original image which are detected as edges with the (blurred) edge values.
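The assignment also allows replacing each edge point with a small circle rather than a Gaussian. Here is a minimal sketch of that variant (the radius of 4 pixels and the random stand-in image are assumptions): a binary disk-shaped kernel is built with fspecial('disk', ...), and convolving the edge map with it stamps a filled circle at every edge pixel.
saw_image = uint8(randi(255,30,30,3)); % random stand-in for the 'saw' image
I = rgb2gray(saw_image);
BW = edge(I,'canny');
radius = 4; % assumed circle radius
circleKernel = fspecial('disk',radius) > 0; % binary disk-shaped kernel
circles = imfilter(double(BW), double(circleKernel)) > 0; % the convolution stamps a disk at each edge point
result = saw_image;
result(repmat(circles,[1 1 3])) = 255; % paint the circles into all three colour channels
Equivalently, imdilate(BW, strel('disk', radius)) produces essentially the same circle mask without writing out the convolution, while fspecial('gaussian', WindowSize, sigma) as in the answer above is the analogous choice for the Gaussian variant.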