I am using the CImg library.
I have an image: CImg image;
and a display: CImgDisplay main_disp;
I want to display the histogram of the image on main_disp.
Here is the CImg documentation:
http://cimg.sourceforge.net/reference/index.html
Here's some code that computes a histogram from an image and displays it on a pre-existing display:
// create display and image
CImg< float > image;
CImgDisplay main_disp; // note: CImgDisplay is not a class template
// do some stuff to populate image ...
// compute the histogram; get_histogram() returns the histogram
// without modifying the image itself (e.g. with 256 bins)
const unsigned int numberOfBins = 256;
const CImg< float > histogram = image.get_histogram( numberOfBins );
// display histogram on display
main_disp.display( histogram );
// wait until display is closed
while ( !main_disp.is_closed() ) { main_disp.wait(); }
Based on this post: Converting image grayscale pixel values to alpha values, how could I change an image's transparency based on grayscale values with Pillow (6.2.2)?
The brighter a pixel, the more transparent I would like it to be; pixels that are black or close to black would thus not be transparent.
I found the following script, which works fine for white pixels, but I don't know how to modify it in order to handle grayscale values. Maybe there is a better or faster way; I'm a real newbie in Python.
from PIL import Image

img = Image.open('Image.jpg')
img_out = img.convert("RGBA")
datas = img.getdata()
target_color = (255, 255, 255)

newData = list()
for item in datas:
    newData.append((
        item[0], item[1], item[2],
        max(
            abs(item[0] - target_color[0]),
            abs(item[1] - target_color[1]),
            abs(item[2] - target_color[2]),
        )
    ))

img_out.putdata(newData)
img_out.save('ConvertedImage', 'PNG')
This is what I finally did:
from PIL import Image, ImageOps
img = Image.open('Image.jpg')
img = img.convert('RGBA') # RGBA = RGB + alpha
mask = ImageOps.invert(img.convert('L')) # 8-bit grey
img.putalpha(mask)
img.save('ConvertedImage', 'PNG')
I am working in Simulink to develop my algorithm.
I have a video stream with dimensions 640x360, and I am trying to extract a region of interest (ROI) from each frame. However, my video turns grayscale when I use the following code in the MATLAB Function block that performs the ROI extraction:
function y = fcn(u)
%some more code
width = 639;
height = 210;
top = 150;
left = 1;
y = u(top:top+height, left:left+width);
Solution
Change the last line as follows:
y = u(top:top+height, left:left+width,:);
Explanation
The dimensions of an RGB image are actually m-by-n-by-3: m and n are the image height and width, and there are 3 channels: red, green, and blue.
When you crop an RGB image, the crop must span all three channels. You can achieve that by adding the third-dimension colon, as in the code above. A quick check of the difference is sketched below.
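A quick way to see the effect at the MATLAB prompt (a minimal sketch, assuming a 640x360 RGB frame as in the question):
u = zeros(360, 640, 3, 'uint8'); % dummy RGB frame the size of the video
size(u(150:360, 1:640))          % ans: 211 640   -> 2-D, color information lost
size(u(150:360, 1:640, :))       % ans: 211 640 3 -> all three channels kept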
I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1); % grayscale both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray1, gray2); % normalized cross-correlation
[max_c, imax] = max(abs(matchingval(:)));
After this I am stuck. I have no idea how to make the whole image grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know the coordinates within your image, you can always just use rgb2gray on the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied rgb2gray to just that region. I did it three times because the output of rgb2gray is a 2-D matrix and a color image is a 3-D matrix, so I needed to change it layer by layer.
This worked for me, making only part of the image gray but leaving the rest in color.
One issue you might have, though: a color image has 3 dimensions while a grayscale image needs only 2, so combining them means the grayscale data must live inside a 3-D matrix.
Depending on what you want to do, this technique may or may not help.
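If you prefer, the three assignments above can be collapsed into one (an equivalent sketch using the same hard-coded coordinates): convert the patch to grayscale once, then replicate the 2-D result across the three color channels with repmat.
gray_patch = rgb2gray(im(500:1045, 500:1200, :)); % convert the patch once
im(500:1045, 500:1200, :) = repmat(gray_patch, [1 1 3]); % copy it into all 3 channels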
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
In the RGB representation, a grayscale image simply has all three components equal. Duplicating the grayscale image in the third dimension three times creates the RGB version of it. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place the subimage into the equivalent RGB grayscale image.
As such, given your colour image stored in img and your subimage stored in imgsub, and with the subimage's extraction coordinates given by row1,col1 (top left corner) and row2,col2 (bottom right corner), do this:
img_gray = rgb2gray(img);
img_gray = cat(3, img_gray, img_gray, img_gray);
img_gray(row1:row2, col1:col2,:) = imgsub;
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
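For completeness, the composition step from above can be repeated on the example so the whole demo runs end to end:
img_gray = rgb2gray(img);                        % grayscale version of the image
img_gray = cat(3, img_gray, img_gray, img_gray); % replicate into an RGB grayscale image
img_gray(row1:row2, col1:col2, :) = imgsub;      % paste the colour subimage back in
imshow(img_gray);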
Running the above code, this is the image we get:
I took the same example as in rayryeng's answer and tried to solve it via HSV conversion.
The basic idea is to set the second layer, i.e. the saturation layer, to 0 so that all pixels become grayscale, and then write the original saturation values back only in the subimage area, so that region alone keeps its saturation.
Code:
img = imread('onion.png');
img = rgb2hsv(img);
sPlane = zeros(size(img(:,:,1)));
sPlane(50:90,80:150) = img(50:90,80:150,2);
img(:,:,2) = sPlane;
img = hsv2rgb(img);
imshow(img);
Output: (Same as rayryeng's output)
A related answer with more details is here.
I have a retinal fundus image which has a white border along the corners. I am trying to remove the borders on all four sides of the image. This is a pre-processing step and my image looks like this:
fundus http://snag.gy/XLGkC.jpg
It is an RGB image; I took the green channel and created a mask using logical indexing. I searched for pixels that were all black in the image and eroded the mask to remove the white edge pixels. However, I am not sure how to retrieve the final image, without the white pixel border, using the mask that I have. This is my code, and any help would be appreciated:
maskIdx = rgb(:,:,2) == 0; % rgb is the original image
se = strel('disk', 3);     % erode 3 pixels using a disk structuring element
im2 = imerode(maskIdx, se);
newrgb = rgb(im2);         % gives a vector - not the same size as the original image
Solved it myself. This is what I did with some help.
I first computed the mask for all three color channels combined. This is because the per-channel masks are not identical, and residual pixels would be left in the final image if I used the mask from only one of the channels:
mask = (rgb(:,:,1) == 0) & (rgb(:,:,2) == 0) & (rgb(:,:,3) == 0);
Next, I used a disk structuring element with a radius of 9 pixels to dilate my mask:
se = strel('disk', 9);
maskIdx = imdilate(mask,se);
EDIT: An arbitrary structuring element can also be used; I used se = strel(ones(9,9)).
Then I multiplied the original image by the new dilated mask, channel by channel:
newImg(:,:,1) = rgb(:,:,1) .* uint8(maskIdx); % cast the logical mask to uint8 to match the image class
newImg(:,:,2) = rgb(:,:,2) .* uint8(maskIdx);
newImg(:,:,3) = rgb(:,:,3) .* uint8(maskIdx);
Finally, I subtracted the masked image from the original image to get my desired border-removed image:
finalImg = rgb - newImg;
Result:
image http://snag.gy/g2X1v.jpg
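For reference, here are the steps above condensed into one runnable sketch (assuming rgb holds the uint8 fundus image):
mask = (rgb(:,:,1) == 0) & (rgb(:,:,2) == 0) & (rgb(:,:,3) == 0);
maskIdx = imdilate(mask, strel('disk', 9));
newImg = rgb .* uint8(repmat(maskIdx, [1 1 3])); % mask all three channels at once
finalImg = rgb - newImg;
imshow(finalImg);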
I have a binary image lu. When I rotate the image, its size changes, but I need to preserve the original size:
m = 2048;
n = 3072;
ODcenter = 1.0e+03 * [2.0345 0.9985];
OD = ODcenter;
X = zeros(m, n); % m, n is the size of the image
t = 0:.1:2*pi;
ODradius = norm(ODcenter(2) - ODcenter(1)) / 2;
xm2 = round(2*ODradius*cos(t) + OD(1));
ym2 = round(2*ODradius*sin(t) + OD(2));
imCircleAlphaData2 = roipoly(X, xm2, ym2);
figure; imshow(imCircleAlphaData2);
lu = imCircleAlphaData2;
mask1 = true(size(lu)); % create a matrix of true values of the same size
mask1(round(ODcenter(2)):end, :) = false; % set the lower half to false (indices must be integers)
lu(~mask1) = 0; % zero all elements of lu where mask1 == 0
mask2 = true(size(lu));
mask2(:, round(ODcenter(1)):end) = false; % set the right of the upper half to false
lu(~mask2) = 0; % zero all elements of lu where mask2 == 0
figure;
imshow(lu); % shows the upper-left part
lurot = imrotate(lu, 45);
figure, imshow(lurot)
The sizes of lurot and lu are different. How can I preserve the size of the image, even if some part of it will be cropped after rotation?
Basically, you have two options with MATLAB's imrotate:
Use 'crop', which makes the output image the same size as the input image, cropping the rotated image to make it fit.
Use 'loose', which makes the output image large enough to contain the entire rotated image. Generally, this makes the output image larger than the input image.
lurot = imrotate(lu, 45, 'nearest', 'crop');
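To see the difference between the two options side by side, a minimal sketch:
lurot_crop = imrotate(lu, 45, 'nearest', 'crop');   % same size as lu; corners clipped
lurot_loose = imrotate(lu, 45, 'nearest', 'loose'); % 'loose' is the default; output grows to fit
size(lu), size(lurot_crop), size(lurot_loose)       % compare the sizes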