I am using OpenCV in Objective-C. I want to convert an image to black and white. I have already done it, but my output black-and-white image is not clear; there is a black shade on the image.
Can anyone help?
- (IBAction)blackAndWhite:(id)sender {
    imageView.image = orignalImage;

    cv::Mat src = [self cvMatFromUIImage:imageView.image];
    if (!src.data) {
        cout << "Could not read image" << endl;
        return;
    }

    // Convert to grayscale
    cvtColor(src, src, CV_BGR2GRAY);

    // Apply histogram equalization
    cv::Mat dst;
    equalizeHist(src, dst);

    imageView.image = [self UIImageFromCVMat:dst];
}
Thanks
Original image:
Black and white image:
Firstly, the output image you have is not a black-and-white (binary) image; it is a grayscale image.
A grayscale image is a single-channel image with 256 intensity levels (0-255), whereas binary images contain only 0 (black) or 255 (white).
You can use thresholding (cvThreshold) to convert it from grayscale to binary. There are many binarization algorithms available that can help you achieve what you require. Local binarization methods are more adaptable and can also help remove parts of the shaded regions.
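For a quick global binarization, here is a minimal C++ sketch using Otsu's method (the threshold is chosen automatically; the input file name is just a placeholder):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load the image directly as grayscale.
    cv::Mat gray = cv::imread("input.png", 0);
    if (gray.empty())
        return 1;

    // Pixels above the automatically chosen Otsu threshold become 255, the rest 0.
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    cv::imwrite("binary.png", binary);
    return 0;
}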
Hope this helps.
Look into gamma correction. I assume the gamma value needs to be adjusted in this case to suit the contrast of the lines in the image. Since it's a blurry image, you may run into some trouble as well. You may also want to increase the contrast while you're at it.
Links:
Understanding Gamma Correction
Wikipedia - Gamma Correction
OpenCV Gamma Correction (C++)
Changing contrast and brightness of an image - OpenCV
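As a rough illustration, here is a minimal gamma-correction sketch in C++ using a lookup table (the gamma value of 0.5 is only a placeholder; tune it to your image):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

int main(int argc, char **argv)
{
    cv::Mat src = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    if (src.empty())
        return 1;

    // Build a 256-entry lookup table for out = (in / 255) ^ gamma * 255.
    double gamma = 0.5;
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(std::pow(i / 255.0, gamma) * 255.0);

    // Apply the gamma curve to every pixel.
    cv::Mat dst;
    cv::LUT(src, lut, dst);

    cv::imwrite("gamma.png", dst);
    return 0;
}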
Well, I don't know why there are already 3 answers and none of them is correct.
Why do you apply histogram equalization after converting the image to grayscale? The output without it is the following:
Here you can read about histogram equalization.
As already pointed out, you are not really binarizing your image; you are converting it to grayscale. You can binarize using cv::threshold, but due to the strong illumination artefacts in your sample image, you'd be better off determining a local threshold value using cv::adaptiveThreshold.
Here is a useful link that explains binarization under non-uniform lighting conditions.
A sample program, in C++:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cassert>
#include <cstdlib>

int main(int argc, char **argv)
{
    // Load the input image directly as grayscale.
    cv::Mat image = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    assert(!image.empty());

    // Threshold each pixel against the mean of its 5x5 neighbourhood minus 7.
    cv::Mat binary;
    cv::adaptiveThreshold(image, binary, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 5, 7);

    cv::imshow("Orig", image);
    cv::imshow("Binary", binary);
    cv::waitKey(0);

    cv::imwrite("binary.png", binary);
    return EXIT_SUCCESS;
}
Result:
I am using Matlab's imread to read in images, but about half get read in as all-zeros, even though they are not all-black images (I can view them fine in Finder).
The images that fail vary in their:
file extension (PNG, JPG)
colorspace (RGB, Gray)
color profile (sRGB IEC61966-2.1, Calibrated RGB Colorspace, Generic Gray Gamma 2.2 Profile)
However, I have success reading in other PNG and JPG images in RGB and Gray colorspaces. I don't have any instances of successful reads of an sRGB IEC61966-2.1 color profile, although again, not all images that fail have this profile. I can't see any pattern of file extension, colorspace, etc. that distinguishes the images that fail from the ones that are read in successfully.
I have tried the following:
[img, map, alpha] = imread('fname.png'); in all cases this produces all-zero matrices for img, map, and alpha.
making the file extension explicit, e.g. imread('fname.png', 'png'); the result is the same.
I am running Matlab 2019b on macOS Catalina.
Any suggestions for what might be causing some images to fail and for how to import them successfully?
The images you linked contain an alpha transparency channel, so simply reading with imread() will not return the image data you expect. You need to read the image using the additional output arguments described on the help page:
[imRGB, map, alpha] = imread('AcbK5pRoi.png');
where imRGB will contain the RGB image and alpha will contain the transparency data.
You can then use the imRGB variable as a normal image.
I want to substitute the LSB of an RGB image with a character. I have done it for a grayscale image using bitset and bitget. Can I use the same logic for an RGB image?
Here is the part of the code which I have used:
stego=zeros(size(img))
stego=bitset(stego,1,lsp)
where img is the cover image and lsp is the binary message. I then merged the bit planes with:
stego=bitset(stego,2,bitget(img,2))
up to the 8th plane to get the stego image.
How can I build the same logic for an RGB image? Please guide me.
I have a YUV 420 (144x176) file from which I read the first frame and converted its YUV components to an RGB array int rgb[HEIGHT*WIDTH*3]; in which I store R1G1B1...RnGnBn, and I have an std::vector<unsigned char> image; image.resize(width * height * 4);. My question is:
When I use unsigned error = lodepng::encode(filename, image, width, height); it runs without errors and generates a PNG file, but this file does not even look like the original image. I think it uses RGBA while I only have RGB; how do I fix it?
P.S. I don't know if this information is enough, so please tell me if it isn't.
Okay, this question should be closed now.
All I've done is add 255 as the alpha value in every 4th position, something like this:
if (i%3==0 && i != 0)
image[k++] = 255;
image[k] = rgb[i];
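A fuller, self-contained C++ sketch of the same idea (assuming the RGB data is tightly packed):

#include <cstddef>
#include <vector>

// Expand packed R,G,B triples to R,G,B,A by appending an opaque alpha byte,
// which is the default input layout lodepng::encode expects.
std::vector<unsigned char> rgbToRgba(const std::vector<unsigned char> &rgb)
{
    std::vector<unsigned char> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i < rgb.size(); i += 3)
    {
        rgba.push_back(rgb[i]);     // R
        rgba.push_back(rgb[i + 1]); // G
        rgba.push_back(rgb[i + 2]); // B
        rgba.push_back(255);        // A, fully opaque
    }
    return rgba;
}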
P.S. Another important thing I missed is to put the pixels at their real positions when reading from the YUV file, as shown in the figure on Wikipedia. Note this if you have problems with YUV.
I will be doing medical image processing with the CLAHE method (I use the code at http://www.mathworks.com/matlabcentral/fileexchange/22182-contrast-limited-adaptive-histogram-equalization-clahe/all_files ) and region growing ( http://www.mathworks.com/matlabcentral/fileexchange/19084-region-growing/content/regiongrowing.m ).
Those functions run if I use the double data type for the image, but converting the image to double turns it into a binary image.
Does anyone know how to keep the image in double without it becoming a binary image?
If your image is img then do im2double(img). See im2double on the mathworks reference site.
If I've understood your comment correctly, you're trying to convert a binary image to a grayscale image. If so, this is not possible, as you've thrown away all the intensity information when reducing it to a simple 0/1 image.
If your question was about how to convert a color/grayscale image to double, then LightningIsMyName has the answer for you. Here's a small example that you can play around with to see what you really want:
img = imread('peppers.png');        % read in MATLAB's stock image
imgDouble = im2double(img);         % convert uint8 to double
imgGray = rgb2gray(img);            % convert RGB image to grayscale
imgGrayDouble = im2double(imgGray); % convert grayscale image to double
Here's how your color and grayscale images should look:
I am taking a UIImage from a PNG file and feeding it to the videoWriter.
avAdaptor appendPixelBuffer:pixelBuffer
When the end-result video comes out, it seems to be lacking one color, missing the yellow or something.
I took a look at the function that makes the pixel buffer out of the UIImage:
CVPixelBufferCreateWithBytes(NULL,                           // allocator
                             myWidth,                        // width
                             myHeight,                       // height
                             kCVPixelFormatType_32BGRA,      // pixel format
                             (void*)CFDataGetBytePtr(image), // base address
                             CGImageGetBytesPerRow(cgImage), // bytes per row
                             NULL,                           // release callback
                             0,                              // release context
                             NULL,                           // pixel buffer attributes
                             &pixelBuffer);                  // out: the pixel buffer
I also tried kCVPixelFormatType_32ARGB and others; it didn't help.
Any thoughts?
Please verify whether your PNG image has a transparency (alpha) element. If your PNG doesn't contain transparency, then it is 24 bits per pixel, not 32.
Also, have you tried kCVPixelFormatType_32RGBA?
Maybe the image sizes do not match.
Your input image should have the same width and height as the video output. If myWidth or myHeight differs from the size of the image (i.e. a different aspect ratio), bytes may be lost at the end of each line, which could lead to color shifting. kCVPixelFormatType_32BGRA seems to be the preferred (fastest) pixel format, so that part should be okay.
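To illustrate, here is a small self-contained C++ sketch (the pixel values and padding are made up) showing how reading BGRA data with the wrong row stride lands in the wrong bytes:

#include <cstdio>

int main()
{
    const int width = 2, height = 2;
    const int realStride = 12;   // 8 bytes of pixel data + 4 bytes of row padding
    const unsigned char data[height * realStride] = {
        // row 0: a blue pixel, a green pixel, then padding
        255, 0, 0, 255,   0, 255, 0, 255,   0, 0, 0, 0,
        // row 1: a red pixel, a white pixel, then padding
        0, 0, 255, 255,   255, 255, 255, 255,   0, 0, 0, 0,
    };

    const int wrongStride = width * 4;               // what a naive consumer assumes
    const unsigned char *p = data + 1 * wrongStride; // "row 1, pixel 0"
    std::printf("B=%d G=%d R=%d A=%d\n", p[0], p[1], p[2], p[3]);
    // Prints B=0 G=0 R=0 A=0: the wrong stride landed in row 0's padding
    // instead of the red pixel at the real start of row 1.
    return 0;
}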
There is no yellow channel in the RGB colorspace; yellow is made up of only the red and green components. It seems that blue is missing.
I assume you are using a CFDataRef (maybe NSData) for the image. If it is an NSData object, you can print the bytes to the debug console using:
NSLog(@"data: %@", image);
This will print a hex dump to the console. There you can see whether you have alpha and what byte order your PNG uses. If your image has a constant (e.g. fully opaque) alpha, every fourth byte should be the same number.
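If you would rather check this outside the debugger, a tiny standalone C++ sketch of the same hex-dump idea (the byte values here are made up):

#include <cstddef>
#include <cstdio>

int main()
{
    // A made-up 2x2 fully opaque BGRA image; every fourth byte is the alpha.
    const unsigned char pixels[] = {
        10, 20, 30, 255,   40, 50, 60, 255,
        70, 80, 90, 255,  100, 110, 120, 255,
    };

    for (std::size_t i = 0; i < sizeof(pixels); ++i)
        std::printf("%02x%s", static_cast<unsigned>(pixels[i]),
                    (i + 1) % 4 == 0 ? " " : "");
    std::printf("\n");
    // With constant opaque alpha, every fourth pair of hex digits prints as ff.
    return 0;
}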