How to use neighborhood loops over all the channels? - CImg

I have an RGB image in 2D.
I want to create groups of pixels that share the same color (RGB value), reading from left to right and then from top to bottom.
When the current pixel's RGB value differs from the previous one's, a group has ended (it contains the preceding pixels).
I know about CImg's loop macros such as cimg_for2x2(img,x,y,z,c,I,T), but the problem is that they work on a single channel c, whereas I'm interested in the whole RGB value. Doc: http://cimg.eu/reference/group__cimg__loops.html#lo6
Do you know if it's possible to tell CImg to work on the full RGB value rather than on a single channel (e.g. only red)?

It's hard to tell from your question, but I think you are looking for "Connected Component Analysis", or "labelling".
The CImg tool for that is label().
So, if you start with this image which has 3 white blobs in it:
and then run this:
#include <iostream>
#include "CImg.h"

using namespace std;
using namespace cimg_library;

int main(int argc, char** const argv)
{
    CImg<int> img("input.png");
    // Label connected components: low connectivity (4-neighbourhood), zero tolerance.
    img.label(0, 0);
    img.save_png("result.png");
}
It will "label" all distinct blobs in the image with a unique number, like this:

Related

Coloring by an unsigned char array

I have a data set with some points+vertices and an unsigned char array named "Colors" (on the Cell Data) where every tuple is (255,0,0) (i.e. indicating that all of the vertices should be red). In the Information tab, it looks as expected:
However, in the Properties tab, when I set Coloring to "Colors", I have to choose between Magnitude, X, Y, and Z, none of which are what I want. Instead, I want to use the actual vector to provide the RGB coloring.
Can anyone explain how to specify that these are actually colors and should be used directly?
After coming across this post, I learned that you have to uncheck "Map Scalars" in the Advanced section of Properties.
"Clicking the gear icon next to the search bar will show all properties
for the current source/representation and the map scalars/interpolate
scalars should be among them."
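For what it's worth, unchecking "Map Scalars" corresponds to VTK's direct-scalars colour mode underneath ParaView. A rough sketch of the same setup in VTK C++, assuming a mapper already attached to the data set (the array name "Colors" is from the question; SetColorModeToDirectScalars requires VTK 6 or later):

#include <vtkPolyDataMapper.h>

// Assumes 'mapper' renders the data set carrying the "Colors" cell array.
void useDirectColors(vtkPolyDataMapper *mapper)
{
    mapper->SetScalarModeToUseCellFieldData(); // colour by a named cell-data array
    mapper->SelectColorArray("Colors");        // the unsigned char (R,G,B) array
    mapper->SetColorModeToDirectScalars();     // use the values as colours, unmapped
}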

Extracting raw DICOM data from a DICOM file

I am working on a 3D DICOM file. After it is read (using MATLAB, for example) I see that it contains some text information apart from the actual scan image. I mean text which is visible in the image when I do implay(), not the header text in the DICOM file. Is there any way by which I can load only the raw data without the text? The text is hindering my processing.
EDIT: I cannot share the image I'm working on due to it being proprietary, but I found the following image after googling:
http://www.microsoft.com/casestudies/resources/Images/4000010832/image7.jpeg
Notice how the text on the left side partially overlaps the image? There is a similar effect in the image I'm working on. I need just the conical scan image for processing.
As noted, you need to provide more information, as there are a number of ways the overlay can be added: if it's burned into the image, you're generally out of luck; if it's in the overlay plane module (the 60xx tag group), you can probably just remove those elements before passing the file into Matlab; if it's stored in the unused high bit (an old but common method), you'll have to use the overlay bit position (60xx,0102) to clear that bit out of the pixel data.
For the last one, something like the Matlab equivalent of this Java code:
int position = object.getInt( Tag.OverlayBitPosition, 0 );
if ( position == 0 ) return;

// Remove the overlay data stored in the specified high bit.
int bit = 1 << position;
int[] pixels = object.getInts( Tag.PixelData );
int count = 0;
for ( int pix : pixels )
{
    int overlay = pix & bit;           // isolate the overlay bit
    pixels[ count++ ] = pix - overlay; // clear it from the pixel value
}
object.putInts( Tag.PixelData, VR.OW, pixels );
If you refer to the text in the blue area on top of the image, these contents are burned into the image itself.
The only solution to remove that is to apply a mask to this area of the image.
Be careful: doing this modifies the original DICOM image, and such modifications are not allowed in some scenarios.
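A minimal sketch of such a mask, assuming OpenCV and a known rectangular region for the burned-in text (the file name and coordinates here are hypothetical):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat slice = cv::imread("slice.png", CV_LOAD_IMAGE_GRAYSCALE);
    // Black out the (hypothetical) strip containing the burned-in annotations.
    cv::rectangle(slice, cv::Rect(0, 0, 120, slice.rows), cv::Scalar(0), CV_FILLED);
    cv::imwrite("slice_masked.png", slice);
    return 0;
}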

LodePNG: can't understand how to choose 24-bit RGB mode

I have a YUV 420 (144x176) file from which I read the first frame and convert its YUV components to an RGB array, int rgb[HEIGHT*WIDTH*3];, in which I store R1G1B1...RnGnBn. I also have an std::vector<unsigned char> image; image.resize(width * height * 4);. My question is:
When I use unsigned error = lodepng::encode(filename, image, width, height); it runs without errors and generates a PNG file, but the file doesn't even look like the original image. I think it expects RGBA while I only have RGB; how do I fix it?
P.S. I don't know if this information is enough, so please tell me if it isn't.
Okay, this question should be closed now.
All I've done is add 255 as the alpha value in every 4th position, something like this:
for (int i = 0, k = 0; i < HEIGHT * WIDTH * 3; ++i) {
    image[k++] = rgb[i];
    if (i % 3 == 2) image[k++] = 255; // append alpha after each completed RGB triple
}
P.S. Another important thing I had missed is to put pixels at their real positions when reading from the YUV file, as shown in the figure on Wikipedia. Note this if you run into problems with YUV.
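An alternative worth noting (assuming a reasonably recent lodepng): encode() takes an optional colour-type argument, so you can hand it the raw RGB buffer directly and skip the alpha-insertion pass entirely:

// 'rgbImage' holds width*height*3 bytes laid out as R1G1B1...RnGnBn.
unsigned error = lodepng::encode(filename, rgbImage, width, height, LCT_RGB);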

OpenCV image effects

I am using OpenCV in Objective-C. I want to convert an image to black and white. I have already done this, but the output black-and-white image is not clear; there is a black shade on the image.
Can anyone help?
- (IBAction)blackAndWhite:(id)sender {
    imageView.image = orignalImage;
    cv::Mat dst;
    cv::Mat src = [self cvMatFromUIImage:imageView.image];
    if ( !src.data ) {
        cout << "Could not read the image" << endl;
        return;
    }
    // Convert to grayscale
    cvtColor( src, src, CV_BGR2GRAY );
    // Apply histogram equalization
    equalizeHist( src, dst );
    imageView.image = [self UIImageFromCVMat:dst];
}
Thanks
Original image:
Black and white image:
Firstly, the output you have is not a black-and-white (binary) image; it is a grayscale image.
A grayscale image is a single-channel image with 256 intensity levels (0-255), whereas binary images contain only 0 (black) or 255 (white).
You can use thresholding (cvThreshold) to convert it from grayscale to binary. There are many binarization algorithms that can help you achieve what you require as well; local binarization methods are more adaptable and can help remove parts of the shaded regions.
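A minimal sketch of global thresholding with the C++ API, assuming 'gray' already holds the grayscale result (the name is illustrative):

cv::Mat binary;
cv::threshold(gray, binary, 128, 255, CV_THRESH_BINARY);                // fixed global threshold
cv::threshold(gray, binary, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU); // or let Otsu pick it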
Hope this helps.
Look into gamma correction. I assume the gamma value needs to be adjusted in this case to suit the contrast of the lines in the image. Since the image is blurry, I assume you'll also run into some trouble there. You may also want to increase the contrast while you're at it.
Links:
Understanding Gamma Correction
Wikipedia - Gamma Correction
OpenCV Gamma Correction (C++)
Changing contrast and brightness of an image - OpenCV
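A minimal sketch of gamma correction via a lookup table, in the same old-style OpenCV C++ used elsewhere on this page (the gamma value is something you'd tune by eye for this image):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>

// Build a 256-entry table for out = (in/255)^(1/gamma) * 255, apply it in one pass.
cv::Mat gammaCorrect(const cv::Mat &src, double gamma)
{
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(std::pow(i / 255.0, 1.0 / gamma) * 255.0);
    cv::Mat dst;
    cv::LUT(src, lut, dst);
    return dst;
}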
Well, I don't know why there are already 3 answers and none of them is correct.
Why do you apply histogram equalization after converting the image to grayscale? The output without it is the following:
Here you can read about histogram equalization.
As already pointed out, you are not really binarizing your image; you are converting it to grayscale. You can binarize using cv::threshold, but due to strong illumination artefacts in your sample image, you'd be better off determining a local threshold value using cv::adaptiveThreshold.
Here is a useful link that explains binarization under non-uniform lighting conditions.
A sample program, in C++:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cassert>
#include <cstdlib>

int main(int argc, char **argv)
{
    cv::Mat image = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    assert(!image.empty());

    cv::Mat binary;
    // Threshold each pixel against the mean of its 5x5 neighbourhood, minus 7.
    cv::adaptiveThreshold(image, binary, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 5, 7);

    cv::imshow("Orig", image);
    cv::imshow("Binary", binary);
    cv::waitKey(0);

    cv::imwrite("binary.png", binary);
    return EXIT_SUCCESS;
}
Result:

iOS 256 Colors (VGA) to RGB

I'd like to convert a VGA color (256 colors; 8-bit) to an RGB color on iOS.
Is it possible to compute this, or do I have to use color tables (using CGColorSpaceCreateIndexed)?
UIColor does not support 256 colors.
Thanks :)
Somewhere, the title you're porting should have set the palette. On the VGA, the 256 colours are mapped through a table that the programmer has previously set to convert them into 18 bit RGB colour (at a uniform 6 bits per channel). If you're running the original title through emulation then watch for writes to ports 0x3c6, 0x3c8 and 0x3c9 or calls to the BIOS via int 10h, with ax = 0x1010 (to set a single colour) or 0x1012 (to set a range). If you have the original code, obviously look for the source of the palette table.
In drawing terms, you can keep the palette yourself, for example as a C-style array of 256 CGColorRefs, or use CGColorSpaceCreateIndexed as you suggest (ignore Apple's slight documentation error: the colour table can contain up to 256 entries, not up to 255), probably with a bitmap context, to just pass your buffer off to CoreGraphics and forget about it.
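A rough sketch of the indexed colour-space route (the table layout is the standard one for an RGB base space, 3 bytes per entry; 'table' is a hypothetical buffer you'd fill from the recovered palette):

// 'table' points to 256 * 3 bytes of packed R,G,B palette entries.
CGColorSpaceRef base    = CGColorSpaceCreateDeviceRGB();
CGColorSpaceRef indexed = CGColorSpaceCreateIndexed(base, 255, table); // 255 is the last index, not the count
// ...create your bitmap context / CGImage with 'indexed' and the 8-bit buffer...
CGColorSpaceRelease(indexed);
CGColorSpaceRelease(base);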
I expect the remapping will be performed on the CPU, so if that gets a bit too costly then consider using GL ES 2.x and writing a suitable pixel shader: you'd upload your actual image as, say, a luminance (i.e. single-channel) texture, plus a 256x1 texture where the colour at each spot is a palette entry, then write a shader that reads from the first texture at the current texture coordinates and uses that value to index the second.
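For the CPU path, the remap itself is just a table lookup per pixel; a minimal sketch (names are illustrative, and remember the VGA DAC values are 6-bit, so scale them up to 8-bit when you build the table):

// Expand an 8-bit indexed frame into an RGBA buffer via a 256-entry palette.
// palette[i] holds an 8-bit-per-channel R,G,B triple (already scaled from 6-bit).
void remapFrame(const unsigned char *indexed, const unsigned char (*palette)[3],
                unsigned char *rgba, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        const unsigned char *p = palette[indexed[i]];
        rgba[4 * i + 0] = p[0];
        rgba[4 * i + 1] = p[1];
        rgba[4 * i + 2] = p[2];
        rgba[4 * i + 3] = 255; // opaque
    }
}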