Get the pixel color from GWT canvas

I am working with the GWT canvas, which is similar to the HTML5 canvas. My goal is to read a pixel's color from the canvas. I found one way to do this using CanvasPixelArray, which stores the image data; the image data in turn holds the per-pixel information. I am using the following code to get the pixel colors from the canvas:
CanvasPixelArray imageData = canvas.getRendererCanvas().getContext2d()
        .getImageData(0, 0, canvas.getWidth(), canvas.getHeight()).getData();
int length = imageData.getLength() / 4;
int index = 0, r, g, b, a;
for (int i = 0; i < length; i++) {
    index = 4 * i;
    r = imageData.get(index);   // red
    g = imageData.get(++index); // green
    b = imageData.get(++index); // blue
    a = imageData.get(++index); // alpha
    if (r == 255 && g == 255 && b == 255) { // pixel is white
        System.out.println(r + "," + g + "," + b + "," + a);
    }
}
The major problem is that this runs far too slowly and drags down performance. That is the main issue; otherwise the code above works fine.
So what is the best way, performance-wise, to get the color information from the canvas?
Any help is highly appreciated. Thank you.
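One pattern that usually helps here, independent of GWT specifics, is to keep the loop itself as lean as possible: hoist the length lookup, walk a single running index, and above all avoid printing inside the loop, since per-pixel I/O tends to dominate the cost. A minimal sketch of that idea, reusing the imageData array from the question:

// Sketch: single-pass scan, no per-pixel I/O.
int n = imageData.getLength();
int whiteCount = 0;
for (int index = 0; index < n; index += 4) {
    int r = imageData.get(index);     // red
    int g = imageData.get(index + 1); // green
    int b = imageData.get(index + 2); // blue
    // index + 3 is alpha; not needed for the white test
    if (r == 255 && g == 255 && b == 255) {
        whiteCount++; // count matches, report once after the loop
    }
}
System.out.println("white pixels: " + whiteCount);

If only part of the canvas matters, asking getImageData() for just that region shrinks the work proportionally, since the cost is linear in the number of pixels fetched.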

Related

How to calculate brightness from a list of unsigned 8-bit integers which represents an image in Dart?

I want to calculate the brightness of a Uint8List image. The images I use are picked from my phone (using the image_picker plugin in Flutter). I tried a for loop over every value of this list and did this:
int r = 0, b = 0, g = 0, count = 0;
for (int value in imageBytesList) {
    /// The red channel of this color in an 8 bit value.
    int red = (0x00ff0000 & value) >> 16;
    /// The blue channel of this color in an 8 bit value.
    int blue = (0x0000ff00 & value) >> 8;
    /// The green channel of this color in an 8 bit value.
    int green = (0x000000ff & value) >> 0;
    r += red;
    b += blue;
    g += green;
    count++;
}
double result = (r + b + g) / (count * 3);
double result = (r + b + g) / (count * 3);
I know that the result should represent a brightness level between 0 and 255, where 0 = totally black and 255 = totally bright, but what I get are really weird values like 0.0016887266175341332. What calculation mistakes am I making? (I know my method is gravely wrong, but I wasn't able to find another way.)
The Flutter Image widget does convert this Uint8List from memory into an image with the correct height and width using the Image.memory() constructor. What is the logic behind it?
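A detail worth checking in the loop above: each element of a Uint8List is a single byte in 0..255, so masks like 0x00ff0000 and 0x0000ff00 always come out zero; the list is not a list of packed 32-bit colors. Also, what image_picker hands back is typically encoded file data (JPEG/PNG), which is why Image.memory() works: it runs those bytes through an image decoder first. Assuming you have decoded the image to raw RGBA bytes (four per pixel), the averaging would look like this sketch (shown in Java; the arithmetic carries over to Dart directly):

// Sketch: average brightness over decoded RGBA pixel bytes.
// Assumes 'pixels' is raw decoder output, 4 bytes per pixel, RGBA order.
static double averageBrightness(byte[] pixels) {
    long sum = 0;
    int pixelCount = pixels.length / 4;
    for (int i = 0; i < pixels.length; i += 4) {
        int r = pixels[i] & 0xFF;     // mask to undo Java's signed bytes
        int g = pixels[i + 1] & 0xFF;
        int b = pixels[i + 2] & 0xFF; // pixels[i + 3] is alpha; skipped
        sum += r + g + b;
    }
    return sum / (pixelCount * 3.0); // 0 = totally black, 255 = totally bright
}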

3-channel depth image to 1 channel

I have recorded a depth video using a Kinect v2. When I extract images from it using MATLAB, each image has 3 channels, but the depth images I normally see have just 1 channel. Can anyone tell me how to turn this 3-channel image into a 1-channel one?
Here is the code for the depth part:
IplImage depth = new IplImage(512, 424, BitDepth.U16, 1);
CvVideoWriter DepthWriter;

DWidth = sensor.DepthFrameSource.FrameDescription.Width;
DHeight = sensor.DepthFrameSource.FrameDescription.Height;
WbDepth = new WriteableBitmap(DWidth, DHeight, 96, 96, PixelFormats.Gray16, null);
int depthshft = (int)SliderDepth.Value;

using (DepthFrame depthframe = frame.DepthFrameReference.AcquireFrame())
{
    ushort* depthdata = (ushort*)depth.ImageData;
    if (depthframe != null)
    {
        Depthdata = new ushort[DWidth * DHeight];
        ushort[] Depthloc = new ushort[DWidth * DHeight];
        depthframe.CopyFrameDataToArray(Depthdata);
        for (int i = 0; i < DWidth * DHeight; i++)
        {
            Depthloc[i] = 0x1000;
        }
        colorspacePoint = new ColorSpacePoint[DWidth * DHeight];
        depthspacePoint = new DepthSpacePoint[CWidth * CHeight];
        sensor.CoordinateMapper.MapDepthFrameToColorSpace(Depthloc, colorspacePoint);
        for (int y = 0; y < DHeight; y++)
        {
            for (int x = 0; x < DWidth; x++)
            {
                if (depthshft != 0)
                {
                    Depthdata[y * DWidth + x] = (ushort)(Depthdata[y * DWidth + x] << depthshft);
                }
            }
        }
        depth.CopyPixelData(Depthdata);
    }
}
WbDepth.WritePixels(new Int32Rect(0, 0, DWidth, DHeight), Depthdata, strideDep, 0);
ImageDepth.Source = WbDepth;

if (depth != null && DepthWriter.FileName != null)
    Cv.WriteFrame(DepthWriter, depth);
Cv.ReleaseVideoWriter(DepthWriter);
if (CheckBox_saveD.IsChecked == true)
    DepthWriter = new CvVideoWriter(string.Format("{1}\\Scene{0}_DepthRecord.avi", scene, TextBlock_saveloca.Text.ToString()), FourCC.Default, 30.0f, new CvSize(512, 424));
CheckBox_saveD.IsEnabled = false;
if (CheckBox_saveD.IsChecked == true)
    Cv.ReleaseVideoWriter(DepthWriter);
Thank you
Everyone so far is advising you to convert the (supposedly) color image to grayscale. I don't think you should do this.
The Kinect gives you a 1-channel image of depth values. If you have a color (3-channel) depth map, then something is wrong, and converting it to grayscale will lose depth information.
Instead, try to figure out why your image is loaded as a 3-channel image in the first place. What is the source? Is the conversion perhaps done by MATLAB when reading the image? If so, can you give it a flag telling it not to?
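If it turns out that the three channels are identical copies of the depth values (a common artifact when 1-channel data is saved through an RGB pipeline), nothing is lost by keeping just one plane. A minimal sketch of that idea, assuming interleaved 8-bit RGB bytes:

// Sketch: collapse a 3-channel image back to 1 channel, assuming the
// three channels are exact duplicates of the same depth values.
static byte[] takeOneChannel(byte[] rgb, int width, int height) {
    byte[] gray = new byte[width * height];
    for (int i = 0, p = 0; i < gray.length; i++, p += 3) {
        gray[i] = rgb[p]; // all channels are equal, so any one will do
    }
    return gray;
}

Check that the channels really are equal first; if they differ, dropping two of them is exactly the information loss described above.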

Dealing with the RGB values of an image

I am developing an application that needs to detect a particular color in an image. For example, if I want to detect red, then I should see only the red areas of the image. I have tried the following approach.
Pseudocode:
First I get the information for each pixel of the image in the form of RGB, using:
for (int i = 0; i < width; ++i)
{
    for (int j = 0; j < height; j++)
    {
        byteIndex = (bytesPerRow * j) + i * bytesPerPixel;
        CGFloat red   = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        if (red > 0.9 && red < 1)
        {
            rgba[byteIndex]     = red * 255;
            rgba[byteIndex + 1] = green * 0;
            rgba[byteIndex + 2] = blue * 0;
            count++;
        }
    }
}
Then I assign this information to another image with the blue and green components zeroed out, so I can see only the red areas of the image. It is working fine.
But the problem is with the for loop: I have to iterate it over the height and width of the image. For example, if the height of the image is 300 and the width is 400, I have to iterate the loop 300 * 400 = 120,000 times, which I don't think is a good approach. So is there any way to do this efficiently? Is there an open source library that achieves this?
You can begin by optimizing your code; there are a lot of unnecessary operations and float/integer conversions in there. Staying in integer space and walking the buffer once (4 bytes per pixel, assuming packed RGBA) looks like this:
for (NSUInteger i = 0; i < width * height * 4; i += 4) {
    NSUInteger red = rawData[i];
    if (red > 230) {        // about 0.9 * 255, as in the original test
        rawData[i + 1] = 0; // zero the green channel
        rawData[i + 2] = 0; // zero the blue channel
    }
}
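Note that visiting every pixel is unavoidable for a per-pixel filter, and 120,000 iterations is small work for a CPU; what matters is the constant cost per pixel. For reference, the same single-pass filter in Java, with the layout assumption made explicit:

// Sketch: zero green and blue wherever red is strong, in a packed
// RGBA byte buffer (4 bytes per pixel; alpha left untouched).
static void keepRedOnly(byte[] rgba) {
    for (int i = 0; i < rgba.length; i += 4) {
        int red = rgba[i] & 0xFF; // unsigned value of the red byte
        if (red > 230) {          // roughly the question's 0.9 threshold
            rgba[i + 1] = 0;      // green
            rgba[i + 2] = 0;      // blue
        }
    }
}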

Boundary detection of a paper sheet in OpenCV

I am new to OpenCV. I can already detect the edges of a paper sheet, but my result image is blurred after drawing lines on the edges. How can I draw lines on the edges of the paper sheet so that the image quality remains unaffected?
What am I missing?
My code is below.
Many thanks.
- (void)forOpenCV
{
    if (imageView.image != nil)
    {
        cv::Mat greyMat = [self cvMatFromUIImage:imageView.image];
        vector<vector<cv::Point> > squares;
        cv::Mat img = [self debugSquares: squares: greyMat];
        imageView.image = [self UIImageFromCVMat:img];
    }
}
- (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
{
    NSLog(@"%lu", squares.size());

    // blur will enhance edge detection
    Mat blurred(image);
    medianBlur(image, blurred, 9);

    Mat gray0(image.size(), CV_8U), gray;
    vector<vector<cv::Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&image, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);
                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), cv::Point(-1, -1));
            }
            else
            {
                gray = gray0 >= (l + 1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true) * 0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }

    NSLog(@"%lu", squares.size());

    for (size_t i = 0; i < squares.size(); i++)
    {
        cv::Rect rectangle = boundingRect(Mat(squares[i]));
        if (i == squares.size() - 1) // detecting the rectangle here
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            NSLog(@"%d", n);
            line(image, cv::Point(507, 418), cv::Point(507 + 1776, 418 + 1372), Scalar(255, 0, 0), 2, 8);
            polylines(image, &p, &n, 1, true, Scalar(255, 255, 0), 5, CV_AA);
            fx1 = rectangle.x;
            fy1 = rectangle.y;
            fx2 = rectangle.x + rectangle.width;
            fy2 = rectangle.y + rectangle.height;
            line(image, cv::Point(fx1, fy1), cv::Point(fx2, fy2), Scalar(0, 0, 255), 2, 8);
        }
    }

    return image;
}
Instead of
Mat blurred(image);
you need to do
Mat blurred = image.clone();
Because the first line does not copy the image data; it only creates a second header pointing at the same data. When you blur that image, you are also changing the original. What you need to do instead is create a real copy of the actual data and operate on that copy.
The OpenCV reference states:
by using a copy constructor or assignment operator, where on the right side it can be a matrix or an expression (see below). Again, as noted in the introduction, matrix assignment is an O(1) operation because it only copies the header and increases the reference counter. The Mat::clone() method can be used to get a full (a.k.a. deep) copy of the matrix when you need it.
The first problem is easily solved by doing the entire processing on a copy of the original image. That way, after you get all the points of the square, you can draw the lines on the original image and it will not be blurred.
The second problem, cropping, can be solved by defining a ROI (region of interest) in the original image and then copying it to a new Mat. I've demonstrated that in this answer:
// Set up a Region Of Interest
cv::Rect roi;
roi.x = 50;
roi.y = 10;
roi.width = 400;
roi.height = 450;

// Crop the original image to the area defined by the ROI
cv::Mat crop = original_image(roi);
cv::imwrite("cropped.png", crop);
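To make the header-versus-data distinction concrete, here is a small sketch using OpenCV's Java bindings (the semantics mirror the C++ API): plain assignment shares the pixel data, clone() deep-copies it, and submat() returns a shared view of a ROI.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;

public class MatCopyDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat original = new Mat(480, 640, CvType.CV_8UC1, new Scalar(0));
        Mat header = original;        // O(1): same pixel data, new reference
        Mat copy = original.clone();  // deep copy: owns its own pixel data

        header.setTo(new Scalar(255)); // this also changes 'original'...
        // ...while 'copy' keeps the old values. So: process a clone,
        // then draw the detected lines on the untouched original.

        Rect roi = new Rect(50, 10, 400, 450);
        Mat crop = original.submat(roi); // a view; crop.clone() detaches it
    }
}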

Looking for some help working with premultiplied alpha

I am trying to update a source image with the contents of multiple destination images. From what I can tell, premultiplied alpha is the way to go with this, but I think I am doing something wrong (function below). The image I am starting with is initialized with all ARGB values set to 0. When I run the function once, the resulting image looks great, but when I start compositing any others, all the pixels that have alpha information get really messed up. Does anyone know if I am doing something glaringly wrong, or if there is something extra I need to do to modify the color values?
void CompositeImage(unsigned char *src, unsigned char *dest, int srcW, int srcH) {
    int w = srcW;
    int h = srcH;
    int px0, px1, px2, px3;
    int inverseAlpha;
    int r, g, b, a;
    int x, y;
    for (y = 0; y < h; y++) {
        for (x = 0; x < w * 4; x += 4) {
            // pixel offsets
            px0 = (y * w * 4) + x;       // red
            px1 = (y * w * 4) + (x + 1); // green
            px2 = (y * w * 4) + (x + 2); // blue
            px3 = (y * w * 4) + (x + 3); // alpha
            inverseAlpha = 1 - src[px3];
            // create new values
            r = src[px0] + inverseAlpha * dest[px0];
            g = src[px1] + inverseAlpha * dest[px1];
            b = src[px2] + inverseAlpha * dest[px2];
            a = src[px3] + inverseAlpha * dest[px3];
            // update destination image
            dest[px0] = r;
            dest[px1] = g;
            dest[px2] = b;
            dest[px3] = a;
        }
    }
}
I'm not clear on what data you are working with. Do your source images already have the alpha values pre-multiplied as they are stored? If not, then pre-multiplied alpha does not apply here and you would need to do normal alpha blending.
Anyway, the big problem in your code is that you're not keeping track of the value ranges that you're dealing with.
inverseAlpha = 1 - src[px3];
This needs to be changed to:
inverseAlpha = 255 - src[px3];
You have all integral value types here, so the normal incoming 0..255 value range will result in an inverseAlpha range of -254..1, which will give you some truly wacky results.
After changing the 1 to 255, you also need to divide your results for each channel by 255 to scale them back down to the appropriate range. The alternative is to do the intermediate calculations using floats instead of integers and divide the initial channel values by 255.0 (instead of these other changes) to get values in the 0..1 range.
If your source data really does already have pre-multiplied alpha, then your result lines should look like this:
r = src[px0] + inverseAlpha * dest[px0] / 255;
If your source data does not have pre-multiplied alpha, then it should be:
r = src[px0] * src[px3] / 255 + inverseAlpha * dest[px0] / 255;
There's nothing special about blending the alpha channel. Use the same calculation as for r, g, and b.
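Putting those corrections together, here is a sketch of the whole compositing loop in Java (the question's unsigned char buffers map to byte arrays masked with 0xFF, and the question's indexing with alpha at offset 3 is kept). It assumes both buffers hold premultiplied data at 8 bits per channel; the divide by 255 rescales each product back into range, and alpha uses the same formula as the color channels:

// Sketch: premultiplied "source over destination", 4 bytes per pixel.
static void compositeImage(byte[] src, byte[] dest) {
    for (int i = 0; i < src.length; i += 4) {
        int srcAlpha = src[i + 3] & 0xFF;  // unsigned source alpha
        int invAlpha = 255 - srcAlpha;     // 255, not 1: values are 0..255
        for (int c = 0; c < 4; c++) {      // r, g, b, a blend identically
            int s = src[i + c] & 0xFF;
            int d = dest[i + c] & 0xFF;
            dest[i + c] = (byte) (s + invAlpha * d / 255);
        }
    }
}

Because the inputs are premultiplied (each color channel is at most its alpha), s + invAlpha * d / 255 never exceeds 255, so no extra clamping is needed.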