Trim alpha from CGImage - iPhone

I need to get a trimmed CGImage. I have an image which has empty space (alpha = 0) around some colors and need to trim it to get the size of only the visible colors.
Thanks.

There are three ways of doing this:
1) Use Photoshop (or the image editor of your choice) to edit the image - I assume you can't do this, it's too obvious an answer!
2) Ignore it - why not just ignore it and draw the image at its full size? It's transparent, so the user will never notice.
3) Write some code that goes through each pixel in the image until it gets to one that has an alpha value > 0. This should give you the number of rows to trim from the top. However, this will slow down your UI so you might want to do it on a background thread.
e.g.
// To get the number of transparent rows at the top of the image
// (assumes 32-bit pixels with the alpha channel in the low byte)
uint32_t *p = start_of_image;
while (p < end_of_image_data && (*p & 0x000000ff) == 0) ++p;
unsigned int number_of_transparent_rows_at_top = (p - start_of_image) / width_of_image;
When you know the amount of transparent space around the image, you can draw it using a UIImageView, set its contentMode to UIViewContentModeCenter, and let it do the trimming for you :)
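If you need a starting point for reading the pixel data in the first place, here is a minimal sketch of the idea above (the helper name, the RGBA buffer layout and the premultiplied-alpha choice are my assumptions, not part of the original answer): render the CGImage into a bitmap context you control, then scan rows until one contains a non-zero alpha value.

#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
#include <stdlib.h>

// Sketch: count the fully transparent rows at the top of a CGImage
static size_t transparentTopRows(CGImageRef image) {
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // Draw into a known 8-bit RGBA buffer so the byte layout is predictable
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             cs, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

    size_t row = 0;
    for (; row < height; row++) {
        const uint8_t *r = pixels + row * width * 4;
        size_t x = 0;
        while (x < width && r[x * 4 + 3] == 0) x++; // alpha is the 4th byte of each RGBA pixel
        if (x < width) break; // found a visible pixel in this row
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    free(pixels);
    return row; // number of fully transparent rows at the top
}

The same scan, run from the bottom and along the columns, gives you the other three margins.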

Related

Wrong background subtraction

I'm trying to subtract the background of an image with two images.
Image A is the background and image B is an image with things over the background.
I'm normalizing the images but I don't get the expected result.
Here's the code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = ((a - min(a(:)))./(max(a(:))-min(a(:))));
resB = ((b - min(b(:)))./(max(b(:))-min(b(:))));
resAbs = abs(resB-resA);
imshow(resAbs);
The resulting image is a completely dark image. Thanks to the answer of the user saeed masoomi, I realized that was because of the data type, so now, I have the following code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = imsubtract(resB,resA);
imshow(resAbs,[]);
The resulting image is not well filtered and there are parts of image B that don't appear but they should.
If I try doing this without normalizing, I still have the same problem.
The only difference between image A and image B is the arms, which appear only in image B, so they should appear without being cut off.
Can you see something wrong? Maybe I should filter with a threshold?
Do not normalize the two images. Background subtraction is typically done with identical camera settings, so the two images are directly comparable. If the background image doesn't have a bright object in it, normalizing like you do would brighten it w.r.t. the second image. The intensities are no longer comparable, and you'd see differences where there were none.
If you recorded the background image with different camera settings (different exposure time, illumination, etc) then background subtraction is a lot more complicated than you think. You'd have to apply an optimization scheme to make the two images comparable, such that their difference is sparse. You'd have to look through the literature for that, it's not at all trivial.
Please pay attention to your data type: images in MATLAB are stored as unsigned 8-bit integers (uint8, values 0 to 255), so there are no fractional values like 0.1 or 0.2; a result such as 1.2 is rounded to an integer.
The computation goes wrong with uint8 data, as below:
max=uint8(255); %uint8
min=uint8(20); %uint8
data=uint8(40); %uint8
normalized=(data-min)/(max-min) %uint8
output will be
normalized =
uint8
0
Oops, you might expect this output to be 0.0851, but it is not, because the data type is uint8 and the output becomes 0. So I guess all of your data is zero (the result image is dark). To prevent this mistake MATLAB has a handy function named im2double, which converts uint8 to double and rescales all the data to between 0 and 1.
I2 = im2double(I) converts the intensity image I to double precision, rescaling the data if necessary. I can be a grayscale intensity image, a truecolor image, or a binary image.
so we can rewrite your code like below
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = abs(imsubtract(resB, resA)); %edited
imshow(resAbs,[])
Edit: if the output image is still dark, check whether the two images actually contain different pixels with the code below:
if isempty(nonzeros(resAbs))
    disp('The two images are identical -> something is wrong')
else
    disp('The two images differ -> normal')
end

Not getting appropriate image after background subtraction

I take two frames from my video. One of them is the background and the other is the frame to which I applied background subtraction. The third image is the result after background subtraction. Here I am only getting the shirt of the person rather than the whole body.
Code for background subtraction
v = VideoReader('test.mp4');
n = get(v,'NumberOfFrames');
back = read(v,30);
y = read(v,150);
imshow([y;back;y-back]);
White probably has a higher value (in each channel, maybe; I don't know what format your data is in), so you get negative values, which I guess are then clipped to 0 (black). See how the shirt comes out green where you subtract the red (the board in the background) from it.
You have to mask out the background by checking what has changed and then remove everything that hasn't changed.
maybe something like
diff = y - back;
mask = diff ~= 0;   % set elements that changed to 1
noback = mask .* y; % element-wise multiplication keeps only the changed pixels
a little example I wrote:
back = rand(4)
y = back
y(5) = 0.6 %put something in front of the background
y(7) = 0.7 %put something in front of the background
mask = zeros(4)
mask(find(y-back)) = 1 %set values that are different in y to 1
noback = mask.*y %elementwise multiplication to mask out the background
You may have to use something other than find for the mask, because the image will not be 100% the same, but this should show the general approach.

Copying a portion of an IplImage into another IplImage (that is of the same size as the source)

I have a set of mask images that I need to use every time I recognise a previously-known scene on my camera. All the mask images are in IplImage format. There will be instances where, for example, the camera has panned to a slightly different but nearby location. This means that if I do a template matching somewhere in the middle of the current scene, I will be able to recognise the scene with some amount of shift of the template in this scene. All I need to do is use those shifts to adjust the mask image ROIs so that they can be overlaid appropriately based on the template matching. I know that there are functions such as:
cvSetImageROI(Iplimage* img, CvRect roi)
cvResetImageROI(IplImage* img);
which I can use to crop/uncrop my image. However, it didn't work for me quite the way I expected. I would really appreciate it if someone could suggest an alternative, point out what I am doing wrong, or even suggest something I haven't thought of!
I must also point out that I need to keep the image size the same at all times. The only thing that will be different is the actual area of interest in the image. I can probably use zero/one padding to cover the unused areas.
I believe a solution without making too many copies of the original image would be:
// Make a new IplImage of the same size and zero it, so the padding outside the ROI stays blank
IplImage* img_src_cpy = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(img_src_cpy);
// Copy the ROI out of the original image without changing the original's ROI
for (int rows = roi.y; rows < roi.y + roi.height; rows++) {
    for (int cols = roi.x; cols < roi.x + roi.width; cols++) {
        img_src_cpy->imageData[(rows - roi.y) * img_src_cpy->widthStep + (cols - roi.x)] =
            img_src->imageData[rows * img_src->widthStep + cols];
    }
}
// Now copy everything back to the original image OR simply return the new image if calling from a function
cvCopy(img_src_cpy, img_src); // OR return img_src_cpy;
I tried the code out and it is also fast enough for me (it executes in about 1 ms for a 332 x 332 greyscale image).
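For comparison, the ROI functions from the question can usually do the same job without the manual loop; this is just a minimal sketch (the dx/dy shift from the template matching and the variable names are my assumptions): set the ROI on the source, set an equally-sized ROI on a zeroed destination of the same overall size, copy, then reset both ROIs.

// Sketch: copy a shifted ROI into a same-sized, zero-padded image using ROIs
IplImage* dst = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(dst); // unused areas stay zero

cvSetImageROI(img_src, roi); // region to read from the source
// dx/dy are the shifts found by template matching (clamping to the image bounds is omitted for brevity)
CvRect shifted = cvRect(roi.x + dx, roi.y + dy, roi.width, roi.height);
cvSetImageROI(dst, shifted); // region to write in the destination

cvCopy(img_src, dst, NULL); // copies ROI to ROI; both ROIs must have the same size

cvResetImageROI(img_src);
cvResetImageROI(dst);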

Method For Checking Image Pixel Intensities

I have an OCR based iPhone app that takes in grayscale images and thresholds them to black and white to find the text (using opencv). This works fine for images with black text on a white background. I am having an issue with automatically switching to an inverse threshold when the image is white text on a black background. Is there a widely used algorithm for checking the image to determine if it is light text on a dark background or vice versa? Can anyone recommend a clean working method? Keep in mind, I am only working with the grayscale image from the iPhone camera.
Thanks a lot.
Since I am dealing with a grayscale IplImage at this point, I could not count black or white pixels but had to count the number of pixels above a given "brightness" threshold. I just used the border pixels as this is less expensive and still gives me enough information to make a sound decision.
IplImage *image;
int sum = 0; // Number of light pixels
int threshold = 135; // Light/Dark intensity threshold
/* Count the number of light pixels at the border of the image. The data must be cast to
   unsigned char to get the 0-255 range, and rows are indexed with widthStep in case the
   rows are padded. */
// Check every other pixel of the top and bottom rows
for (int i = 0; i < image->width; i += 2) {
    if ((unsigned char)image->imageData[i] >= threshold) { // Check top
        sum++;
    }
    if ((unsigned char)image->imageData[(image->height - 1) * image->widthStep + i] >= threshold) { // Check bottom
        sum++;
    }
}
// Check every other pixel of the left and right sides
for (int i = 0; i < image->height; i += 2) {
    if ((unsigned char)image->imageData[i * image->widthStep] >= threshold) { // Check left
        sum++;
    }
    if ((unsigned char)image->imageData[i * image->widthStep + image->width - 1] >= threshold) { // Check right
        sum++;
    }
}
// If more than half of the border pixels are light, use an inverse threshold to find the dark characters
if (sum > (image->width / 2 + image->height / 2)) {
    // Use inverse binary threshold because the background is light
}
else {
    // Use standard binary threshold because the background is dark
}
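The actual thresholding step could then look something like this minimal sketch using OpenCV's C API (the destination image binary is my assumption; image, sum and threshold are the variables from the snippet above):

IplImage *binary = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);

if (sum > (image->width / 2 + image->height / 2)) {
    // Light background: CV_THRESH_BINARY_INV turns the dark text white
    cvThreshold(image, binary, threshold, 255, CV_THRESH_BINARY_INV);
}
else {
    // Dark background: CV_THRESH_BINARY keeps the light text white
    cvThreshold(image, binary, threshold, 255, CV_THRESH_BINARY);
}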
I would go over every pixel and check if it's bright or dark.
If the count of dark pixels is bigger than the bright ones, you have to invert the picture.
Look here for how to determine the brightness:
Detect black pixel in image iOS
And this is how to draw a UIImage inverted:
[myImage drawInRect:theImageRect blendMode:kCGBlendModeDifference alpha:1.0];

Detecting bright/dark points on iPhone screen

I would like to detect and mark the brightest and the darkest spot on an image.
For example I am creating an AVCaptureSession and showing the video frames on screen using AVCaptureVideoPreviewLayer. Now on this camera output view I would like to be able to mark the current darkest and lightest points.
Would I have to read image pixel data? If so, how can I do that?
In any case, you must read pixels to detect this. But if you want to make it fast, don't read EVERY pixel: read only 1 in 100:
for (int x = 0; x < width - 10; x += 10) {
    for (int y = 0; y < height - 10; y += 10) {
        // Detect bright/dark points here
    }
}
Then you may read the pixels around the ones you find to make the results more accurate.
Here is the way to get pixel data: stackoverflow.com/questions/448125/… At the brightest point, red+green+blue must be maximum (255+255+255 = 765 = 100% white). At the darkest point, red+green+blue must be minimum (0 = 100% black).
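Putting that together, a coarse scan over raw RGBA data might look like this minimal sketch (the pixels buffer, its RGBA byte layout and the step of 10 are my assumptions; the brightness measure is the red+green+blue sum described above):

// Sketch: find the lightest and darkest sample points in an RGBA pixel buffer
// pixels holds width * height * 4 bytes: one byte each for R, G, B, A
int brightest = -1, darkest = 256 * 3; // sums range from 0 (black) to 765 (white)
int brightX = 0, brightY = 0, darkX = 0, darkY = 0;

for (int y = 0; y < height; y += 10) {       // read only one pixel in ten per axis
    for (int x = 0; x < width; x += 10) {
        const unsigned char *px = pixels + (y * width + x) * 4;
        int sum = px[0] + px[1] + px[2];     // red + green + blue
        if (sum > brightest) { brightest = sum; brightX = x; brightY = y; }
        if (sum < darkest)   { darkest = sum;   darkX = x;   darkY = y; }
    }
}
// (brightX, brightY) is the lightest sample and (darkX, darkY) the darkest;
// refine by scanning the neighbouring pixels around those points if needed.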