Method For Checking Image Pixel Intensities - iPhone

I have an OCR based iPhone app that takes in grayscale images and thresholds them to black and white to find the text (using opencv). This works fine for images with black text on a white background. I am having an issue with automatically switching to an inverse threshold when the image is white text on a black background. Is there a widely used algorithm for checking the image to determine if it is light text on a dark background or vice versa? Can anyone recommend a clean working method? Keep in mind, I am only working with the grayscale image from the iPhone camera.
Thanks a lot.

Since I am dealing with a grayscale IplImage at this point, I cannot count black or white pixels directly; instead I count the number of pixels above a given "brightness" threshold. I only sample the border pixels, as this is less expensive and still gives me enough information to make a sound decision.
IplImage *image;
int sum = 0;         // Number of light pixels
int threshold = 135; // Light/dark intensity threshold

/* Count the number of light pixels on the border of the image. Must cast to
   unsigned char to make the range 0-255. Note: IplImage rows are widthStep
   bytes apart, which can be larger than width due to padding. */

// Check every other pixel of the top and bottom rows
for (int i = 0; i < image->width; i += 2) {
    if ((unsigned char)image->imageData[i] >= threshold) { // Check top
        sum++;
    }
    if ((unsigned char)image->imageData[(image->height - 1) * image->widthStep + i] >= threshold) { // Check bottom
        sum++;
    }
}
// Check every other pixel of the left and right sides
for (int i = 0; i < image->height; i += 2) {
    if ((unsigned char)image->imageData[i * image->widthStep] >= threshold) { // Check left
        sum++;
    }
    if ((unsigned char)image->imageData[i * image->widthStep + image->width - 1] >= threshold) { // Check right
        sum++;
    }
}
// If more than half of the border samples are light, use the inverse
// threshold to find the dark characters
if (sum > (image->width / 2 + image->height / 2)) {
    // Use inverse binary threshold because the background is light
} else {
    // Use standard binary threshold because the background is dark
}
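For completeness, the thresholding step itself might look like this with the same OpenCV C API (a minimal sketch; binary is an assumed pre-allocated destination image):
IplImage *binary = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
// Pick the threshold type based on the border scan above
int type = (sum > (image->width / 2 + image->height / 2))
         ? CV_THRESH_BINARY_INV  // light background
         : CV_THRESH_BINARY;     // dark background
cvThreshold(image, binary, threshold, 255, type);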

I would go over every pixel and check whether it is bright or dark.
If the count of dark pixels is greater than the count of bright ones, you have to invert the picture.
Look here for how to determine the brightness:
Detect black pixel in image iOS
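As a rough C++ sketch of that counting step (the packed RGBA8888 buffer layout and the 128 cutoff are assumptions; the luma weights are the usual Rec. 601 ones from the linked answer):
// Decide whether to invert: count dark vs. bright pixels of a packed
// RGBA8888 buffer (data, width and height are assumed inputs).
bool shouldInvert(const unsigned char *data, int width, int height)
{
    int dark = 0, bright = 0;
    for (int i = 0; i < width * height * 4; i += 4) {
        double brightness = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
        if (brightness < 128) dark++; else bright++;
    }
    return dark > bright;
}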
And this is how to draw a UIImage inverted:
[myImage drawInRect:theImageRect blendMode:kCGBlendModeDifference alpha:1.0];

Related

Ellipsoid fitting for 3D data on matlab

I am working on a 3D volume of CT lung images. In order to detect nodules, I need to fit an ellipsoid model to each suspected nodule. How can I write code for that?
A nodule is an object suspected of being a tumor. My algorithm needs to check every object and approximate it with an ellipsoid; from the ellipsoid parameters we calculate 8 features to build a classifier, which decides whether it is a nodule or not through training and testing data. That is why I need to fit such an ellipsoid.
Here is one slice of the volume CT lung image.
Here is another slice of the same volume, but this one contains a nodule (the yellow circle marks it), so I need my code to check every shape and determine whether it is a nodule or not.
As we do not have a 3D dataset at our disposal, I will start with 2D.
First we need to select the lungs, so we do not count any objects other than those inside them. As the input is grayscale, we first need to binarize it somehow. I use my own picture class for DIP, and this relies heavily on my growthfill, so I strongly recommend first reading this:
Fracture detection in hand using image proccessing
Where you will find all the explanations you need. Now for your task I would:
turn RGBA to grayscale <0,765>
I just compute the intensity i = R + G + B; as the channels of a 24-bit image are 8 bits each, the result is up to 3*255 = 765. The input image was compressed by JPEG, so there are color distortions and noise in the image; do not forget about that.
crop out the white border
Just cast rays (scan lines) from the middle of each outer border line toward the center and stop when a non-white-ish pixel is hit. I threshold with 700 instead of 765 to compensate for the noise in the image. Now you have the bounding box of the usable image, so crop out the rest.
compute histogram
To compensate for the distortions in the image, smooth the histogram enough to remove all unwanted noise and gaps. Then find the local maximum from the left and from the right (red). These will be used for the binarization threshold (the middle between them, green). This is my final result:
binarise image
Just threshold the image against the green intensity from the histogram. So if i0, i1 are the local-maximum intensities from the left and right of the histogram, then threshold against (i0+i1)/2. This is the result:
remove everything except lungs
That is easy: just fill the black from the outside with some predefined background color, then in the same way fill all the white stuff neighboring the background color. That will remove the human surface, skeleton, organs and the CT machine, leaving just the lungs. Now recolor the remaining black with some predefined lungs color.
There should be no black left, and the remaining white areas are the possible nodules.
process all remaining white pixels
Just loop through the image, and on the first white pixel hit, flood fill it with a predefined nodule color or a distinct object index for later use. I also distinguish the surface (aqua) and the inside (magenta). This is the result:
Now you can compute your features per nodule. If you write a custom flood fill for this, then you can obtain from it directly things like:
Volume in [pixels]
Surface area in [pixels]
Bounding box
Position (relative to Lungs)
Center of gravity or centroid
All of which can be used as your feature variables, and also to help with the fitting.
fit the found surface points
There are many methods for this, but I would simplify it as much as possible to improve performance and accuracy. For example, you can use the centroid as your ellipsoid center. Then find the minimum- and maximum-distance points from it and use them as semi-axes (+/- some orthogonality corrections), and then just search around these initial values; see the sketch after this list. For more info see:
How approximation search works
You will find examples of use in the linked Q/As there.
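A minimal C++ sketch of that initialization (assuming the surface points were collected during the flood fill; the axis directions and the approximation search itself are left out):
#include <vector>
#include <cmath>

struct P3 { double x, y, z; };

// Seed an ellipsoid estimate: centroid as center, the nearest and
// farthest surface points as initial semi-axis lengths.
void initEllipsoid(const std::vector<P3> &pts, P3 &center, double &rMin, double &rMax)
{
    center = { 0, 0, 0 };                       // assumes pts is non-empty
    for (const P3 &q : pts) { center.x += q.x; center.y += q.y; center.z += q.z; }
    center.x /= pts.size(); center.y /= pts.size(); center.z /= pts.size();
    rMin = 1e300; rMax = 0;
    for (const P3 &q : pts) {
        double d = std::sqrt((q.x - center.x) * (q.x - center.x)
                           + (q.y - center.y) * (q.y - center.y)
                           + (q.z - center.z) * (q.z - center.z));
        if (d < rMin) rMin = d;
        if (d > rMax) rMax = d;
    }
    // rMin/rMax seed the semi-axes; refine all parameters around these
    // initial values with the approximation search linked above.
}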
[Notes]
All of the bullets are applicable in 3D. While writing a custom flood fill, be careful with the recursion depth: too much per-call state will quickly overflow your stack and also slow things down considerably. Here is a small example of how I deal with that, with a few custom return parameters, plus the growthfill I used:
//---------------------------------------------------------------------------
void growfill(DWORD c0,DWORD c1,DWORD c2);    // grow/flood fill c0 neighbouring c1 with c2
void floodfill(int x,int y,DWORD c);          // flood fill from (x,y) with color c
DWORD _floodfill_c0,_floodfill_c1;            // recursion filled color and fill color
int _floodfill_x0,_floodfill_x1,_floodfill_n; // recursion bounding box and filled pixel count
int _floodfill_y0,_floodfill_y1;
void _floodfill(int x,int y);                 // recursion for floodfill
//---------------------------------------------------------------------------
void picture::growfill(DWORD c0,DWORD c1,DWORD c2)
{
    int x,y,e;
    // repeat passes over the image until no pixel changes anymore
    for (e=1;e;)
        for (e=0,y=1;y<ys-1;y++)
            for (x=1;x<xs-1;x++)
                if (p[y][x].dd==c0)
                    if ((p[y-1][x].dd==c1)
                      ||(p[y+1][x].dd==c1)
                      ||(p[y][x-1].dd==c1)
                      ||(p[y][x+1].dd==c1)) { e=1; p[y][x].dd=c2; }
}
//---------------------------------------------------------------------------
void picture::_floodfill(int x,int y)
{
    if (p[y][x].dd!=_floodfill_c0) return; // stop on any other color
    p[y][x].dd=_floodfill_c1;              // recolor pixel
    _floodfill_n++;                        // update filled pixel count
    if (_floodfill_x0>x) _floodfill_x0=x;  // update bounding box
    if (_floodfill_y0>y) _floodfill_y0=y;
    if (_floodfill_x1<x) _floodfill_x1=x;
    if (_floodfill_y1<y) _floodfill_y1=y;
    if (x>   0) _floodfill(x-1,y);
    if (x<xs-1) _floodfill(x+1,y);
    if (y>   0) _floodfill(x,y-1);
    if (y<ys-1) _floodfill(x,y+1);
}
void picture::floodfill(int x,int y,DWORD c)
{
    if ((x<0)||(x>=xs)||(y<0)||(y>=ys)) return;
    _floodfill_c0=p[y][x].dd; // color to be filled
    _floodfill_c1=c;          // fill color
    _floodfill_n=0;
    _floodfill_x0=x;
    _floodfill_y0=y;
    _floodfill_x1=x;
    _floodfill_y1=y;
    _floodfill(x,y);
}
//---------------------------------------------------------------------------
And here is the C++ code I made the example images with:
picture pic0,pic1;
// pic0 - source img
// pic1 - output img
int x0,y0,x1,y1,x,y,i,hist[766];
color c;
// copy source image to output
pic1=pic0;
pic1.pixel_format(_pf_u);           // grayscale <0,765>
// 0xAARRGGBB
const DWORD col_backg=0x00202020;   // gray
const DWORD col_lungs=0x00000040;   // blue
const DWORD col_out  =0x0000FFFF;   // aqua nodule surface
const DWORD col_in   =0x00800080;   // magenta nodule inside
const DWORD col_test =0x00008040;   // green-ish distinct color just for safe recoloring
// [remove white background]
// find white background area (by casting rays from the middle of each border
// toward the center of the image until a non-white pixel is hit)
for (x0=0        ,y=pic1.ys>>1;x0<pic1.xs;x0++) if (pic1.p[y][x0].dd<700) break;
for (x1=pic1.xs-1,y=pic1.ys>>1;x1>      0;x1--) if (pic1.p[y][x1].dd<700) break;
for (y0=0        ,x=pic1.xs>>1;y0<pic1.ys;y0++) if (pic1.p[y0][x].dd<700) break;
for (y1=pic1.ys-1,x=pic1.xs>>1;y1>      0;y1--) if (pic1.p[y1][x].dd<700) break;
// crop it away
pic1.bmp->Canvas->Draw(-x0,-y0,pic1.bmp);
pic1.resize(x1-x0+1,y1-y0+1);
// [prepare data]
// raw histogram
for (i=0;i<766;i++) hist[i]=0;
for (y=0;y<pic1.ys;y++)
    for (x=0;x<pic1.xs;x++)
        hist[pic1.p[y][x].dd]++;
// smooth the histogram a lot (remove noise and fill gaps due to compression
// and the scanning nature of the image)
for (x=0;x<100;x++)
{
    for (i=0;i<765;i++) hist[i]=(hist[i]+hist[i+1])>>1;
    for (i=765;i>0;i--) hist[i]=(hist[i]+hist[i-1])>>1;
}
// find peaks in the histogram (for thresholding)
for (x=0,x0=x,y0=hist[x];x<766;x++)
{
    y=hist[x];
    if (y0<y) { x0=x; y0=y; }
    if (y<y0>>1) break;
}
for (x=765,x1=x,y1=hist[x];x>=0;x--)
{
    y=hist[x];
    if (y1<y) { x1=x; y1=y; }
    if (y<y1>>1) break;
}
// binarize image (thresholding)
i=(x0+x1)>>1;               // threshold with the middle intensity between the peaks
pic1.pf=_pf_rgba;           // result will be RGBA
for (y=0;y<pic1.ys;y++)
    for (x=0;x<pic1.xs;x++)
        if (pic1.p[y][x].dd>=i) pic1.p[y][x].dd=0x00FFFFFF;
        else                    pic1.p[y][x].dd=0x00000000;
pic1.save("out0.png");
// recolor outer background
// render a rectangle along the outer border so the filling starts from there
for (x=0;x<pic1.xs;x++) pic1.p[        0][x].dd=col_backg;
for (x=0;x<pic1.xs;x++) pic1.p[pic1.ys-1][x].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][        0].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][pic1.xs-1].dd=col_backg;
pic1.growfill(0x00000000,col_backg,col_backg);  // fill its black content outside in
// recolor white human surface and CT machine
pic1.growfill(0x00FFFFFF,col_backg,col_backg);
// recolor lungs' dark matter
pic1.growfill(0x00000000,col_backg,col_lungs);  // outer border
pic1.growfill(0x00000000,col_lungs,col_lungs);  // fill its black content outside in
pic1.save("out1.png");
// find/recolor individual nodules
for (y=0;y<pic1.ys;y++)
    for (x=0;x<pic1.xs;x++)
        if (pic1.p[y][x].dd==0x00FFFFFF)
        {
            pic1.floodfill(x,y,col_test);              // fill nodule with a test color
            pic1.growfill(col_lungs,col_test,col_out); // recolor lungs pixels touching it as surface
            pic1.floodfill(x,y,col_in);                // recolor its inside
        }
pic1.save("out2.png");
// render histogram
for (x=0;(x<766)&&(x>>1<pic1.xs);x++)
    for (y=0;(y<=hist[x]>>6)&&(y<pic1.ys);y++)
        pic1.p[pic1.ys-1-y][x>>1].dd=0x000040FF;
for (x=x0        ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=x1        ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=(x0+x1)>>1,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x0000FF00;
You may be interested in a recent plugin that we developed for the open-source software Icy: http://icy.bioimageanalysis.org/
The plugin is named FitEllipsoid; it allows rapidly fitting an ellipsoid to the image contents by first clicking on a few points in the orthogonal views.
A tutorial is available here: https://www.youtube.com/watch?v=MjotgTZi6RQ
Also note that we provide Matlab and Java source code on GitHub (but I cannot link it here, since this is my first appearance on the website).

Extract black objects from color background

It is easy for human eyes to tell black from other colors. But how about computers?
I printed some color blocks on normal A4 paper. Since a color image is composed from three kinds of ink, cyan, magenta and yellow, I set the color of each block to C=20%, C=30%, C=40%, C=50%, with the other two colors at 0. That is the first column of my source image. So far, no black (the K of CMYK) ink should have been printed. After that, I set the color of each dot to K=100%, with the other colors at 0, to print black dots.
You may find my image weird and awful. In fact, the image is magnified 30 times, and you can clearly see how the ink cheats our eyes. The color strips hamper my recognition of the black dots (each dot is printed as just one pixel at 800 dpi). Without the color background, I used to blur the image and run a Canny edge detector to extract the edges. However, with the color background added, simply converting to grayscale and running the edge detector cannot get good results because of the strips. How would my eyes solve such a problem?
I decided to check the brightness of the source image. I referred to this article and formula:
brightness = sqrt( 0.299 R * R + 0.587 G * G + 0.114 B * B )
This brightness is closer to human perception, and it works very well on the yellow background, because the brightness of yellow is the highest compared with cyan and magenta. But how can the cyan and magenta strips be made as bright as possible? The expected result is that all the strips disappear.
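In code the formula is straightforward (a small sketch; r, g, b are assumed 0-255 channel values):
#include <cmath>

// Perceived brightness per the formula above (channels in 0-255).
double perceivedBrightness(unsigned char r, unsigned char g, unsigned char b)
{
    return std::sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
}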
More complicated image:
C=40%, M=40%
C=40%, Y=40%
Y=40%, M=40%
FFT result of C=40%, Y=40% brightness image
Can anyone give me some hints to remove the color strips?
@natan I tried the FFT method you suggested, but I had no luck getting peaks on both the x and y axes. In order to plot the frequency as you did, I resized my image to a square.
I would convert the image to the HSV colour space and then use the Value channel. This basically separates colour and brightness information.
This is the 50% cyan image
Then you can just do a simple threshold to isolate the dots.
I just did this very quickly and I'm sure you could get better results. Maybe find contours in the image and then remove any contours with a small area, to filter any remaining noise.
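A minimal OpenCV C++ sketch of this approach (the function name and the threshold value 90 are assumptions to tune per image):
#include <opencv2/opencv.hpp>
#include <vector>

// Split off the Value channel of HSV and threshold it so dark dots
// become white in the mask.
cv::Mat isolateDarkDots(const cv::Mat &bgr)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);                  // ch[2] is the Value channel
    cv::threshold(ch[2], mask, 90, 255, cv::THRESH_BINARY_INV);
    return mask;
}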
After inspecting the images, I decided that a robust threshold would be simpler than anything else. For example, looking at the C=40%, M=40% photo, I first inverted the intensities, so the black (the signal) becomes white, using just
im=(abs(255-im));
we can inspect its RGB histograms using this :
hist(reshape(single(im),[],3),min(single(im(:))):max(single(im(:))));
colormap([1 0 0; 0 1 0; 0 0 1]);
We can see that there is a large contribution at some middle intensity, whereas the "signal", which is now white, is mostly separated at higher values. I then applied a simple threshold as follows:
thr = @(d) (max([min(max(d,[],1)) min(max(d,[],2))]));
for n=1:size(im,3)
    imt(:,:,n)=im(:,:,n).*uint8(im(:,:,n)>1.1*thr(im(:,:,n)));
end
imt=rgb2gray(imt);
and got rid of objects smaller than some typical area size
min_dot_area=20;
bw=bwareaopen(imt>0,min_dot_area);
imagesc(bw);
colormap(flipud(bone));
Here's the result together with the original image:
The origin of this threshold is code I wrote that assumed sparse signals in the form of 2-D peaks or blobs on a noisy background. By sparse I mean that there is no pile-up of peaks. In that case, when projecting max(image) onto the x or y axis (by max(im,[],1) or max(im,[],2)), you get a good measure of the background, because you take the minimal intensity of the max(im) vector.
If you want to look at this differently, you can look at the histogram of the image intensities. The background is supposed to be a roughly normal distribution around some intensity, and the signal should be higher than that intensity but with a much lower number of occurrences. By finding max(im) along one of the axes (x or y) you discover the maximal noise level.
You'll see that the threshold picks the point in the histogram where there is still some noise above it, but ALL of the signal is above it too. That's why I adjusted it to 1.1*thr. Lastly, there are many fancier ways to obtain a robust threshold; this is a quick and dirty way that in my view is good enough...
Thanks to everyone for posting answers! After some searching and experimenting, I also came up with an adaptive method to extract these black dots from the color background. It seems that considering only the brightness cannot solve the problem perfectly; natan's method, which calculates and analyzes the RGB histogram, is more robust. Unfortunately, I still cannot obtain a robust threshold to extract the black dots in other color samples, because things get more and more unpredictable as we add deeper colors (e.g. cyan > 60%) or mix two colors together (e.g. cyan = 50%, magenta = 50%).
One day I googled "extract color", and TinEye's color extraction and Color Thief inspired me. Both of them are very cool applications, and the image processed by the former website is exactly what I want. So I decided to implement something similar on my own. The algorithm I used here is k-means clustering. Some other related keywords to search for are color palette, color quantization, and dominant color.
First I apply a Gaussian filter to smooth the image.
GaussianBlur(img, img, Size(5, 5), 0, 0);
OpenCV has a kmeans function, and it saves me a lot of coding time. I modified this code.
// Input data should be float32
Mat samples(img.rows * img.cols, 3, CV_32F);
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        for (int z = 0; z < 3; z++) {
            samples.at<float>(i + j * img.rows, z) = img.at<Vec3b>(i, j)[z];
        }
    }
}
// Select the number of clusters
int clusterCount = 4;
Mat labels;
int attempts = 1;
Mat centers;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10, 0.1), attempts, KMEANS_PP_CENTERS, centers);
// Draw clustered result
Mat cluster(img.size(), img.type());
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        int cluster_idx = labels.at<int>(i + j * img.rows, 0);
        cluster.at<Vec3b>(i, j)[0] = centers.at<float>(cluster_idx, 0);
        cluster.at<Vec3b>(i, j)[1] = centers.at<float>(cluster_idx, 1);
        cluster.at<Vec3b>(i, j)[2] = centers.at<float>(cluster_idx, 2);
    }
}
imshow("clustered image", cluster);
// Check centers' RGB value
cout << centers;
After clustering, I convert the result to grayscale and find the darkest color, which is most likely the color of the black dots.
// Find the minimum value
cvtColor(cluster, cluster, CV_RGB2GRAY);
Mat dot = Mat::zeros(img.size(), CV_8UC1);
cluster.copyTo(dot);
// at<uchar>(row, col) takes the row index first
int minVal = (int)dot.at<uchar>(dot.rows / 2, dot.cols / 2);
for (int i = 0; i < dot.rows; i += 3) {
    for (int j = 0; j < dot.cols; j += 3) {
        if ((int)dot.at<uchar>(i, j) < minVal) {
            minVal = (int)dot.at<uchar>(i, j);
        }
    }
}
inRange(dot, minVal - 5, minVal + 5, dot);
imshow("dot", dot);
Let's test two images.
(clusterCount = 4)
(clusterCount = 5)
One shortcoming of k-means clustering is that one fixed clusterCount cannot be applied to every image. Clustering is also not so fast for larger images; that issue annoys me a lot. My dirty method for better real-time performance (on iPhone) is to crop 1/16 of the image and cluster that smaller area. Then I compare every pixel in the original image against each cluster center and pick the pixels nearest to the "black" color. I simply calculate the Euclidean distance between two RGB colors.
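That distance test is a one-liner (a sketch using OpenCV's Vec3b):
#include <opencv2/opencv.hpp>
#include <cmath>

// Euclidean distance between two RGB colors; pixels nearest to the
// "black" cluster center are picked as dots.
double rgbDistance(const cv::Vec3b &a, const cv::Vec3b &b)
{
    double dr = double(a[0]) - b[0];
    double dg = double(a[1]) - b[1];
    double db = double(a[2]) - b[2];
    return std::sqrt(dr * dr + dg * dg + db * db);
}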
A simple method is to just threshold all the pixels. Here is this idea expressed in pseudo code.
for each pixel in image
    if brightness < THRESHOLD
        pixel = BLACK
    else
        pixel = WHITE
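In OpenCV C++ this might look like the following (a sketch; brightness is approximated here by the channel average, and THRESHOLD is an assumed tuning value):
#include <opencv2/opencv.hpp>

// Per-pixel threshold, as in the pseudocode above.
void thresholdToBlackAndWhite(cv::Mat &bgr, int THRESHOLD)
{
    for (int y = 0; y < bgr.rows; y++)
        for (int x = 0; x < bgr.cols; x++) {
            cv::Vec3b &px = bgr.at<cv::Vec3b>(y, x);
            int brightness = (px[0] + px[1] + px[2]) / 3;
            px = (brightness < THRESHOLD) ? cv::Vec3b(0, 0, 0)
                                          : cv::Vec3b(255, 255, 255);
        }
}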
Or, if you're always dealing with cyan, magenta and yellow backgrounds, then you might get better results with the criterion
if pixel.r < THRESHOLD and pixel.g < THRESHOLD and pixel.b < THRESHOLD
This method will only give good results for easy images where nothing except the black dots is too dark.
You can experiment with the value of THRESHOLD to find a good value for your images.
I suggest converting to some chroma-based color space, like LCH, and adjusting simultaneous thresholds on lightness and chroma. Here is the resulting mask for L < 50 & C < 25 on the input image:
Seems like you need adaptive thresholds since different values work best for different areas of the image.
You may also use HSV or HSL as a color space, but they are less perceptually uniform than LCH, derived from Lab.
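Since LCH is derived from Lab with C = sqrt(a^2 + b^2), you can approximate this rule in OpenCV (a sketch; note that 8-bit Lab in OpenCV stores L scaled to 0-255 and a, b offset by 128):
#include <opencv2/opencv.hpp>
#include <cmath>

// Mask pixels with low lightness AND low chroma (L < 50 & C < 25).
cv::Mat maskDarkLowChroma(const cv::Mat &bgr)
{
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);
    cv::Mat mask(bgr.size(), CV_8UC1);
    for (int y = 0; y < lab.rows; y++)
        for (int x = 0; x < lab.cols; x++) {
            cv::Vec3b q = lab.at<cv::Vec3b>(y, x);
            double L = q[0] * 100.0 / 255.0;     // back to 0-100
            double a = q[1] - 128.0, b = q[2] - 128.0;
            double C = std::sqrt(a * a + b * b); // chroma
            mask.at<uchar>(y, x) = (L < 50 && C < 25) ? 255 : 0;
        }
    return mask;
}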

(Unity3D) Paint with soft brush (logic)

During the last few days I have been coding a painting behavior for a game I am working on, and I am currently in a very advanced phase; I can say that I have 90% of the work done and working perfectly. Now what I need is to be able to draw with a "soft brush", because for now it is like I am painting in a "pixel style", which was totally expected, since that is what I wrote.
My current goal consists of using this solution:
import a brush texture (this image)
create an array that contains all the alpha values of that texture
when drawing, use the array elements to define the new pixels' alpha
And this is my code to do that (it is not very long; most of it is comments):
//The main painting method
//theObject = the object to be painted
//tmpTexture = the object's current texture
//targetTexture = the new texture
void paint (GameObject theObject, Texture2D tmpTexture, Texture2D targetTexture)
{
    //x and y are 2 floats from another class;
    //they store the coordinates of the pixel
    //that got hit by the RayCast
    int x = (int)(coordinates.pixelPos.x);
    int y = (int)(coordinates.pixelPos.y);

    //iterate through a block of pixels that starts at (x, y)
    //and extends brushHeight pixels up and brushWidth pixels right
    for (int tmpY = y; tmpY < y + brushHeight; tmpY++) {
        for (int tmpX = x; tmpX < x + brushWidth; tmpX++) {
            //check if the current pixel is different from the target pixel
            if (tmpTexture.GetPixel (tmpX, tmpY) != targetTexture.GetPixel (tmpX, tmpY)) {
                //create a temporary color from the target pixel at the given coordinates
                Color tmpCol = targetTexture.GetPixel (tmpX, tmpY);
                //change the alpha of that pixel based on the brush alpha;
                //myBrushAlpha is a 2-dimensional array that contains
                //the different alpha values of the brush;
                //the subtractions keep the index in range
                if (myBrushAlpha [tmpY - y, tmpX - x].a > 0) {
                    tmpCol.a = myBrushAlpha [tmpY - y, tmpX - x].a;
                }
                //set the new pixel to the current texture
                tmpTexture.SetPixel (tmpX, tmpY, tmpCol);
            }
        }
    }
    //Apply
    tmpTexture.Apply ();
    //change the object's main texture
    theObject.renderer.material.mainTexture = tmpTexture;
}
Now the fun (and bad) part: the code did exactly what I asked for, but there is something I did not think of and could not solve after spending the whole night trying.
By always drawing with the brush alpha, I created a very weird effect: the alpha value of an "old" pixel can decrease. So I tried to fix that by adding an if statement that checks whether the current alpha of the pixel is less than the equivalent brush alpha pixel; if it is, raise the alpha to equal the brush's, and if the pixel's alpha is bigger, keep adding the brush alpha value to it in order to get that "soft brushing" effect. In code it becomes this:
if (myBrushAlpha [tmpY - y, tmpX - x].a > tmpCol.a) {
    tmpCol.a = myBrushAlpha [tmpY - y, tmpX - x].a;
} else {
    tmpCol.a += myBrushAlpha [tmpY - y, tmpX - x].a;
}
But after I did that, I got the "pixelized brush" effect back. I am not sure, but I think it may be because I am evaluating these conditions inside a for loop, so everything is executed before the end of the current frame and I do not see the effect. Could that be it?
I am really lost here and hope you can point me in the right direction.
Thank you very much and have a great day.
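For reference, a soft brush is usually implemented as standard "over" alpha compositing, where the brush alpha weights a blend between the existing pixel and the brush color, rather than overwriting the pixel's alpha. A language-neutral C++ sketch of that blend (not Unity API):
struct Color4 { float r, g, b, a; };

// Standard "over" compositing: the result alpha only accumulates,
// so repainting can never make an old pixel more transparent.
Color4 blendBrush(Color4 dst, Color4 brush, float brushAlpha)
{
    Color4 out;
    out.r = brush.r * brushAlpha + dst.r * (1.0f - brushAlpha);
    out.g = brush.g * brushAlpha + dst.g * (1.0f - brushAlpha);
    out.b = brush.b * brushAlpha + dst.b * (1.0f - brushAlpha);
    out.a = brushAlpha + dst.a * (1.0f - brushAlpha);
    return out;
}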

Detecting bright/dark points on iPhone screen

I would like to detect and mark the brightest and the darkest spot on an image.
For example I am creating an AVCaptureSession and showing the video frames on screen using AVCaptureVideoPreviewLayer. Now on this camera output view I would like to be able to mark the current darkest and lightest points.
Would I have to read the image pixel data? If so, how can I do that?
In any case, you must read pixels to detect this. But if you want to make it fast, don't read EVERY pixel; read only 1 in 100:
for (int x = 0; x < width - 10; x += 10) {
    for (int y = 0; y < height - 10; y += 10) {
        //Detect bright/dark points here
    }
}
Then you may read the pixels around the ones you found, to make the results more accurate.
Here is the way to get pixel data: stackoverflow.com/questions/448125/… At the brightest point, red+green+blue must be at a maximum (255+255+255 = 765 = 100% white). At the darkest point, red+green+blue must be at a minimum (0 = 100% black).
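Putting those two comments together, a minimal C++ sketch (the packed RGBA8888 buffer layout is an assumption):
// Scan every 10th pixel and remember the brightest and darkest
// samples by their r+g+b sum (765 = white, 0 = black).
void findExtremes(const unsigned char *data, int width, int height,
                  int &brightX, int &brightY, int &darkX, int &darkY)
{
    int maxSum = -1, minSum = 766;
    for (int y = 0; y < height; y += 10)
        for (int x = 0; x < width; x += 10) {
            const unsigned char *p = data + (y * width + x) * 4;
            int sum = p[0] + p[1] + p[2];
            if (sum > maxSum) { maxSum = sum; brightX = x; brightY = y; }
            if (sum < minSum) { minSum = sum; darkX = x; darkY = y; }
        }
    // Refine by checking the neighbors around each hit, as suggested above.
}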

Trim alpha from CGImage

I need to get a trimmed CGImage. I have an image which has empty space (alpha = 0) around some colors and need to trim it to get the size of only the visible colors.
Thanks.
There are three ways of doing this:
1) Use photoshop (or image editor of choice) to edit the image - I assume you can't do this, it's too obvious an answer!
2) Ignore it - why not just ignore it and draw the image at its full size? It's transparent, so the user will never notice.
3) Write some code that goes through each pixel in the image until it gets to one that has an alpha value > 0. This should give you the number of rows to trim from the top. However, this will slow down your UI so you might want to do it on a background thread.
e.g.
// To get the number of transparent rows at the top of the image
// Sorry this code is so ugly
uint32 *p = start_of_image;
// note the parentheses: '==' binds tighter than '&'
while ( 0 == (*p & 0x000000ff) && p < end_of_image_data ) ++p;
uint number_of_transparent_rows_at_top = (p - start_of_image) / width_of_image;
When you know the amount of transparent space around the image, you can draw it using a UIImageView, set the contentMode to center and let it do the trimming for you :)
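Extending that idea to all four sides, here is a sketch of computing the opaque bounding box over an assumed tightly packed RGBA8888 buffer (alpha in the low byte, as in the snippet above); you could then crop with CGImageCreateWithImageInRect:
#include <cstdint>

struct Box { int minX, minY, maxX, maxY; };

// Bounding box of all pixels with non-zero alpha.
Box opaqueBounds(const uint32_t *p, int width, int height)
{
    Box b = { width, height, -1, -1 }; // maxX < 0 means fully transparent
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (p[y * width + x] & 0x000000ff) { // visible pixel
                if (x < b.minX) b.minX = x;
                if (y < b.minY) b.minY = y;
                if (x > b.maxX) b.maxX = x;
                if (y > b.maxY) b.maxY = y;
            }
    return b;
}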