iOS iPhone CGImage memory problems

I'm trying to create a visual representation of some data I have.
The function I have works and creates the image perfectly (and very quickly), but under Instruments the real memory usage rockets and eventually crashes the app.
I have replaced my function with a return [UIImage imageNamed:@"blah"]; and the memory problems vanish completely.
Could someone see why and where the memory is being taken up, and how I could free it again?
I have run it in Instruments under the Allocations and Leaks tools but nothing shows up in there.
The function is...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    //translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }
    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }
    //create bytes of image from the cell map
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y=0; y<self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                //alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                //dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }
    //create image
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    //return image
    return image;
}
A few notes...
buffer is a GLubyte ivar, malloc'ed in init.
cells is the original unsigned char array that I'm taking the data from.
Like I said, the function works perfectly (creating the image etc.); I just get a massive usage of memory.
Oh, and this function is called a LOT, like 30 times a second or more (which I know will make a difference).
Thanks for any help.

You use CGImageCreate, and the docs say:
Return Value
A new Quartz bitmap image. You are responsible for releasing this object by calling CGImageRelease.
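So the fix is to release what you create. A minimal sketch of the corrected tail of the function, assuming provider and imageRef are the ivars from the question (note the inline CGColorSpaceCreateDeviceRGB() also leaks a color space on every call):
//create image, then release the Quartz objects once UIImage owns the bitmap
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width,
                         colorSpace, kCGBitmapByteOrderDefault, provider,
                         NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace); // was created inline and never released
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);        // UIImage retains what it needs
return image;
At 30 calls a second, two unreleased Quartz objects per call would add up to exactly the kind of real-memory growth you describe, without ever showing up in the Leaks tool (the objects are still referenced).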

Related

How to convert RGB pixmap to ui.Image in Dart?

Currently I have a Uint8List, formatted like [R,G,B,R,G,B,...] for all the pixels of the image. And of course I have its width and height.
I found decodeImageFromPixels while searching but it only takes RGBA/BGRA format. I converted my pixmap from RGB to RGBA and this function works fine.
However, my code now looks like this:
Uint8List rawPixel = raw.value.asTypedList(w * h * channel);
List<int> rgba = [];
for (int i = 0; i < rawPixel.length; i++) {
  rgba.add(rawPixel[i]);
  if ((i + 1) % 3 == 0) {
    rgba.add(0);
  }
}
Uint8List rgbaList = Uint8List.fromList(rgba);
Completer<Image> c = Completer<Image>();
decodeImageFromPixels(rgbaList, w, h, PixelFormat.rgba8888, (Image img) {
  c.complete(img);
});
I have to make a new list (a waste of space) and iterate through the entire list (a waste of time).
This is too inefficient in my opinion; is there any way to make it more elegant, like adding a new PixelFormat.rgb888?
Thanks in advance.
You may find that this loop is faster as it doesn't keep appending to the list and then copy it at the end.
final rawPixel = raw.value.asTypedList(w * h * channel);
// create the Uint8List directly, as we know the width and height
final rgbaList = Uint8List(w * h * 4);
for (var i = 0; i < w * h; i++) {
  final rgbOffset = i * 3;
  final rgbaOffset = i * 4;
  rgbaList[rgbaOffset] = rawPixel[rgbOffset]; // red
  rgbaList[rgbaOffset + 1] = rawPixel[rgbOffset + 1]; // green
  rgbaList[rgbaOffset + 2] = rawPixel[rgbOffset + 2]; // blue
  rgbaList[rgbaOffset + 3] = 255; // alpha
}
An alternative is to prepend the array with a BMP header by adapting this answer (though it would be simpler, as there would be no palette) and passing that bitmap to instantiateImageCodec, as that code is presumably highly optimized for parsing bitmaps.
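A sketch of that idea, using a hypothetical helper named bmpFromRgba and assuming 32-bit RGBA input (a BITMAPV4HEADER lets the channel masks describe the RGBA byte order directly; a 24-bit variant could skip the RGB-to-RGBA pass entirely when each row happens to be 4-byte aligned):
import 'dart:typed_data';

Uint8List bmpFromRgba(Uint8List rgba, int width, int height) {
  const fileHeaderSize = 14;
  const dibHeaderSize = 108; // BITMAPV4HEADER
  const headerSize = fileHeaderSize + dibHeaderSize;
  final fileSize = headerSize + rgba.length;
  final bmp = Uint8List(fileSize);
  final bd = ByteData.view(bmp.buffer);
  // BITMAPFILEHEADER
  bmp[0] = 0x42; // 'B'
  bmp[1] = 0x4D; // 'M'
  bd.setUint32(2, fileSize, Endian.little);
  bd.setUint32(10, headerSize, Endian.little); // offset of pixel data
  // BITMAPV4HEADER
  bd.setUint32(14, dibHeaderSize, Endian.little);
  bd.setInt32(18, width, Endian.little);
  bd.setInt32(22, -height, Endian.little); // negative height = top-down rows
  bd.setUint16(26, 1, Endian.little); // planes
  bd.setUint16(28, 32, Endian.little); // bits per pixel
  bd.setUint32(30, 3, Endian.little); // BI_BITFIELDS
  bd.setUint32(34, rgba.length, Endian.little); // image size
  // channel masks for RGBA bytes read as little-endian uint32 values
  bd.setUint32(54, 0x000000FF, Endian.little); // red
  bd.setUint32(58, 0x0000FF00, Endian.little); // green
  bd.setUint32(62, 0x00FF0000, Endian.little); // blue
  bd.setUint32(66, 0xFF000000, Endian.little); // alpha
  bmp.setRange(headerSize, fileSize, rgba);
  return bmp;
}
The result can then go straight to instantiateImageCodec(bmpFromRgba(rgbaList, w, h)); whether this actually beats the plain copy loop would need measuring.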

Save frame from TangoService_connectOnFrameAvailable

How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:
static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    std::ofstream fp;
    fp.open(imagefile, std::ios::out | std::ios::binary);
    int offset = 0;
    for (int i = 0; i < buffer->height * 2 + 1; i++) {
        fp.write((char*)(buffer->data + offset), buffer->width);
        offset += buffer->stride;
    }
    fp.close();
}
Then to get rid of the meta data in the first row and to display the image I run:
$ dd if="input.raw" of="new.raw" bs=1 skip=1280
$ vooya new.raw
I was careful to make sure in vooya that the channel order is yvu. The resulting output is:
What am I doing wrong in saving the image and displaying it?
UPDATE per Mark Mullin's response:
int offset = buffer->stride; // header offset
// copy Y channel
for (int i = 0; i < buffer->height; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width);
    offset += buffer->stride;
}
// copy V channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
// copy U channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
This now shows the picture below, but there are still some artifacts; I wonder if that's from the Tango tablet camera or my processing of the raw data... any thoughts?
I can't say exactly what you're doing wrong, and Tango images often have artifacts in them. Yours are new to me, but I often see baby blue as a color where glare seems to be annoying the deeper systems, and as it begins to lose sync with the depth system under load you'll often see what looks like a shiny grid (it's the IR pattern, I think). In the end, every rational attempt to handle the image with OpenCV etc. failed, so I hand-wrote the decoder with some help from the SO thread here.
That said, given imagebuffer contains a pointer to the raw data from Tango, and various other variables like height and stride are filled in from the data received in the callback, this logic will create an RGBA map. Yeah, I optimized the math in it, so it's a little ugly; its slower but functionally equivalent twin is listed second. My own experience says it's a horrible idea to try to do this decode right in the callback (I believe Tango is capable of losing sync with the flash for depth for purely spiteful reasons), so mine runs at the render stage.
Fast
uchar* pData = TangoData::cameraImageBuffer;
uchar* iData = TangoData::cameraImageBufferRGBA;
int size = (int)(TangoData::imageBufferStride * TangoData::imageBufferHeight);
float invByte = 0.0039215686274509803921568627451; // (1 / 255)
int halfi, uvOffset, halfj, uvOffsetHalfj;
float y_scaled, v_scaled, u_scaled;
int uOffset = size / 4 + size;
int halfstride = TangoData::imageBufferStride / 2;
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    halfi = i / 2;
    uvOffset = halfi * halfstride;
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        halfj = j / 2;
        uvOffsetHalfj = uvOffset + halfj;
        y_scaled = pData[i * TangoData::imageBufferStride + j] * invByte;
        v_scaled = 2 * (pData[uvOffsetHalfj + size] * invByte - 0.5f) * Vmax;
        u_scaled = 2 * (pData[uvOffsetHalfj + uOffset] * invByte - 0.5f) * Umax;
        *iData++ = (uchar)((y_scaled + 1.13983f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled - 0.39465f * u_scaled - 0.58060f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled + 2.03211f * u_scaled) * 255.0);
        *iData++ = 255;
    }
}
Understandable
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        uchar y = pData[i * TangoData::imageBufferStride + j];
        uchar v = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size];
        uchar u = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size + (size / 4)];
        YUV2RGB(y, u, v);
        *iData++ = y;
        *iData++ = u;
        *iData++ = v;
        *iData++ = 255;
    }
}
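(YUV2RGB itself isn't shown in the answer. A sketch consistent with the constants used in the Fast version above, assuming it converts in place through its reference parameters, with Umax/Vmax as the usual chroma extrema:)
#include <algorithm>

static void YUV2RGB(uchar& y, uchar& u, uchar& v)
{
    const float Umax = 0.436f, Vmax = 0.615f;
    float yf = y / 255.0f;
    float uf = 2.0f * (u / 255.0f - 0.5f) * Umax;
    float vf = 2.0f * (v / 255.0f - 0.5f) * Vmax;
    float r = yf + 1.13983f * vf;
    float g = yf - 0.39465f * uf - 0.58060f * vf;
    float b = yf + 2.03211f * uf;
    // clamp to [0,1] before scaling back to bytes
    y = (uchar)(std::min(std::max(r, 0.0f), 1.0f) * 255.0f);
    u = (uchar)(std::min(std::max(g, 0.0f), 1.0f) * 255.0f);
    v = (uchar)(std::min(std::max(b, 0.0f), 1.0f) * 255.0f);
}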
I think there is a better way to do it if you can do it offline.
The best way to save the image would be something like this (don't forget to create the Pictures folder first or you won't save anything):
void onFrameAvailableRouter(void* context, TangoCameraId id, const TangoImageBuffer* buffer) {
    //To write the image in a txt file.
    std::stringstream name_stream;
    name_stream.setf(std::ios_base::fixed, std::ios_base::floatfield);
    name_stream.precision(3);
    name_stream << "/storage/emulated/0/Pictures/"
                << cur_frame_timstamp_
                << ".txt";
    std::fstream f(name_stream.str().c_str(), std::ios::out | std::ios::binary);
    // size = 1280*720*1.5 to save YUV, or 1280*720 to save grayscale
    int size = stride_ * height_ * 1.5;
    f.write((const char *) buffer->data, size * sizeof(uint8_t));
    f.close();
}
Then, to convert the .txt file to a png, you can do this:
import glob
import numpy as np
import cv2
from os import listdir, makedirs

inputFolder = "input"
outputFolderRGB = "output/rgb"
outputFolderGray = "output/gray"
input_filename = "timestamp.txt"
output_filename = "rgb.png"

# assumed from the 1280*720 Tango frame size mentioned above
width, height, channels = 1280, 720, 4
count = 0

allFile = listdir(inputFolder)
numberOfFile = len(allFile)
if "input" in glob.glob("*"):
    if "output/rgb" in glob.glob("output/*"):
        print ""
    else:
        makedirs("output/rgb")
    if "output/gray" in glob.glob("output/*"):
        print ""
    else:
        makedirs("output/gray")
    # The output directories are ready
    for file in allFile:
        count += 1
        print "current file : ", count, "/", numberOfFile
        input_filename = file
        output_filename = input_filename[0:(len(input_filename) - 3)] + "png"
        # load file into buffer
        data = np.fromfile(inputFolder + "/" + input_filename, dtype=np.uint8)
        # To get an RGB image:
        # create yuv image
        yuv = np.ndarray((height + height / 2, width), dtype=np.uint8, buffer=data)
        # create a height x width x channels matrix with the datatype uint8 for the rgb image
        img = np.zeros((height, width, channels), dtype=np.uint8)
        # convert yuv image to rgb image
        cv2.cvtColor(yuv, cv2.COLOR_YUV2BGRA_NV21, img, channels)
        cv2.imwrite(outputFolderRGB + "/" + output_filename, img)
        # If you saved the image in grayscale, use this part instead:
        # yuvReal = np.ndarray((height, width), dtype=np.uint8, buffer=data)
        # cv2.imwrite(outputFolderGray + "/" + output_filename, yuvReal)
else:
    print "not any input"
You just have to put your .txt files in a folder named input.
It's a Python script, but if you prefer a C++ version it's very close.

Mono for Android application

I am working on a gaming application in Mono for Android. I want sample code for a background image scrolling vertically from top to bottom. I have code, but it is not working properly, so please can somebody help me?
mBGFarMoveY = mBGFarMoveY + 3;
int newFarY = mBackgroundImageFar.Height + (+ mBGFarMoveY);
if (newFarY <= 0)
{
    mBGFarMoveY = 0;
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
}
else
{
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
    canvas.DrawBitmap (mBackgroundImageFar, 0, newFarY, null);
}
Thanks & Regards,
Chakradhar.
What do you see and what do you expect? There are several issues with your code as far as I can see.
The position is not calculated based upon time, so scrolling will be jumpy.
The overlap code doesn't look too great and blits a lot out of bounds. I'm not sure what 'canvas' is, but if it is the one from android.graphics, you can specify the source and destination rectangles to blit rather than just the 'y' position.
So, something like this (untested, and I've not written code for this platform before, but you should get the idea; a Mono for Android sketch follows the pseudocode):
y = (time_seconds * pixels_per_second);
y = y % image.Height; // wrap
src_rect.left = 0;
src_rect.right = image.Width - 1;
src_rect.top = y;
src_rect.bottom = image.Height - 1;
dst_rect.left = 0;
dst_rect.right = image.Width - 1;
dst_rect.top = 0;
dst_rect.bottom = image.Height - 1;
if (y == 0) {
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}
else {
    dst_rect.bottom = src_rect.height() - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
    src_rect.top = 0;
    src_rect.bottom = y - 1;
    dst_rect.top = dst_rect.bottom + 1;
    dst_rect.bottom = image.Height - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}
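A hedged translation into Mono for Android (C#), assuming canvas is an Android.Graphics.Canvas, image is a Bitmap, and elapsedSeconds / pixelsPerSecond are tracked by your game loop; note Android's Rect treats right/bottom as exclusive, so no "- 1" adjustments are needed (negate the offset to reverse the scroll direction):
int y = (int)(elapsedSeconds * pixelsPerSecond) % image.Height;

var srcRect = new Android.Graphics.Rect(0, y, image.Width, image.Height);
var dstRect = new Android.Graphics.Rect(0, 0, image.Width, image.Height - y);
canvas.DrawBitmap(image, srcRect, dstRect, null);

if (y > 0)
{
    // wrap: draw the top y rows of the source below the first slice
    srcRect = new Android.Graphics.Rect(0, 0, image.Width, y);
    dstRect = new Android.Graphics.Rect(0, image.Height - y, image.Width, image.Height);
    canvas.DrawBitmap(image, srcRect, dstRect, null);
}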

Converting MATLAB Gaussian derivatives to OpenCV

I'm trying to convert an old exercise I made in MATLAB to OpenCV. The code is posted below. I haven't been able to find any functions in OpenCV that do what I want, which might be because they have different names than I expect.
Here are the outputs when taking the max response in each location as the label. Clearly something is wrong.
Here is the MATLAB code:
function responses = getBifResponsesEx(im, myEps, sigma, kernelSize)
if (nargin == 3)
    if (sigma >= 1)
        kernelSize = 6*sigma + 1;
    else
        kernelSize = 7;
    end
end
responses = zeros(size(im,1), size(im,2), 7);
%
% Gaussian derivatives
%
kernVal = ceil(kernelSize/2) - 1;
x = (-kernVal:kernVal);
g = 1/(2*pi*sigma^2)*exp(-(x.^2./(2*(sigma^2))));
g = g/sum(g);
dg = -2*x/(2*sigma^2).*g*sigma;
ddg = ((2*x/(2*sigma^2)).^2 - 1/(sigma^2)).*g*sigma;
%
% Gaussian convolution of the image
%
s00 = filter2(g, im);
s00 = filter2(g', s00);
s10 = filter2(g', im);
s10 = filter2(dg, s10);
s01 = filter2(g, im);
s01 = filter2(dg', s01);
s11 = filter2(dg, im);
s11 = filter2(dg', s11);
s20 = filter2(g', im);
s20 = filter2(g', s20);
s20 = filter2(ddg, s20);
s02 = filter2(g, im);
s02 = filter2(g, s02);
s02 = filter2(ddg', s02);
%
% Symmetry types - MISSING CODE!!!!
%
lam = sigma^2*(s20+s02);
gam = sigma^2*(sqrt((s20-s02).^2+4*s11.^2));
responses(:,:,1) = myEps*s00;
responses(:,:,2) = 2*sigma*sqrt(s10.^2+s01.^2);
responses(:,:,3) = +lam;
responses(:,:,4) = -lam;
responses(:,:,5) = 2^-.5*(gam+lam);
responses(:,:,6) = 2^-.5*(gam-lam);
responses(:,:,7) = gam;
end
And here is my converted version. From what I can see, it goes wrong with the s20/s02 responses. Can anyone tell me what to do?
void extract_bif_features(const cv::Mat & src,
    std::vector<cv::Mat> & dst, BIFParams params)
{
    float sigma = params.sigma;
    float n = 0;
    int kernelSize;
    if (sigma >= 1)
        kernelSize = 6*sigma + 1;
    else
        kernelSize = 7;
    cv::Mat gray, p00, p10, p01, p11, p20, p02;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    auto kernVal = (int)ceil(kernelSize/2.0) - 1;
    cv::Mat_<float> g(1, kernelSize);   float* gp = g.ptr<float>();
    cv::Mat_<float> dg(1, kernelSize);  float* dgp = dg.ptr<float>();
    cv::Mat_<float> ddg(1, kernelSize); float* ddgp = ddg.ptr<float>();
    cv::Mat_<float> X(1, kernelSize);   float* xp = X.ptr<float>();
    auto gsum = 0.0f;
    for (int x = -kernVal; x <= kernVal; ++x)
    {
        xp[x+kernVal] = x;
        gp[x+kernVal] = 1/(2*CV_PI*sigma*sigma)*exp(-(x*x/(2*(sigma*sigma))));
        gsum += gp[x+kernVal];
    }
    g = g/gsum;
    cv::multiply((-2*X / (2*sigma*sigma)), g*sigma, dg);
    cv::pow((2*X/(2*sigma*sigma)), 2, ddg);
    ddg -= 1/(sigma*sigma);
    cv::multiply(ddg, g*sigma, ddg);
    std::cout << ddg << std::endl;
    std::cout << dg << std::endl;
    cv::sepFilter2D(gray, p00, CV_32FC1, g, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p01, CV_32FC1, dg, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p10, CV_32FC1, g, dg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p11, CV_32FC1, dg, dg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //NOT SURE HERE
    cv::sepFilter2D(gray, p20, CV_32FC1, g, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //cv::sepFilter2D(p20, p20, CV_32FC1, 1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p02, CV_32FC1, g, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //cv::sepFilter2D(p02, p02, CV_32FC1, g, 1, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(gray, p20, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p20, p20, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p20, p20, CV_32FC1, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(gray, p02, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p02, p02, CV_32FC1, g.t(), cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p02, p02, CV_32FC1, ddg.t(), cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    dst.resize(6);
    auto sigma_square = sigma*sigma;
    cv::Mat Lam = sigma_square * (p20+p02);
    cv::Mat Gam;
    cv::sqrt((((p20-p02)*(p20-p02)) + 4*p11*p11), Gam);
    Gam *= sigma_square;
    cv::Mat test = p10*p10;
    //slope
    cv::sqrt(p10*p10 + p01*p01, dst[0]);
    dst[0] = dst[0]*2*sigma; //slope
    //blob
    dst[1] = Lam;
    dst[2] = -1*Lam;
    //line
    dst[3] = sqrt(2.0f)*(Gam+Lam);
    dst[4] = sqrt(2.0f)*(Gam-Lam);
    //saddle
    dst[5] = Gam;
}
The answer, from what I've got so far:
cv::multiply((p20-p02),(p20-p02),Gam); is not the same as Gam = (p20-p02)*(p20-p02); because operator* on cv::Mat is matrix multiplication, while cv::multiply (and Mat::mul) are element-wise.
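(To make the distinction concrete, a two-line illustration, not from the original post:)
cv::Mat d = p20 - p02;
cv::Mat elementwise = d.mul(d); // per-element square: what MATLAB's (s20-s02).^2 means
cv::Mat matrixProd = d * d;     // matrix product: wrong here, and only defined for compatible shapes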
Full code (classify according to the highest response, Griffin (2008)):
RIDAR_API void extract_bif_features(const cv::Mat & src,
    std::vector<cv::Mat> & dst, BIFParams params)
{
    float sigma = params.sigma;
    float eta = params.eta;
    int kernelSize;
    if (sigma >= 1)
        kernelSize = 4*sigma + 1;
    else
        kernelSize = 5;
    auto kernVal = (int)ceil(kernelSize/2.0) - 1;
    cv::Mat_<float> g(1, kernelSize); float* gp = g.ptr<float>();
    cv::Mat_<float> X(1, kernelSize); float* xp = X.ptr<float>();
    auto gsum = 0.0f;
    for (int x = -kernVal; x <= kernVal; ++x)
    {
        xp[x+kernVal] = x;
        gp[x+kernVal] = 1/(2*CV_PI*sigma*sigma)*exp(-(x*x/(2*(sigma*sigma))));
        gsum += gp[x+kernVal];
    }
    g = g/gsum;
    cv::Mat dg = -2*X.mul(g*sigma) / (2*sigma*sigma);
    cv::Mat ddg = ((2*X/(2*sigma*sigma)).mul((2*X/(2*sigma*sigma))) - 1/(sigma*sigma)).mul(g*sigma);
    cv::Mat gray, p00, p10, p01, p11, p20, p02;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cv::filter2D(gray, p00, CV_32FC1, g);
    cv::filter2D(p00, p00, CV_32FC1, g.t());
    cv::filter2D(gray, p10, CV_32FC1, g.t());
    cv::filter2D(p10, p10, CV_32FC1, dg);
    cv::filter2D(gray, p01, CV_32FC1, g);
    cv::filter2D(p01, p01, CV_32FC1, dg.t());
    cv::filter2D(gray, p11, CV_32FC1, dg);
    cv::filter2D(p11, p11, CV_32FC1, dg.t());
    cv::filter2D(gray, p20, CV_32FC1, g.t());
    cv::filter2D(p20, p20, CV_32FC1, g.t());
    cv::filter2D(p20, p20, CV_32FC1, ddg);
    cv::filter2D(gray, p02, CV_32FC1, g);
    cv::filter2D(p02, p02, CV_32FC1, g);
    cv::filter2D(p02, p02, CV_32FC1, ddg.t());
#ifdef DISPLAY_WHILE_RUNNING
    double max, min;
    cv::imshow("p00", p00/255);
    //
    cv::minMaxIdx(p01, &min, &max);
    cv::imshow("p01", (p01-min)/(max-min));
    //
    cv::minMaxIdx(p10, &min, &max);
    cv::imshow("p10", (p10-min)/(max-min));
    cv::minMaxIdx(p11, &min, &max);
    cv::imshow("p11", (p11-min)/(max-min));
    cv::minMaxIdx(p02, &min, &max);
    cv::imshow("p02", (p02-min)/(max-min));
    cv::minMaxIdx(p20, &min, &max);
    cv::imshow("p20", (p20-min)/(max-min));
    cv::waitKey();
#endif
    dst.resize(7);
    auto sigma_square = sigma*sigma;
    auto p2d = p20-p02;
    //LAM
    dst[2] = sigma_square * (p20+p02);
    //GAM
    cv::sqrt((p2d).mul(p2d) + (4.0f * p11.mul(p11)), dst[6]);
    dst[6] = dst[6] * sigma_square;
    //FLAT
    dst[0] = eta*p00;
    //slope
    cv::sqrt(p10.mul(p10)+p01.mul(p01), dst[1]);
    dst[1] *= 2.0f*sigma;
    //blob dst[2]
    dst[3] = -dst[2];
    //line
    dst[4] = pow(2.0, -0.5)*(dst[6]+dst[2]);
    dst[5] = pow(2.0, -0.5)*(dst[6]-dst[2]);
    //saddle dst[6]
#ifdef DISPLAY_WHILE_RUNNING
    // max/min are already declared in the first DISPLAY_WHILE_RUNNING block above
    cv::minMaxIdx(dst[0], &min, &max);
    cv::imshow("FLAT", (dst[0]-min)/(max-min));
    cv::minMaxIdx(dst[1], &min, &max);
    cv::imshow("SLOPE", (dst[1]-min)/(max-min));
    cv::minMaxIdx(dst[2], &min, &max);
    cv::imshow("BLOB+", (dst[2]-min)/(max-min));
    cv::minMaxIdx(dst[3], &min, &max);
    cv::imshow("BLOB-", (dst[3]-min)/(max-min));
    cv::minMaxIdx(dst[4], &min, &max);
    cv::imshow("LINE+", (dst[4]-min)/(max-min));
    cv::minMaxIdx(dst[5], &min, &max);
    cv::imshow("LINE-", (dst[5]-min)/(max-min));
    cv::minMaxIdx(dst[6], &min, &max);
    cv::imshow("SADDLE", (dst[6]-min)/(max-min));
    cv::waitKey();
#endif
}
Why don't you take the average of dg and ddg?
cv::filter2D(gray, p20, CV_32FC1, g.t());
cv::filter2D(p20, p20, CV_32FC1, g.t());
Why do you apply the filter twice here?
//GAM
cv::sqrt((p2d).mul(p2d) + (4.0f * p11.mul(p11)), dst[6]);
dst[6] = dst[6] * sigma_square;
Where did you get this formula?

Color balance on the iPhone

I am taking an image, loading it via the screen context, and changing it pixel by pixel. I have a number of different filters that I am applying to the images, but the last thing I need to do is shift the color balance (similar to Photoshop) to make the red more cyan.
The code below shows how I am taking the image, getting the data, and going through the r/g/b values pixel by pixel:
CGImageRef sourceImage = theImage.image.CGImage;
CFDataRef theData;
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);
int dataLength = CFDataGetLength(theData);
int red = 0;
int green = 1;
int blue = 2;
for (int index = 0; index < dataLength; index += 4) {
    int r = pixelData[index + red];
    int g = pixelData[index + green];
    int b = pixelData[index + blue];
    // the color balancing would go here...
    if (r < 0) r = 0;
    if (g < 0) g = 0;
    if (b < 0) b = 0;
    if (r > 255) r = 255;
    if (g > 255) g = 255;
    if (b > 255) b = 255;
    pixelData[index + red] = r;
    pixelData[index + green] = g;
    pixelData[index + blue] = b;
}
CGContextRef context;
context = CGBitmapContextCreate(pixelData,
                                CGImageGetWidth(sourceImage),
                                CGImageGetHeight(sourceImage),
                                8,
                                CGImageGetBytesPerRow(sourceImage),
                                CGImageGetColorSpace(sourceImage),
                                kCGImageAlphaPremultipliedLast);
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage];
CGContextRelease(context);
CFRelease(theData);
CGImageRelease(newCGImage);
theImage.image = newImage;
I am doing a number of other things to the pixel data (setting the levels, desaturating) but I need to shift the red value toward cyan.
In Photoshop, I am able to do it via:
Image: Adjustment: Color Balance
and setting it to -30 0 0
I have not been able to find an algorithm or an example explaining how this color shift is performed. I have tried subtracting 30 from every red value, and setting a hard maximum at 255 - 30 (225), but those seem to clip the colors rather than shift the values... Right now I'm just hacking around on it, but pointers to a reference would help out a lot.
(NB: I am not able to use an OpenGL solution because I have to take the same algorithm and convert it to PHP/gd for a web server version of this application)
What you need to do is subtract the red and then correct the "lightness" to match the old color, for whatever measure of lightness you choose. Something like this should work:
// First, calculate the current lightness.
float oldL = r * 0.30 + g * 0.59 + b * 0.11;
// Adjust the color components. This changes lightness.
r = r - 30;
if (r < 0) r = 0;
// Now correct the color back to the old lightness.
float newL = r * 0.30 + g * 0.59 + b * 0.11;
if (newL > 0) {
    r = r * oldL / newL;
    g = g * oldL / newL;
    b = b * oldL / newL;
}
Note this behaves somewhat oddly when given pure red (it will leave it unchanged). OTOH, GIMP (version 2.6.11) does the same thing in its color balance tool so I'm not too worried about it.
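For reference, dropping this into the question's loop at the "the color balancing would go here..." placeholder would look roughly like this (a sketch; the -30 matches the Photoshop setting mentioned in the question, and the existing [0, 255] clamps then run as before):
float oldL = r * 0.30f + g * 0.59f + b * 0.11f;
r -= 30; // shift red toward cyan
if (r < 0) r = 0;
float newL = r * 0.30f + g * 0.59f + b * 0.11f;
if (newL > 0) {
    r = (int)(r * oldL / newL);
    g = (int)(g * oldL / newL);
    b = (int)(b * oldL / newL);
}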