I was using the GPUImage framework (an older version) to blend two images (adding a border overlay to an image).
After updating to the latest version of the framework, applying the same blend gives me an empty black image.
I'm using the following method:
- (void)addBorder {
    if (currentBorder != kBorderInitialValue) {
        GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
        GPUImagePicture *imageToProcess = [[GPUImagePicture alloc] initWithImage:self.imageToWorkWithView.image];
        GPUImagePicture *border = [[GPUImagePicture alloc] initWithImage:self.imageBorder];

        blendFilter.mix = 1.0f;
        [imageToProcess addTarget:blendFilter];
        [border addTarget:blendFilter];
        [imageToProcess processImage];

        self.imageToWorkWithView.image = [blendFilter imageFromCurrentlyProcessedOutput];

        [blendFilter release];
        [imageToProcess release];
        [border release];
    }
}
What is the problem?
You're forgetting to process the border image. After [imageToProcess processImage], add the line:
[border processImage];
When two images are being fed into a blend, you have to call -processImage on both of them after they have been added to the blend filter. I changed the way the blend filter works in order to fix some bugs, and this is how you need to do things now.
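For reference, a minimal sketch of the corrected method (same names and manual reference counting as in the question):

- (void)addBorder {
    if (currentBorder != kBorderInitialValue) {
        GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
        GPUImagePicture *imageToProcess = [[GPUImagePicture alloc] initWithImage:self.imageToWorkWithView.image];
        GPUImagePicture *border = [[GPUImagePicture alloc] initWithImage:self.imageBorder];

        blendFilter.mix = 1.0f;
        [imageToProcess addTarget:blendFilter];
        [border addTarget:blendFilter];

        // Both inputs of the blend must be processed before reading the result.
        [imageToProcess processImage];
        [border processImage];

        self.imageToWorkWithView.image = [blendFilter imageFromCurrentlyProcessedOutput];

        [blendFilter release];
        [imageToProcess release];
        [border release];
    }
}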
This is the code I'm currently using for merging two images with GPUImageAlphaBlendFilter.
GPUImagePicture *mainPicture = [[GPUImagePicture alloc] initWithImage:image];
GPUImagePicture *topPicture = [[GPUImagePicture alloc] initWithImage:blurredImage];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter setMix:0.5];
[mainPicture addTarget:blendFilter];
[topPicture addTarget:blendFilter];
[blendFilter useNextFrameForImageCapture];
[mainPicture processImage];
[topPicture processImage];
UIImage * mergedImage = [blendFilter imageFromCurrentFramebuffer];
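Note that -useNextFrameForImageCapture has to be called on the blend filter before -processImage is sent to the inputs; otherwise the framebuffer has already been returned to GPUImage's framebuffer cache by the time -imageFromCurrentFramebuffer runs, and there is nothing left to capture.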
I'm using GPUImage to build a photography app, but when I select the front camera, the preview that comes back is mirrored (left and right are reversed).
Code here:
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
Can anyone tell me what the problem is?
Thanks very much!
try this...
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
stillCamera.horizontallyMirrorFrontFacingCamera = NO;
stillCamera.horizontallyMirrorRearFacingCamera = NO;
filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
try this:
[filterView setInputRotation:kGPUImageFlipHorizonal atIndex:0];
I think you can easily flip the final image:
UIImage *finalImage = //image from the camera
UIImage * flippedImage = [UIImage imageWithCGImage:finalImage.CGImage scale:finalImage.scale orientation:UIImageOrientationLeftMirrored];
Just setting
stillCamera.horizontallyMirrorFrontFacingCamera = YES;
will fix this problem.
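For completeness, a minimal sketch of the full setup with that flag, reusing the names from the question (and switching to AVCaptureDevicePositionFront, since the front camera is the one being mirrored):

// Use the front camera and let GPUImage mirror it so the preview behaves like a mirror.
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionFront];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
stillCamera.horizontallyMirrorFrontFacingCamera = YES;

filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];

GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];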
I'm trying to do an Overlay Blend of a stock image with the output of the camera feed where the stock image has less than 100% opacity. I figured I could just place a GPUImageOpacityFilter in the filter stack and everything would be fine:
GPUImageVideoCamera -> MY_GPUImageOverlayBlendFilter
GPUImagePicture -> GPUImageOpacityFilter (Opacity 0.1f) -> MY_GPUImageOverlayBlendFilter
MY_GPUImageOverlayBlendFilter -> GPUImageView
But what that produced wasn't a 0.1f-alpha version of the GPUImagePicture blended into the GPUImageVideoCamera feed; it just softened the colors/contrast of the GPUImagePicture and blended that. So I did some searching, and on a suggestion I tried getting a UIImage out of the GPUImageOpacityFilter using imageFromCurrentlyProcessedOutput and sending that into the blend filter:
GPUImagePicture -> MY_GPUImageOpacityFilter (Opacity 0.1f)
[MY_GPUImageOpacityFilter imageFromCurrentlyProcessedOutput]-> MY_alphaedImage
GPUImagePicture (MY_alphaedImage) -> MY_GPUImageOverlayBlendFilter
GPUImageVideoCamera -> MY_GPUImageOverlayBlendFilter
MY_GPUImageOverlayBlendFilter -> GPUImageView
And that worked exactly as I would expect. So why do I have to use imageFromCurrentlyProcessedOutput? Shouldn't that just happen inline? Here are the code snippets of the two scenarios above:
first one:
//Create the GPUPicture
UIImage *image = [UIImage imageNamed:@"someFile"];
GPUImagePicture *textureImage = [[[GPUImagePicture alloc] initWithImage:image] autorelease];
//Create the Opacity filter w/0.5 opacity
GPUImageOpacityFilter *opacityFilter = [[[GPUImageOpacityFilter alloc] init] autorelease];
opacityFilter.opacity = 0.5f;
[textureImage addTarget:opacityFilter];
//Create the blendFilter
GPUImageFilter *blendFilter = [[[GPUImageOverlayBlendFilter alloc] init] autorelease];
//Point the cameraDevice's output at the blendFilter
[self._videoCameraDevice addTarget:blendFilter];
//Point the opacityFilter's output at the blendFilter
[opacityFilter addTarget:blendFilter];
[textureImage processImage];
//Point the output of the blendFilter at our previewView
GPUImageView *filterView = (GPUImageView *)self.previewImageView;
[blendFilter addTarget:filterView];
second one:
//Create the GPUPicture
UIImage *image = [UIImage imageNamed:@"someFile"];
GPUImagePicture *textureImage = [[[GPUImagePicture alloc] initWithImage:image] autorelease];
//Create the Opacity filter w/0.5 opacity
GPUImageOpacityFilter *opacityFilter = [[[GPUImageOpacityFilter alloc] init] autorelease];
opacityFilter.opacity = 0.5f;
[textureImage addTarget:opacityFilter];
//Process the image so we get a UIImage with 0.5 opacity of the original
[textureImage processImage];
UIImage *processedImage = [opacityFilter imageFromCurrentlyProcessedOutput];
GPUImagePicture *processedTextureImage = [[[GPUImagePicture alloc] initWithImage:processedImage] autorelease];
//Create the blendFilter
GPUImageFilter *blendFilter = [[[GPUImageOverlayBlendFilter alloc] init] autorelease];
//Point the cameraDevice's output at the blendFilter
[self._videoCameraDevice addTarget:blendFilter];
//Point the processed image's output at the blendFilter
[processedTextureImage addTarget:blendFilter];
[processedTextureImage processImage];
//Point the output of the blendFilter at our previewView
GPUImageView *filterView = (GPUImageView *)self.previewImageView;
[blendFilter addTarget:filterView];
I want to draw a pie chart with some thickness. I have generated a simple 2D pie chart; is there any way to make it 3D using CALayer concepts and then rotate it in some direction?
-(void)CreatePieChart
{
    graph = [[CPXYGraph alloc] initWithFrame:CGRectZero];
    //CPGraphHostingView *hostingView = (CPGraphHostingView *)self.view;
    viewGraphHostingPie.hostedGraph = graph;

    CPPieChart *pieChart = [[CPPieChart alloc] init];
    pieChart.dataSource = self;
    pieChart.delegate = self;
    pieChart.pieRadius = 100.0;
    pieChart.identifier = @"PieChart1";
    pieChart.startAngle = 0;
    pieChart.sliceDirection = CPPieDirectionCounterClockwise;

    NSArray *valueArray = [NSArray arrayWithObjects:[NSNumber numberWithDouble:57.03],
                                                    [NSNumber numberWithDouble:66.00],
                                                    [NSNumber numberWithDouble:77.03], nil];
    self.pieData = valueArray;

    CPTheme *theme = [CPTheme themeNamed:kCPDarkGradientTheme];
    [graph applyTheme:theme];
    [graph addPlot:pieChart];
    [pieChart release];
}
It's not clear to me what you mean by "concepts of CALayer".
Have you considered using a free third party solution which will give you various charts, including pie charts, out of the box? Core Plot is free.
EDIT: Core Plot is 2D only, but you can produce a 3D effect by using shadows and/or overlay fill. See also the accepted answer here.
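Something like the following gives a faux-3D look (a rough sketch against the old CP-prefixed Core Plot API used in the question; exact class and selector names may differ in your Core Plot version):

// Shadow under the pie: CPPieChart is ultimately a CALayer, so the standard
// CALayer shadow properties can be used.
pieChart.shadowColor = [[UIColor blackColor] CGColor];
pieChart.shadowOffset = CGSizeMake(0.0f, 3.0f);
pieChart.shadowOpacity = 0.7f;

// Shade each slice with a gradient fill via the data source to suggest depth.
- (CPFill *)sliceFillForPieChart:(CPPieChart *)pieChart recordIndex:(NSUInteger)index
{
    CPGradient *gradient = [CPGradient gradientWithBeginningColor:[CPColor lightGrayColor]
                                                       endingColor:[CPColor darkGrayColor]];
    return [CPFill fillWithGradient:gradient];
}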
Is it possible to implement an animated sequence of images using CAAnimation? I don't want to use UIImageView...
I need something similar to UIImageView, where we can set imageView.animationImages and call startAnimating, but using CAAnimation.
CAKeyframeAnimation *animationSequence = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
animationSequence.calculationMode = kCAAnimationLinear;
animationSequence.autoreverses = YES;
animationSequence.duration = kDefaultAnimationDuration;
animationSequence.repeatCount = HUGE_VALF;

NSMutableArray *animationSequenceArray = [[NSMutableArray alloc] init];
for (UIImage *image in self.animationImages)
{
    [animationSequenceArray addObject:(id)image.CGImage];
}
animationSequence.values = animationSequenceArray;
[animationSequenceArray release];

[self.layer addAnimation:animationSequence forKey:@"contents"];
Some months ago I started an OpenGL project which has become larger and larger... I began with the CrashLanding sample and I use Texture2D.
I also use a singleton class to load my textures, and here is what the texture loading looks like:
//Load the background texture and configure it
_textures[kTexture_Background] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"fond.png"]];
glBindTexture(GL_TEXTURE_2D, [_textures[kTexture_Background] name]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Load the textures
_textures[kTexture_Batiment] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"batiment_Ext.png"]];
_textures[kTexture_Balcon] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"balcon.png"]];
_textures[kTexture_Devanture] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"devanture.png"]];
_textures[kTexture_Cactus_Troncs] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"cactus-troncs.png"]];
_textures[kTexture_Cactus_Gauche] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"cactus1.png"]];
_textures[kTexture_Cactus_Droit] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"cactus2.png"]];
_textures[kTexture_Pierre] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"pierre.png"]];
_textures[kTexture_Enseigne] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"enseigne.png"]];
_textures[kTexture_Menu] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"saloonIntro.jpg"]];
_textures[kTexture_GameOver] = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"gameOver.jpg"]];

// Fill the array with placeholders, then replace each one with the matching texture
for (int i = 0; i < kNumTexturesScene; i++)
{
    [arrayOfText addObject:[[[NSData alloc] init] autorelease]];
}
for (int i = 0; i < kNumTexturesScene; i++)
{
    [arrayOfText replaceObjectAtIndex:i withObject:_textures[i]];
}

[dictionaryOfTexture setObject:[arrayOfText copy] forKey:kTextureDecor];
[arrayOfText removeAllObjects];
and so on for almost 50 pictures.
It works well on the 3GS, but there are sometimes issues on the 3G.
Am I doing something wrong with all of this?
Thanks
Things that you need to consider:
The iPhone works only with textures whose dimensions are a power of 2 -- if your textures don't have such dimensions and they still work, it means they are being resized in software, which takes time and, more importantly, valuable texture memory.
The iPhone has video memory for only about 3 textures of 1024x1024 size -- you may be running out of texture memory.
Switching textures during rendering is slow... extremely slow. The less you switch textures the better; ideally, create at most 3 textures and switch them at most 3 times.
To achieve that you need to learn a technique called texture atlasing (see the sketch below).
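To give an idea of what atlasing involves, here is a minimal, hypothetical sketch (none of these names come from Texture2D or the question): all sprites are packed into one large texture, and each draw uses a sub-rectangle of it expressed in normalized texture coordinates, so the bound texture never has to change.

// Hypothetical helper: compute the normalized UV rectangle of one sprite
// packed inside a single atlas texture.
typedef struct {
    float u0, v0;   // top-left corner in texture space (0..1)
    float u1, v1;   // bottom-right corner in texture space (0..1)
} AtlasRegion;

static AtlasRegion AtlasRegionMake(float x, float y, float w, float h,
                                   float atlasWidth, float atlasHeight)
{
    AtlasRegion r;
    r.u0 = x / atlasWidth;
    r.v0 = y / atlasHeight;
    r.u1 = (x + w) / atlasWidth;
    r.v1 = (y + h) / atlasHeight;
    return r;
}

// Usage: bind the atlas once, then draw every sprite with its own UV region.
// AtlasRegion balcon = AtlasRegionMake(0.0f, 0.0f, 128.0f, 64.0f, 1024.0f, 1024.0f);
// GLfloat texCoords[] = { balcon.u0, balcon.v1,   balcon.u1, balcon.v1,
//                         balcon.u0, balcon.v0,   balcon.u1, balcon.v0 };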
You might be running out of texture memory, which isn't surprising if you consider that the class you're using probably pads your images out to power-of-2 dimensions. You could use the texture memory better by combining your sprites into a so-called sprite atlas.