I'm using GPUImage to build a photography app, but when I select the front camera, the preview picture appears mirrored (left and right are reversed).
Here is my code:
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
Can anyone tell me what the problem is?
Thanks very much!
Try this:
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
stillCamera.horizontallyMirrorFrontFacingCamera = NO;
stillCamera.horizontallyMirrorRearFacingCamera = NO;
filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
Try this:
[filterView setInputRotation:kGPUImageFlipHorizonal atIndex:0];
I think you can easily flip the final image instead:
UIImage *finalImage = ...; // the image from the camera
UIImage * flippedImage = [UIImage imageWithCGImage:finalImage.CGImage scale:finalImage.scale orientation:UIImageOrientationLeftMirrored];
Just set:
stillCamera.horizontallyMirrorFrontFacingCamera = YES;
and that will fix the problem.
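Putting that together with the snippet from the question, a front-camera setup would look something like this (a sketch based on the question's code; the only changes are the mirroring flag from the answer above and AVCaptureDevicePositionFront, since the question is about the front camera):
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionFront];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
// Per the answer above, mirroring the front-facing camera removes the left/right reversal in the preview
stillCamera.horizontallyMirrorFrontFacingCamera = YES;
filter = [[GPUImageRGBFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];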
I'm trying to do an Overlay Blend of a stock image with the output of the camera feed where the stock image has less than 100% opacity. I figured I could just place a GPUImageOpacityFilter in the filter stack and everything would be fine:
GPUImageVideoCamera -> MY_GPUImageOverlayBlendFilter
GPUImagePicture -> GPUImageOpacityFilter (Opacity 0.1f) -> MY_GPUImageOverlayBlendFilter
MY_GPUImageOverlayBlendFilter -> GPUImageView
But the result wasn't a 0.1f-alpha version of the GPUImagePicture blended into the GPUImageVideoCamera output; it instead somewhat softened the colors/contrast of the GPUImagePicture and blended that. So I did some searching, and on a suggestion I tried getting a UIImage out of the GPUImageOpacityFilter using imageFromCurrentlyProcessedOutput and sending that into the blend filter:
GPUImagePicture -> MY_GPUImageOpacityFilter (Opacity 0.1f)
[MY_GPUImageOpacityFilter imageFromCurrentlyProcessedOutput]-> MY_alphaedImage
GPUImagePicture (MY_alphaedImage) -> MY_GPUImageOverlayBlendFilter
GPUImageVideoCamera -> MY_GPUImageOverlayBlendFilter
MY_GPUImageOverlayBlendFilter -> GPUImageView
And that worked exactly as I would expect. So why do I have to use imageFromCurrentlyProcessedOutput; shouldn't that just happen inline? Here are the code snippets for the two scenarios above:
first one:
//Create the GPUPicture
UIImage *image = [UIImage imageNamed:@"someFile"];
GPUImagePicture *textureImage = [[[GPUImagePicture alloc] initWithImage:image] autorelease];
//Create the Opacity filter w/0.5 opacity
GPUImageOpacityFilter *opacityFilter = [[[GPUImageOpacityFilter alloc] init] autorelease];
opacityFilter.opacity = 0.5f;
[textureImage addTarget:opacityFilter];
//Create the blendFilter
GPUImageFilter *blendFilter = [[[GPUImageOverlayBlendFilter alloc] init] autorelease];
//Point the cameraDevice's output at the blendFilter
[self._videoCameraDevice addTarget:blendFilter];
//Point the opacityFilter's output at the blendFilter
[opacityFilter addTarget:blendFilter];
[textureImage processImage];
//Point the output of the blendFilter at our previewView
GPUImageView *filterView = (GPUImageView *)self.previewImageView;
[blendFilter addTarget:filterView];
second one:
//Create the GPUPicture
UIImage *image = [UIImage imageNamed:@"someFile"];
GPUImagePicture *textureImage = [[[GPUImagePicture alloc] initWithImage:image] autorelease];
//Create the Opacity filter w/0.5 opacity
GPUImageOpacityFilter *opacityFilter = [[[GPUImageOpacityFilter alloc] init] autorelease];
opacityFilter.opacity = 0.5f;
[textureImage addTarget:opacityFilter];
//Process the image so we get a UIImage with 0.5 opacity of the original
[textureImage processImage];
UIImage *processedImage = [opacityFilter imageFromCurrentlyProcessedOutput];
GPUImagePicture *processedTextureImage = [[[GPUImagePicture alloc] initWithImage:processedImage] autorelease];
//Create the blendFilter
GPUImageFilter *blendFilter = [[[GPUImageOverlayBlendFilter alloc] init] autorelease];
//Point the cameraDevice's output at the blendFilter
[self._videoCameraDevice addTarget:blendFilter];
//Point the opacityFilter's output at the blendFilter
[processedTextureImage addTarget:blendFilter];
[processedTextureImage processImage];
//Point the output of the blendFilter at our previewView
GPUImageView *filterView = (GPUImageView *)self.previewImageView;
[blendFilter addTarget:filterView];
I was using the GPUImage framework (some old version) to blend two images (adding a border overlay to an image).
After updating to the latest framework version, applying such a blend gives me an empty black image.
I'm using the following method:
- (void)addBorder {
    if (currentBorder != kBorderInitialValue) {
        GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
        GPUImagePicture *imageToProcess = [[GPUImagePicture alloc] initWithImage:self.imageToWorkWithView.image];
        GPUImagePicture *border = [[GPUImagePicture alloc] initWithImage:self.imageBorder];

        blendFilter.mix = 1.0f;

        [imageToProcess addTarget:blendFilter];
        [border addTarget:blendFilter];

        [imageToProcess processImage];
        self.imageToWorkWithView.image = [blendFilter imageFromCurrentlyProcessedOutput];

        [blendFilter release];
        [imageToProcess release];
        [border release];
    }
}
What is the problem?
You're forgetting to process the border image. After [imageToProcess processImage], add the line:
[border processImage];
For two images being input into a blend, you have to call -processImage on both after they have been added to the blend filter. I changed the way the blend filter works in order to fix some bugs, and this is how you need to do things now.
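Applied to the addBorder method above, the fix is just one extra call; the relevant part would look like this (a sketch of the corrected ordering, keeping the original names):
[imageToProcess addTarget:blendFilter];
[border addTarget:blendFilter];
// Both inputs to the blend must be processed after being added to the blend filter
[imageToProcess processImage];
[border processImage];
self.imageToWorkWithView.image = [blendFilter imageFromCurrentlyProcessedOutput];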
This is the code I'm currently using for merging two images with GPUImageAlphaBlendFilter.
GPUImagePicture *mainPicture = [[GPUImagePicture alloc] initWithImage:image];
GPUImagePicture *topPicture = [[GPUImagePicture alloc] initWithImage:blurredImage];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter setMix:0.5];
[mainPicture addTarget:blendFilter];
[topPicture addTarget:blendFilter];
[blendFilter useNextFrameForImageCapture];
[mainPicture processImage];
[topPicture processImage];
UIImage * mergedImage = [blendFilter imageFromCurrentFramebuffer];
Is it possible to implement an animated sequence of images using CAAnimation? I don't want to use UIImageView.
I need something similar to UIImageView, where we can set imageView.animationImages and call startAnimating, but using CAAnimation.
CAKeyframeAnimation *animationSequence = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
animationSequence.calculationMode = kCAAnimationLinear;
animationSequence.autoreverses = YES;
animationSequence.duration = kDefaultAnimationDuration;
animationSequence.repeatCount = HUGE_VALF;
NSMutableArray *animationSequenceArray = [[NSMutableArray alloc] init];
for (UIImage *image in self.animationImages)
{
[animationSequenceArray addObject:(id)image.CGImage];
}
animationSequence.values = animationSequenceArray;
[animationSequenceArray release];
[self.layer addAnimation:animationSequence forKey:@"contents"];
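Note that this snippet assumes it lives in a UIView subclass (so self.layer is the view's backing layer), that self.animationImages is already populated, and that kDefaultAnimationDuration is defined elsewhere. A hypothetical setup, with the image names made up for illustration, might be:
// Hypothetical frame images; the names are placeholders, not from the original post
self.animationImages = [NSArray arrayWithObjects:
                        [UIImage imageNamed:@"frame1.png"],
                        [UIImage imageNamed:@"frame2.png"],
                        [UIImage imageNamed:@"frame3.png"], nil];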
I've got an image which contains multiple layers. The user can set a picture as a layer. However, when there isn't a picture available, I'd like to show a replacement picture.
I think I have two options:
Check whether the image (the taken photo) exists in my documents directory; if not, place the other image.
Place the other image (photo_icon.png) behind the UIImageView (image1), so it becomes visible when a photo hasn't been taken.
Here's a snippet of my code so far.
NSString *docDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pngFilePath = [NSString stringWithFormat:@"%@/photo1.png", docDir];
UIImage *img = [[UIImage alloc] initWithContentsOfFile:pngFilePath];
[image1 setImage:img];
image1.layer.cornerRadius = 15.0;
image1.layer.masksToBounds = YES;
CALayer *sublayer = [CALayer layer];
image1.layer.borderColor = [UIColor blackColor].CGColor;
image1.layer.borderWidth = 3.5;
[image1.layer addSublayer:sublayer];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = image1.bounds;
imageLayer.cornerRadius = 10.0;
imageLayer.contents = (id)[UIImage imageNamed:@"Upperlayer.png"].CGImage;
imageLayer.masksToBounds = YES;
[sublayer addSublayer:imageLayer];
To replace image1 with the image that's needed, I use:
CALayer *imageBackLayer = [CALayer layer];
imageBackLayer.frame = image1.bounds;
imageBackLayer.cornerRadius = 10.0;
imageBackLayer.contents = (id)[UIImage imageNamed:@"photo_icon.png"].CGImage;
imageBackLayer.masksToBounds = NO;
[sublayer insertSublayer:imageBackLayer below:image1.layer];
Thanks in advance!
You can use the layer's opacity to show or hide the image.
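For example, a minimal sketch of the first option, reusing the file path and the imageBackLayer placeholder layer from the question, could check whether the photo exists and toggle the placeholder's opacity accordingly:
NSString *docDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pngFilePath = [NSString stringWithFormat:@"%@/photo1.png", docDir];
BOOL photoExists = [[NSFileManager defaultManager] fileExistsAtPath:pngFilePath];
// Show the photo_icon.png placeholder only while no photo has been taken yet
imageBackLayer.opacity = photoExists ? 0.0f : 1.0f;
if (photoExists) {
    [image1 setImage:[UIImage imageWithContentsOfFile:pngFilePath]];
}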
I am using the Route-Me library for the iPhone. My problem is that I want to draw a path on the map.
For example, if I am in Dallas and I want to go to New York, I will put markers on those two places and a path should be drawn between the two markers.
Can anybody suggest how this can be done?
If a map library other than Route-Me works better, that's OK too.
The following code will draw a path between two points (in your case, you will have to add all the route points):
// Set map view center coordinate
CLLocationCoordinate2D center;
center.latitude = 47.582;
center.longitude = -122.333;
slideLocation = center;
[mapView.contents moveToLatLong:center];
[mapView.contents setZoom:17.0f];
// Add 2 markers(start/end) and RMPath with 2 points
RMMarker *newMarker;
UIImage *startImage = [UIImage imageNamed:@"marker-blue.png"];
UIImage *finishImage = [UIImage imageNamed:@"marker-red.png"];
UIColor* routeColor = [[UIColor alloc] initWithRed:(27.0 /255) green:(88.0 /255) blue:(156.0 /255) alpha:0.75];
RMPath* routePath = [[RMPath alloc] initWithContents:mapView.contents];
[routePath setLineColor:routeColor];
[routePath setFillColor:routeColor];
[routePath setLineWidth:10.0f];
[routePath setDrawingMode:kCGPathStroke];
CLLocationCoordinate2D newLocation;
newLocation.latitude = 47.580;
newLocation.longitude = -122.333;
[routePath addLineToLatLong:newLocation];
newLocation.latitude = 47.599;
newLocation.longitude = -122.333;
[routePath addLineToLatLong:newLocation];
[[mapView.contents overlay] addSublayer:routePath];
newLocation.latitude = 47.580;
newLocation.longitude = -122.333;
newMarker = [[RMMarker alloc] initWithUIImage:startImage anchorPoint:CGPointMake(0.5, 1.0)];
[mapView.contents.markerManager addMarker:newMarker AtLatLong:newLocation];
[newMarker release];
newMarker = nil;
newLocation.latitude = 47.599;
newLocation.longitude = -122.333;
newMarker = [[RMMarker alloc] initWithUIImage:finishImage anchorPoint:CGPointMake(0.5, 1.0)];
[mapView.contents.markerManager addMarker:newMarker AtLatLong:newLocation];
[newMarker release];
newMarker = nil;
Route-me has an RMPath class for this purpose. Have you played with it? If so, what did you do, what did and what didn't work?
See also the question "Route-Me: how to draw a path on the map"; it's not exactly this, but close.
Use RMPath to draw a polygon on the overlay layer:
// polygonArray is an NSMutableArray of CLLocation objects
- (RMPath *)addLayerForPolygon:(NSMutableArray *)polygonArray toMap:(RMMapView *)map {
    RMPath *polygonPath = [[[RMPath alloc] initForMap:map] autorelease];
    [polygonPath setLineColor:[UIColor colorWithRed:1 green:0 blue:0 alpha:0.5]];
    [polygonPath setFillColor:[UIColor colorWithRed:1 green:0 blue:0 alpha:0.5]];
    [polygonPath setLineWidth:1];

    BOOL firstPoint = YES;
    for (CLLocation *loc in polygonArray) {
        if (firstPoint) {
            [polygonPath moveToLatLong:loc.coordinate];
            firstPoint = NO;
        } else {
            [polygonPath addLineToLatLong:loc.coordinate];
        }
    }
    [polygonPath closePath];

    polygonPath.zPosition = -2.0f;
    NSMutableArray *sublayers = [[[[map contents] overlay] sublayers] mutableCopy];
    [sublayers insertObject:polygonPath atIndex:0];
    [[[map contents] overlay] setSublayers:sublayers];
    return polygonPath;
}
Note:
The latest RouteMe has addLineToCoordinate:(CLLocationCoordinate2D)coordinate instead of addLineToLatLong.
This is a newer approach, found in the MapTestBed example in Route-Me:
- (RMMapLayer *)mapView:(RMMapView *)aMapView layerForAnnotation:(RMAnnotation *)annotation
{
    if ([annotation.annotationType isEqualToString:@"path"]) {
        RMPath *testPath = [[[RMPath alloc] initWithView:aMapView] autorelease];
        [testPath setLineColor:[annotation.userInfo objectForKey:@"lineColor"]];
        [testPath setFillColor:[annotation.userInfo objectForKey:@"fillColor"]];
        [testPath setLineWidth:[[annotation.userInfo objectForKey:@"lineWidth"] floatValue]];

        CGPathDrawingMode drawingMode = kCGPathStroke;
        if ([annotation.userInfo objectForKey:@"pathDrawingMode"])
            drawingMode = [[annotation.userInfo objectForKey:@"pathDrawingMode"] intValue];
        [testPath setDrawingMode:drawingMode];

        if ([[annotation.userInfo objectForKey:@"closePath"] boolValue])
            [testPath closePath];

        for (CLLocation *location in [annotation.userInfo objectForKey:@"linePoints"]) {
            [testPath addLineToCoordinate:location.coordinate];
        }
        return testPath;
    }

    if ([annotation.annotationType isEqualToString:@"marker"]) {
        return [[[RMMarker alloc] initWithUIImage:annotation.annotationIcon anchorPoint:annotation.anchorPoint] autorelease];
    }

    return nil;
}
Sorry, I've not seen anything like this. I would only note that if people are voting you down because they think you are talking about the new SDK mapping features, they should read your question again: you are talking about a third-party library (still possibly a valid need, since you can't do turn-by-turn directions with the official map SDK).