My goal is to use the Google Street View API to display a full-fledged, scrollable panoramic Street View image to the user. The API provides me with many images where I can vary the direction, pitch, zoom, location, etc. I can retrieve all of these and hope to stitch them together and view the result. The first question is: do you know of any resources that demonstrate a full Google Street View panorama working, where a user can swipe around to move the view, just like in that old iOS 5 Maps Street View that I am sure we all miss...
If not, I will basically be downloading hundreds of photos that differ in vertical and horizontal direction. Is there a library, API, resource, or method I can use to stitch all these photos together into a big panorama, and let the user swipe around to view that big panorama on the tiny iPhone screen?
Thanks to everyone!
I threw together a quick implementation to do a lot of this as a demo for you. There are some excellent open source libraries out there that make an amateur version of StreetView very simple. You can check out my demo on GitHub: https://github.com/ocrickard/StreetViewDemo
You can use the heading and pitch parameters from the Google StreetView API to generate tiles. These tiles can be arranged in a UIScrollView, as both Bilal and Geraud.ch suggest. However, I really like JCTiledScrollView because it contains a pretty nice annotation system for adding pins on top of the images like Google does, and its datasource/delegate structure makes for very straightforward image handling.
The meaty parts of my implementation follow:
- (UIImage *)tiledScrollView:(JCTiledScrollView *)scrollView imageForRow:(NSInteger)row column:(NSInteger)column scale:(NSInteger)scale
{
    float fov = 45.f / scale;
    float heading = fmodf(column*fov, 360.f);
    float pitch = (scale - row)*fov;

    if (lastRequestDate) {
        // Crude throttle: spin until at least 0.1 s has passed since the last request
        while (fabsf([lastRequestDate timeIntervalSinceNow]) < 0.1f) {
        }
    }
    lastRequestDate = [NSDate date];

    int resolution = (scale > 1.f) ? 640 : 200;

    NSString *path = [NSString stringWithFormat:@"http://maps.googleapis.com/maps/api/streetview?size=%dx%d&location=40.720032,-73.988354&fov=%f&heading=%f&pitch=%f&sensor=false", resolution, resolution, fov, heading, pitch];

    NSError *error = nil;
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:path] options:0 error:&error];

    if (error) {
        NSLog(@"Error downloading image: %@", error);
    }

    UIImage *image = [UIImage imageWithData:data];

    //Distort image using GPUImage
    {
        //This is where you should try to transform the image. I messed around
        //with the math for a while, and couldn't get it. Therefore, this is left
        //as an exercise for the reader... :)

        /*
        GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
        GPUImageTransformFilter *stillImageFilter = [[GPUImageTransformFilter alloc] init];
        [stillImageFilter forceProcessingAtSize:image.size];

        //This is actually based on some math, but doesn't work...
        //float xOffset = 200.f;
        //CATransform3D transform = [ViewController rectToQuad:CGRectMake(0, 0, image.size.width, image.size.height) quadTLX:-xOffset quadTLY:0 quadTRX:(image.size.width+xOffset) quadTRY:0.f quadBLX:0.f quadBLY:image.size.height quadBRX:image.size.width quadBRY:image.size.height];
        //[(GPUImageTransformFilter *)stillImageFilter setTransform3D:transform];

        //This is me playing guess and check...
        CATransform3D transform = CATransform3DIdentity;
        transform.m34 = fabsf(pitch) / 60.f * 0.3f;
        transform = CATransform3DRotate(transform, pitch*M_PI/180.f, 1.f, 0.f, 0.f);
        transform = CATransform3DScale(transform, 1.f/cosf(pitch*M_PI/180.f), sinf(pitch*M_PI/180.f) + 1.f, 1.f);
        transform = CATransform3DTranslate(transform, 0.f, 0.1f * sinf(pitch*M_PI/180.f), 0.f);

        [stillImageFilter setTransform3D:transform];
        [stillImageSource addTarget:stillImageFilter];
        [stillImageFilter prepareForImageCapture];
        [stillImageSource processImage];
        image = [stillImageFilter imageFromCurrentlyProcessedOutput];
        */
    }

    return image;
}
Now, in order to get the full 360 degree, infinite scrolling effect Google has, you would have to do some trickery in the observeValueForKeyPath method where you observe the contentOffset of the UIScrollView. I've started implementing this, but did not finish it. The idea is that when the user reaches either the left or right side of the view, the contentOffset property of the scrollView is pushed to the opposite side of the scrollView. If you can get the content to align properly, and you set up the contentSize just right, this should work.
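In case it helps, here is a rough, untested sketch of that wrap-around idea (the extra-page padding and the exact math are assumptions that depend on how your tiles are laid out):

// Rough sketch of the wrap-around idea. Assumes one extra "page" of duplicated
// tiles at each horizontal end of the content, so jumping by contentWidth
// (the width of the real, non-duplicated content) lands on identical imagery.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if (![keyPath isEqualToString:@"contentOffset"]) return;

    UIScrollView *scrollView = object;
    CGFloat pageWidth = scrollView.bounds.size.width;
    CGFloat contentWidth = scrollView.contentSize.width - 2.f * pageWidth;
    CGPoint offset = scrollView.contentOffset;

    if (offset.x <= 0.f) {
        // Ran off the left edge: jump to the equivalent spot near the right
        offset.x += contentWidth;
        scrollView.contentOffset = offset;
    } else if (offset.x >= scrollView.contentSize.width - pageWidth) {
        // Ran off the right edge: jump back toward the left
        offset.x -= contentWidth;
        scrollView.contentOffset = offset;
    }
}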
Finally, I should note that the Google StreetView system has a limit of 10 images/second, so you have to throttle your requests or the IP address of the device will be blacklisted for a certain amount of time (my home internet is now blacked out from StreetView requests for the next few hours 'cause I didn't understand this at first).
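As an aside, a slightly gentler way to enforce that gap than the busy-wait above might look like this sketch (it still blocks the calling thread, which is tolerable here since the tile request itself is synchronous):

// Sketch of a simple throttle: sleep out the remainder of the 0.1 s window
// instead of spinning. lastRequestDate is the same ivar used above.
if (lastRequestDate) {
    NSTimeInterval elapsed = -[lastRequestDate timeIntervalSinceNow];
    if (elapsed < 0.1) {
        [NSThread sleepForTimeInterval:0.1 - elapsed];
    }
}
lastRequestDate = [NSDate date];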
You need to use a UIScrollView, set its clipsToBounds property to YES, add all the images to the scroll view, and set the scroll view's contentOffset according to the images.
You should use the method described in this post: http://mobiledevelopertips.com/user-interface/creating-circular-and-infinite-uiscrollviews.html
You will have to build some kind of factory to resize the images for your viewport (iPhone/iPad), and then add some buttons you can tap to go to the next place.
Unfortunately, if you want to go to a globe version (instead of a tube one), I think you'll need to go full OpenGL to display the images on that 3D surface.
Related
I have a simple rotation gesture implemented in my code, but the problem is that when I rotate the image it always goes off the screen/out of the view to the right.
The center X of the image view being rotated drifts off (increases), which is why it moves off the screen to the right.
I would like it to rotate around its current center, but the center is changing for some reason. Any ideas what is causing this?
Code Below:
- (void)viewDidLoad
{
    [super viewDidLoad];

    CALayer *l = [self.viewCase layer];
    [l setMasksToBounds:YES];
    [l setCornerRadius:30.0];

    self.imgUserPhoto.userInteractionEnabled = YES;
    [self.imgUserPhoto setClipsToBounds:NO];

    UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationDetected:)];
    [self.view addGestureRecognizer:rotationRecognizer];
    rotationRecognizer.delegate = self;
}

- (void)rotationDetected:(UIRotationGestureRecognizer *)rotationRecognizer
{
    CGFloat angle = rotationRecognizer.rotation;
    self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, angle);
    rotationRecognizer.rotation = 0.0;
}
You want to rotate the image around its center, but that's not what is actually happening. Rotation transforms take place around the origin, so you have to apply a translate transform first to map the origin to the center of the image, and then apply the rotation transform, like so:
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, self.imageView.bounds.size.width/2, self.imageView.bounds.size.height/2);
Please note that after rotating you'll probably have to undo the translate transform in order to correctly draw the image.
Hope this helps
Edit:
To quickly answer your question: to undo the translate transform you subtract the same offset you added in the first place, for example:
// The next line will add a translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, 10, 10);
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, radians);
// The next line will undo the translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, -10, -10);
However, after creating this quick project I realized that when you apply a rotation transform using UIKit (like the way you're apparently doing it) the rotation actually takes place around the center. It is only when using CoreGraphics that the rotation happens around the origin. So now I'm not sure why your image goes off the screen. Anyway, take a look at the project and see if any code there helps you.
Let me know if you have any more questions.
The 'Firefox' image is drawn using UIKit. The blue rect is drawn using CoreGraphics
You aren't rotating the image around its centre. You'll need to correct this manually by translating it back to the correct position.
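If you do end up correcting it manually, the usual pattern is translate, rotate, translate back; here is a rough sketch, reusing the angle from the gesture handler above:

// Sketch: rotate around the view's center by moving the origin to the
// center, rotating, then moving the origin back.
CGFloat cx = self.imageView.bounds.size.width / 2.f;
CGFloat cy = self.imageView.bounds.size.height / 2.f;

CGAffineTransform t = self.imageView.transform;
t = CGAffineTransformTranslate(t, cx, cy);     // move origin to the center
t = CGAffineTransformRotate(t, angle);         // rotate about that point
t = CGAffineTransformTranslate(t, -cx, -cy);   // move the origin back
self.imageView.transform = t;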
I want to change the color where the user touches the image. I found some code to get the image data, which is below:
NSString *path = [[NSBundle mainBundle] pathForResource:@"filename" ofType:@"jpg"];
UIImage *img = [[UIImage alloc] initWithContentsOfFile:path];
CGImageRef image = [img CGImage];
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
const unsigned char *buffer = CFDataGetBytePtr(data);
I know I can easily get the touch point, but my questions are below:
As we know, on a Retina display 1 point = 2 pixels. So do I need to change the colour of 2 pixels for a single touch point? Please correct me if I am wrong anywhere.
How do I get these two pixels from the image data?
Add a gesture recognizer to the UIImageView that presents the image. When that recognizer is triggered, the location you care about will be...
// self.imageView is where you attached the recognizer. This == gestureRecognizer.view
CGPoint imageLocation = [gestureRecognizer locationInView:self.imageView];
Resolving this location to a pixel location device independently can be done by determining the scale factor of the image.
To get the image location, apply that scale factor to the gesture location...
CGPoint pixel = CGPointMake(imageLocation.x * image.scale, imageLocation.y * image.scale);
This should be the correct coordinate for accessing the image. The remaining step is to get the pixel data. This post provides a reasonable-looking way to do that. (Also haven't tried this personally).
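For reference, here is a minimal sketch of reading the bytes at that pixel; it assumes an 8-bit-per-channel RGBA bitmap, so check the bitmap info and bytes-per-row of your actual image before relying on it:

// Sketch: read the RGBA components at a single pixel of a CGImage.
// Assumes 4 bytes per pixel in RGBA order; verify with CGImageGetBitmapInfo.
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

NSUInteger bytesPerRow = CGImageGetBytesPerRow(cgImage);
NSUInteger bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;
NSUInteger offset = (NSUInteger)pixel.y * bytesPerRow + (NSUInteger)pixel.x * bytesPerPixel;

UInt8 red   = bytes[offset];
UInt8 green = bytes[offset + 1];
UInt8 blue  = bytes[offset + 2];
UInt8 alpha = bytes[offset + 3];

CFRelease(pixelData);
NSLog(@"r=%d g=%d b=%d a=%d", red, green, blue, alpha);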
So I've got a basic drawing app in the works that allows me to draw lines. I draw to an off-screen bitmap and then present the image in drawRect. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I took the code and adapted it from this tutorial: http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu . As you can see in the comments, people are also saying it's too slow, but the guy hasn't responded.
So how can I speed it up? Or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m:
-(id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setUpBuffer];
    }
    return self;
}

-(void)setUpBuffer {
    CGContextRelease(offscreenBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width*4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
    CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}

-(void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
    CGContextBeginPath(offscreenBuffer);
    CGContextMoveToPoint(offscreenBuffer, coordA.x, coordA.y);
    CGContextAddLineToPoint(offscreenBuffer, coordB.x, coordB.y);
    CGContextSetLineWidth(offscreenBuffer, thickness);
    CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
    CGContextStrokePath(offscreenBuffer);
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);

    [image drawInRect:self.bounds];
}
Works perfectly on the simulator but not on the device; I imagine that has something to do with processor speed.
I'm using ARC.
I tried to fix your code, however as you only seem to have posted half of it I couldn't get it working (copy-pasting the code results in lots of errors, let alone being able to start performance-tuning it).
However there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means only the little rect that changed gets redrawn. For an iPad 3 with 1024*768*4 pixels, a full redraw is a lot of work; reducing that to roughly 20*20 pixels or less per frame will massively improve performance.
CGRect rect;
rect.origin.x = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;
[self setNeedsDisplayInRect:rect];
Another big improvement you could make is to draw only the CGPath for the current touch (which you do). However, you then draw that saved/cached image in drawRect, so, again, it is redrawn every frame. A better approach is to make the draw view transparent and put a UIImageView behind it; UIImageView is the best way to display images on iOS.
- DrawView (1 finger)
-drawRect:
- BackgroundView (the image of the old touches)
-self.image
The draw view itself would then only ever draw the current touch, i.e. only the part that changes each time. When the user lifts their finger, you can cache that stroke to a UIImage, draw it over the current background/cache UIImageView's image, and set the imageView.image to the new image.
That final bit, combining the images, involves drawing two full-screen images into an off-screen CGContext and so will cause lag if done on the main thread; instead, it should be done on a background thread and the result pushed back to the main thread.
* touch starts *
- DrawView : draw current touch
* touch ends *
- 'background thread' : combine backgroundView.image and DrawView.drawRect
* thread finished *
send resulting UIImage to main queue and set backgroundView.image to it;
Clear DrawView's current path that is now in the cache;
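A sketch of that combine step might look like this (backgroundView and currentStrokeImage are names I'm assuming here; the point is just that the flattening happens off the main thread and only the result touches UIKit):

// Sketch: flatten the cached background image and the just-finished stroke
// into one UIImage on a background queue, then hand the result back to the
// main queue.
- (void)commitCurrentStroke
{
    UIImage *oldImage = self.backgroundView.image;
    UIImage *strokeImage = self.currentStrokeImage; // rendered from the DrawView's path
    CGRect bounds = self.bounds;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0.f);
        [oldImage drawInRect:bounds];
        [strokeImage drawInRect:bounds];
        UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            self.backgroundView.image = combined;
            // ...clear the DrawView's current path here, now that it's baked in...
        });
    });
}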
All of this combined can make a very smooth 60 fps drawing app. However, views are not updated as quickly as we'd like, so the drawing looks jagged when you move your finger faster. This can be improved by using UIBezierPaths instead of CGPaths:
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint);
-[UIBezierPath addQuadCurveToPoint:mid controlPoint:lastPoint];
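Fleshed out a little, the midpoint-smoothing idea could look like this sketch (path is assumed to be a UIBezierPath ivar, and midPoint is the obvious helper):

// Sketch: smooth the stroke by curving through the midpoints of successive
// touch samples, using the previous touch location as the control point.
static CGPoint midPoint(CGPoint a, CGPoint b) {
    return CGPointMake((a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint lastPoint = [touch previousLocationInView:self];
    CGPoint mid = midPoint(currentPoint, lastPoint);

    [path addQuadCurveToPoint:mid controlPoint:lastPoint];
    // In a real app, invalidate only a tight rect around the new segment
    [self setNeedsDisplayInRect:self.bounds];
}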
The reason it is slow is that every frame you are creating a bitmap and trying to draw it.
You asked for better ways of doing it? Have you looked at the Apple sample code for a drawing app on iOS? If you don't like that, you can always use cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, I suppose you should consider using a background thread for all the hard work of image rendering and the main thread for UI updates only, i.e.:
__block UIImage *__imageBuffer = nil;

- (UIImage *)drawSomeImage
{
    UIGraphicsBeginImageContext(self.bounds.size);

    // draw image with CoreGraphics

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // prepare image on background thread
        __imageBuffer = [self drawSomeImage];

        dispatch_async(dispatch_get_main_queue(), ^{
            // calling drawRect with prepared image
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    // draw image buffer on current context
    [__imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better, switch to UIImageView. That way you can get rid of the performance-critical -drawRect: method entirely and just update the image property of the UIImageView when the image is ready.
Well, I think you need to change your logic. You may get some very good ideas from this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
And if you feel you don't have time to work through it, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use CGLayer for caching your paths; read the docs, it's the best for optimization.
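For what it's worth, a minimal sketch of the CGLayer idea (cacheLayer, pendingStart and pendingEnd are assumed ivars; the layer is created lazily from the current context and only the newest segment is stroked into it each pass):

// Sketch: cache strokes in a CGLayer and blit it in drawRect, instead of
// replaying every path on each redraw.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    if (cacheLayer == NULL) {
        // Create the layer lazily, sized to the view, from the current context
        cacheLayer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
    }

    // Stroke only the newest segment into the layer; everything older is
    // already baked in from previous passes.
    CGContextRef layerCtx = CGLayerGetContext(cacheLayer);
    CGContextSetLineCap(layerCtx, kCGLineCapRound);
    CGContextSetLineWidth(layerCtx, 3.f);
    CGContextSetStrokeColorWithColor(layerCtx, [UIColor blackColor].CGColor);
    CGContextMoveToPoint(layerCtx, pendingStart.x, pendingStart.y);
    CGContextAddLineToPoint(layerCtx, pendingEnd.x, pendingEnd.y);
    CGContextStrokePath(layerCtx);

    // Blit the cached layer; this is cheap compared to re-stroking every path
    CGContextDrawLayerInRect(ctx, self.bounds, cacheLayer);
}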
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code. After all, when you touch the screen and draw something, you need to re-draw the entire image, which is a very heavy operation. If you like the way Pixelate is moving, here's how I did it:
1) Split my image into n x m tiles. That way I could change those values and obtain bigger/smaller tiles. In the worst-case scenario (the user taps at the intersection of 4 tiles) you have to re-draw those 4 tiles, not the entire image.
2) Made a 3-dimensional matrix in which I stored the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value, depending on whether you are using PNGs or JPGs) of the first pixel of the first tile.
3) Got the location the user pressed and calculated the tiles that need to be modified (a rough sketch of this follows below).
4) Modified the values in the matrix and updated the tiles that need updating.
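To give an idea of steps 3 and 4, here is a rough sketch of mapping a touch to the affected tiles (tilesPerRow, tilesPerColumn, brushRadius and the update helper are all assumptions):

// Sketch: work out which tiles a touch (with some brush radius) overlaps,
// assuming the image is split into tilesPerRow x tilesPerColumn tiles.
CGFloat tileWidth  = imageSize.width  / tilesPerRow;
CGFloat tileHeight = imageSize.height / tilesPerColumn;

NSInteger minCol = MAX(0, (NSInteger)floorf((touchPoint.x - brushRadius) / tileWidth));
NSInteger maxCol = MIN(tilesPerRow - 1, (NSInteger)floorf((touchPoint.x + brushRadius) / tileWidth));
NSInteger minRow = MAX(0, (NSInteger)floorf((touchPoint.y - brushRadius) / tileHeight));
NSInteger maxRow = MIN(tilesPerColumn - 1, (NSInteger)floorf((touchPoint.y + brushRadius) / tileHeight));

for (NSInteger row = minRow; row <= maxRow; row++) {
    for (NSInteger col = minCol; col <= maxCol; col++) {
        // Only these tiles need their pixel data touched and redrawn
        [self updateTileAtRow:row column:col]; // hypothetical helper
    }
}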
NOTE: This most certainly isn't the best option. It's just an alternative. I mentioned it because I think it is close to what you have right now. And it worked for me on an iPhone 3GS. If you are targeting >= iPhone 4, you should be more than OK.
Regards,
George
The method you've suggested is way too inefficient, because creating the image every time you move the finger is inappropriate.
If it's just paths that you need to draw, then keep a CGMutablePathRef as a member variable,
and in drawRect just move to the specified point using the CGPath functions.
More importantly, while refreshing the view, call setNeedsDisplayInRect: passing only the area that you need to draw. Hope it works for you.
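A small sketch of that approach, with the path kept as a CGMutablePathRef ivar (the stroke settings and the dirty-rect handling are placeholders):

// Sketch: accumulate points into one CGMutablePathRef and stroke it in
// drawRect. 'path' is an assumed ivar created with CGPathCreateMutable().
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    CGPathMoveToPoint(path, NULL, p.x, p.y);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    CGPathAddLineToPoint(path, NULL, p.x, p.y);
    // In a real app, compute a tight dirty rect around the new segment
    [self setNeedsDisplayInRect:self.bounds];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextSetLineWidth(ctx, 3.f);
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextStrokePath(ctx);
}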
I know that Core Image on iOS 5.0 supports facial detection (another example of this), which gives the overall location of a face, as well as the location of eyes and a mouth within that face.
However, I'd like to refine this location to detect the position of a mouth and teeth within it. My goal is to place a mouth guard over a user's mouth and teeth.
Is there a way to accomplish this on iOS?
I pointed out in my blog that the tutorial has something wrong.
Part 5) Adjust For The Coordinate System: It says you need to change the window's and image's coordinates, but that is exactly what you shouldn't do. You shouldn't change your views/windows (in UIKit coordinates) to match CoreImage coordinates as the tutorial does; you should do it the other way around.
This is the part of code relevant to do that:
(You can get whole sample code from my blog post or directly from here. It contains this and other examples using CIFilters too :D )
// Create the image and detector
CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:...
                                          options:...];

// CoreImage coordinate system origin is at the bottom left corner and UIKit's
// is at the top left corner. So we need to translate features positions before
// drawing them to screen. In order to do so we make an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                       0, -imageView.bounds.size.height);

// Get features from the image
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *faceFeature in features) {

    // Get the face rect: Convert CoreImage to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

    ...

    if (faceFeature.hasMouthPosition) {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);
        ...
    }
}
Once you get the mouth position (mouthPos) you simply place your thing on or near it.
The exact offset could be calculated experimentally and should be relative to the triangle formed by the eyes and the mouth. I would use a lot of faces to calibrate this distance if possible (Twitter avatars?).
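For the placement itself, something as simple as this sketch may be enough (mouthGuardView and the proportions are assumptions you would tune against that eye/mouth triangle):

// Sketch: center a mouth-guard image view on the detected mouth position.
// mouthGuardView is an assumed UIImageView already in the view hierarchy;
// its size is roughly scaled from the detected face rect.
mouthGuardView.center = mouthPos;
CGFloat faceWidth = faceRect.size.width;
mouthGuardView.bounds = CGRectMake(0, 0, faceWidth * 0.5f, faceWidth * 0.25f); // rough proportions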
Hope it helps :)
I'm trying to do something which should in theory be quite simple, but I've been chasing my tail around for days now. I'm trying to take a touch event from a screen overlay, capture an image, and crop a section of the image out around where the finger touched.
Now all my code is working fine, the overlay, events, cropping, etc.... however, I can't seem to get the coordinate system of the touch event to match the coordinate system of the UIImage. I've read all the docs I can get my hands on, I just can't seem to figure it out.
My main question is: do I need to take UIImageOrientation into account when using CGImageCreateWithImageInRect, or does Quartz figure it out? The reason I ask is that I have a very simple routine that crops images just fine, but the cropped image never seems to be where my finger pressed.
The bulk of the routine is:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];

    float scaleX = image.size.width / SCREEN_WIDTH;
    float scaleY = image.size.height / SCREEN_HEIGHT;

    //lastTouch is saved from touchesBegan method
    float x = (lastTouch.x * scaleX) - (CROP_WIDTH/2);
    float y = (lastTouch.y * scaleY) - (CROP_WIDTH/2);
    if (x < 0) x = 0.0;
    if (y < 0) y = 0.0;

    CGRect cropArea = CGRectMake(x, y, CROP_WIDTH, CROP_WIDTH);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropArea);
    UIImage *swatch = [UIImage imageWithCGImage:imageRef];

    //at this point I'm just writing the images to the photo album to see if
    //my crop is lining up with my touch
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    UIImageWriteToSavedPhotosAlbum(swatch, nil, nil, nil);
}
So, the problem is that my cropped area (as viewed in my photo album) never matches the actual area that I press (it's always some other random part of the photo), which makes me think my coordinate system is off.
Any pointers would be greatly appreciated, even if they're just pointers to some docs I haven't found yet.
Cheers
Adam
You must always remember that the default coordinate system is different between CoreGraphics and UIKit. While UIKit begins with its origin at the top-left corner of the window, CoreGraphics has this point set at the bottom left. I think this might be enough info to put you on the right track, but just to re-iterate:
You can use this code to invert, or flip, your coordinate system.
CGContextTranslateCTM(graphicsContext, 0.0, drawingRect.size.height);
CGContextScaleCTM(graphicsContext, 1.0, -1.0);
Also review Flipping the Default Coordinate System in the Drawing and Printing Guide for iOS.
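In the cropping case specifically, one way to sidestep both the flip and the UIImageOrientation question is to redraw the picked image first, which bakes the orientation into the pixels, and only then crop. A sketch (cropArea is assumed to already be scaled into the redrawn image's coordinates; camera images normally have a scale of 1):

// Sketch: bake the UIImage's orientation into the pixel data first, so a crop
// rect computed from the displayed geometry lines up with the bitmap.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImageRef croppedRef = CGImageCreateWithImageInRect(normalized.CGImage, cropArea);
UIImage *swatch = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);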