I am showing geo-referenced images on an MGLMapView using an MGLStyleLayer created from an MGLImageSource that specifies a UIImage with a height of 3000 pixels. The MGLStyleLayer is added to the map using [map.style insertLayer:...];
This works fine for all my other UIImages, but they all have heights under 3000. With this 3000-pixel-high UIImage I see a garbage image on my map: it contains snippets of other images I use on the map (mostly images used in point annotations). If I shrink the UIImage down to a height of 2000 it works OK.
My OpenGL is a little rusty, but it looks to me like the map view is using a texture array for all these images and the array has a maximum image height that I am overstepping. Just a guess.
Is there a known max height for images used in an MGLImageSource? Is there a way to get around it?
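Shrinking the image is the only workaround I have found so far. Below is a minimal Swift sketch of that downscale step; the 2048 cap is a guess (2000 worked, 3000 did not), not a documented limit:

import UIKit

// Downscale an image so its height does not exceed maxHeight, preserving aspect ratio.
// The 2048 default is an assumption, not a value documented by the SDK.
func imageCapped(_ image: UIImage, toHeight maxHeight: CGFloat = 2048) -> UIImage {
    guard image.size.height > maxHeight else { return image }
    let scale = maxHeight / image.size.height
    let newSize = CGSize(width: image.size.width * scale, height: maxHeight)
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}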
I have spent hours searching Google for an answer to this and trying pieces of code, but I have just not been able to find one. I also recognise that this question has been asked many times, but I do not know what else to do now.
I have access to 500x500 pixel rainfall radar images from the Met Office's DataPoint API, covering the UK. They must be displayed in a 640x852 pixel area (an NSImageView, whose scaling property I currently have set to axis independent) because this is the correct size of the map generated for the boundaries covered by the imagery. I want to display them at the enlarged size of 640x852 using the nearest-neighbour algorithm and in an aliased format. This can be achieved in Photoshop by going to Image > Image Size... and setting resample to Nearest Neighbor (hard edges). The source images should remain at 500x500 pixels; I just want to display them in a larger view.
I have tried setting the magnificationFilter of the NSImageView.layer to all three of the different kCAFilter... options but this has made no difference. I have also tried setting the shouldRasterize property of the NSImageView.layer to true, which also had no effect. The images always end up being smoothed or anti-aliased, which I do not want.
Having recently come from C#, I may have missed something, as I have not been programming in Swift for very long. In C# (using WPF), I was able to get what I want by setting the BitmapScalingMode of the image element to NearestNeighbor.
To summarise, I want to display a 500x500 pixel image in a 640x852 pixel NSImageView in a pixelated form, without any kind of smoothing (irrespective of whether the display is retina or not) using Swift. Thanks for any help you can give me.
Below is the image source:
Below is the actual result (screenshot from a 5K iMac):
This was created by simply setting the image property of the NSImageView in the tableViewSelectionDidChange method of my NSTableView (which is used to select the time to show the image for), using:
let selected = times[timesTable.selectedRow]
let formatter = NSDateFormatter()
// parse the time string shown in the table
formatter.dateFormat = "d/M/yyyy 'at' HH:mm"
let date = formatter.dateFromString(selected)
// re-format it to match the image file names
formatter.dateFormat = "yyyyMMdd'T'HHmmss"
imageData.image = NSImage(contentsOfFile: basePathStr +
    "RainObs_" + formatter.stringFromDate(date!) + ".png")
Below is what I want it to look like (ignoring the background and the cropped-out parts). If you save the image yourself you will see it is pixelated and aliased:
Below is the map that the source is displayed over (the source is just in an NSImageView laid on top of another NSImageView containing the map):
Try using a custom subclass of NSView instead of an NSImageView. It will need an image property with a didSet observer that sets needsDisplay. In the drawRect() method, either:
use the drawInRect(_:fromRect:operation:fraction:respectFlipped:hints:) method of the NSImage with a hints dictionary of [NSImageHintInterpolation:NSImageInterpolation.None], or
save the current value of NSGraphicsContext.currentContext.imageInterpolation, change it to .None, draw the NSImage with any of the draw...(...) methods, and then restore the context's original imageInterpolation value (a sketch of this approach follows below)
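A minimal sketch of that second approach in current Swift syntax; the class and property names are placeholders:

import AppKit

// Draws its image scaled to fill the view with interpolation disabled, so pixels stay hard-edged.
class PixelatedImageView: NSView {
    var image: NSImage? {
        didSet { needsDisplay = true }   // redraw whenever a new image is set
    }

    override func draw(_ dirtyRect: NSRect) {
        guard let image = image, let context = NSGraphicsContext.current else { return }
        let savedInterpolation = context.imageInterpolation
        context.imageInterpolation = .none              // nearest-neighbour, no smoothing
        image.draw(in: bounds, from: .zero, operation: .sourceOver, fraction: 1.0)
        context.imageInterpolation = savedInterpolation // restore the context's original setting
    }
}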
The short version: How do I know what region of a UIImageView contains the image, and not aspect ratio padding?
The longer version:
I have a UIImageView of fixed size as pictured:
I am loading photos into this UIViewController, and I want to retain the original photo's aspect ratio, so I set the contentMode to Aspect Fit. This ensures that the entire photo is displayed within the UIImageView, but with the side effect of adding some padding (shown in red):
No problem so far.... But now I am doing face detection on the original image. The face detection code returns a list of CGRects which I then render on top of the UIImageView (I have subclassed UIView and laid out an instance in IB with the same size and offset as the UIImageView).
This approach works great when the photo is not padded out to fit into the UIImageView. However, if there is padding, it introduces some skew, as seen here in green:
I need to take the image padding into account when rendering the boxes, but I do not see a way to retrieve it.
Since I know the original image size and the UIImageView size, I can do some algebra to calculate where the padding should be. However it seems like there is probably a way to retrieve this information, and I am overlooking it.
I do not use image views often, so this may not be the best solution. But since no one else has answered the question, I figured I'd throw out a simple mathematical solution that should solve your problem:
UIImage *selectedImage;   // the image you want to display
UIImageView *imageView;   // the image view that holds selectedImage
CGFloat heightOfView = imageView.frame.size.height;
CGFloat heightOfPicture = selectedImage.size.height;
CGFloat yStartingLocationForGreenSquare; // set it to whatever the current location is
// take whatever you had it set to and add the height of the top padding band
yStartingLocationForGreenSquare += (heightOfView - heightOfPicture) / 2.0;
So although there may be other solutions, this is a pretty simple formula to accomplish what you need. Hope it helps.
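If the photo is also scaled down by Aspect Fit (rather than shown at its native size), the rect the image actually occupies can be computed with AVFoundation's AVMakeRect(aspectRatio:insideRect:) instead of doing the algebra by hand. A minimal Swift sketch, with placeholder names:

import AVFoundation
import UIKit

// Rect actually occupied by the photo inside an aspect-fit UIImageView.
func displayedImageFrame(in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    return AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
}

// Example use (hypothetical names): the padding is the rect's origin, and the ratio of
// its width to the image's width is the scale to apply to the detected face rects.
// let frame = displayedImageFrame(in: photoImageView)!
// let scale = frame.width / photoImageView.image!.size.width
// let viewRect = faceRect.applying(CGAffineTransform(scaleX: scale, y: scale))
//                        .offsetBy(dx: frame.minX, dy: frame.minY)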
I added two UIViews to ViewController.view and applied a square image to each view.layer.mask, to make it look like a square that has been sliced into two pieces, and then added the image view over them with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture something that looks like picture no. 1 after applying the masks?
Below is the reference from Apple regarding renderInContext:.
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which essentially takes a screenshot of a UIView. I don't use it because it did not work well for my needs, but maybe you can use it:
UIImage *img;
// 0.0 as the scale means "use the scale factor of the device's main screen"
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, self.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has the alpha of the masked-in part set to 1 and the alpha of the rest set to 0. When we capture an image of the view, the complete image is still there (we only see half of it because the other half has alpha = 0, but the full image still exists), so we get a screenshot of the complete view.
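If the goal is a capture that matches what is actually on screen, masks included, a snapshot via drawHierarchy(in:afterScreenUpdates:) (available since iOS 7) avoids the renderInContext: limitation quoted above. A minimal Swift sketch with placeholder names:

import UIKit

// Snapshot a view as it appears on screen, including layer masks.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}

// Example: UIImageWriteToSavedPhotosAlbum(snapshot(of: containerView), nil, nil, nil)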
In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid which makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is it that the Photos app is able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Crop the requested tile out of the full-size source image.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns a +1 reference
    return tileImage;
}
Here is the piece of code for tiled image generation:
In the PhotoScroller source code, replace tileForScale:row:col: with the following (inImage is the image that you want to create tiles from):
- (UIImage *)tileForScale:(float)scale row:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Crop the requested tile out of the full-size source image.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns a +1 reference
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same doubt. Finally, I realized that, even if you could load the image and cut it into tiles the first time that you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and be more responsive. Loading and tiling takes time for a large image.
The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the input variable in his function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only meant to demonstrate the idea behind CATiledLayer and make a working, self-contained project. It's straightforward to replace the image tile loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
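As an illustration of "some other source": if the tiles were pre-cut ahead of time (for example with the ImageMagick script linked above), the replacement can simply load the right tile file by name. A hypothetical Swift sketch; the file-naming scheme here is an assumption, not the sample's actual convention:

import UIKit

// Hypothetical tile loader: tiles pre-cut and bundled as "<base>_<scale>_<row>_<col>.png".
func tile(forScale scale: CGFloat, row: Int, col: Int, baseName: String) -> UIImage? {
    let scalePercent = Int(scale * 100)                     // e.g. 0.5 -> "50"
    let name = "\(baseName)_\(scalePercent)_\(row)_\(col)"  // naming scheme is an assumption
    return UIImage(named: name)                             // nil if that tile file is missing
}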
I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iPhone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
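For what it's worth, here is a rough end-to-end sketch of the same idea in Swift, using a bitmap context rather than a data provider, which also pins down the pixel format (vertical-stripe artifacts like yours usually come from a bytesPerRow or component-order mismatch). The chosen format (8-bit premultiplied RGBA) and the placeholder edit (zeroing the red channel) are assumptions:

import UIKit
import CoreGraphics

// Draw the image into an RGBA bitmap context, edit the raw pixels, and wrap the result in a new UIImage.
func modifiedImage(from source: UIImage) -> UIImage? {
    guard let cgImage = source.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4   // 4 bytes per pixel: R, G, B, A

    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return nil }
    let pixels = data.bindMemory(to: UInt8.self, capacity: bytesPerRow * height)

    // Placeholder edit: zero the red channel of every pixel.
    for y in 0..<height {
        for x in 0..<width {
            let offset = y * bytesPerRow + x * 4
            pixels[offset] = 0   // R (G, B, A follow at +1, +2, +3)
        }
    }

    guard let newCGImage = context.makeImage() else { return nil }
    return UIImage(cgImage: newCGImage, scale: source.scale, orientation: source.imageOrientation)
}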