ARKit - How do I get a UIImage from the camera without the overlay?

I'm trying to get snapshots from the screen. I don't necessarily need the full hi-res stills, just the image from the viewport without the 3D models I've overlaid already.

If you mean the image that is captured by the camera, it's available under
sceneView.session.currentFrame?.capturedImage
https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage
The pixel buffer is in the YCbCr format, so you might need to access the luma and chroma planes and convert them to RGB depending on what kind of image you're trying to produce. For a CGImage/UIImage I think it's pretty straightforward: How to turn a CVPixelBuffer into a UIImage?
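For completeness, here is a minimal Swift sketch of that conversion, going through Core Image (one of several possible routes); it assumes you have an ARSCNView to pull the current frame from:

import ARKit
import CoreImage
import UIKit

// Convert the YCbCr pixel buffer from ARFrame.capturedImage to a UIImage.
// Core Image handles the YCbCr-to-RGB conversion for you.
func cameraFeedSnapshot(from sceneView: ARSCNView) -> UIImage? {
    guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage else {
        return nil
    }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    // The captured image is in sensor (landscape) orientation; rotate or
    // crop it yourself if it has to match the on-screen viewport.
    return UIImage(cgImage: cgImage)
}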

Related

How to properly scale camera feed and segmentation in Spark AR

I am working on a Spark AR lens where I need the camera feed of the user to be scaled down to about half the normal size. I have figured out how to scale the camera texture and the segmentation texture to fit my need.
The issue I'm having now is that the pixels around the edge of the newly updated scale get stretched to the size of the parent canvas as you can see in this image:
My patches can be seen here:
How can I prevent this pixel stretching from showing outside of the red square in image 1?
A simple solution would be to apply the camera texture to a rectangle and scale the rectangle down instead. Another way to do this would be to create a custom shader, but that's more complicated.
Here's a working example that uses no patches and no code:

Confusion: SKSpriteNode & SKTexture difference.

I am confused about SKSpriteNode and SKTexture. I have seen in tutorials that an SKSpriteNode can be created from an image, like [SKSpriteNode spriteNodeWithImageNamed:@"someImage"], and the same thing seems to happen with SKTexture as [SKTexture textureWithImageNamed:@"someImage"].
What is the difference between a texture and an image? If we can add an image using SKSpriteNode, what is the reason to use SKTexture? And if we use SKTexture and texture atlases, why does the image still need to be added to an SKSpriteNode?
In short: what is the difference between the two?
SKSpriteNode is a node that displays (renders) an SKTexture on screen at a given position, with optional scaling and stretching.
SKTexture is a storage class that holds image data in a format suitable for rendering, plus additional information such as the frame rectangle if the texture references only a smaller portion of an image / texture atlas.
One reason for splitting the two is that you usually want multiple sprites to draw with the same SKTexture or from the same SKTextureAtlas. This avoids keeping a copy of the same image in memory for each individual sprite, which would quickly become prohibitive. For example, a 4 MB texture used by 100 sprites still uses 4 MB of memory, as opposed to 400 MB.
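As a small, hedged illustration of that sharing (the image name "enemy" is hypothetical):

import SpriteKit

// One SKTexture shared by many sprites: the image data is held only once.
let sharedTexture = SKTexture(imageNamed: "enemy")
let scene = SKScene(size: CGSize(width: 320, height: 480))

for i in 0..<100 {
    // Each sprite references the same texture instead of its own copy.
    let sprite = SKSpriteNode(texture: sharedTexture)
    sprite.position = CGPoint(x: CGFloat(i % 10) * 32, y: CGFloat(i / 10) * 48)
    scene.addChild(sprite)
}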
Update to answer comment:
The term 'texture' dates back to the 70s.
A texture is an in-memory representation of an image formatted specifically for use in rendering. Common image formats (PNG, JPG, GIF, etc.) don't lend themselves well for rendering by a graphics chip. Textures are an "image format" that graphics hardware and renderers such as OpenGL understand and have standardized.
If you load a PNG or JPG into a texture, the format of the image changes. Its color depth, alpha channel, orientation, memory layout, and compression method may all change. Additional data may be introduced, such as mip-map levels: scaled-down copies of the same texture used to draw farther-away polygons at lower resolution, which reduces aliasing and speeds up rendering.
That's only scratching the surface though. What's important to keep in mind is that no rendering engine works with images directly, they're always converted into textures. This has mainly to do with efficiency of the rendering process.
Whenever you specify an image directly in an API such as Sprite Kit, for example spriteNodeWithImageNamed:, then internally the renderer first checks whether there's an existing texture with the given image name and, if so, uses that. If no such image has been loaded yet, it loads the image, converts it to a texture, and stores it with the image name as the key for future reference (this is called texture caching).
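The caching idea can be pictured with a naive sketch like the one below; this is illustration only, not SpriteKit's actual implementation:

import SpriteKit

// A toy texture cache keyed by image name, mirroring the behaviour
// described above for spriteNodeWithImageNamed:.
final class TextureCache {
    private var textures: [String: SKTexture] = [:]

    func texture(named name: String) -> SKTexture {
        if let cached = textures[name] {
            return cached                         // reuse the converted texture
        }
        let texture = SKTexture(imageNamed: name) // load and convert once
        textures[name] = texture
        return texture
    }
}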

iPhone - save image at higher resolution without pixelating it

I am using the image picker controller to get an image from the user. After some operations on the image, I want the user to be able to save the image at 1600x1200 px, 1024x1024 px or 640x480 px (something like iFlashReady app).
The last option is the size of the image I get in the UIImagePickerControllerDelegate method (when using an image from the camera roll).
Is there any way we can save the image at these resolutions without pixelating the images?
I tried creating a bitmap context with the width and height I want (CGBitmapContextCreate) and drawing the image there. But the image gets pixelated at 1600x1200.
Thanks
This is non-trivial. Your image just doesn't have enough data. To enlarge it you'll need to resample the image and interpolate between pixels (like photoshop when you resize an image).
Most likely you'll want to use a 3rd party library such as:
http://code.google.com/p/simple-iphone-image-processing/
It provides this and many other image-processing functions.
From faint memories of a computer vision class long ago, I think the trick is to blur the image after upscaling it.
Before drawing try adjusting your CGBitmapContext's antialiasing and/or interpolation quality:
CGContextSetShouldAntialias(context, true);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
If I remember right, antialiasing is turned off on CGContext by default.
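A hedged Swift sketch of the same idea (interpolation smooths the enlargement, but it cannot add detail the source never contained):

import UIKit

// Upscale a UIImage by drawing it into a larger context with
// high-quality interpolation and antialiasing enabled.
func upscaled(_ image: UIImage, to size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { rendererContext in
        let cgContext = rendererContext.cgContext
        cgContext.interpolationQuality = .high
        cgContext.setShouldAntialias(true)
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}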

Best way to copy a PNG into a bigger blank/code generated texture? (OpenGL ES 1.1/iPhone)

I have a 320x480 PNG that I would like to texture map/manipulate on the iPhone, but these dimensions are obviously not powers of 2. I already tested my texture map manipulation algorithm on a 512x512 PNG that is a black background with a 320x480 image superimposed on it, anchored at the origin (lower left corner (0,0)), where the 320x480 area is properly oriented/centered/scaled on the iPhone screen.
What I would like to do now is progress to the point where I can take 320x480 source images and apply them to a blank/black background 512x512 texture generated in code so that the two would combine as one texture so that I can apply the vertices and texture coordinates I used in my 512x512 test. This will be eventually used for camera captured and camera roll sourced images, etc.
Any thoughts? (must be for OpenGL ES 1.1 without use of GL util toolkit, etc.).
Thanks,
Ari
One method I've found to work is to simply draw both images into the current context and then extract the resulting combined image. Is there another way that is more geared towards OpenGL and might be more efficient?
// Inputs: backgroundImage and foregroundImage are CGImageRefs,
// e.g. a 512x512 blank background and a 320x480 foreground.
CGSize contextSize = CGSizeMake(512.0f, 512.0f);

UIGraphicsBeginImageContext(contextSize);
CGContextRef currentContext = UIGraphicsGetCurrentContext();

// One rectangle for the background, one for the foreground image
CGRect backgroundRect = CGRectMake(0.0f, 0.0f, contextSize.width, contextSize.height);
CGRect foregroundRect = CGRectMake(0.0f, 0.0f, 320.0f, 480.0f);

CGContextDrawImage(currentContext, backgroundRect, backgroundImage);
CGContextDrawImage(currentContext, foregroundRect, foregroundImage);

UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
CGImageRef spriteImage = finalImage.CGImage; // note: property, not a method call
UIGraphicsEndImageContext();
At this point you can proceed to use spriteImage as the image source for the texture and it will be a combination of a blank 512x512 PNG with a 320x480 PNG for example.
I'll replace the 512x512 blank PNG with an image generated in code but this does work.
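If you do want a route that stays on the OpenGL side, one common pattern is to allocate the blank 512x512 texture with glTexImage2D (passing no data) and then upload only the 320x480 pixels with glTexSubImage2D. A hedged sketch, written in Swift for consistency with the other examples (the same calls exist in C), assuming you already have the image's RGBA bytes in memory:

import OpenGLES

// Create a 512x512 texture and copy a 320x480 RGBA image into its
// lower-left corner, avoiding the Core Graphics compositing step.
func makePaddedTexture(pixels: UnsafeRawPointer) -> GLuint {
    var textureID: GLuint = 0
    glGenTextures(1, &textureID)
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // Reserve uninitialised 512x512 storage (data pointer is nil).
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, 512, 512, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    // Upload just the 320x480 image region.
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, 320, 480,
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    return textureID
}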

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem being the size of the iPhone screen being 320x480 (no status bar) and thus the image won't have dimensions that are power-of-2. Is the main option copying it into a 512x512 texture and adjusting the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
An alternative would be dividing the picture into squares with a width and height of 32 pixels (aka tiling), resulting in 10x15 tiles (320/32 by 480/32). Displaying it would, however, involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory with a tiled approach.
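For the single 512x512 texture approach, the quad's texture coordinates just need to cover the 320x480 sub-region; a quick sketch of the arithmetic:

// Texture is 512x512; the camera image fills the lower-left 320x480 region.
let texSize: Float = 512
let sMax: Float = 320 / texSize   // 0.625
let tMax: Float = 480 / texSize   // 0.9375

// Texture coordinates (triangle-strip order) that sample only the image
// region, so the unused padding never shows on screen.
let texCoords: [Float] = [
    0.0,  0.0,
    sMax, 0.0,
    0.0,  tMax,
    sMax, tMax,
]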