Xcode AR Reference Image is too similar - swift

I have an augmented reality app that plays a video on top of images, and it was working very well. But as soon as I neared 50+ images, I started getting an error on some of them: "AR reference image 'x' is too similar to 'y'." I am panicking because I need this done quickly, and the error appears at random for no apparent reason. In the linked picture, the reference images are clearly not similar in any way, and when I simply change the name of one of the pictures, the error resolves itself at first, until more instances of the same error come up on different reference images. If anyone has any theories or questions, please post them here! Thank you so much to anyone who can shed some light on this issue!
Image of AR Reference Image folder with error on pictures:
https://imgur.com/a/U3dlFef
Update: I renamed every image to a number from 1-39, and the same images that had errors in the last picture still had errors, so it must be something related to the pictures themselves. I'm still confused as to how, though. I also tried deleting every reference image and re-uploading the exact same images, and after giving each its physical dimensions, the same error still popped up for 2 images.
Is it possible to upload this update to Apple with this error and have it go through review? I did a test build on my device and tested all the images with errors, and they all work as intended. I currently have no solution to a problem that seems very superficial. Thanks again!

An ARReferenceImage is created from these images, and even though your human eyes see completely different pictures, the detected structure after parsing might be too similar (ARKit does not compare exact pixels but rather the characteristics it extracts from each image). This is why your problem appears as the number of images increases (there is a bigger chance of similar characteristics). So your best bet might be to use fewer images, or if that is not possible, to change images until all warnings disappear.
If you don't use all of these images at the same time (in the same AR session), for example because you have some kind of selection in your application, you can instead keep the images in a plain asset catalog and load the reference images from code. I use this method because our application downloads simple images for markers, and I create the reference images programmatically.
private func startTracking(withMarkerImage marker: UIImage) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]

    // Map the UIImage orientation to the CGImagePropertyOrientation ARKit expects.
    let orientation: CGImagePropertyOrientation
    switch marker.imageOrientation {
    case .up:
        orientation = .up
    case .down:
        orientation = .down
    case .left:
        orientation = .left
    case .right:
        orientation = .right
    case .upMirrored:
        orientation = .upMirrored
    case .downMirrored:
        orientation = .downMirrored
    case .leftMirrored:
        orientation = .leftMirrored
    case .rightMirrored:
        orientation = .rightMirrored
    @unknown default:
        orientation = .up
    }

    let referenceImage = ARReferenceImage(marker.cgImage!,
                                          orientation: orientation,
                                          physicalWidth: 0.15) // metres
    configuration.detectionImages = [referenceImage]
    configuration.environmentTexturing = .none
    configuration.isLightEstimationEnabled = true
    _arSession.run(configuration, options: [.removeExistingAnchors, .resetTracking])
}
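A call site might look something like the following sketch ("marker_01" is just a placeholder asset name, not from the original post):

// Load a marker from the asset catalog and start tracking it.
if let marker = UIImage(named: "marker_01") {
    startTracking(withMarkerImage: marker)
}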

Related

Camera Zoom Issue on iPhone X, iPhone XS etc

The main issue I am having is with my camera: it is zoomed in too far on the iPhone X, XS, XS Max, and XR models.
My camera is a full-screen camera, which is fine on the smaller iPhones, but on the models mentioned above it seems to be stuck at the maximum zoom level. What I really want is behavior similar to Instagram's camera: full screen on all models up to the iPhone X series, and then either respect the edge insets or, if it is going to stay full screen, not be zoomed in as far as it is now.
My thought process is to do something like this:
Determine the device. I figure I can use something like DeviceGuru to determine the type of device.
GitHub repo can be found here --> https://github.com/InderKumarRathore/DeviceGuru
Using this tool or a similar one, I should be able to get the screen dimensions for the device. Then I can do some math to determine the proper size for the camera view.
If DeviceGuru didn't work, I would just use something like this to get the width and height of the screen:
// Screen width.
public var screenWidth: CGFloat {
    return UIScreen.main.bounds.width
}

// Screen height.
public var screenHeight: CGFloat {
    return UIScreen.main.bounds.height
}
This is the block of code I am using to fill the camera. However, I want to turn it into something that is based on the device size, as opposed to just filling the screen regardless of the phone:
import Foundation
import UIKit
import AVFoundation

class PreviewView: UIView {

    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check PreviewView.layerClass implementation.")
        }
        layer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        layer.connection?.videoOrientation = .portrait
        return layer
    }

    var session: AVCaptureSession? {
        get {
            return videoPreviewLayer.session
        }
        set {
            videoPreviewLayer.session = newValue
        }
    }

    // MARK: UIView

    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }
}
I want my camera to look something like this
or this
Not this (what my current camera looks like)
I have looked at many questions and nobody has a concrete solution so please don't mark it as a duplicate and please don't say it's just an issue with the iPhone X series.
Firstly, you should update your question with the relevant information, since as it stands it only gives a vague idea to anyone with less experience.
Looking at the images, it is clear that the type of camera you are trying to access is quite different from the one you have. With the introduction of the iPhone 7 Plus and iPhone X, Apple has introduced several different camera devices. All of these are accessible through AVCaptureDevice.DeviceType.
Looking at what you want to achieve, it is clear that you want a wider field of view on the screen. That is what the .builtInWideAngleCamera device type gives you. Switching to it should solve your problem.
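A minimal sketch of that device selection (the rest of the session setup is assumed, not shown):

import AVFoundation

// Pick the wide-angle back camera explicitly instead of whatever default you get.
let session = AVCaptureSession()
if let wideCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: wideCamera),
   session.canAddInput(input) {
    session.addInput(input)
}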
Cheers

Saving UIImage to plist problems (SWIFT)

Simply stated,
I can encode a UIImage into a .plist if that image is selected from a UIImagePickerController (camera or photo library) and then stored into an object's instance variable using NSKeyedArchiver...
func imagePickerController(picker: UIImagePickerController!, didFinishPickingImage image: UIImage!, editingInfo: NSDictionary!) {
    let selectedImage: UIImage = image
    myObject.image = selectedImage
    ...
}
I can NOT, however, encode a UIImage into a .plist if that image is one existing in my app bundle and assigned to a variable like this...
myObject.image = UIImage(named: "thePNGImage")
...where thePNGImage.png lives in my app's bundle. I can display it anytime in my app, but I just can't store it and recover it from a plist!
I want a user to select a profile image from his or her camera or photo library, and assign a default image from my app bundle to their profile should they not choose to select one of their own. The latter is giving me issues.
Any help would be greatly appreciated.
Thanks!
First of all, you don't need any plist. If you want to save the user's preferences, use NSUserDefaults (which is a plist, but it is maintained for you).
Second, you should not be saving an image into a plist. Save a reference to an image, e.g. its URL. That way when you need it again you can find it again. An image is huge; a URL is tiny.
Finally, from the question as I understand it, you want to know whether the user has chosen an image and, if not, you want to use yours as a default. NSUserDefaults could contain a Bool value for this, or you could just use the lack of an image URL to mean "use the default".
But in any case you are certainly right that it is up to you to recover state the next time the app launches based on whatever information you have previously saved. Nothing is going to happen magically by itself. If your image is not magically coming back the next time you launch, that is because you are not bringing it back. Your app has to have code / logic to do that as it launches and creates the interface.
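A minimal sketch of that approach (the "profileImageFile" defaults key and the file-name handling are just illustrative assumptions; "thePNGImage" is the default from the question):

import UIKit

// Returns the user's chosen profile image if one was saved, otherwise the bundled default.
func currentProfileImage() -> UIImage? {
    let defaults = UserDefaults.standard
    if let fileName = defaults.string(forKey: "profileImageFile") {
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent(fileName)
        if let saved = UIImage(contentsOfFile: url.path) {
            return saved
        }
    }
    // No saved reference: fall back to the default image in the app bundle.
    return UIImage(named: "thePNGImage")
}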
Still having the issue? BTW, I've confirmed your issue. Here's a workaround:
func imageFromXcassets(name: String) -> UIImage {
    // Load the image from the asset catalog
    let myImage = UIImage(named: name)!

    // Write the image to a file in the Documents directory
    let imagePath = NSHomeDirectory() + "/Documents/test.png"
    try! myImage.pngData()!.write(to: URL(fileURLWithPath: imagePath), options: .atomic)

    // Read the image back from the file and return it
    return UIImage(contentsOfFile: imagePath)!
}
Given the name of an image stored in Images.xcassets, the method returns the UIImage after first writing the image to a file, then reading the image from the file. Now the image can be written successfully to NSUserDefaults just like images obtained from your imagePickerController.
Please note that the example isn't handling optionals like it should. (I'm an optimistic guy.)
ADDITION BELOW
Note on Leonardo’s response:
The originating question asks how to encode an image obtained from Images.xcassets via UIImage(named:). I don't believe Leonardo's response provides a solution for this.

Difficulty reading an image from disk on iPhone in Unity

I am developing a Unity app in which I call a web service for image URLs. After getting those URLs, I download the images one by one and store them on disk, and at some later point I read those images from disk and show them as textures. However, I am having a problem reading the images: the texture shows me a question mark, and when I dug deeper to find the problem, I found that www.size returns zero for the image and www.text is also nil. I am doing the following for writing and reading the images.
Writing
if (wwwMarker.isDone)
    File.WriteAllBytes(Application.persistentDataPath + "/" + data.markerName + ".jpg", wwwMarker.bytes);
Reading
// fileUrl is a string which contains the path of the file
fileUrl = (Application.persistentDataPath + "/" + markerDataObject.markerName + ".jpg");
if (System.IO.File.Exists(fileUrl))
    if (www.isDone)
        video.mIconPlane.renderer.material.mainTexture = imageToLoadPath.texture;
But when I run this code and render the image onto the texture, it shows me a question-mark image; when I load the images from assets instead, it works perfectly fine. Please help me figure out where I am going wrong. I am a newbie in Unity, so that's why I'm making silly mistakes. This would be great for me. Thanks in advance.
Note that Unity's WWW can only download and save JPG and PNG images as textures. If you try to download an image of any other format, you will get a red "?" image as the result.

How do I create an AVAsset with a UIImage captured from a camera?

I am a newbie trying to capture camera video images using AVFoundation, and I want to render the captured frames without using AVCaptureVideoPreviewLayer. I want a slider control to be able to slow down or speed up the rate at which camera images are displayed.
Using other people's code as examples, I can capture images and, using an NSTimer driven by my slider control, define on the fly how often to display them, but I can't convert the image to something I can display. I want to move these images into a UIView or UIImageView and render them in the timer's fire function.
I have looked at Apple's AVCam app (which uses an AVCaptureVideoPreviewLayer), but because it has its own built-in AVCaptureSession, I can't adjust how often the images are displayed. (Well, you can adjust the preview layer's frame rate, but that can't be done on the fly.)
I have looked at the AVFoundation programming guide, which talks about AVAssets and AVPlayer, etc., but I can't see how a camera image can be turned into an AVAsset. When I look at the AVFoundation guide and other demos which show how to define an AVAsset, I am only given the choices of using HTTP stream data to create the asset, or a URL to define an asset from an existing file. I can't figure out how to make my captured UIImage into an AVAsset, in which case I guess I could use an AVPlayer, AVPlayerItems and AVAssetTracks to show the image, with an observeValueForKeyPath function checking status and doing [myPlayer play]. (I also studied the WWDC session 405 "Exploring AV Foundation" to see how that is done.)
I have tried code similar to that in WWDC Session 409 "Using the Camera on iPhone." Like the myCone demo, I can set up the device, the input, the capture session, the output, and the callback function that receives a CMSampleBuffer, and I can collect UIImages and size them, etc. At this point I want to send that image to a UIView or UIImageView. The session 409 talk just shows doing it with CFShow(sampleBuffer). This wasn't explained, and I guess it's just assuming a knowledge of Core Foundation I don't yet have. I think I am turning the captured output in the sample buffer into a UIImage, but I can't figure out how to render it. I created an IBOutlet UIImageView in my nib file, but when I try to stuff the image into that view, nothing gets displayed. Do I need an AVPlayerLayer?
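Roughly, the conversion and hand-off described above might look like the following sketch (written in current Swift; the FrameRenderer class and imageView property are placeholder names, not the asker's actual code):

import AVFoundation
import CoreImage
import UIKit

// Receives each frame from an AVCaptureVideoDataOutput, converts it to a
// UIImage, and pushes it to an image view on the main thread.
final class FrameRenderer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let imageView: UIImageView          // placeholder for the IBOutlet
    private let context = CIContext()

    init(imageView: UIImageView) {
        self.imageView = imageView
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        let image = UIImage(cgImage: cgImage)
        DispatchQueue.main.async {
            self.imageView.image = image   // UIKit must be touched on the main thread
        }
    }
}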
I have looked at UIImagePickerController as an alternate way of controlling how often I display captured camera images, and I don't see that I can change the display timing on the fly with that controller either.
So, as you can see, I am learning this stuff through the Apple developer forums and documentation, the WWDC videos, and various websites such as stackoverflow.com, but I have yet to see any example of getting camera frames to the screen without using AVCaptureVideoPreviewLayer, UIImagePickerController, or an AVAsset that isn't already a file or HTTP stream.
Can anybody make a suggestion? Thanks in advance.

How do I test a camera in the iPhone simulator?

Is there any way to test the iPhone camera in the simulator without having to deploy on a device? This seems awfully tedious.
There are a number of device specific features that you have to test on the device, but it's no harder than using the simulator. Just build a debug target for the device and leave it attached to the computer.
List of actions that require an actual device:
the actual phone
the camera
the accelerometer
real GPS data
the compass
vibration
push notifications...
I needed to test some custom overlays for photos. The overlays needed to be adjusted based on the size/resolution of the image.
I approached this in a way similar to Stefan's suggestion: I decided to code up a "dummy" camera response.
When the simulator is running, I execute this dummy code instead of the standard "captureStillImageAsynchronouslyFromConnection".
In this dummy code, I build up a "black photo" at the necessary resolution and then send it through the pipeline to be treated like a normal photo, essentially providing the feel of a very fast camera.
CGSize sz = UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation]) ? CGSizeMake(2448, 3264) : CGSizeMake(3264, 2448);
UIGraphicsBeginImageContextWithOptions(sz, YES, 1);
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, sz.width, sz.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
The image above is equivalent to the 8 MP photos that most current devices produce. Obviously, to test other resolutions you would change the size.
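A rough Swift equivalent of the same idea (just a sketch; the function name is arbitrary):

import UIKit

// Build a plain black "photo" at a chosen resolution so the rest of the
// capture pipeline can be exercised in the simulator.
func makeDummyPhotoData(portrait: Bool) -> Data? {
    let size = portrait ? CGSize(width: 2448, height: 3264)
                        : CGSize(width: 3264, height: 2448)
    UIGraphicsBeginImageContextWithOptions(size, true, 1)
    UIColor.black.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image?.jpegData(compressionQuality: 1.0)
}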
I never tried it, but you can give it a try!
iCimulator
Nope (unless they've added a way to do it in 3.2, haven't checked yet).
I wrote a replacement view to use in debug mode. It implements the same API and makes the same delegate callbacks. In my case I made it return a random image from my test set. Pretty trivial to write.
A common reason for needing camera access is to take screenshots for the App Store.
Since the camera is not available in the simulator, a good trick (the only one I know) is to resize your view to the size you need, just long enough to take the screenshots. You can crop them later.
Of course, you need to have a device with a bigger screen available.
The iPad is perfect for testing layouts and making snapshots for all devices.
Screenshots for the iPhone 6 Plus will have to be stretched a little (scaled by 1.078125; not a big deal…).
A good quick reference for iOS device resolutions: http://www.iosres.com/
Edit: In a recent project that uses a custom camera view controller, I replaced the AV preview with a UIImageView in a target that I only use to run in the simulator. This way I can automate screenshots for iTunes Connect upload. Note that the camera control buttons are not in an overlay, but in a view over the camera preview.
The answer from Craig below describes another method that I found quite smart; unlike mine, it also works with a camera overlay.
Just found a repo on GitHub that helps simulate camera functions in the iOS Simulator with images, videos, or your MacBook camera.
Repo