Hi, I have hit a bit of a brick wall. I have been redesigning the menu navigation for my app, which I have managed to do, but now one of the features of my app has stopped functioning.
The idea is that you shake your phone and it chooses a picture at random. The code works fine separate from the app, just as it has done on all previous versions; I even ran it quickly in its present form on its own just in case, and it worked perfectly.
Hopefully somebody can point me in the direction of where I have gone wrong.
My code is as follows:
import Foundation
import UIKit

class cocktailChoice: UIViewController {

    @IBOutlet weak var drinkImage: UIImageView!

    var drinkNamesArray: [String] = [
        "cocktailList0", "cocktailList1", "cocktailList2", "cocktailList3",
        "cocktailList4", "cocktailList5", "cocktailList6", "cocktailList7",
        "cocktailList8", "cocktailList9", "cocktailList10", "cocktailList11",
        "cocktailList12", "cocktailList13", "cocktailList14", "cocktailList15",
        "cocktailList16", "cocktailList17", "cocktailList18", "cocktailList19",
        "cocktailList20", "cocktailList21", "cocktailList22", "cocktailList23",
        "cocktailList24", "cocktailList25", "cocktailList26", "cocktailList27",
        "cocktailList28", "cocktailList29", "cocktailList30", "cocktailList31"
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.addGestureRecognizer(self.revealViewController().panGestureRecognizer())
    }

    override func motionEnded(_ motion: UIEventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            let firstRandomNumber = Int(arc4random_uniform(32))
            let drinkString: String = self.drinkNamesArray[firstRandomNumber]
            self.drinkImage.image = UIImage(named: drinkString)
        }
    }
}
It compiles with no errors, the IBOutlet is connected, and there are no problems that I can see, but the shake action doesn't want to play. It is getting very frustrating now.
Instead of this part, if motion == .motionShake {, can you try to use this (note that event is optional, so it needs optional chaining):
if event?.subtype == .motionShake {
    print("Shake event!")
}
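Also worth checking: motion events such as motionEnded(_:with:) are only delivered to the first responder. If the new menu navigation (for example, the reveal view controller) leaves this view controller without first-responder status, the shake never arrives. A minimal sketch of claiming it, assuming the rest of cocktailChoice stays as posted:

override var canBecomeFirstResponder: Bool {
    return true
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Claim first-responder status so shake (motion) events reach this controller.
    becomeFirstResponder()
}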
I'm using QLPreviewController to show AR content. On the newer iPhones with LiDAR, it seems that object occlusion is enabled by default.
Is there any way to disable object occlusion in QLPreviewController without having to build a custom ARKit view controller? Since my models are quite large (life-size buildings), they seem to disappear or get cut off at the end.
ARQuickLook is a framework built for quick, high-quality AR visualization. It adopts the RealityKit engine, so all the features supported there, like occlusion, anchors, ray-traced shadows, physics, depth of field, motion blur, HDR, etc., look the same way they do in RealityKit.
However, you can't turn these features on or off through QuickLook's API. They are on by default, if your iPhone supports them. If you want to turn People Occlusion on or off, you have to use the ARKit/RealityKit frameworks, not QuickLook.
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let box = try! Experience.loadBox()
        arView.scene.anchors.append(box)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.switchOcclusion()
    }

    fileprivate func switchOcclusion() {
        // Grab the running session's configuration and make sure the device
        // actually supports people occlusion before toggling it.
        guard let config = arView.session.configuration as? ARWorldTrackingConfiguration
        else { return }

        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
        else { return }

        switch config.frameSemantics {
        case [.personSegmentationWithDepth]:
            config.frameSemantics.remove(.personSegmentationWithDepth)
        default:
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }

        arView.session.run(config)
    }
}
Pay particular attention that People Occlusion is supported on devices with the A12 chip and later, running iOS 13 or higher.
P.S.
The only customisable object in QuickLook is an instance of the ARQuickLookPreviewItem class.
Use ARQuickLookPreviewItem class when you want to control the background, designate which content the share sheet shares, or disable scaling in case it's not appropriate to allow the user to scale a particular model.
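For reference, here is a minimal sketch of how ARQuickLookPreviewItem is typically returned from a QLPreviewControllerDataSource; the class name, the biplane.usdz file and the web-page URL are placeholders, not taken from the question:

import UIKit
import QuickLook
import ARKit

class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {

    func presentQuickLook() {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        present(previewController, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        let url = Bundle.main.url(forResource: "biplane", withExtension: "usdz")!
        let item = ARQuickLookPreviewItem(fileAt: url)
        // Customisation points exposed by ARQuickLookPreviewItem:
        item.canonicalWebPageURL = URL(string: "https://example.com/biplane")
        item.allowsContentScaling = false   // disable pinch-to-scale for this model
        return item
    }
}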
This problem is caused by user interface interactions such as showing the title bar while in fullscreen. That question's answer provides a solution, but not how to implement it.
The solution is to render on a background thread. The issue is that the code provided in Apple's sample is written to cover a lot of ground, so most of it is extraneous, and even if I could understand it, it wouldn't be feasible to use Apple's code as-is. And I can't understand it, so it just plain isn't an option. How would I make a simple Swift Metal game render on a background thread, as concisely as possible?
Take this, for example:
import Cocoa
import MetalKit

class ViewController: NSViewController {

    var MetalView: MTKView {
        return view as! MTKView
    }

    var Device: MTLDevice = MTLCreateSystemDefaultDevice()!

    override func viewDidLoad() {
        super.viewDidLoad()
        MetalView.delegate = self
        MetalView.device = Device
        MetalView.colorPixelFormat = .bgra8Unorm_srgb
        Device = MetalView.device!
        //setup code
    }
}
extension ViewController: MTKViewDelegate {

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    }

    func draw(in view: MTKView) {
        //drawing code
    }
}
That is the start of a basic Metal game. What would that code look like if it were rendering on a background thread?
To fix the bug when showing the title bar in Metal, I need to render on a background thread. Well, how do I render on a background thread?
I've noticed this answer suggests manually redrawing 60 times a second, presumably using a loop on a background thread. But that seems... not a clean way to fix it. Is there a cleaner way?
The main trick in getting this to work seems to be setting up the CVDisplayLink. This is awkward in Swift, but doable. After some work I was able to modify the "Game" template in Xcode to use a custom view backed by CAMetalLayer instead of MTKView, and a CVDisplayLink to render in the background, as suggested in the sample code you linked — see below.
Edit Oct 22:
The approach mentioned in this thread seems to work just fine: still using an MTKView, but drawing it manually from the display link callback. Specifically I was able to follow these steps:
Create a new macOS Game project in Xcode.
Modify GameViewController to add a CVDisplayLink, similar to below (see this question for more on using CVDisplayLink from Swift). Start the display link in viewWillAppear and stop it in viewWillDisappear.
Set mtkView.isPaused = true in viewDidLoad to disable automatic rendering, and instead explicitly call mtkView.draw() from the display link callback.
The full content of my modified GameViewController.swift is available here.
I didn't review the Renderer class for thread safety, so I can't be sure no more changes are required, but this should get you up and running.
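A condensed sketch of those steps, assuming the stock Renderer class that the Game template generates (only the view-controller side is shown, and the display-link callback simply forwards to mtkView.draw()):

import Cocoa
import MetalKit

class GameViewController: NSViewController {

    var renderer: Renderer!
    var mtkView: MTKView!
    var displayLink: CVDisplayLink?

    override func viewDidLoad() {
        super.viewDidLoad()
        mtkView = (self.view as! MTKView)
        mtkView.device = MTLCreateSystemDefaultDevice()
        renderer = Renderer(metalKitView: mtkView)
        mtkView.delegate = renderer
        // Turn off MTKView's internal, main-thread redraw loop.
        mtkView.isPaused = true
        mtkView.enableSetNeedsDisplay = false
    }

    override func viewWillAppear() {
        super.viewWillAppear()

        guard CVDisplayLinkCreateWithActiveCGDisplays(&displayLink) == kCVReturnSuccess,
              let displayLink = displayLink
        else { return }

        // The display link fires on its own background thread; draw from there.
        let callback: CVDisplayLinkOutputCallback = { _, _, _, _, _, context in
            let controller = Unmanaged<GameViewController>.fromOpaque(context!).takeUnretainedValue()
            controller.mtkView.draw()
            return kCVReturnSuccess
        }
        CVDisplayLinkSetOutputCallback(displayLink, callback, Unmanaged.passUnretained(self).toOpaque())
        CVDisplayLinkStart(displayLink)
    }

    override func viewWillDisappear() {
        super.viewWillDisappear()
        if let displayLink = displayLink {
            CVDisplayLinkStop(displayLink)
        }
    }
}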
Older implementation with CAMetalLayer instead of MTKView:
This is just a proof of concept and I can't guarantee it's the best way to do everything. You might find these articles helpful too:
I didn't try this idea, but given how much convenience MTKView generally provides over CAMetalLayer, it might be worth giving it a shot:
https://developer.apple.com/forums/thread/89241?answerId=268384022#268384022
Is drawing to an MTKView or CAMetalLayer required to take place on the main thread? and https://developer.apple.com/documentation/quartzcore/cametallayer/1478157-presentswithtransaction
import Cocoa
import Metal
import QuartzCore
import CoreVideo

class MyMetalView: NSView {

    var displayLink: CVDisplayLink?
    var metalLayer: CAMetalLayer!

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        setupMetalLayer()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupMetalLayer()
    }

    override func makeBackingLayer() -> CALayer {
        return CAMetalLayer()
    }

    func setupMetalLayer() {
        wantsLayer = true
        metalLayer = (layer as! CAMetalLayer)
        metalLayer.device = MTLCreateSystemDefaultDevice()!
        // ...other configuration of the metalLayer...
    }

    // handle display link callback at 60fps
    static let _outputCallback: CVDisplayLinkOutputCallback = { (displayLink, inNow, inOutputTime, flagsIn, flagsOut, context) -> CVReturn in
        // convert opaque context pointer back into a reference to our view
        let view = Unmanaged<MyMetalView>.fromOpaque(context!).takeUnretainedValue()
        /*** render something into view.metalLayer here! ***/
        return kCVReturnSuccess
    }

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()

        guard CVDisplayLinkCreateWithActiveCGDisplays(&displayLink) == kCVReturnSuccess,
              let displayLink = displayLink
        else {
            fatalError("unable to create display link")
        }

        // pass a reference to this view as an opaque pointer
        guard CVDisplayLinkSetOutputCallback(displayLink, MyMetalView._outputCallback, Unmanaged<MyMetalView>.passUnretained(self).toOpaque()) == kCVReturnSuccess else {
            fatalError("unable to configure output callback")
        }

        guard CVDisplayLinkStart(displayLink) == kCVReturnSuccess else {
            fatalError("unable to start display link")
        }
    }

    deinit {
        if let displayLink = displayLink {
            CVDisplayLinkStop(displayLink)
        }
    }
}
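The render placeholder inside the callback is where the actual drawing goes. As a rough illustration, a minimal render pass into the CAMetalLayer might look like the helper below; the function name and the commandQueue parameter are assumptions (the queue would be created once alongside the device) and are not part of the original snippet:

func render(into layer: CAMetalLayer, using commandQueue: MTLCommandQueue) {
    // Acquire the next drawable and a command buffer.
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Describe a render pass that simply clears the drawable's texture.
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = drawable.texture
    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].storeAction = .store
    passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)

    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) {
        // ...encode draw calls here...
        encoder.endEncoding()
    }

    commandBuffer.present(drawable)
    commandBuffer.commit()
}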
My Experience.rcproject has animations that can be triggered by a tap action.
Two cylinders are named “Button 1” and “Button 2” and have Collide turned on.
I am using the async method to load the Experience.Map scene and the addAnchor method to add mapAnchor to the ARView in a ViewController.
I tried to run a HitTest on the scene to see if the app reacts properly.
However, the HitTest result prints the entity name of a button even when I am not tapping on it, but on an area near it.
import UIKit
import RealityKit

class augmentedReality: UIViewController {

    @IBOutlet weak var arView: ARView!

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        let tapLocation = sender.location(in: arView)
        // Get the entity at the location we've tapped, if one exists
        if let button = arView.entity(at: tapLocation) {
            // For testing purposes, print the name of the tapped entity
            print(button.name)
        }
    }
}
Below is my attempt to add the AR scene and tap gesture recogniser to arView.
class augmentedReality: UIViewController {

    // (The wrapper method name is illustrative; this runs once mapAnchor has
    // finished loading asynchronously from Experience.Map.)
    func setUpSceneAndGestures() {
        arView.scene.addAnchor(mapAnchor)
        mapAnchor.notifications.hideAll.post()
        mapAnchor.notifications.mapStart.post()
        self.arView.isUserInteractionEnabled = true
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(onTap))
        self.arView.addGestureRecognizer(tapGesture)
    }
}
Question 1
How can I achieve the goal of only having the entity name of a button printed when I am really tapping on it instead of close to it?
Question 2
Do I actually need to turn Collide on to have both buttons able to be detected in the HitTest?
Question 3
There’s an installGestures method. There’s no online tutorials or discussions about this at the moment. I tried but I am confused by (Entity & HasCollision). How can this method be implemented?
To implement robust hit-testing in RealityKit, all you need is the following code:
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let scene = try! Experience.loadScene()

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        let tapLocation: CGPoint = sender.location(in: arView)
        let result: [CollisionCastHit] = arView.hitTest(tapLocation)

        guard let hitTest: CollisionCastHit = result.first
        else { return }

        let entity: Entity = hitTest.entity
        print(entity.name)
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        scene.steelBox!.scale = [2, 2, 2]
        scene.steelCylinder!.scale = [2, 2, 2]

        scene.steelBox!.name = "BOX"
        scene.steelCylinder!.name = "CYLINDER"

        arView.scene.anchors.append(scene)
    }
}
When you tap on the entities in the ARView, the debug area prints "BOX" or "CYLINDER". And if you tap anything but those entities, the debug area prints just "Ground Plane".
If you need to implement ray-casting, read this post, please.
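As a rough illustration (not taken from the linked post), a ray-cast from a 2D screen point in RealityKit's ARView might look like the helper below; raycastAndPlace and modelEntity are placeholder names:

import ARKit
import RealityKit

func raycastAndPlace(at tapLocation: CGPoint, in arView: ARView, entity modelEntity: Entity) {
    // Cast a ray from the screen point onto detected (or estimated) planes.
    let results = arView.raycast(from: tapLocation,
                                 allowing: .estimatedPlane,
                                 alignment: .any)
    guard let firstResult = results.first else { return }

    // Anchor the entity at the real-world position that was hit.
    let anchor = AnchorEntity(world: firstResult.worldTransform)
    anchor.addChild(modelEntity)
    arView.scene.anchors.append(anchor)
}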
P.S.
If you run this app in the Simulator on a Mac, it prints just Ground Plane instead of BOX and CYLINDER, so you need to run it on an iPhone.
The following attributes are returning false for me, but I am not able to understand why.
ARImageTrackingConfiguration.isSupported
ARWorldTrackingConfiguration.isSupported
I am testing it on an iPhone Xs with iOS 12.1.1, with the code built with Xcode 10.1.
Note that ARConfiguration.isSupported does return true.
Any ideas why this might be happening?
Only one AR tracking configuration class is supported at a given time.
You should write your code this way:
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    var configuration: ARConfiguration?

    //.........
    //.........

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        if ARWorldTrackingConfiguration.isSupported {
            configuration = ARWorldTrackingConfiguration()        // 6-DoF
        } else {
            configuration = AROrientationTrackingConfiguration()  // 3-DoF
        }

        sceneView.session.run(configuration!)
    }
}
Also, read carefully about these 3 types of tracking configuration:
ARWorldTrackingConfiguration() (rotation and translation x-y-z) 6-DoF
AROrientationTrackingConfiguration() (only rotation x-y-z) 3-DoF
ARImageTrackingConfiguration() 6-DoF but image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera.
Because 3-DoF tracking creates limited AR experiences, you should generally not use the AROrientationTrackingConfiguration() class directly. Instead, use the subclass ARWorldTrackingConfiguration() for tracking with six degrees of freedom (6-DoF), plane detection, and hit-testing. Use 3-DoF tracking only as a fallback in situations where 6-DoF tracking is temporarily unavailable.
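If you do end up needing image tracking, a minimal sketch looks like this; runImageTracking is a placeholder helper, and the "AR Resources" group name is the usual asset-catalogue default, assumed here:

import ARKit

func runImageTracking(on sceneView: ARSCNView) {
    guard ARImageTrackingConfiguration.isSupported else { return }

    let configuration = ARImageTrackingConfiguration()
    // Reference images live in an AR resource group inside the asset catalogue.
    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) {
        configuration.trackingImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 1
    }
    sceneView.session.run(configuration)
}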
Hope this helps.
I am making a simple animation with a bear moving back and forth. My issue is that my code doesn't recognise my atlas folder (BearImages.atlas) or the images in it. I don't know what I am doing wrong and I can't figure it out. Can you explain to me, as if I were 5 years old, why Xcode doesn't recognise my folder or the images in it?
Screenshot: (linked in the original post)
My code:
import SpriteKit

class GameScene: SKScene {

    var bear: SKSpriteNode!
    var bearWalkingFrames: [SKTexture]!

    override func didMoveToView(view: SKView) {
        backgroundColor = UIColor.blackColor()

        let bearAnimatedAtlas = SKTextureAtlas(named: "BearImages")
        var walkFrames = [SKTexture]()
        print(bearAnimatedAtlas)

        let numImages = bearAnimatedAtlas.textureNames.count
        for var i = 1; i <= numImages/2; i++ {
            let bearTextureName = "bear\(i)"
            walkFrames.append(bearAnimatedAtlas.textureNamed(bearTextureName))
        }

        bearWalkingFrames = walkFrames
        let firstFrame = bearWalkingFrames[0]
        bear = SKSpriteNode(texture: firstFrame)
        bear.position = CGPoint(x: CGRectGetMidX(self.frame), y: CGRectGetMidY(self.frame))
        addChild(bear)
        walkingBear()
    }

    func walkingBear() {
        //This is our general runAction method to make our bear walk.
        bear.runAction(SKAction.repeatActionForever(
            SKAction.animateWithTextures(bearWalkingFrames,
                timePerFrame: 0.1,
                resize: false,
                restore: true)),
            withKey: "walkingInPlaceBear")
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    }

    override func update(currentTime: CFTimeInterval) {
    }
}
Thank you!
This may not solve your issue, but it is probably the most likely cause...
Click on your project "pierro peguin learning" within the project navigator window on the left
Click on the Build Phases tab
Under the Copy Bundle Resources section, make sure BearImages.atlas is there
If it is not, add it with the plus button
Your code seems fine, so I presume it must be the image names, or you didn't copy the atlas folder into your project (you just referenced it).
The first thing you should do is use Xcode 7's new atlas feature to store your atlas in the asset catalogue instead of just copying an atlas folder into the project. Try that first and see if it makes a difference.
Apple also recommends this approach for performance and better organisation.
If that doesn't work...
1) What formats are your images in? They normally should be in PNG format.
2) Can you show the content of the atlas folder?
3) Are you not seeing any animations at all or do you get the white square with the red X?
Generally, and not specific to your question, why are you creating two arrays for the images?
Just use the bearWalkingFrames array and append the images directly. It seems complicated to create a second array (walkFrames), append the images to it, and then set bearWalkingFrames to that array.
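A rough sketch of what that might look like inside didMoveToView, keeping the question's Swift 2-era syntax:

bearWalkingFrames = [SKTexture]()
let bearAnimatedAtlas = SKTextureAtlas(named: "BearImages")
let numImages = bearAnimatedAtlas.textureNames.count

// Append straight into the property; no temporary walkFrames array needed.
for i in 1 ..< (numImages / 2) + 1 {
    bearWalkingFrames.append(bearAnimatedAtlas.textureNamed("bear\(i)"))
}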