I am using the following code on iOS 10.0 in my GameScene.swift:
// Shape storage
let playerShape = SKShapeNode(circleOfRadius: 10)
// ...color setup etc.
// Get the texture from the shape node
let playerTexture = view?.texture(from: playerShape)
The same code doesn't work in watchOS 3.0. Xcode 8.0 beta 2 complains about view:
Use of unresolved identifier 'view'
Does anyone know what the equivalent of view is in watchOS?
Thank you.
As mentioned above, there are no views in Apple Watch's SpriteKit, which is why you are unable to use view in your code.
Use SKSpriteNode instead and do something like this if you want the texture for something else. For example, here I want to use my circleOriginal texture on circle2:
// Load the original sprite from an image file
let circleOriginal = SKSpriteNode(imageNamed: "circle")
// Grab its texture...
let circleTexture = circleOriginal.texture
// ...and reuse it on a second sprite
let circle2 = SKSpriteNode(texture: circleTexture)
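For context, watchOS hosts SpriteKit through WKInterfaceSKScene rather than an SKView, so there is no view available to rasterize shapes. Here is a minimal hosting sketch; the outlet name skInterface and the scene size are assumptions, not part of the original question:

import WatchKit
import SpriteKit

class InterfaceController: WKInterfaceController {

    // Assumed outlet, connected to a WKInterfaceSKScene in the storyboard
    @IBOutlet var skInterface: WKInterfaceSKScene!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)

        let scene = SKScene(size: CGSize(width: 155, height: 195))
        scene.addChild(SKSpriteNode(imageNamed: "circle"))

        // presentScene replaces SKView-based presentation on watchOS
        skInterface.presentScene(scene)
    }
}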
If you want an excellent overview of Apple Watch game tech, check out Apple's WWDC 2016 session on it (the slides below are from there). It explains a lot and provides great code examples.
https://developer.apple.com/videos/play/wwdc2016/612/
Here are the key differences.
Here's Apple's Scene Graph for non-watchOS platforms
Here's Apple's corresponding Scene Graph for watchOS
Here are the recommended alternatives for watchOS's SpriteKit / SceneKit
I have a Reality Composer file with several scenes, all of which start empty; then some models appear one by one every second. Even though the animation works perfectly in Quick Look and Reality Composer, there is a strange glitch when I try to integrate it into my app with Xcode. When the very first scene is launched, or when we go to another scene, they don't start empty: for a tiny split second we see all the models of that scene being displayed, only to disappear immediately.
Then we see them appearing slowly, as they were supposed to. That tiny flash of models at the beginning of each scene is ruining everything. I tried using a .reality file and a .rcproject file; same problem. In fact, when we preview the animation of the Reality file inside Xcode, it does the same glitch. I tried different Hide and Show functions, no change. I tried different triggers such as notifications, scene start, and on tap; no change.
I checked quite a few tutorials and still couldn't find anything wrong in what I'm doing. I almost feel like there is a glitch in the current integration of Reality Composer. I would really appreciate some ideas on the subject...
Try this to prevent that brief glimpse of the models...
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var box: ModelEntity!

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = try! Experience.loadScene()

        // Grab the model entity and disable it before the anchor is added,
        // so it cannot flash on screen for the first frame
        self.box = scene.cube!.children[0] as? ModelEntity
        self.box.isEnabled = false

        arView.scene.anchors.append(scene)

        // Re-enable the entity once the scene has settled
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.box.isEnabled = true
        }
    }
}
In this scenario the glitch occurs only for the sphere; the box object works fine.
@AndyJazz Thanks, this solution works for me. As an alternative to the lines:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    self.box.isEnabled = true
}
I suggest (while in Reality Composer) creating a Behavior with:
Trigger: Scene Start
Action: Show
The appearance of the entity can then also be tweaked with a Motion Type, Ease Type, and Style, as well as chained to additional sequences.
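If you prefer to fire the Show action from code instead of on Scene Start, a Behavior can also use a Notification trigger; the generated Experience file then exposes it under scene.notifications. A minimal sketch, assuming a hypothetical trigger identifier showModels:

// Assumes a Reality Composer Behavior with a Notification trigger
// named "showModels" (hypothetical) and a Show action
let scene = try! Experience.loadScene()
arView.scene.anchors.append(scene)

// Fire the trigger once the scene is anchored
scene.notifications.showModels.post()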
I've created a ".sks" file.
I've configured the effect in the editor.
I've added import SpriteKit in a Swift view file.
Apple's documentation shows this snippet (but I don't know how to use it properly):
func newSmokeEmitter() -> SKEmitterNode? {
    return SKEmitterNode(fileNamed: "smoke.sks")
}
I've seen this and this, but they didn't solve my problem either. Does SpriteKit require that you configure your app as (a) a game and (b) SpriteKit-based, or can it be done in my single-page app? Thanks for your help!
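For what it's worth, SpriteKit does not require the Game template: an SKView can be added to any app's view hierarchy. A minimal sketch of loading the emitter from an ordinary view controller, reusing the smoke.sks name from Apple's snippet (swap in your own file name):

import UIKit
import SpriteKit

class SmokeViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // An SKView can live inside any ordinary view hierarchy
        let skView = SKView(frame: view.bounds)
        view.addSubview(skView)

        let scene = SKScene(size: skView.bounds.size)

        // Load the emitter configured in the .sks editor
        if let smoke = SKEmitterNode(fileNamed: "smoke.sks") {
            smoke.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
            scene.addChild(smoke)
        }

        skView.presentScene(scene)
    }
}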
I have built an app that uses image tracking and swaps flat images. I am also using people occlusion (now) so people can get photos in front of those images. I really want this app to have a selfie mode, so people can take their own pictures in front of image swapped areas.
I'm reading the features on ARKit 3.5, but as far as I can tell, the only front-facing camera support is with ARFaceTrackingConfiguration, which doesn't support image tracking. ARImageTrackingConfiguration and ARWorldTrackingConfiguration only use the back camera.
Is there any way to make a selfie mode with people occlusion (and image tracking) using the front-facing camera?
The answer is NO, you can't use any ARConfiguration except ARFaceTrackingConfiguration for the front camera. However, you can simultaneously run ARFaceTrackingConfiguration on the front camera and ARWorldTrackingConfiguration on the rear camera. This lets users interact with AR content seen through the rear camera using their face as a controller.
Look at this docs page to find out which configuration corresponds to which camera (rear or front).
Here's a table of ARKit 5.0's eight tracking configurations:

ARConfiguration                       What camera?
ARWorldTrackingConfiguration          Rear
ARBodyTrackingConfiguration           Rear
AROrientationTrackingConfiguration    Rear
ARImageTrackingConfiguration          Rear
ARFaceTrackingConfiguration           Front
ARObjectScanningConfiguration         Rear
ARPositionalTrackingConfiguration     Rear
ARGeoTrackingConfiguration            Rear
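As a practical consequence, choosing a configuration at runtime boils down to checking for face-tracking support; everything else runs on the rear camera. A small sketch (sceneView is assumed to be an ARSCNView):

// Only face tracking uses the front camera; fall back to the rear camera
let config: ARConfiguration
if ARFaceTrackingConfiguration.isSupported {
    config = ARFaceTrackingConfiguration()   // front camera
} else {
    config = ARWorldTrackingConfiguration()  // rear camera
}
sceneView.session.run(config)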
Simultaneous World and Face configs
To run World Tracking driven by Face Tracking (Face Tracking is the primary configuration here), use the following code:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Bail out on devices that can't combine face and world tracking
    guard ARFaceTrackingConfiguration.isSupported,
          ARFaceTrackingConfiguration.supportsWorldTracking
    else {
        fatalError("We can't do face tracking")
    }

    let config = ARFaceTrackingConfiguration()
    // Track the device's pose in the world while tracking the face
    config.isWorldTrackingEnabled = true
    sceneView.session.run(config)
}
Or you can use Face Tracking as a secondary configuration:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    let config = ARWorldTrackingConfiguration()
    // Add face tracking as a secondary, front-camera data source
    // (ARWorldTrackingConfiguration.supportsUserFaceTracking is the
    // more specific capability check)
    if ARFaceTrackingConfiguration.isSupported {
        config.userFaceTrackingEnabled = true
    }
    sceneView.session.run(config)
}
Note that both properties are available in iOS 13 and higher.
var userFaceTrackingEnabled: Bool { get set }
var isWorldTrackingEnabled: Bool { get set }
P.S.
At the moment, .userFaceTrackingEnabled = true still doesn't work. I tested it in Xcode 13.2.1 on an iPad Pro 4th Gen with iPadOS 15.3 installed.
The main issue I am having is with my camera: it is zoomed in too far on the iPhone X, Xs, Xs Max, and XR models.
My camera is full screen, which is fine on the smaller iPhones, but on the models mentioned above it seems to be stuck at the maximum zoom level. What I really want is behavior similar to how Instagram's camera works: full screen on all models up until the iPhone X series, and then either respecting the edge insets or, if it stays full screen, not being zoomed in as far as it is now.
My thought process is to use something like this:
Determine the device. I figure I can use something like DeviceGuru to determine the type of device.
The GitHub repo can be found here --> https://github.com/InderKumarRathore/DeviceGuru
Using this or a similar tool, I should be able to get the screen dimensions for the device. Then I can do some type of math to determine the proper size for the camera view.
If DeviceGuru didn't work, I would just use something like this to get the width and height of the screen:
// Screen width.
public var screenWidth: CGFloat {
    return UIScreen.main.bounds.width
}

// Screen height.
public var screenHeight: CGFloat {
    return UIScreen.main.bounds.height
}
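For the math step mentioned above, here is a rough sketch, assuming the camera feed has the usual 4:3 aspect ratio, of computing a preview frame that is not cropped:

// Fit a 4:3 portrait preview into the screen width (top-aligned);
// these values are a sketch, not a drop-in solution
let previewWidth = UIScreen.main.bounds.width
let previewHeight = previewWidth * 4.0 / 3.0   // 4:3 feed rotated to portrait
let previewFrame = CGRect(x: 0, y: 0, width: previewWidth, height: previewHeight)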
This is the block of code I am using to fill the camera. However, I want to turn it into something that is based on the device size, as opposed to just filling the screen regardless of the phone:
import Foundation
import UIKit
import AVFoundation

class PreviewView: UIView {

    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check PreviewView.layerClass implementation.")
        }
        // .resizeAspectFill fills the screen by cropping the feed, which is
        // what makes the preview look "zoomed in" on taller screens
        layer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        layer.connection?.videoOrientation = .portrait
        return layer
    }

    var session: AVCaptureSession? {
        get {
            return videoPreviewLayer.session
        }
        set {
            videoPreviewLayer.session = newValue
        }
    }

    // MARK: UIView

    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }
}
I want my camera to look something like this:
or this:
Not this (what my current camera looks like):
I have looked at many questions and nobody has a concrete solution, so please don't mark this as a duplicate, and please don't say it's just an issue with the iPhone X series.
Firstly, you should update your question with more precise information, since the current code gives only a vague idea to anyone with less experience.
Looking at the images, it is clear that the type of camera you are trying to access is quite different from the one you have. With the introduction of the iPhone 7 Plus and iPhone X, Apple introduced many different camera devices, all of which are accessible through AVCaptureDevice.DeviceType.
Looking at what you want to achieve, it is clear that you want a wider field of view within the screen. This is accessible through the .builtInWideAngleCamera property of the capture device types above. Changing to it will solve your problem.
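A minimal sketch of selecting that device; the helper name and the surrounding capture-session setup are assumptions:

import AVFoundation

// Hypothetical helper: grab the rear wide-angle camera as a session input
func makeWideAngleInput() -> AVCaptureDeviceInput? {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else {
        return nil
    }
    return try? AVCaptureDeviceInput(device: device)
}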
Cheers
I know there are many questions regarding this topic, but they are all very old and I can't find a resource explaining how to do it in cocos2d v3.x and in Swift. I have some PNGs in a folder in SpriteBuilder and I have made it a Smart Sprite Sheet, but I don't know what to do from there. The other questions' answers led me to believe this would work:
hero.setSpriteFrame("image.png")
I have tried it, but there is no such method.
Thanks
SWIFT CODE
// Create the sprite, then swap its displayed frame
var hero = CCSprite.spriteWithImageNamed("hero.png") as CCSprite
var frame = CCSpriteFrame.frameWithImageNamed("ImageName.png") as CCSpriteFrame
hero.spriteFrame = frame
OBJECTIVE-C CODE
#define SPRITE_CACHE ([CCSpriteFrameCache sharedSpriteFrameCache])

carSprite.spriteFrame = [SPRITE_CACHE spriteFrameByName:@"redCar.png"];