Is there ObjectTracking in RealityKit or SceneKit? - swift

I tried out object detection in SceneKit, but the object is only detected at the beginning; once virtual objects are placed, the object is no longer detected or tracked. I remember reading that object detection is not a continuous process, but I cannot find that resource anymore. Please share any resources that might be useful. Thank you.

Update:
ARKit 4 introduces two new types that conform to ARTrackable (ARAppClipCodeAnchor and ARGeoAnchor), but no changes to ARObjectAnchor.
Also, while researching the subject a little more, I found that the Vision framework provides object tracking. This might be useful in some situations.
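For reference, a minimal sketch of Vision-based tracking. The seed rectangle below is a placeholder; in a real app you'd seed the request from an actual detection (e.g. a rectangle or saliency request run on an earlier frame), and call track(pixelBuffer:) once per camera frame, for instance from ARSessionDelegate's session(_:didUpdate:) using frame.capturedImage.

import Vision

// One handler reused across frames so Vision can carry tracking state forward.
let sequenceHandler = VNSequenceRequestHandler()

// Seed the tracker with a detected region (normalized coordinates).
let seedObservation = VNDetectedObjectObservation(
    boundingBox: CGRect(x: 0.4, y: 0.4, width: 0.2, height: 0.2)
)
let trackingRequest = VNTrackObjectRequest(detectedObjectObservation: seedObservation)

func track(pixelBuffer: CVPixelBuffer) {
    try? sequenceHandler.perform([trackingRequest], on: pixelBuffer)
    if let result = trackingRequest.results?.first as? VNDetectedObjectObservation {
        // result.boundingBox is the object's updated position in this frame.
        trackingRequest.inputObservation = result  // feed it back for the next frame
    }
}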
First, I will try to clarify some concepts.
SceneKit is a rendering framework, so its work is to render virtual objects on the screen.
RealityKit is also a rendering framework. This one was created by Apple to help developers in creating better AR experiences.
That said, rendering frameworks don't actually track anything in the real environment, such as physical objects or images. The framework responsible for understanding the real world is ARKit. ARKit can detect real objects, images, etc., in the physical world.
Going back to your question: in order to be tracked, the physical object must have a corresponding anchor (ARAnchor) in your app that conforms to ARTrackable. As you can see in the documentation, the types conforming to ARTrackable are ARBodyAnchor, ARFaceAnchor, and ARImageAnchor. Object anchors are of the type ARObjectAnchor, and, as you can see in the docs, this class doesn't conform to ARTrackable, meaning that physical objects can't be tracked in ARKit 3.0. ARKit only detects the object; it doesn't track it. Object tracking might be supported in the future.
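You can see the difference in the delegate callback. A minimal sketch (MyDelegate is a made-up name; it assumes a standard ARSCNView setup with reference objects/images already assigned to the session configuration):

import ARKit

class MyDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let objectAnchor = anchor as? ARObjectAnchor {
            // Fires once when the reference object is first detected.
            // ARObjectAnchor does NOT conform to ARTrackable, so ARKit won't
            // keep updating this anchor if the physical object moves.
            print("Detected object:", objectAnchor.referenceObject.name ?? "unnamed")
        }
        if let imageAnchor = anchor as? ARImageAnchor {
            // ARImageAnchor DOES conform to ARTrackable, so ARKit keeps
            // updating it; isTracked reports whether it's currently tracked.
            print("Image anchor, currently tracked:", imageAnchor.isTracked)
        }
    }
}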

Related

What is a good game for showing off OOP in Unity?

For my exam project in coding class I have to recreate a game in Unity where I can also show off my OOP knowledge. Some of the requirements of the project are encapsulation, inheritance, and polymorphism.
What are some good ideas for a game that can show all these things? The ideas I had were Candy Crush, Street Fighter, Hearthstone, or some tower defense, though I'm not sure whether I'll be able to make good use of OOP in these games.
None of the games you're listing is really fit for an exam project. I don't know how much time you have to finish this project, but I'd go with something a lot easier, especially if you're going to do it yourself. If the main objective is to use your OOP knowledge, any simple game with, say, different kinds of enemies or items will do.
For instance, Asteroids, but with other kinds of objects and weapons besides the asteroids and the main gun themselves: asteroids, bombs, fireballs, you name them. All of them must share some degree of common behavior, and that's where OOP can be used. Weapons too: ray guns, double barrels, radial shots, shockwaves...
This is just an idea. You can do something different, like Pac-Man, and implement diverse kinds of ghosts, for instance. Or different main characters with different abilities.
Be aware, though, that Unity (and most other game engines), although built on top of OO languages, prefers a component-driven approach; true OOP is used in very small quantities, in very specific parts of the game, usually in data structures more than in game objects.
I had a client ask me to build a Tamagotchi type of game, and characters are a place where you can use a lot of inheritance: build some base character types, then branch off into specific character types. The classic base "animal", and then "bear"/"wolf", etc., or biped/quadruped; some lay eggs, others are mammals. You can come up with some good inheritance trees, as in the sketch below.
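An engine-agnostic sketch of such a hierarchy (written in Swift only to match the other snippets on this page; in Unity you'd express the same idea in plain C# classes):

// Base type holds shared state and behavior (encapsulation).
class Animal {
    private(set) var health = 100        // encapsulated: mutated only via methods
    func makeSound() -> String { "..." } // overridden by subclasses (polymorphism)
    func takeDamage(_ amount: Int) { health = max(0, health - amount) }
}

class Bear: Animal {
    override func makeSound() -> String { "Growl" }
}

class Wolf: Animal {
    override func makeSound() -> String { "Howl" }
    func howlAtMoon() { /* wolf-specific behavior */ }
}

// Polymorphism: one collection, dynamic dispatch picks the right sound.
let pack: [Animal] = [Bear(), Wolf()]
for animal in pack { print(animal.makeSound()) }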
In Unity, the different types would need to implement their different ways of triggering animations, possibly via Animator controllers. Most of the systems in Unity won't need any inheritance, though; your plain C# classes would implement these, and I don't recommend that you build off of MonoBehaviour. Instead, you have a ScriptableObject, and it instantiates the necessary C# class, possibly based on an enum you have that represents all the different types.
Basically, it was a 3D Unity project (nothing used 2D components) with a simple plane as the ground, and the camera above, pointing down at the "ground". The characters had 2D art that was perpendicular to the overhead camera, but they were correctly upright along the normal Z world direction, so their "feet" were on the ground plane and normal physics/rigidbodies could be turned on. This way you can add a NavMesh to the ground and NavMeshAgents to your characters, making movement really simple. They move around as usual, but their art faces up towards the camera. You can give them colliders and rigidbodies. Setting this up took barely any time and saves you from having to work in pure 2D mode, with all the problems that brings for implementing your own movement and control systems, collision detection, etc. If you go pure 2D you lose a lot of systems, and it's not necessary; this way you can still use simple 2D art assets.
Your tower defense idea is also not bad, because towers can have inheritance too. Again, design your own classes; you shouldn't be inheriting from MonoBehaviour in your base class. You aren't expanding on MonoBehaviour, you are building your own classes.

Apple Vision image recognition

Like many other developers, I have plunged into Apple's new ARKit technology. It's great.
For a specific project, however, I would like to be able to recognise (real-life) images in the scene, either to project something onto them (just like Vuforia does with its target images) or to use them to trigger an event in my application.
In my research on how to accomplish this, I stumbled upon the Vision and CoreML frameworks by Apple. This seems promising, although I have not yet been able to wrap my head around it.
As I understand it, I should be able to do exactly what I want by finding rectangles using the Vision framework and feeding those into a CoreML model that simply compares them to the target images that I predefined within the model. It should then be able to spit out which target image it found.
Although this sounds good in my head, I have not yet found a way to do this. How would I go about creating a model like that, and is it even possible at all?
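For what it's worth, the pipeline described above can be sketched roughly as follows. YourTargetClassifier is hypothetical; Xcode would generate such a class from a .mlmodel you trained yourself, with one label per target image.

import Vision
import CoreML

// Step 1: find rectangles in a captured frame.
// Step 2: classify the detected region with a CoreML image classifier.
func findTargets(in pixelBuffer: CVPixelBuffer) throws {
    let rectangles = VNDetectRectanglesRequest()
    rectangles.maximumObservations = 4

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([rectangles])

    guard let rect = rectangles.results?.first as? VNRectangleObservation else { return }

    let model = try VNCoreMLModel(for: YourTargetClassifier().model)
    let classify = VNCoreMLRequest(model: model)
    classify.regionOfInterest = rect.boundingBox  // classify only the detected rectangle

    try handler.perform([classify])
    if let top = classify.results?.first as? VNClassificationObservation {
        print("Target:", top.identifier, "confidence:", top.confidence)
    }
}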
I found this project on Github some weeks ago:
AR Kit Rectangle Detection
I think that is exactly what you are looking for...
As of ARKit 1.5 (shipping with iOS 11.3 in the spring of 2018), a feature implemented directly on top of ARKit solves this problem: ARKit fully supports image recognition. Upon recognition of an image, its 3D coordinates can be retrieved as an anchor, and content can therefore be placed onto it.
Image detection was implemented in ARKit starting with iOS 11.3, so since then ARKit has had the ARImageAnchor class, a subclass of the ARAnchor parent class that conforms to the ARTrackable protocol.
// Class hierarchy and protocol conformance...
ObjectiveC.NSObject: NSObjectProtocol
↳ ARKit.ARAnchor: ARAnchorCopying
↳ ARKit.ARImageAnchor: ARTrackable
ARWorldTrackingConfiguration class has a detectionImages instance property that is actually a set of images that ARKit attempts to detect in the user's environment.
open var detectionImages: Set<ARReferenceImage>!
And the ARImageTrackingConfiguration class has a trackingImages instance property, which is a set as well; it serves a similar purpose, except that ARKit attempts to both detect and track those images in the user's environment.
open var trackingImages: Set<ARReferenceImage>
So, with the right configuration and the ability to automatically get an ARImageAnchor in an ARSession, you can tether any geometry to that anchor.
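A minimal sketch of both pieces together (the class name, the resource group name "AR Resources", and the plane material are placeholders):

import ARKit
import SceneKit
import UIKit

class ImageTrackingController: NSObject, ARSCNViewDelegate {
    func start(on sceneView: ARSCNView) {
        // Load reference images from the asset catalog's "AR Resources" group.
        guard let images = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else { return }
        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = images
        configuration.maximumNumberOfTrackedImages = 1
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // ARKit hands you an ARImageAnchor automatically; geometry added to its
    // node follows the tracked image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2   // lay the plane flat onto the image
        node.addChildNode(planeNode)
    }
}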
P.S. If you want to know how to implement image detection feature in your ARKit app please look at this post.

ARKit – 3D objects are not static

My ARKit application places certain 3D objects in space, but only some of them remain stationary; the rest follow me around as I walk.
Does object anchoring depend on the specific object? I download my objects from turbosquid.com and use the OBJ versions of them.
For objects to remain stationary (fixed in world space), it is important to attach them as children of the scene's root node, using something like:
sceneView.scene.rootNode.addChildNode(your3dModelNode)
If you want nodes to move with you, you can add them as children of the camera node (which is not the behavior you are looking for).
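A short sketch of the two behaviors side by side (geometry and positions are placeholders; it assumes sceneView is your configured ARSCNView):

import ARKit
import SceneKit

func placeNodes(in sceneView: ARSCNView) {
    // Stationary: parented to the root node, so it lives in world space
    // and stays put while you walk around.
    let staticNode = SCNNode(geometry: SCNSphere(radius: 0.05))
    staticNode.position = SCNVector3(0, 0, -0.5)       // fixed spot in the world
    sceneView.scene.rootNode.addChildNode(staticNode)

    // Follows you: parented to the camera node, so it moves with the device.
    let followerNode = SCNNode(geometry: SCNSphere(radius: 0.05))
    followerNode.position = SCNVector3(0, 0, -0.5)     // always half a meter ahead
    sceneView.pointOfView?.addChildNode(followerNode)
}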
Note: To receive a more tailored response, please update the question to include some of the code you have tried.

Use SpriteKit for a utility app?

I'm planning on making a utility app that could help people create table plans for seating guests.
The thing is, the only way I see this working is by letting users first create their own room, with the correct number of tables and seats, and then populate the room with the guest list.
To do that, I was thinking of using SpriteKit, because it seems to be a quick and easy way to make 2D schematics, use sprites, and rely on coordinates.
Here come the questions:
Is it possible to make a utility app with a "game" core? Would Apple allow me to do so?
If SpriteKit is not an option, and given that I'm currently working with Xcode 6 + Swift, what would be the alternative for letting the user create a room that uses X,Y coordinates?
I hope I'm clear; if you have any questions, I'll be happy to elaborate. Thanks, folks!
Nicolas
It depends on the design of your application and what its requirements are in terms of user interface. With SpriteKit, many of the interface elements provided by UIKit are not available, and you'd have to build your own if you wanted to use SpriteKit exclusively.
However, nothing stops you from also using UIKit for certain elements, such as a UITableView, but it would be a separate view, and its input would need to be handled via the UIKit system. With SpriteKit you don't get a set of ready-made UI elements to work with.
SpriteKit would be most useful if you need animation and game-like functionality in your utility app.
Apple wouldn't restrict you from using SpriteKit for any kind of app.
In terms of x,y coordinates, any element you create, via SpriteKit or otherwise, will allow you to position it that way.
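A minimal sketch of how that could look for the room-planning case (RoomScene, the node name prefix "table", and all sizes are made up for illustration):

import SpriteKit

// A scene representing the room; tables are plain sprites placed at x,y positions.
class RoomScene: SKScene {
    func addTable(named name: String, at position: CGPoint) {
        let table = SKSpriteNode(color: .brown, size: CGSize(width: 60, height: 60))
        table.name = name
        table.position = position          // x,y in scene coordinates
        addChild(table)
    }

    // Drag a table with a touch: basic input handling, no "game" logic required.
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: self)
        if let table = nodes(at: location).first(where: { $0.name?.hasPrefix("table") == true }) {
            table.position = location
        }
    }
}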
From the SpriteKit Programming Guide (the first sentence, actually):
"Sprite Kit provides a graphics rendering and animation infrastructure that you can use to animate arbitrary textured images, or sprites."
That is what makes it ideal for games; however, it is not limited to that.

RPG Game loop and class structure (cocos2D for iPhone)

I'm looking to make an RPG with Cocos2D on the iPhone. I've done a fair bit of research, and I really like the model Cocos2D uses for scenes. I can instantiate a scene, set up my characters etc. and it all works really nicely... what I have problems with is structuring a game loop and separating the code from the scenes.
For example, where do I put the code that maintains the state of the game across multiple scenes? And do I put the code for events that get fired in a scene in that scene's class, or do I have some other class that separates the init code from the logic?
Also, I've read a lot of tutorials that mention changing scenes, but none that talk about updating a scene: taking input from the user and updating the display based on it. Does that happen in the scene object, or in a separate display-engine-type class?
Thanks in advance!
It sounds like you might do well to read up on the Model-View-Controller pattern. You don't have to adhere slavishly to it (for example, in some contexts it makes sense to allow some overlap between Model and View), but having a good understanding of it will help you build any program that has lots of graphical objects, logic controlling them, and the need to broadcast state or persist it to disk (game saves), etc.
You also have to realize that cocos2d provides a good system for structuring the graphical scene graph and rendering it efficiently, but it doesn't provide a complete infrastructure for programming games. In that sense it's more of a graphics engine than a game engine. If you try to fit your game's architecture into the structure of cocos2d, you might not end up with the most maintainable result. Instead, you should treat cocos2d as what it is: a great tool to take care of your display and animation needs.
You should definitely have an object other than the scenes that maintains the game state, because otherwise where will all the state go when you switch between scenes? And within scenes/levels, you should simply use good object-oriented design to distribute state over objects of various classes. Each character object remembers its own state, etc. Here you can see where MVC becomes useful: when you save the game to disk, you want to remember each character's health level, but probably not which exact frame index the sprite animation was showing. So you need to distinguish between the sprite and the character (model) itself. That said, as I mentioned before, for game objects that don't have a lot of logic attached to them, or which don't need to be saved, it might be OK to just fuse the Model and View together into one class (basically by subclassing CCSprite).
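A bare-bones sketch of that separation (in Swift for consistency with the rest of this page; in cocos2d the view side would be a CCSprite subclass, and the names here are invented):

import Foundation

// Model: pure game state, knows nothing about rendering.
// This is what you'd serialize for a save game.
final class GameCharacter: Codable {
    var name: String
    var health: Int
    init(name: String, health: Int = 100) {
        self.name = name
        self.health = health
    }
}

// View: rendering state only (animation frame, sprite position).
// Deliberately NOT saved to disk.
final class GameCharacterView {
    let model: GameCharacter
    var animationFrameIndex = 0   // presentation detail, not game state
    init(model: GameCharacter) { self.model = model }
    func update() {
        // Read from the model and reflect it visually; never apply game rules here.
        if model.health <= 0 { /* play death animation */ }
    }
}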
To pull off MVC the way it's supposed to be done, you should also learn the basics of Key-Value Observing. (And you'd do well to use this replacement for Apple's interface.) In more intensely real-time games, techniques like this might be too slow, but since you're making an RPG (a good choice for starting out) you can probably sacrifice some performance for a more maintainable architecture.
The game scene (which is just another cocos2d node) plays the role of Controller in terms of the MVC pattern. It doesn't draw anything itself, but tells everything else to draw itself based on inputs and state. It's tempting to put all kinds of logic and functionality into the game scene, but when you notice it swelling, you should ask yourself how you could separate that functionality into other classes. Analyze which type of functionality you're implementing. Does it have to do with data and state (Model)? Is it about animation and rendering (View)? Or is it about connecting logic with rendering (in which case you should try to make the View observe the Model directly)?
The game scene/Controller is basically a dispatch center, which takes input events (from the user or from sprites reporting that they've hit something, for example) and decides what to do with them: it might tell one or several of the Model objects to update themselves in some way, or it might just trigger an animation in some other sprites, for example.
In a real-time game, you'd have a "tick" or "step" method in the scene which tells all objects to update themselves. This method (the game loop) is the heart of the program and runs every time a new frame is drawn. (In modern game engines there's a lot of multi-threading, but let's not think about that.) But in your case, you might want to create a module that can "play the game" completely separately from the game scene. Imagine creating a program that can play chess through the terminal, using only text input. If you create the whole game system in that manner, and then connect it to the graphics engine through a small and clean interface, you'll have a really maintainable app with lots of reusable code for future projects!
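In outline, the tick could look like this. A sketch independent of any engine: GameWorld and GameScene are invented names, and it reuses the GameCharacter/GameCharacterView types from the sketch above.

import Foundation

// Model-side: pure game logic, no rendering.
final class GameWorld {
    var characters: [GameCharacter] = []
    func tick(deltaTime: TimeInterval) {
        // Apply game rules: here, slowly regenerate health as an example.
        for character in characters where character.health < 100 {
            character.health += 1
        }
    }
}

// Controller-side game loop: runs once per frame and dispatches to the model.
final class GameScene {
    let world = GameWorld()
    var views: [GameCharacterView] = []

    // Called every frame by the rendering framework (cocos2d's scheduled
    // update method, SpriteKit's update(_:), etc.).
    func step(deltaTime: TimeInterval) {
        world.tick(deltaTime: deltaTime)   // 1. advance the model
        views.forEach { $0.update() }      // 2. let views reflect the new state
    }
}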
Some good rules of thumb: the model (data) shouldn't know anything about sprites or display states; the view (sprites) shouldn't contain any of the game's actual logic (the game rules), but should only know how to do simple things like moving and bouncing, and how to report to the scene when something complicated happens. Whenever possible, the view should react to changes in the model directly, without the controller having to interfere.