Re-creating iPad fling momentum in AIR (AS3)

I'm creating an AIR app, but I realized that it doesn't seem to natively support "fling" momentum. Has anyone out there created an object or plugin that would add it back in? On the objects I want to fling, I'm currently recreating the momentum myself, but it's not perfect yet. Could anyone point me down the right path here?
Thanks!

I ended up creating my own fling, and it works great.
I keep the last 3 touchmove locations and times in memory; then, on touchend, I take the difference between the most recent and oldest touch in memory to find speed and direction. Then I use Actuate to ease the object a certain distance past that point if necessary.
It took a lot of tweaking to make it feel natural, though.
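For reference, here is a minimal sketch of that velocity-sampling idea. It's written in Swift (the original answer is AS3 with Actuate), and names like FlingView and samples are my own illustrative choices, not part of any framework:

    import UIKit

    class FlingView: UIView {
        // The last few touchmove samples: where the finger was, and when.
        private var samples: [(point: CGPoint, time: TimeInterval)] = []

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let touch = touches.first, let container = superview else { return }
            // (Moving the view with the finger while dragging is omitted here.)
            samples.append((touch.location(in: container), touch.timestamp))
            if samples.count > 3 { samples.removeFirst() }  // keep only the last 3 moves
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            defer { samples.removeAll() }
            guard let oldest = samples.first, let newest = samples.last,
                  newest.time > oldest.time else { return }

            // Newest minus oldest sample gives speed and direction (points per second).
            let dt = CGFloat(newest.time - oldest.time)
            let velocity = CGPoint(x: (newest.point.x - oldest.point.x) / dt,
                                   y: (newest.point.y - oldest.point.y) / dt)

            // Ease the view a distance past the release point. The 0.3 factor and
            // 0.6 s duration are exactly the kind of values that need tweaking by feel.
            UIView.animate(withDuration: 0.6, delay: 0, options: .curveEaseOut, animations: {
                self.center.x += velocity.x * 0.3
                self.center.y += velocity.y * 0.3
            })
        }
    }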

GreenSock has a great plugin (ThrowPropsPlugin) that works just like the native iOS flick, and it can be customized to work in a variety of ways. Performance in AIR is superb, even on older iPads. For best results, use BlitMask.

Related

Can I track more than 4 images at a time with ARKit?

Out of the box it's pretty clear ARKit doesn't allow for the tracking of more than 4 images at once. (You can "track" more markers than that but only 4 will function at a time). See this question for more details on that.
However, I'm wondering if there is a possible work-around. Something like adding and removing anchors on a timer or getting the position information and then displaying the corresponding models without ARKit, etc. My knowledge of Swift is fairly limited so I haven't had much luck experimenting yet. Any suggestions or pointers in the right direction would be helpful.
By comparison, ARCore for Android has a limit of 20 according to the documentation. I've also personally tested web-based libraries tracking over 4 markers on an iPhone, and I believe I read somewhere that some variation of the Nintendo DS was able to track more than 4 markers. So there's no way this limit is down to hardware limitations.
ARKit 5.0
Today's answer is YES, you can. At WWDC 2021, Apple announced that developers can now detect up to 100 images at a time in ARKit 5.0.
ARKit 4.0
In ARKit 4.0 there's no workaround that lets you simultaneously track more than FOUR images (via the ARImageAnchor class) inside a session's ARImageTrackingConfiguration. I should say that this limit applies even though the total number of detected images in a scene can be up to 100 in ARKit 4.0.
You can read the comments in the ARConfiguration class if you choose Xcode's Jump to Definition option.
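For concreteness, here is roughly what that configuration looks like in Swift. The API names (ARImageTrackingConfiguration, trackingImages, maximumNumberOfTrackedImages) are the real ARKit API; the "AR Resources" group name is just an assumed asset-catalog group:

    import ARKit

    // Assumes "AR Resources" is a reference-image group in your asset catalog,
    // and sceneView an ARSCNView set up elsewhere.
    let configuration = ARImageTrackingConfiguration()

    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: .main) {
        configuration.trackingImages = referenceImages  // the detection set can be large...
    }

    // ...but live tracking is capped: requesting more than 4 here changes nothing.
    configuration.maximumNumberOfTrackedImages = 4

    // sceneView.session.run(configuration)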
I believe Cupertino's software engineers didn't impose this limit by accident. The ARImageAnchor class inherits from the ARAnchor parent class and conforms to the ARTrackable protocol, so it tracks not only static images but moving images as well (like a logo on a car's body). Hence, tracking more than 4 images is highly CPU/GPU intensive (the most notorious way to drain a phone's battery), because your device must detect and track several different objects at once.
I suppose simultaneously tracking more than 4 images will become possible in a newer ARKit version running on considerably more powerful 5 nm devices, like the iPhone 12 we'll see this fall.
Thus, Apple's software engineers sacrificed app functionality for the sake of a robust AR experience.
P.S.
It isn't really fair to compare ARCore with ARKit, because the two frameworks work differently inside, even though they share similar fundamental principles, like the world tracking, scene understanding and rendering stages. In addition, I should say that ARCore has more modest functionality than ARKit, which makes ARCore more "lightweight" in terms of CPU calculations (although I realize that last phrase sounds very subjective).

Screen sizes in Xcode 7 when using SpriteKit

I am relatively new to SpriteKit and have been attempting to create my first basic game. All the physics and other basics seem OK, but for some reason, whenever I build and run, the screen dimensions are off (the default looks like 1024×768)?
I'm pretty sure I'm missing something fundamental here, but it isn't immediately obvious how to adapt the scene to any iPhone screen size (which is my ultimate goal).
My question is whether this is actually just a settings issue, or whether it needs to be handled in code?
Thanks in advance and have a great day!:)
To answer the first part, you can easily change the size of your scene.
If you take the default GameScene, click outside the scene and look at the Attributes Inspector, you will see the default size of 1024×768. Personally, for landscape I tend to work with an iPhone 5 design resolution of 568×320.
Regarding multiple devices, SpriteKit works pretty well out of the box. You should look at the documentation for scaleMode, and take a look in GameViewController.swift. For me, .AspectFit worked really well: nearly perfect across all devices, apart from a little letterboxing on the iPad. For the amount of effort involved, that's more than good enough.
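As a sketch, this is roughly how the stock Xcode template's view controller looks with that approach applied (modern Swift spelling shown; in the Xcode 7 / Swift 2 era the scale mode was written .AspectFit, and GameScene is the template's default scene class):

    import SpriteKit
    import UIKit

    class GameViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            guard let skView = view as? SKView,
                  let scene = GameScene(fileNamed: "GameScene") else { return }

            scene.size = CGSize(width: 568, height: 320)  // iPhone 5 landscape design size
            scene.scaleMode = .aspectFit                  // fit with letterboxing, never distort
            skView.presentScene(scene)
        }
    }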
On a side note, I've found the iPhone Resolution Guide a useful resource in the past.

How to merge 5 monitors in XNA (fullscreen)?

I need to merge 5 monitors in XNA (something like Eyefinity).
I have two graphics cards (HD 5450) with DisplayPort connectors, and
5 flat monitors, each with a resolution of 1024×768.
I need to merge/group these monitors in XNA, because I want to run fullscreen across all 5 of them
(fullscreen over multiple monitors).
Essentially, I need Visual Studio to detect one graphics device with a resolution of 5120×768.
How should I modify GraphicsDeviceManager / GraphicsAdapter to make this work?
I can't use Eyefinity, because I have two graphics cards, so I'm trying to build "my own Eyefinity" in XNA.
In my app, I have 5 models divided across 5 viewports, each offset by 1024 px.
Alternatively, how can I make it at least look like fullscreen? I don't want the window border to be visible, and I want it centered on the screens. How do I center it?
Thanks for any answers.
To be honest, this is going to be difficult, if not impossible, to do using XNA. You'd have to get so far outside what the XNA framework provides that there would be little benefit to even using XNA at that point.
Here's a great thread on the App Hub forums talking about different ways of potentially hacking around the XNA framework to achieve multiple monitor fullscreen using XNA.
http://forums.create.msdn.com/forums/p/5562/571993.aspx
As you can see, no one really had any great suggestions, and by the time you were done you'd basically be programming at such a low level that you might as well be using C++ and DirectX, which is exactly what I would recommend to you.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb206364(v=vs.85).aspx
Using DirectX, you'll get a game/application running fullscreen on a multiple-monitor setup much faster, and without having to hack your way into it.

Which has better performance, symbols in the library or exported images of those symbols?

I recently switched to a method where my library is totally empty and I embed every image/animation/sound I need via the Embed tag, because it makes my life easier. Having a lot of symbols in my library causes CS5 to run extremely slowly, and it was really annoying me.
This was totally fine for making games for computers, but I'm currently working on my first iPhone game and I'm noticing that the game starts lagging after a few seconds of play. It's not even a complex game, but it does have a lot of images with transparency. So I'm wondering whether it would run faster if I dropped the empty-library method. I don't really know how that would affect performance, but this is the first time I've had to worry about performance. I am aware of other things that affect performance, like transparency and object pooling (I just read about that one).
Also, the exact same game runs worse on an iPad, even though it's a more powerful device?
Having images in your library means that Flash will compress them when it exports the SWF/SWC. This may or may not be desirable.
Using the [Embed] tag means that you can compress them yourself in something like Photoshop and have complete control over the output.
You say that your "game starts lagging after a few seconds of play". This seems to be a memory/design problem rather than whether or not your images are embedded through code or as library symbols. Do some profiling to see where you're spending most of your time, and to check that you don't have a memory leak.
Both the library and the [Embed] metadata tags influence the performance of your IDE, but they are evaluated at compile time and will produce just about the same bytecode.
The performance of your game at runtime is an entirely different issue, though you might be able to get good results by rethinking which images to embed as bitmaps and which to use as vector sprites, how to organize larger images (e.g. creating one basic player sprite and adding individual looks by composition, instead of making each player variation a full sized animation) and trying to reduce the use of transparency and alpha masks.
There are many good articles about improving ActionScript performance both on the AVM2 and on iOS devices. Try searching for "ActionScript optimization runtime" - it should yield plenty of results.
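Since object pooling came up in the question, here is the pattern in a minimal sketch. It's written in Swift for brevity, though the AS3 version is structurally identical: pre-create instances and recycle them instead of constructing new ones every frame, which is what triggers garbage-collection hiccups. The Pool name and the Bullet type in the usage note are hypothetical:

    final class Pool<T> {
        private var free: [T] = []
        private let make: () -> T

        init(preload count: Int, make: @escaping () -> T) {
            self.make = make
            free = (0..<count).map { _ in make() }
        }

        // Hand out a recycled instance when one is available, otherwise build one.
        func obtain() -> T { free.popLast() ?? make() }

        // Return an instance to the pool instead of letting it be collected.
        func recycle(_ object: T) { free.append(object) }
    }

    // Usage sketch (Bullet is a hypothetical game object):
    // let bullets = Pool(preload: 32) { Bullet() }
    // let b = bullets.obtain()   // on fire
    // bullets.recycle(b)         // when it leaves the screen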

Interface Builder vs Cocos2D - how to choose the best one for your app

I was a Flash developer for 3 years, and in the last 5 months I've started iPhone development. I made 2 applications with Interface Builder for clients, and now I really want to make a little game. It's quite simple, a match 3! I built the engine in Interface Builder, and it seemed good to me. But after reading some posts, I really wanted to try it in Cocos2D, so in 2 days I rewrote my whole first engine for Cocos2D. The upside-down coordinates were very annoying, but OK, I did it. But the performance, side by side with the Interface Builder version, is really scary: many, many slowdowns on the Cocos2D side, and the animation seems buggy to me. I really don't know what the best choice is for a simple game.
I'd like some opinions:
Should I use Cocos2D when I need physics? What about when there are many objects on screen? What performance boost do I get with Cocos2D?
Is there a way for me to share these 2 applications with you guys, without your UDIDs?
Cocos2D uses Chipmunk, which IMHO is a great physics library if all you need is 2D! As for sharing: Apple doesn't support ad-hoc distribution without knowing the device ID (UDID) of the test device.
I think baDa is asking when to use Cocos2D and when to use Interface Builder. Since you can develop games using Interface Builder, is it advisable to do so? What are the performance differences between these two approaches (Cocos2D vs Interface Builder)?
I was also confused and asked myself the same thing when I was just starting to program. After working on several games, I realized that you can actually combine these two approaches. The big BUT is that when you work with objects dynamically (especially with a bunch of images) using IB, it totally removes the flexibility of the code. Adding/removing images, adjusting coordinates and replacing scenes is just too awkward, bulky and wasteful to implement.
My simple rule is to use Cocos2D for game development and IB for other apps that are rich in interfaces.