Large venue augmented reality - android-camera

What are all the hardware (camera) and software requirements for big screen augmented reality? Is there any workaround to use Vuforia and smartphone to display on a large LED screen?

Basically, it will work if the target is big enough and the mobile device camera is good enough. But it will be very difficult for you to get exact answers, I think - you need to test it or simulate it.
Regarding people hiding the target - again, you need to figure out yourself what the scenario is and how bad it can be. If it's very crowded and people are hiding the target constantly, using only a target-based solution is not enough. If they're only hiding it here and there, you can use Vuforia's Extended Tracking for that.

Related

Lighting in Unity2D looking different on phone than in editor

I am working on a little mobile game in Unity in 2D. I want the player to be a light source and everything else to be dark, so I gave "everything else" the sprites/diffuse material. This works extremely well in the editor/Game view in Unity, but when I build to my Android phone it looks weird. See the pictures. Any ideas?
Currently I am using the realtime rendering mode for the light; I know it is not efficient, but that's a problem for later. I looked into baked lighting, but I spawn the rooms randomly and dynamically, so I am not sure how to proceed there.
I can find very little information about lighting in 2D mobile games in Unity and am not sure how to proceed; it is all very confusing.
How can I make the lighting look the same on the phone as in the editor/game view in Unity?
Sorry for writing an answer instead of a comment, but I'm new to SO and don't have the rep to write comments yet.
I had similar issues on my Android build and we would need way more information in order to help you.
(1) Are you building for Android, iOS or both?
(2) Check your graphics emulation. Make sure you use OpenGL ES 3.
(3) Check your graphics tier upon build. Try to use the highest tier with high settings. Lower tier settings might result in bad light.
(4) Not necessarily related to your issue, but you might want to try building with a 32-bit display buffer and see what happens.
(5) Are you using post processing effects? There is very limited support on mobile devices.
(6) Check the priority of each light source you are using.
(7) Maybe it's a shader issue? Have you tried using mobile shaders?
Hope this helps; again, sorry for answering while not being 100% sure.
Cheers.

ARKit with multiple users

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
===Edit1, Things I've tried===
1) Feature points
I've played around with the exposed raw feature points, and I'm now convinced that in their current state they are a dead end:
they are not really raw feature points: they only expose positions, but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with many appearing or disappearing
So overall I think it's unreasonable to try to use them in any meaningful way, since it isn't possible to do any kind of good point matching within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and getting the relative reference change from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc. all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience - multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity; a rough sketch of both pieces follows below.
(BTW, the second and third points above are where you get the "2 to 6" figure from @andy's answer - there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
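To make those points concrete, here is a minimal sketch in Swift of the two pieces mentioned above. It assumes an already-connected MCSession called mcSession and a running ARSession called arSession; the function names are mine, not Apple's sample code.

import ARKit
import MultipeerConnectivity

// Sender side: archive the current world map and ship it to all connected peers.
func shareWorldMap(from arSession: ARSession, over mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

// Placing shared content: add a named anchor locally, then send the encoded anchor
// so every peer can add the same anchor (and attach the same 3D model) on their side.
func placeAndShare(anchor: ARAnchor, in arSession: ARSession, over mcSession: MCSession) {
    arSession.add(anchor: anchor)
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: anchor,
                                                    requiringSecureCoding: true) {
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

On the receiving side you unarchive with NSKeyedUnarchiver and either run a new ARWorldTrackingConfiguration with initialWorldMap set (as in the snippet further down) or call session.add(anchor:) with the decoded anchor.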
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
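If you want to experiment with that last point, here is a rough sketch of my own (not something ARKit gives you out of the box) that feeds each frame's pixel buffer into Vision's barcode detector to spot QR-code markers. Turning the 2D observation back into a 3D pose is the hard part and is not shown here.

import ARKit
import Vision

class MarkerScanner: NSObject, ARSessionDelegate {
    // Called for every new camera frame; in practice you'd throttle this,
    // since running Vision on every frame is expensive.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let request = VNDetectBarcodesRequest { request, _ in
            guard let observations = request.results as? [VNBarcodeObservation] else { return }
            for marker in observations {
                print("Saw marker \(marker.payloadStringValue ?? "?") at \(marker.boundingBox)")
            }
        }
        // frame.capturedImage is the CVPixelBuffer ARKit exposes for each captured frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])
    }
}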
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new augmented reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you can create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // archive the worldMap here and send it to the other device
}

// On the receiving device, relocalize to the map that was sent over:
let configuration = ARWorldTrackingConfiguration()
configuration.initialWorldMap = worldMap   // the ARWorldMap received from the sender
session.run(configuration)
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it at the next launch of the app, in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
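For the persistence case, here is a minimal sketch of what saving and restoring can look like; the file name and function names are my own choices, everything else is stock ARKit / Foundation API.

import ARKit

// Any writable file URL works; this one lives in the app's Documents directory.
let mapSaveURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldmap.arexperience")

// Archive the current world map and write it to disk before the app goes inactive.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapSaveURL, options: .atomic)
    }
}

// On the next launch, read the map back and relocalize in the same physical environment.
func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapSaveURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}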
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and let them start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, and (re)init the session.
Medium: Save the players from aligning the phone manually; instead, detect a real-world marker with image analysis, just like @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera locations, like it's done with human faces and eyes. Aggregate the data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference; specifically, I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would theoretically remain in sync on all devices. Still testing. The obvious potential problem is that latency or disruption in the network makes the sync fall apart, and it would be difficult to recover except by restarting the game. The approach and framework may work fairly well for some types of games - for example, straightforward arcade-style games - but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

Make iphone app ipad compatible

I have an iPhone/iPod app that I hired a contractor to make. Now I am asking same contractor to support iPad, and the contractor is quoting a ridiculously high price (the BD guy is). I think they know that since they have developed the app, they have some leverage and want to maximize their profit.
Some questions:
Is adding support for iPad mostly a UI job?
Is any coding needed except detecting device type?
Looking at their images/ folder, I can see that for every graphic, they have already made a "2x" version which is double in size. Could it be that they have already created the necessary artwork, as I have told them from the start that iPad support will likely follow the iPhone version?
If I were to use a different contractor now - as it is likely we will not come to a middle ground, since we are so far apart in price - what would a different contractor need to do the port?
In particular, I'm wondering if I need to fight to get the raw Photoshop files which contain the graphics so they can be recreated for the iPad, or whether going by eye will be good enough? I personally don't mind if the artwork is slightly different.
This certainly makes me think twice about using contractors in the future.
Well, here are some answers from my experience:
Yes, mostly it's just about changing the look of your app. But people expect a different user experience on the iPad, so not every view should be full screen, for instance.
No, most iPhone code will run fine on the iPad; if you are using stuff like UIImagePickerController, then you need to change the way it is displayed.
NO, the @2x images are for Retina devices, NOT for the iPad.
Source code and design would do it for me.
Having the original PSDs would be nice, but you can do without.
Just keep in mind that you can't just scale up most applications and expect them to be fully accepted by users.
This really depends on the app, but there are some differences between iPhone and iPad.
Yes, it is mostly a UI job, and depending on screen content, porting one screen can be trivial (just checking whether the autoresize functions do their job right) or tough - making one from scratch. If your application has lots of complicated screens, I get why the price may be high.
Also, there are some differences in which controllers are available on each device, mostly popovers and action sheets, and that may require different code for each device.
As for the graphics: the @2x resources are actually for the Retina-capable devices (4th and 5th gen); most people use them for the iPad too, but as the screen dimensions are not exactly the same, they get warped slightly. In most cases that's OK, but for really high quality, a separate set of graphics may be required.
Take these as generic answers; the complexity of the actual app may affect them quite a bit:
1) If the app isn't using any specific functionality on the iPhone that isn't always available on the iPad (GPS, for example, or specific camera resolutions for image processing), then yes, it's mostly a UI job. That doesn't mean it's necessarily quick and easy; you may want to change the layout radically for the iPad (that, of course, is up to you, though).
2) Most code, except the possibly UI-related code mentioned above, should not need much change. Exceptions, if any, are mostly related to different hardware on different models and depend on the complexity of the application.
3) 2x images are not for the iPad; they're for the Retina display on the iPhone 4 and later.
4) Almost impossible to answer without seeing the code or even the app, sorry. If it's a fairly simple application, everything needed should be contained in the Xcode project.
5) Up to you. If you want a quick "fix", you may want to resize the 2x images from Retina resolution to iPad resolution in Photoshop and use anti-aliasing to make them look OK. Your judgement call, though. Just check that your deal with the contractor does not give him all the rights to the artwork, or you may get into trouble changing/reusing it.
It is. You'll require separate nibs for the iPad UI; if you don't want different UI logic, it's possible to use the same view controllers.
View controllers will require logic branches if the UI is different. It's mostly checks for the user interface idiom, though (see the sketch after this answer).
@2x versions are for the Retina display. They will be useful when an iPad 3 with Retina hits the shelves. Right now, low-res images will be enough for the iPad UI.
A different contractor will require the complete code of your project along with all resources...
...so yes, get all the PSDs as well.
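To illustrate the interface-idiom check mentioned above, here is a tiny sketch, written in modern Swift for brevity (the same check exists in Objective-C as UI_USER_INTERFACE_IDIOM()); the nib names are placeholders.

import UIKit

// Pick a device-specific nib while keeping a single view controller class.
func makeMainViewController() -> UIViewController {
    let nibName = UIDevice.current.userInterfaceIdiom == .pad
        ? "MainViewController~ipad"     // placeholder nib names
        : "MainViewController~iphone"
    return UIViewController(nibName: nibName, bundle: nil)
}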
First off, I have well over a decade as a professional software engineer working for many clients both small and blue-chip, with broad experience of a variety of languages/devices. With that said:
Please remember that the iPad version will need testing on iPad 1, iPad 2, and, in a couple of weeks' time, on an iPad 3. Testing takes time. The new version will also need to be tested again on all iPhones too.
Also, you mention that this app is a game. The original code might have been written in such a way as to assume a certain screen resolution, and may even have hard-coded values throughout relating to screen positions etc., particularly if the coder was not aware of a future iPad requirement. Also, supporting an iPad 3 might not be an insignificant task if it has @2x graphics, depending upon the original code and the game engine used (if there is one).
For some apps, creating an iPad version will cost the same as the original iPhone app.
If your original agreement didn't include IPR over the source, you might have difficulty getting it. Some agencies and contractors default to providing source to clients; others charge extra for provision of the source.
Lastly, the contractor might have originally coded the iPhone app at a loss, i.e. they might have quoted you and been paid for 3 days' work when they actually spent 10 days on it. In which case, they might be assuming the worst for the iPad version too.
There are a lot of questions to ask and answer before you can say they are trying to rob you.

What should I consider to ensure seamless port of my iPhone apps to iPad?

Following the iPad's announcement and its SDK (iPhone SDK 3.2), porting apps to the iPad has become an important issue. What guidelines should I follow in my iPhone apps to ensure I can port them to the iPad as seamlessly as possible?
The different resolution is a particularly important issue. While the iPad runs iPhone apps unmodified, that's not really the desirable behavior for a native app. How can we make our iPhone apps resolution-independent so that they run gracefully on all resolutions, like most desktop apps?
If you've been using IB and setting the resize behaviors of elements properly, and also coding frame coordinates relative to each other, you are halfway to having a UI that can potentially scale to a larger screen (a small sketch of the resize-behavior idea follows at the end of this answer).
From the screenshots, there are new kinds of action sheets as well, potentially attached to UI elements instead of floating. If you use overlays today, they will probably work about the same, but you may want to consider changing their placement from the center on a larger display.
UPDATE:
Now the event is over, and registered developers can download the SDK. Although we cannot talk about specific features here just yet, read through ALL of the documents related to the new OS version, as there are a number of things aimed at helping you transition to supporting both platforms. Also, before you start using custom libraries for things, take a look through the API changes to see what new abilities might be supported that are not today.
Generally speaking, what I said above about IB holds true, and also you should start thinking about how your apps today could use more space to present more information at once instead of being split out over multiple screens. Also if you are doing any projects right now that use images, make sure to initially design the images large enough that you can also use them for higher resolution tablet applications.
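As a tiny illustration of the resize-behavior point from the first paragraph of this answer, here is a sketch in Swift for brevity (the same flags exist in Objective-C as UIViewAutoresizingFlexibleWidth and friends):

import UIKit

// A subview that fills its container and keeps filling it when the container's
// bounds change, e.g. when the same layout runs on a larger screen.
func addScalableContent(to container: UIView) {
    let content = UIView(frame: container.bounds)
    content.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    container.addSubview(content)
}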
It is far more reasonable to expect users to input text (and larger amounts of it) on an iPad than on a non-iPad device.
Nothing, it appears, although we don't have the SDK quite yet. It will run all existing iPhone apps without an issue, albeit at reduced resolution.
It remains to be seen how much of the existing iPhone SDK is shared with the iPad SDK, UI-wise.
Judging by what has been said, absolutely nothing. You will have to adapt to the new screen size and better hardware altogether if you want to take advantage of the features that the improved device offers. The lack of a 3G module is also something to consider if your app(s) rely on that functionality.

which features do you look forward to the most in iPhone SDK 3?

Which of the new features are you looking forward to the most in iPhone SDK 3.0?
Is it one of the main advertised six new things, or something smaller? Something in the "1,000 new APIs", perhaps?
Phone-to-phone communication via Bluetooth seems like it will be terribly useful for some apps I am writing. No longer do you have to input all the data you want to store yourself; you can share some of it with other iPhone users.
Not really a feature, but the best thing about developing the iPhone SDK further is the great frameworks that arise. There are some really, really great frameworks out there already (like the Three20 project) which will become even better with the new 3.0 SDK.
My real excitement will take over once they let us run background processes. Maybe in 4.0?
Video! The ability to write decent tools for mobile video uploads is a big draw.
MapKit by far will bring the biggest change sweeping across the app space.
My personal favorite is that we can finally easily track upload progress of large files (like images).
I really, really want to see fixes in the camera API so that it isn't either broken (2.2.1) or forcing a switch to portrait (3.0).
Apart from that, the most useful features to me are:
Push notifications - great for making an app more sticky: you can let the user know that something of interest to them has happened.
Core Data - I've been using a third-party SQL layer, but it's a little buggy and no longer supported.
Peer-to-peer Bluetooth, as the poster above said, is also useful for local data exchange.
And the least useful? Cut and paste. I actually want to disable it in my app (to discourage people from copying content) - and it doesn't look as though you can (yet).
Bluetooth phone-to-phone communication with GameKit will enable a host of currently impossible applications. Multiplayer games with no WiFi network needed and data exchange between two phones are obvious use-cases.
I'd also like to see - not currently included in the betas - a decent camera API that allowed us to customize the appearance of the capture screen, and as another poster said, have it work properly in landscape and portrait mode.