How to "hang on" to two distinct points coming from iPhone's camera input in live stream? - iphone

How could I get hold of two points coming from the iPhone's camera (in a live stream), the way these people do it: http://www.robots.ox.ac.uk/~gk/youtube.html (they're using this technique to bypass the need for markers in AR)?
I'm not interested in AR itself; I only want a way to "hang on" to such points coming from the camera's live stream and not lose them, regardless of whether I move the camera closer to them or further away, to the left or right, etc.
Is it just a matter of writing code that scans the camera's input for something that "stands out" (because of a difference in color, high contrast, etc.)?
Thank you for any helpful ideas or starting points!

Check out http://opencv.willowgarage.com/wiki/
OpenCV is an open-source library that can do a lot around image recognition and tracking.
If you google for it along with iOS as a keyword, you should run into a few related projects that might help you further.
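OpenCV itself is a C++ library, so on iOS you would normally call it through an Objective-C++ bridge; the iOS side is mostly just pulling frames off the live camera feed. Below is a minimal sketch in modern Swift/AVFoundation (the original question predates Swift) that hands each frame to a tracker. `featureTracker` is a hypothetical wrapper; the OpenCV routines named in the comments (goodFeaturesToTrack, calcOpticalFlowPyrLK) are the usual corner-detection and optical-flow calls for this kind of "hang on to points" tracking.

```swift
import AVFoundation

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Hand the frame to the tracker. `featureTracker` is a hypothetical
        // Objective-C++ wrapper around OpenCV (goodFeaturesToTrack to pick
        // high-contrast corner points, calcOpticalFlowPyrLK to follow them
        // from frame to frame).
        // featureTracker.process(pixelBuffer)
        _ = pixelBuffer // placeholder until the tracker is wired in
    }
}
```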

Related

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D printed objects stay stationary, you can scan an ADF and should get good-quality tracking, with a little but not too much drift.
However, if you move those 3D printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D printed objects with AR markers using Tango's ARMarker detection (unsure - is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Monitor movement on a real world track controls Unity Camera

I am working, or about to be working, on a project where I would like to move a monitor along a railed track, left to right and vice versa, and have the camera in Unity update its movement left or right to follow the track movement. I am attempting to do something like this Video Example, but on a smaller scale, and to have, at certain points along the track, triggers that spawn different animations based on where on the track the monitor is, sort of like a timeline.
I guess my question is: does anyone have any thoughts or ideas on how this is accomplished? My initial thought is to use an accelerometer to get movement data and use that with some scripting to control the camera, but I am not sure how big a factor the potential error may be with this approach. Another idea would be to use some sort of laser range finder strapped to the back of the monitor or the track to get a constant distance reading; right now I am really just brainstorming.
On the Unity side of things I will be fine. I am just wondering if there are other ways, or if people have other ideas on how this is done, before I jump into spending money on trial and error. Thanks for any insight anyone may be able to offer in advance!

iOS: Get how fast user is moving

I want to figure out whether a user is not moving at all, walking, or running, using the iPhone. I'm not trying to implement a pedometer. I just want to know roughly whether someone is moving briskly, slowly, or not at all. I don't need mph or anything like that.
I think the accelerometer may be able to do this for me, but I was wondering if someone knows of any tutorials or example code that might point me in the right direction?
Thanks to all who reply.
The accelerometer won't do you any good here - it only measures acceleration (changes in velocity), not speed itself.
Just track the current location periodically and calculate the speed.
There are no hard thresholds for walking vs. running motion, so you will have to experiment a bit. The AccelerometerGraph sample code should get you started on how to get and interpret accelerometer data.
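A minimal sketch of the "track the current location periodically and calculate the speed" suggestion, in Swift with CoreLocation (the original question predates Swift). CLLocation already carries a speed estimate when available; the fallback calculation and the walking/running thresholds below are rough guesses you would need to tune by experiment.

```swift
import CoreLocation

final class SpeedEstimator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastLocation: CLLocation?

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        // CLLocation reports speed in m/s when the fix is good enough;
        // otherwise fall back to distance / time between successive fixes.
        var speed = current.speed
        if speed < 0, let previous = lastLocation {
            let dt = current.timestamp.timeIntervalSince(previous.timestamp)
            if dt > 0 { speed = current.distance(from: previous) / dt }
        }
        lastLocation = current
        guard speed >= 0 else { return }

        switch speed {
        case ..<0.3: print("not moving")   // thresholds are placeholders; tune them
        case ..<2.5: print("walking")
        default:     print("running")
        }
    }
}
```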
The accelerometer is good, but if the user has an iPhone 4 or iPad 2 you should also use the gyroscope.
See CMMotionManager and the Event Handling Guide - Motion Events.
Apple's documentation is the best example you can get!
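For reference, reading the gyroscope through CMMotionManager looks roughly like the Swift sketch below. Note that the gyro reports rotation rate, not linear speed, so on its own it won't tell you how fast the user is travelling; it only complements the accelerometer or GPS approaches above.

```swift
import CoreMotion

final class GyroReader {
    private let motion = CMMotionManager()

    func start() {
        guard motion.isGyroAvailable else { return }   // only devices with a gyroscope
        motion.gyroUpdateInterval = 1.0 / 60.0
        motion.startGyroUpdates(to: .main) { data, _ in
            guard let rate = data?.rotationRate else { return }
            // Rotation rate in radians/second around each device axis.
            print(rate.x, rate.y, rate.z)
        }
    }
}
```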
People have a different bounce in their step between walking and running, which can be measured with the accelerometer, but this differs between individuals (what shoes they are wearing, what surface they are on, what part of the body the iPhone is attached to, etc.), and this motion can probably be imitated by shaking the iPhone just right while standing still.
Experiment by recording the two types of acceleration profiles, and then use some sort of pattern matching to pick the most likely profile candidate from the current recorded acceleration data.
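A sketch of that experiment-and-threshold idea using CMMotionManager in Swift; the window size and the thresholds are placeholders you would replace after recording real walking and running profiles.

```swift
import CoreMotion

final class ActivityClassifier {
    private let motion = CMMotionManager()
    private var samples: [Double] = []

    func start() {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 50.0
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            // Deviation of total acceleration from 1 g as a crude "bounce" signal.
            let bounce = abs((a.x * a.x + a.y * a.y + a.z * a.z).squareRoot() - 1.0)
            self.samples.append(bounce)
            guard self.samples.count >= 100 else { return }   // ~2 s window at 50 Hz
            let mean = self.samples.reduce(0, +) / Double(self.samples.count)
            self.samples.removeAll()
            // Placeholder thresholds - fit these to your own recordings.
            if mean < 0.03      { print("standing still") }
            else if mean < 0.25 { print("walking") }
            else                { print("running") }
        }
    }
}
```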

how would I use iphone motion detection for an egg shaking-like application?

I am hoping to build an application similar to those egg-shaking applications, to better understand how to detect motion on the iPhone. I've been looking at the accelerometer and Core Motion methods, but can't seem to get what I want working.
The specifics of my need are as follows: I want to play one sound when the user shakes the phone away from them, and another sound when they shake it back towards them. The motion from the user would be very similar to an egg shaker, with two different sounds played depending on whether they moved the device towards or away from their chest. It would also be good to measure the intensity with which they moved away or towards.
Any ideas?
I've searched Apple's sample code for a similar application, but there doesn't seem to be one.
Look at this game, which is open source and makes great use of the accelerometer. It's good at telling which direction you are moving, but I haven't messed around much with intensity. I'm sure that's easy enough once you get into the details.
http://github.com/haqu/tweejump
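A sketch of the toward/away distinction using Core Motion's gravity-filtered user acceleration in Swift. The axis convention (z pointing out of the screen toward the user) and the 0.7 g threshold are assumptions you would tune, and play(sound:intensity:) is a hypothetical hook where you would trigger your two sounds, perhaps mapping intensity to volume.

```swift
import CoreMotion

final class ShakeDetector {
    private let motion = CMMotionManager()

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let z = data?.userAcceleration.z else { return }
            // With the screen facing the user, the device z-axis points out of the
            // screen toward their chest, so the sign of z distinguishes a push
            // away from a pull back. 0.7 g is a guessed threshold; tune it.
            if z < -0.7 {
                self.play(sound: "away", intensity: abs(z))
            } else if z > 0.7 {
                self.play(sound: "toward", intensity: abs(z))
            }
        }
    }

    private func play(sound: String, intensity: Double) {
        // Hook up AVAudioPlayer or system sounds here.
        print("\(sound) shake, intensity \(intensity)")
    }
}
```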

How to detect height of iPhone (for use in augmented reality game)?

I'm working on locating an iPhone device in 3D space.
I can use lat/long to detect physical location, I can use the magnetometer to figure out the direction they're facing, and I might be able to use the accelerometer to figure out how their device is oriented, but I can't figure out a way to get height of the device off the floor.
Specifically, I need to know if the user is squatting down or raising their hand toward the ceiling (a difference of about 2 meters / 6 feet).
I posted a more detailed description of what I'm trying to do on my blog: http://pushplay.net/blog_detail.php?id=36
I would love any suggestions as to how to even fake this sort of info. I really want the sort of interactivity and movement that would require ducking and bobbing, versus just letting someone sit back and angle the phone -- kind of the way people can "cheat" playing with a Wii...
The closest I could see you getting to what you're looking for is using the accelerometer/magnetometer as an inertial tracker. You'd have to calibrate the user's initial position on startup to a "base" position, then continuously sample the sensors on a background thread to build a movement model. This post talks about boosting the default sample rate of the accelerometer functions so that you can get a pretty fine-grained picture of the user's movements.
I'm not sure this will solve your concern about people simply angling the device to produce the desired action, but you will have to strike a balance between being too strict in interpreting movements and allowing for differences in movement.
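To make the inertial-tracker idea concrete, here is a hedged Swift/Core Motion sketch that projects user acceleration onto the gravity direction and integrates it over a short window to guess "ducking" vs "reaching up". Double integration drifts quickly, so the leak factor and thresholds are placeholders; treat this as a gesture detector for brief, large moves, not a source of absolute height.

```swift
import CoreMotion

final class VerticalGestureDetector {
    private let motion = CMMotionManager()
    private var verticalVelocity = 0.0   // m/s, up positive
    private var verticalOffset = 0.0     // m, relative to where we started

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 100.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let d = data else { return }
            // Project user acceleration (gravity removed, in g's) onto the gravity
            // vector to get vertical acceleration regardless of device orientation.
            let g = d.gravity, a = d.userAcceleration
            let upward = -(a.x * g.x + a.y * g.y + a.z * g.z) * 9.81   // m/s^2
            let dt = self.motion.deviceMotionUpdateInterval
            self.verticalVelocity = (self.verticalVelocity + upward * dt) * 0.98  // leak to fight drift
            self.verticalOffset += self.verticalVelocity * dt
            // Placeholder thresholds for roughly a half-meter dip or rise.
            if self.verticalOffset < -0.5 { print("ducking");      self.verticalOffset = 0 }
            if self.verticalOffset > 0.5  { print("reaching up");  self.verticalOffset = 0 }
        }
    }
}
```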
The CoreLocation stuff gives you elevation as well as lat/long, so you could potentially use that, although there are some significant problems with this:
Won't work well indoors (not a problem for Sat Nav, is a problem for games)
Your users would have to "calibrate" (probably by placing the phone on the floor) each location they use!
In fact, you'd need to start keeping a list of "previously calibrated locations"... which could vary hugely just in one house (e.g. multiple rooms and floors). Could get in the way of the game.
Can't be used on moving transport (trains, planes, automobiles... even walking) because the elevation changes so frequently.
Therefore I'd have thought that using the accelerometer as a proxy for height is a far preferable route to determining absolute elevation.
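For completeness, reading the elevation that CoreLocation reports looks like the Swift sketch below; the verticalAccuracy value illustrates the point above, since GPS altitude is typically off by tens of meters, far too coarse to tell a squat from a reach.

```swift
import CoreLocation

final class AltitudeReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        // altitude is meters above sea level; verticalAccuracy shows how coarse it is.
        print("altitude:", location.altitude, "±", location.verticalAccuracy)
    }
}
```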
I am not intimately familiar with the iPhone, but this might require a hardware add-on (which you probably don't want). After thinking about it, the only way I know of is through light, or more specifically a laser: you shoot a laser at the floor and record the time it takes to come back. It's actually not a lot of hardware to put together, and I am sure the iPhone has connections for peripherals. Unless someone can trump me, I say there is no way to do that with an image alone.