How to check if I took an iPhone photo myself?

When I import the pictures from my iPhone, they all end up in one huge file list: IMG_4649.JPG, IMG_4650.JPG, IMG_4651.PNG, … There are images I've taken myself, images from friends, images I've downloaded and screenshots I've taken.
I'm looking for a way to programmatically find out whether I took a picture myself or not. If the iPhone stored its serial number in the EXIF data I could use that, but it doesn't.
Is there any other device specific information stored in the photos? The model alone doesn't help me as others take pictures with the same model.

Short answer: No. As of this writing, the iPhone does not store its serial number in its EXIF data.
However, a probabilistic model might work well enough for your purposes.
I would filter out:
other models (not all your friends and family have the same phone as you)
impossible GPS values (from places you've never been, or, if you want to get fancy, places you weren't on certain days: just make a table of your locations and correlate it with the GPS data in the EXIF); a sketch of reading both fields follows below
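A minimal sketch of those two filters using ImageIO (works on macOS or iOS). The file name and the model string are placeholders, and note that the GPS latitude/longitude refs (N/S, E/W) still need to be applied before comparing against your location table:

```swift
import Foundation
import ImageIO

// Reads the camera model and GPS dictionary from a photo's metadata.
func cameraModelAndGPS(for url: URL) -> (model: String?, gps: [CFString: Any]?) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
    else { return (nil, nil) }
    let tiff = props[kCGImagePropertyTIFFDictionary] as? [CFString: Any]
    let gps = props[kCGImagePropertyGPSDictionary] as? [CFString: Any]
    return (tiff?[kCGImagePropertyTIFFModel] as? String, gps)
}

// First pass: keep only photos whose camera model matches yours.
let url = URL(fileURLWithPath: "IMG_4649.JPG")          // placeholder path
let (model, gps) = cameraModelAndGPS(for: url)
if model == "iPhone 12" {                               // your model string here
    // Values are unsigned; combine with kCGImagePropertyGPSLatitudeRef ("N"/"S")
    // and ...GPSLongitudeRef ("E"/"W") before comparing to known locations.
    let lat = gps?[kCGImagePropertyGPSLatitude] as? Double
    let lon = gps?[kCGImagePropertyGPSLongitude] as? Double
    print("candidate photo at \(String(describing: lat)), \(String(describing: lon))")
}
```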
Then, I'd guess that your pictures probably have an aesthetic signature. There's an interesting paper on automatically assigning aesthetic scores to photographs. You could filter out photos that don't fall within an expected aesthetic range.
The end result won't be perfect, but will be great motivation to develop a unique photographic style!

Related

How can I record the internal audio of my musical app? (NOT from microphone!)

I've published a MIDI-based app that generates sounds. I'd like to implement a REC button to record and save/share the user's musical creations, but I can't find a way to do it. I've found a lot of tutorials on recording sound from the microphone or other external sources, but I care about the internal audio.
I'm using AVFoundation with an AVAudioEngine to which I've attached and connected a bunch of AVAudioUnitSampler with a DLS Soundbank loaded.
The app works great and it's already downloadable on the store, but recording is an important missing feature. Any help would be really appreciated. Thank you.
Recently I worked on an app that allows the user to export sounds and use them outside the app. If the user can create sounds freely, you should already have an internal representation of each sound that can be manipulated in memory; the UI just presents that internal representation to the user.
It's the same as when you have an array of animals, for example, and present that array to the user as a table view.
There's another problem: if you record the sound, it must be played back in full before it can be exported. Is that really necessary? Rendering a 2-3 minute piece in 2-4 seconds sounds more reasonable; see the sketch below.
So, do you really need to record the sound, or are you just missing an in-memory representation of it?
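To make the faster-than-real-time idea concrete: since iOS 11, AVAudioEngine has a manual rendering mode that renders the graph offline. A rough sketch under the assumption that your sequencing code can drive the samplers during the render loop (that scheduling is the tricky part); durationSeconds and outputURL (e.g. a .caf file) are placeholders. For a literal REC button that captures live playback instead, installing a tap on engine.mainMixerNode and writing its buffers to an AVAudioFile is the usual real-time approach.

```swift
import AVFoundation

// Renders `durationSeconds` of the engine's output to a file, faster than real time.
// Assumes `engine` already has your AVAudioUnitSampler nodes attached and connected.
func renderOffline(engine: AVAudioEngine, durationSeconds: Double, to outputURL: URL) throws {
    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
    engine.stop()  // manual rendering mode can only be enabled on a stopped engine
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()

    let file = try AVAudioFile(forWriting: outputURL, settings: format.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let totalFrames = AVAudioFramePosition(durationSeconds * format.sampleRate)

    while engine.manualRenderingSampleTime < totalFrames {
        let remaining = AVAudioFrameCount(totalFrames - engine.manualRenderingSampleTime)
        let status = try engine.renderOffline(min(remaining, buffer.frameCapacity), to: buffer)
        guard status == .success else { break }
        try file.write(from: buffer)
    }

    engine.stop()
    engine.disableManualRenderingMode()
}
```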
I think this is what Soundflower does. After installing it, in your app you set Soundflower (2) or Soundflower (64) as your output device (in the device list you'll also find Headphones and USB out), then in your recording app you select Soundflower (2) or (64) as your input. I think my copy of Soundflower was installed by a Korg app. The (2) and (64) are the number of channels: (2) for stereo, and (64) for any number of output channels up to 64.

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore; I'm not sure whether ARKit has a similar implementation. I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files named with the timestamp. You can pull your sensor data in parallel and, depending on what you need, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
Afterwards, you can use FFmpeg to convert these PNGs into a video. If you just want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
You should be able to pass these images and the corresponding sensor data to your algorithm to check.
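On the ARKit side (the answer above is ARCore-based), a minimal sketch of grabbing each camera frame via ARSessionDelegate. In a real recorder you'd move the PNG encoding and disk writes off the session's delegate queue, or you'll drop frames:

```swift
import ARKit
import UIKit

// Saves each ARKit camera frame as a PNG named after its timestamp,
// so sensor data logged with the same timestamp can be matched up later.
final class FrameRecorder: NSObject, ARSessionDelegate {
    private let context = CIContext()
    private let outputDir: URL

    init(outputDir: URL) { self.outputDir = outputDir }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is a YCbCr CVPixelBuffer; CIImage handles the conversion.
        let image = CIImage(cvPixelBuffer: frame.capturedImage)
        guard let cgImage = context.createCGImage(image, from: image.extent) else { return }
        let url = outputDir.appendingPathComponent("\(frame.timestamp).png")
        try? UIImage(cgImage: cgImage).pngData()?.write(to: url)

        // Pose and intrinsics for the same frame, to log alongside the image:
        _ = frame.camera.transform      // 4x4 camera pose
        _ = frame.camera.intrinsics     // 3x3 intrinsics matrix
    }
}
```

The FFmpeg step then stays the same: assemble the timestamped PNGs into a video, or feed them to your algorithm directly.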

How to add a tag overlay to a photo in iOS (à la Facebook)

I was wondering if anyone had an idea as to how the people-tagging feature works in Facebook's iPhone app, i.e. you can touch the photo and then associate that touch point with a Facebook friend. Specifically, I was wondering whether this is as simple as associating coordinates on the image with a data object (a Facebook friend in this case), or whether they are doing some smarter image recognition in the background to work out which other areas of the photo may also belong to that person, i.e. does the tag extend beyond the point touched on the screen? If the latter is the case, is anyone familiar with the techniques used?
Thanks in advance
Dave
I don't think they are using face recognition algorithms on the iPhone, since that is processor-intensive, especially if you have hundreds of friends. If you want to do face recognition against the faces of people you want to search for, you should do it on the server: after the user takes or imports a photo, send it to your server, search for the faces there, and return JSON with the face points and the data for the matched users. Then build your UI to present it on screen.
Edit
If you want to use face recognition on the iPhone, try this: Face recognition iOS
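For the simpler on-device case, face detection (not recognition) is cheap enough to do locally with Core Image, and it answers the "does the tag extend beyond the touched point" part: detect the face rectangles, then check which one contains the touch. A sketch, assuming you've already converted the touch point into image coordinates:

```swift
import CoreImage

// Returns the bounds of the detected face containing `point`, if any.
// Note: CIImage coordinates have their origin at the bottom-left.
func faceRect(containing point: CGPoint, in image: CIImage) -> CGRect? {
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let faces = detector?.features(in: image) ?? []
    return faces.first { $0.bounds.contains(point) }?.bounds
}
```

If no face contains the touch, you can fall back to storing just the tapped coordinates with the friend's ID, which is likely all the simple case needs.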

Android/iPhone Image parsing

I want to write an Android and/or iPhone app that involves taking a picture of something (right now, I just want to limit it to text), after which the app parses the text to make use of it. For example, a picture of a sentence (or maybe just a fragment) could be parsed by the app to bring up more information about the book: title, author, ISBN, etc., and maybe even information about other books that are similar in content.
Is it possible to do something like this? Is there an API that already parses the content of an image? How is an image stored on Android and iPhone? Is it possible to implement the app on one platform and not the other?
I'd appreciate any input or advice that you guys have to offer. Thank you!
You're looking for this, possibly.
It's called OCR, or Optical Character Recognition.
Also check out ZXing, a great library for decoding one- and two-dimensional barcodes. There are both iPhone and Android versions.
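On the iOS side these days, Apple's Vision framework ships on-device OCR (iOS 13+), so you may not need a third-party library at all for the text part; on Android you'd reach for something like ML Kit or Tesseract instead. A minimal Vision sketch:

```swift
import UIKit
import Vision

// Runs on-device OCR on a UIImage and returns the recognized lines of text.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }
    let request = VNRecognizeTextRequest { request, _ in
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        completion(lines)
    }
    request.recognitionLevel = .accurate
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}
```

The recognized strings can then go to whatever lookup you like (e.g. an ISBN or book-search API) for the title/author step.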

iPhone streaming debugging information

I'm looking for a way (it doesn't need to be App Store safe!) to get hold of video-streaming-relevant debugging information.
What I'm trying to do is write an application that opens a video stream and displays information like:
framerate
audio/video bitrate
codec information
etc.
Basically, I want to display as much information as possible for any given stream.
Thanks for any information in advance,
best regards
sam
Even though you tagged your question with MPMoviePlayerController, that class probably isn't going to help you out very much. First of all, there's a limited amount of information you can access from it at a high level, certainly nothing about codecs and audio bitrate. And even if the class does store this type of information somehow, your app would be disqualified from being in the iTunes App Store if you access non-public methods or properties.
Secondly, MPMoviePlayerController only supports a limited number of codecs itself, namely the ones that can be decoded in hardware on the iPhone/iPad (H.264 baseline and MPEG-4 videos).
Anyway, a good option could be FFmpeg for the iPhone. Getting the information you need seems to be much more straightforward this way; check out this blog post for a nice tutorial on using the libraries.
I'm not sure about the potential legal issues of distributing such a program in the App Store, but if you statically link it with your binary that would at least satisfy Apple... you'll have to check the FFmpeg legal site for their end.
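As a lighter-weight alternative on the AVFoundation side (the framework that superseded MPMoviePlayerController), AVPlayerItem keeps an access log with some of these numbers for streamed content. It won't expose codec internals the way FFmpeg will, but it is App Store safe. A sketch, with a placeholder stream URL:

```swift
import AVFoundation

let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!)  // placeholder
player.play()

// Poll this (or observe the AVPlayerItemNewAccessLogEntry notification);
// the log accumulates one event per uninterrupted stretch of playback.
if let event = player.currentItem?.accessLog()?.events.last {
    print("indicated bitrate: \(event.indicatedBitrate) b/s")      // advertised by the stream
    print("observed bitrate:  \(event.observedBitrate) b/s")       // measured over the wire
    print("dropped video frames: \(event.numberOfDroppedVideoFrames)")
}
```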