Which formats can I use for animated 3D objects - iPhone

.. in order to use them in an augmented reality application on iPhone?
I have an app working like that: the camera detects the marker and then places an object related to that marker. However, the object is not animated; it just stands there. Of course I can move the object programmatically, but I don't want to do that. What I want is for the object to have an animation of its own. I searched, but I can't find the exact file format. There are .obj files (not animated themselves, is that true?), .mtl files, .anm files, etc. If the format is one of these, can you give me an example model?
Thanks

MD2 is definitely your best choice. I have integrated jPCT-AE and Vuforia, and it works like a charm. AFAIK MD2 is best for animated models, mainly because it can store the key-frame animation in itself, and you can have any kind of animation with it at almost no cost.
Here is a test video:
http://youtu.be/chsHh0pEhzw
If you have more questions, do not hesitate ;)
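To see why key-frame formats like MD2 animate so cheaply, here is a rough Swift sketch of the idea; the types are illustrative, not a real MD2 parser:

```swift
// Illustrative only, not a real MD2 loader: MD2 stores a full vertex snapshot
// per key frame, so playback is just interpolation between two frames.
import simd

struct MD2Frame {
    let name: String              // e.g. "run01", "run02", ...
    let vertices: [SIMD3<Float>]  // one position per mesh vertex
}

struct MD2Animation {
    let frames: [MD2Frame]

    /// Pose at a continuous frame index (e.g. 2.35 blends frames 2 and 3).
    func pose(at time: Float) -> [SIMD3<Float>] {
        guard !frames.isEmpty else { return [] }
        let i = Int(time) % frames.count
        let j = (i + 1) % frames.count
        let t = time - time.rounded(.down)
        return zip(frames[i].vertices, frames[j].vertices).map { pair in
            simd_mix(pair.0, pair.1, SIMD3<Float>(repeating: t))
        }
    }
}
```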

You should specify which AR SDK/platform you are using to create the iPhone application you are talking about. That said, for many of the common AR SDKs available, the MD2 format is often used to display animated models (either with a built-in render engine or with example code that shows how to use the MD2 format with that SDK):
http://en.wikipedia.org/wiki/MD2_(file_format)
http://www.junaio.com/develop/docs/documenation/general/3dmodels/
OBJ (Wavefront) files do not support animation.

Related

Create custom AR Quick Look experience without using the device's camera

I'm wondering if there's a way to recreate the "Object" experience when viewing a .usdz file through Apple's AR Quick Look. I want an experience that showcases a 3D object without "augmenting reality".
Some options that I'm thinking of that might be able to recreate this feature:
1) Using ARKit, disabling the camera and setting my own background with a custom image. I would then set the usdz/object in the center of the device's screen while keeping all the interaction functionality for the 3D object.
2) Web AR - recreate this 3D experience elsewhere and showcase it in a webview.
Any guidance or discussion about this is much appreciated - thank you!
You can use Google's model-viewer if you are going with the web solution. Another easy and effective solution would be echoAR (full disclosure, this is where I work). You can simply upload your models there and then get a link to their model view. You can upload models in different formats (OBJ, FBX, glTF, GLB, USDZ) and it will automatically convert them to the format you need to view on any device.
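For your first option, you may not even need ARKit: SceneKit can open a .usdz directly and draw it over any background you like. A minimal sketch, with a hypothetical model.usdz in the bundle:

```swift
// A minimal sketch of option 1 without ARKit: SceneKit opens .usdz directly,
// so there is no camera feed to disable. "model.usdz" is a hypothetical file.
import SceneKit
import UIKit

final class ObjectViewerController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let sceneView = SCNView(frame: view.bounds)
        view.addSubview(sceneView)

        if let url = Bundle.main.url(forResource: "model", withExtension: "usdz"),
           let scene = try? SCNScene(url: url, options: nil) {
            sceneView.scene = scene
        }
        sceneView.backgroundColor = .systemGray6  // custom background instead of the camera
        sceneView.allowsCameraControl = true      // orbit/pinch, much like Object mode
        sceneView.autoenablesDefaultLighting = true
    }
}
```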

How do I create a test app to create many screenshots in UE4?

I'd like to create a test application for my Unreal Engine based game that generates screenshots. I'd like to place many (possibly thousands of) cameras throughout the maps and then have my test application enumerate them all and take a screen capture at each camera location.
I came across Taking Screenshots, but wanted to first check to see if this isn't already built into UE4 in the editor, or some tool. I'm also aware of the Screenshot Comparison Tool, but that doesn't seem to be what I need because I don't really want to use UE4 to do the image matching, but instead just want a directory full of images that I can do with what I want.
Any suggestions?
This is not exactly what you want to do, but I found this article very interesting: https://www.unrealengine.com/en-US/blog/capturing-stereoscopic-360-screenshots-videos-movies-unreal-engine-4?sessionInvalidated=true
It explains how the people at Ninja Theory Ltd produced their 360 video trailer, which, in the end, comes down to producing two 360 screenshots per frame.
What they did was export everything to a folder (as a sequence of images) and then do whatever they wanted with it (in this case, stitch the images together with ffmpeg to make a video).
They used a plugin; I do not know whether it can be tweaked not to make 360 captures, but the built-in "take screenshot" functionality in UE4 could work for you.
More specifically to what you need, you could probably store all the positions/transforms in an array and loop over it when you want to take the screenshots. At each step, you place your camera at the stored position, make sure it is the currently active camera (so the view changes), and take a screenshot.
Taking screenshots and setting parameters such as the export folder, resolution, etc. can be done via console commands, and console commands can be executed from code or Blueprint using the "Execute Console Command" node (there is an example in the article).
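UE4 specifics aside, the loop itself is simple. As a hedged illustration of the pattern only (this is SceneKit/Swift, not UE4 code), it could look like:

```swift
// Not UE4 code: a SceneKit analogue in Swift, purely to illustrate the loop
// (store transforms, reposition one camera, write each snapshot to disk).
import Metal
import SceneKit
import UIKit

func captureScreenshots(of scene: SCNScene,
                        at transforms: [SCNMatrix4],
                        size: CGSize,
                        into folder: URL) throws {
    let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
    renderer.scene = scene

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scene.rootNode.addChildNode(cameraNode)
    renderer.pointOfView = cameraNode         // make it the active camera

    for (index, transform) in transforms.enumerated() {
        cameraNode.transform = transform      // place the camera at the stored position
        let image = renderer.snapshot(atTime: 0, with: size,
                                      antialiasingMode: .multisampling4X)
        try image.pngData()?.write(to: folder.appendingPathComponent("shot_\(index).png"))
    }
}
```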
I hope it helps.
I think your best bet is rendering the cameras to a texture.
This way you can have multiple inactive cameras, then iterate through them: activate each one, capture its view, and move on to the next.
For a basic tutorial, have a look at
https://www.youtube.com/watch?v=a9iho861SlY

Using 3D objects in an iOS app

How would I go about adding a 3D object from Maya into an iOS app? For now, before it gets too complicated, I just want to add it in, with no response to touch yet. Is there a tutorial about this? Thanks!
Edit: It doesn't necessarily have to be Maya. I can learn how to use another program.
Molecules is a great open source iOS app that uses OpenGL ES to render 3D objects that have nice touch interaction. Maybe that would be a good starting point.
Go to Xcode and start an OpenGL ES project.
Export your Maya model to the .obj (Wavefront) format and use Jeff LaMarche's OBJ loader to create C header files that you can include in your iPhone project!
For a more modern example of rendering 3D meshes, check out this library. It comes with an OBJ parser, so if you can export your geometry as an OBJ file, you can render and interact with it using the library on iOS. Here's a short video of the library in action.
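If you would rather avoid an external library entirely, Apple's built-in Model I/O can also parse an OBJ and hand it to SceneKit. A minimal sketch, with a hypothetical teapot.obj in the bundle:

```swift
// A built-in alternative sketch (not the linked library): Model I/O parses the
// OBJ from Maya and SceneKit displays it. "teapot.obj" is a hypothetical file.
import ModelIO
import SceneKit
import SceneKit.ModelIO

func makeModelView(frame: CGRect) -> SCNView {
    let sceneView = SCNView(frame: frame)
    sceneView.scene = SCNScene()
    if let url = Bundle.main.url(forResource: "teapot", withExtension: "obj") {
        let asset = MDLAsset(url: url)        // reads the .obj (and its .mtl, if present)
        sceneView.scene?.rootNode.addChildNode(SCNNode(mdlObject: asset.object(at: 0)))
    }
    sceneView.allowsCameraControl = true      // free orbit/zoom, no touch code needed
    sceneView.autoenablesDefaultLighting = true
    return sceneView
}
```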

iOS: Getting started with the Camera and Custom Image Editing

My app will let users cut out things from photos. They'll be able to either select a photo already in their iPhone's photo library, or take a new one with the camera. From what I understand, UIImagePicker is the simplest way to accomplish picking a photo from the library or taking a new one. However, I also understand that it only provides basic image editing (zoom, crop). I want my image editing to allow for the creation of Bezier curves that, once all joined together, will cut out the enclosed area, saving it without the surrounding background.
The official Apple documentation on UIImagePicker suggests that the AV framework is required for custom image editing, as opposed to the basic zoom and crop. So my first questions are:
Is the AV Framework indeed what I want to use?
Will it get used in conjunction with UIImagePicker (i.e., UIImagePicker is used to select the photo or take a new one, and then my AV Framework code takes over for the image editing)?
Can anyone offer good resources on getting started on learning the code for this process?
My final question is about the actual Bezier curve generation/manipulation. It appears that the Core Graphics Framework has support for this, but there is also the UIBezierPath object, which is apparently some kind of wrapper for the Core Graphics tools I would otherwise use.
So my final question: will I want to use the UIBezierPath object, or does what I previously described require more fine-grained control that UIBezierPath can't provide, thereby forcing me to use the Core Graphics framework directly?
Thanks!
AV Foundation allows you to talk to the camera, to configure it in various ways, and to receive a live feed from it. So it's good for taking new pictures or movies, but not for selecting them from the camera roll or for editing them. You'd likely want to use AV Foundation to replace the image-capture duties that UIImagePicker supplies. You'll probably want to use a UIImagePicker with allowsEditing set to NO, so that you can provide your own, entirely separate editing interface.
No, it's a different sort of task.
I'm unaware of any tutorials on this sort of thing, but the docs are pretty good. I've posted the whole setup for capturing a live feed from the camera in answers like this one; maybe that's a helpful way to see how some of the AV Foundation classes can be chained together?
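For orientation, a bare-bones sketch of such a capture chain in modern Swift might look like this (AVCapturePhotoOutput instead of the era-appropriate still-image output; not the linked answer's exact code):

```swift
// A bare-bones sketch of an AV Foundation capture chain, standing in for the
// linked answer. Requires NSCameraUsageDescription in Info.plist; error
// handling omitted for brevity.
import AVFoundation
import UIKit

final class CameraController: UIViewController {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input), session.canAddOutput(photoOutput)
        else { return }
        session.addInput(input)
        session.addOutput(photoOutput)

        // The live feed on screen, replacing UIImagePicker's camera UI
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        view.layer.addSublayer(preview)
        DispatchQueue.global().async { self.session.startRunning() }
    }
}
```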
What you'll probably end up doing in order to edit an image is starting with a UIImage, creating a Core Graphics bitmap context (which is something you can draw to), doing some sort of compositing into that, and then converting the result back into an image and saving it out to the camera roll.
UIBezierPath is a wrapper over the Core Graphics stuff, but it will probably do what you want. addClip can set a defined path as the new clipping path on the current context, or you can use the CGPath property if you need to go a bit further afield than UIKit's idea of a current context.
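Putting those two paragraphs together, a minimal sketch of the cut-out step might be (assuming the user has already traced a closed UIBezierPath in the image's coordinate space):

```swift
// A minimal sketch of the cut-out step: draw the photo into a bitmap context
// clipped by the traced path, then read the result back as a UIImage.
import UIKit

func cutOut(_ image: UIImage, along path: UIBezierPath) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        path.addClip()            // everything outside the path is discarded
        image.draw(at: .zero)     // composite the photo into the clipped context
    }
}

// Saving the result back out, as described above:
// UIImageWriteToSavedPhotosAlbum(cutOut(photo, along: outline), nil, nil, nil)
```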
Look for the iPhone Cookbook; maybe kickasstorrents still has it.
C07 has everything you need: camera, overlay, loading, picking, editing, snapping, hiking camera, saving docs, sending images, image scroller, thumbnails, masking, etc.

Simulate text writing using Cocos2d on iOS

I am looking for suggestions on how to simulate the writing of dynamic text using Cocos2d on iOS.
The effect should look as though the text is being written by an actual pen in real time.
My main concern is the best way to convert the text into a path that I can move the pen along.
I really don't want to create my own paths manually. It would be great if I could somehow generate a path from a Cocos2d sprite.
I think the best way is to store the path as an array of points. It is really simple to write a small program that loads an image of the font's characters and responds to touch. In the touch handler, just store the touch position in an XML file, and also store the first touch point as the origin of the character. That makes it an easy way to generate paths.
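As a sketch of that data model (Codable standing in for the XML file the answer mentions; in Cocos2d the replay step would move a CCSprite pen):

```swift
// A framework-neutral sketch: record stroke points per character while
// tracing, then replay them to drive a pen sprite along the glyph.
import CoreGraphics

struct CharacterPath: Codable {
    let character: String
    let origin: CGPoint      // first touch point, stored as the glyph origin
    var points: [CGPoint]    // later touch positions, in drawing order
}

/// Pen position at progress t in [0, 1], so a timer or action can sweep
/// the pen smoothly along the recorded stroke.
func penPosition(on path: CharacterPath, progress t: CGFloat) -> CGPoint {
    guard path.points.count > 1 else { return path.origin }
    let scaled = t * CGFloat(path.points.count - 1)
    let i = min(Int(scaled), path.points.count - 2)
    let frac = scaled - CGFloat(i)
    let a = path.points[i], b = path.points[i + 1]
    return CGPoint(x: a.x + (b.x - a.x) * frac,
                   y: a.y + (b.y - a.y) * frac)
}
```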
The last time I had to do that, I resigned myself to creating a video with After Effects (here is a good tutorial on how to do that).
Play it with MPMoviePlayerController and replace the view at the end of the video with your sprite.
The bad thing about this method is that you can't do it on iOS earlier than 3.2 (MPMoviePlayerController can only be in fullscreen mode there).
I don't think it's the answer you are looking for, but I've spent some time on this before and that's what I did.
Anyway, good luck