export 3d scene to make a stitched image - swift

I am developing an app that creates spherical panoramas, using ARKit. I added a button named Capture. Every time the user taps the Capture button, the app takes a snapshot, creates a plane oriented to the device's point of view, and uses the snapshot image as the diffuse material for that plane.
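For context, roughly what that capture action might look like (a minimal sketch; the class, outlet name, plane size, and 1 m distance are illustrative assumptions, not my actual code):

```swift
import ARKit
import SceneKit
import simd

final class CaptureViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!               // assumed outlet

    @IBAction func captureTapped(_ sender: UIButton) {
        guard let frame = sceneView.session.currentFrame else { return }

        // Snapshot of what the camera currently sees.
        let snapshot = sceneView.snapshot()

        // A plane textured with that snapshot.
        let plane = SCNPlane(width: 0.4, height: 0.3) // arbitrary size for the sketch
        plane.firstMaterial?.diffuse.contents = snapshot
        plane.firstMaterial?.isDoubleSided = true

        // Place it 1 m in front of the current device pose, facing the camera.
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -1.0
        let node = SCNNode(geometry: plane)
        node.simdTransform = frame.camera.transform * translation
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```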
My end goal is to export all those planes, stitched into one image, to make a spherical panorama. Can anyone guide me in the right direction?
I've tried using OpenCV, but it doesn't work when I take photos of ceilings or the floor, and it uses a lot of CPU and memory. After spending more than a month, so far I'm only able to create a regular panorama with OpenCV, and even then only by stitching images in small batches and then stitching those partial results into the final image. It also only works well when you place your phone on a tripod: as long as the camera doesn't move much along the x, y, and z axes, the results are OK.
So I guess the only two options I'm left with are exporting the ARKit scene with multiple planes (with photos on them) or using the phone's gyro data to stitch the images.
I'm guessing that using gyro data to stitch images will be extremely complicated in itself. Can anyone point me in the right direction?

Related

Get pixel-count in FoV inside VR-Sphere

Recently I made an application for HTC Vive users to view 360-degree videos. To have a point of reference, let's assume that this video has a resolution of Full HD (1920x1080). See the picture of a 3D model below for illustration.
The field of view of an HTC Vive is 110° vertically and 100° horizontally.
It would be okay to simplify it to a circular FoV of 100°.
My question is: how can I determine the amount of video information inside my FoV?
Here is what I know so far:
You can create a sphere on paper and calculate its surface area using the formulas for spherical caps (a rough worked estimate follows below): https://en.wikipedia.org/wiki/Spherical_cap
There also seems to be a formula for the UV mapping that Unity performs (this is done in Unity). That formula can be found here: https://en.wikipedia.org/wiki/UV_mapping
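Putting the spherical-cap idea into numbers, a rough estimate might look like this (written in Swift only for consistency with the rest of this page; it also pretends the video's pixels are spread uniformly over the sphere, which an equirectangular layout is not, so treat it as a first-order figure):

```swift
import Foundation

let fovDegrees = 100.0                              // simplified circular FoV
let halfAngle = fovDegrees / 2 * Double.pi / 180    // 50° in radians

// Spherical cap area = 2πr²(1 − cos θ); divide by the sphere area 4πr².
let capFraction = (1 - cos(halfAngle)) / 2          // ≈ 0.18 of the sphere

let totalPixels = 1920.0 * 1080.0                   // Full HD frame
let pixelsInFoV = capFraction * totalPixels         // ≈ 370,000 pixels
print(pixelsInFoV)
```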
Any suggestions are welcome!

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want to have virtual objects interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17° so that the Dynamic Mesh matches the room; however, there is still a significant offset relative to the live camera preview.
I was wondering if anyone who has had to deal with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
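The frame composition described above is just a matrix product. Here is the math as a minimal sketch (in Swift/simd for consistency with this page, so it illustrates the algebra rather than the Tango Unity API, and it assumes each matrix maps points from the child frame into the parent frame it is named after):

```swift
import simd

/// Pose of the colour camera in the device frame, built from the two poses the
/// API does expose, both of which are given relative to the IMU frame.
func cameraInDeviceFrame(imuFromDevice: simd_float4x4,
                         imuFromCamera: simd_float4x4) -> simd_float4x4 {
    // device_T_camera = device_T_imu * imu_T_camera
    //                 = inverse(imu_T_device) * imu_T_camera
    return imuFromDevice.inverse * imuFromCamera
}
```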

quartz 2d / openGl / cocos2d image distortion in iphone by moving vertices for 2.5d iphone game

We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, 5 walls and a chair in the middle = 6 PNG files layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew, scale, and translate. However, the problem is that since we are using 6 PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that, instead of applying skew/scale transformations, if we were given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
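A minimal sketch of the m34 trick described above (the layer sizes, angle, and image name are made up for illustration):

```swift
import UIKit

// Container layer whose sublayers share one perspective transform.
let container = CALayer()
container.frame = CGRect(x: 0, y: 0, width: 320, height: 480)

var perspective = CATransform3DIdentity
perspective.m34 = -1.0 / 500.0                 // smaller divisor = stronger perspective
container.sublayerTransform = perspective

// One "wall": a layer textured with a PNG, rotated about the y axis.
let wall = CALayer()
wall.frame = CGRect(x: 60, y: 90, width: 200, height: 300)
wall.contents = UIImage(named: "leftWall")?.cgImage
wall.transform = CATransform3DMakeRotation(.pi / 3, 0, 1, 0)
container.addSublayer(wall)
```

Rotating `container` (or adjusting its `sublayerTransform`) in response to device motion then gives the camera-angle changes mentioned above.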
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
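For illustration, "letting the hardware do it" could look roughly like this with textured quads (sketched with SceneKit rather than raw OpenGL ES, and with made-up image names); the renderer's projection keeps every shared edge aligned:

```swift
import UIKit
import SceneKit

let scene = SCNScene()

// One quad per PNG; the projection applied at render time produces the perspective.
func addQuad(imageNamed name: String, position: SCNVector3, yRotation: Float) {
    let plane = SCNPlane(width: 4, height: 3)
    plane.firstMaterial?.diffuse.contents = UIImage(named: name)
    let node = SCNNode(geometry: plane)
    node.position = position
    node.eulerAngles.y = yRotation
    scene.rootNode.addChildNode(node)
}

addQuad(imageNamed: "backWall", position: SCNVector3(0, 0, -2), yRotation: 0)
addQuad(imageNamed: "leftWall", position: SCNVector3(-2, 0, 0), yRotation: .pi / 2)
addQuad(imageNamed: "rightWall", position: SCNVector3(2, 0, 0), yRotation: -.pi / 2)
// ...remaining walls and the chair added the same way; tilting the device would
// then rotate the camera node instead of distorting the images.
```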

how to deform image?

Hi Friends
I want to make a simple game in which the user hits a car and the car breaks at that point, meaning the image gets slightly deformed wherever the user hits the car image. I know this would be possible using lots of images that change when the user hits the car, but I don't want to use so many images.
Is there any solution for this? How can I deform the image? Sorry for my English. Here is a link to the Flash game that is exactly what I want:
http://www.playgecogames.com/file.php?f=657&a=popup
please respond soon
thanks
You don't say if this is in 2D or 3D, or what techniques you're going to use.
If you're implementing the game using OpenGL, it's fairly straightforward. The object can be made up of a regular mesh, with the image as a texture mapped to the mesh. When the user hits the object, you just deform the mesh.
A simple method would be to take a vector in the direction of the hit, displace the nearest vertex by an amount proportional to the force of the strike, and then fan out to deform the rest of the mesh in decreasing amounts. By deforming the mesh, the image texture will be rendered with all the dents or deformations you like.
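As a minimal sketch of that displacement-with-falloff idea (plain Swift over an array of vertex positions; the function and parameter names are made up):

```swift
import simd

/// Dents a regular grid mesh: vertices within `radius` of the hit point are
/// pushed along the hit direction, with a linear falloff toward the edge.
/// The texture coordinates stay fixed, so the image stretches into the dent.
func dent(vertices: inout [SIMD3<Float>],
          hitPoint: SIMD3<Float>,
          hitDirection: SIMD3<Float>,
          force: Float,
          radius: Float) {
    let direction = simd_normalize(hitDirection)
    for i in vertices.indices {
        let distance = simd_distance(vertices[i], hitPoint)
        guard distance < radius else { continue }
        let falloff = 1 - distance / radius          // 1 at the hit point, 0 at the edge
        vertices[i] += direction * (force * falloff)
    }
}
```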
If you want to do this without OpenGL and just straight images, you could use image resampling to simulate the effect. You have your original pristine image, which is 'filtered' to produce the resulting image. At first there are no deformations, so you copy the original image verbatim. Each time the user hits the object, you can add a deformation using a filter or transform within a local region of interest. This function would resample the source image in a distorted manner, causing it to look like the object is damaged.
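One way to approximate that local resampling without writing your own filter is Core Image's bump distortion, sketched below (the helper name is made up, the hit point is assumed to be in the image's pixel coordinates, and a negative scale pulls pixels inward, which reads as a dent):

```swift
import UIKit
import CoreImage

func dented(_ image: UIImage, at point: CGPoint, radius: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIBumpDistortion") else { return nil }

    filter.setValue(input, forKey: kCIInputImageKey)
    // Core Image's origin is bottom-left, so flip the y coordinate.
    filter.setValue(CIVector(x: point.x, y: input.extent.height - point.y),
                    forKey: kCIInputCenterKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    filter.setValue(-0.5, forKey: kCIInputScaleKey)   // negative = push inward

    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Applying one such distortion per hit, and baking each result back into the source image, gives the accumulating damage described above.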
If you look up some good books on game development, you'll find a great range of approaches to object collisions, deformations and so on.
If you know a bit about image processing techniques, here is the documentation for accessing the pixels of an image:
Apple Reference
You also have libraries for this, such as this one:
simple-iphone-image-processing
But for what you want to do this might not be the easiest way. What I would suggest is that you divide the car into several images depending on what areas can be impacted. Then you just change the image corresponding to the damaged zone each time the car is hit.
I think you should use the cocos2d effects (http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide%3aeffects) plus multiple images, because there are many parts which drop off after the player kicks the car. For example, when the user kicks the side mirror, you should swap the car image for one without the side mirror.
The person who made that Flash game used around 4 images to display the car. If you want the game to be in 2D, the easiest way is to draw the car and cut it into about 4 pieces: left side, right side (a duplicate of the left side), hood, and roof.
If you want to "really" deform the car, you'll have to use a 3D engine like OpenGL ES.
I'd really suggest doing it in 2D :)
I suggest having a look at the cocos2d game engine. You can modify images with effects, which are applied using a virtual grid. Have a look at the effects page in their programming guide.

magnifying moving images

I'm developing a 2D game on iPhone in which I want to magnify the views on the screen to give the effect that the user is looking through a sniper scope whenever they tap the screen.
The objects which the player wants to shoot need to be moving, so I'm incrementing their positions as well as increasing their size.
I want this magnified image to show the updated positions of the moving objects at runtime, to give the effect that the user is looking through the scope.
Scaling the images individually didn't help, because it slowed my application down a lot, as the objects' positions are updated every 0.01 s.
Please help.
Is it a 2D game, or are you using OpenGL ES? If the latter, you can always render your scene to a texture and use the hardware to scale that for you. That shouldn't slow down the game too much...
If 2D, it's always faster to scale one single image than a lot of individual objects, so here it may also be beneficial to render to an image first, and then scale and draw it on the screen.
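A minimal UIKit sketch of that render-once-then-scale idea (the names and the use of drawHierarchy are assumptions; an OpenGL ES or SpriteKit game would do the same thing with render-to-texture):

```swift
import UIKit

/// Renders the already-updated scene view to a single image, then draws a
/// zoomed copy positioned so `center` ends up in the middle of the scope.
func scopeImage(of sceneView: UIView, around center: CGPoint,
                zoom: CGFloat, diameter: CGFloat) -> UIImage {
    // 1. One snapshot of the whole scene per frame.
    let snapshot = UIGraphicsImageRenderer(bounds: sceneView.bounds).image { _ in
        sceneView.drawHierarchy(in: sceneView.bounds, afterScreenUpdates: false)
    }

    // 2. Scale that single bitmap, which is cheaper than scaling every object.
    let scopeBounds = CGRect(x: 0, y: 0, width: diameter, height: diameter)
    return UIGraphicsImageRenderer(bounds: scopeBounds).image { _ in
        let origin = CGPoint(x: diameter / 2 - center.x * zoom,
                             y: diameter / 2 - center.y * zoom)
        snapshot.draw(in: CGRect(origin: origin,
                                 size: CGSize(width: snapshot.size.width * zoom,
                                              height: snapshot.size.height * zoom)))
    }
}
```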