Techniques to get greater levels of zoom in an iPhone 3D engine? - iphone

I have a 3D globe I'm rendering using a single 2048x2048 texture (supposedly the maximum texture size for iPhone/iPad). This limits how far the user can zoom in: at some point 2048x2048 simply doesn't give me enough resolution. Can someone point me to some LOD techniques that could be used to get better resolution when zoomed in? Thanks!

You might want to take a look at a technique called "Virtual Texturing". It is used in video games such as Rage developed by id Software to simulate very large textures.
Googling these terms should give you plenty of results such as this one: http://www.silverspaceship.com/src/svt/
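For a rough sense of the core mechanism (a hedged sketch, not id Software's implementation), the C++ snippet below shows the tile-addressing half of the idea: pick a mip level from how many virtual texels land on one screen pixel, then map a UV coordinate to the tile that must be resident at that level, so only the tiles needed at the current zoom have to be streamed in. The names and sizes (TileId, virtualSize, tileSize) are invented for the example.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct TileId { int level; int x; int y; };

    // Pick a mip level so that roughly one virtual texel covers one screen pixel.
    // texelsPerPixel = (virtual texels spanned by the object) / (screen pixels it covers).
    int chooseLevel(float texelsPerPixel, int maxLevel) {
        int level = (int)std::floor(std::log2(std::max(texelsPerPixel, 1.0f)));
        return std::min(std::max(level, 0), maxLevel);
    }

    // Map a UV coordinate in [0, 1] to the tile needed at that level.
    TileId tileForUV(float u, float v, int level, int virtualSize, int tileSize) {
        int levelSize = std::max(virtualSize >> level, tileSize); // virtual size at this mip level
        int tilesAcross = levelSize / tileSize;
        int tx = std::min((int)(u * tilesAcross), tilesAcross - 1);
        int ty = std::min((int)(v * tilesAcross), tilesAcross - 1);
        return { level, tx, ty };
    }

    int main() {
        // e.g. a 32768x32768 virtual globe texture split into 256x256 tiles
        TileId t = tileForUV(0.37f, 0.82f, chooseLevel(2.0f, 7), 32768, 256);
        std::printf("level %d, tile (%d, %d)\n", t.level, t.x, t.y);
    }

A real implementation adds an indirection texture and a tile cache on the GPU side, but the addressing above is what decides which tiles to load as the user zooms.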
Hope this helps.

Related

Using tiles as textures when zooming in on an object in Unity3D

I have a problem with a smartphone (Android so far) app I am programming in Unity3D.
I have the following setup:
I have a sphere on which, so far, I have a texture of the world. Its resolution is already set to the maximum, but when I "zoom in" on it, it just looks disgusting. I'd like to reload texture tiles, as Google Maps does, to get higher resolution without too big an impact on memory. Is that approach a good one at all? And if so: how do I do that?
I haven't found the right resources so far, or I have just misunderstood them.
EDIT: Since some people seem to have misunderstood me, I'll try again with some more details.
What I am trying to set up is the Earth as a sphere with a map. That map is fine as long as you don't zoom in too far, which is to be expected given the texture's finite resolution. Now, to keep good graphic quality when zoomed in far enough that one could see streets in a city, I want to load additional "satellite pictures" the way Google Maps does (you know: you zoom in and Google Maps reloads the images in that specific area to keep the quality high). How do I achieve that specific behaviour? Providing all the tiles I need is no problem; I have a vector graphic from which I could export all the needed tiles, but I don't know how to reload those tiles when zooming in on a specific area (to reduce memory consumption, draw calls, etc.).
Any help is really appreciated as I am a beginner in programming with unity. Thank you very much in advance!
Dustin
Are you actually zooming in, or moving the camera physically closer? Unity has some internal filtering that reduces texture detail at longer distances. When zooming, the camera isn't actually moving closer, which can mean the more detailed version of the texture won't be used.
Also check your Max Texture Size setting for the texture you are using. Maybe it is turned down too low. Try turning it up to a higher resolution.
If it still looks blurry up close with Max Texture Size turned higher, your texture may not be high enough resolution to begin with. Make it a high enough resolution that it looks good up close. Then begin lowering the Max Texture Size resolution until up close it is "good enough".
Next, typically, it will start looking blurry in the distance or show distinct linear transitions from sharp to blurry. You can fix this by turning up the Aniso Level. Use this with caution because it increases the rendering load. Only use what you need, especially for mobile games.

Taking a Depth Image from an iPhone (or consumer camera)

I have read that it's possible to create a depth image from a stereo camera setup (where two cameras of identical focal length/aperture/other camera settings take photographs of an object from an angle).
Would it be possible to take two snapshots almost immediately after each other (on the iPhone, for example) and use the differences between the two pictures to develop a depth image?
Small amounts of hand-movement and shaking will obviously rock the camera creating some angular displacement, and perhaps that displacement can be calculated by looking at the general angle of displacement of features detected in both photographs.
Another way to look at this problem is as structure-from-motion, a nice review of which can be found here.
Generally speaking, resolving spatial correspondence can also be factored as a temporal correspondence problem. If the scene doesn't change, then taking two images simultaneously from different viewpoints - as in stereo - is effectively the same as taking two images using the same camera but moved over time between the viewpoints.
I recently came upon a nice toy example of this in practice - implemented using OpenCV. The article includes some links to other, more robust, implementations.
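As a hedged illustration of what those toy examples typically do (this uses OpenCV's modern C++ API rather than the older C API, and assumes the two images are already rectified; the file names are placeholders):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
        if (left.empty() || right.empty()) return 1;

        // Classic block-matching stereo correspondence.
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64, /*blockSize=*/15);
        cv::Mat disparity16;
        bm->compute(left, right, disparity16);    // fixed-point disparity, scaled by 16

        cv::Mat disparity8;
        disparity16.convertTo(disparity8, CV_8U, 255.0 / (64 * 16.0));
        cv::imwrite("disparity.png", disparity8); // brighter = larger disparity = closer
    }

With a hand-held "two snapshots" capture you would first have to estimate the camera motion and rectify the pair, which is exactly where the structure-from-motion machinery comes in.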
For a deeper understanding I would recommend you get hold of an actual copy of Hartley and Zisserman's "Multiple View Geometry in Computer Vision" book.
You could probably come up with a very crude depth map from a "cha-cha" stereo image (as it's known in 3D photography circles) but it would be very crude at best.
Matching up the images is EXTREMELY CPU-intensive.
An iPhone is not a great device for doing the number-crunching. Its CPU isn't that fast, and memory bandwidth isn't great either.
Once Apple lets us use OpenCL on iOS you could write OpenCL code, which would help some.

OpenGL ES texture image quality

I have been having a bit of trouble with this issue for a while and have decided to ask for help!
I have a textured 1024 x 1024 area in my iPhone application. I am texturing it using an image that I converted to .pvr4 format using Apple's texturetool.
Now the user has the option of zooming in on this textured object....
The issue is that the quality of the image is not good enough when it is at the highest zoom level.
How can I improve this?
Should I be looking at mip-mapping?
Any pointers in the right direction would be greatly appreciated.
Thanks
Tom
If you zoom in enough on any texture, you won't see much detail.
Here are your options:
You can change the magnification filtering mode so that instead of being blocky (nearest filtering), it's blurry (linear filtering). (Or vice-versa.)
Don't use PVR texture compression. This will make your texture more detailed. (But at the usual cost: larger size, slower rendering.)
When zooming in, switch to a more detailed texture for that specific area.
The last option is the most work, but will likely give the best results. (But try the other two, they're easy.)
Also, mip-mapping likely won't help your situation. In general, enabling mip-mapping can make textured objects far away from the camera look better. It usually doesn't have any effect when the camera is close to a textured object.
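For reference, here is roughly how the first option (and a mip-mapped minification filter) looks in code. This is a minimal OpenGL ES sketch; it assumes a current GL context, an already created texture, and, for the mipmapped mode, that mip levels have been uploaded or generated.

    #include <OpenGLES/ES2/gl.h>   // iOS OpenGL ES 2.0 header

    static void setZoomFilters(GLuint textureId) {
        glBindTexture(GL_TEXTURE_2D, textureId);
        // Magnification: GL_LINEAR blurs when zoomed in, GL_NEAREST stays blocky.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Minification: the mipmapped modes only matter when the object is small/distant,
        // and require mipmaps to exist (e.g. via glGenerateMipmap).
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    }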
PVR4-compression is rather destructive. Simply choose another format if texture quality is crucial.

iPhone, image processing

I am building a night vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of the algorithm you need.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are iPhone ports, for example: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB or just the luminance of HSV, either linear or applying some sort of curve, either globally or local to just the darker areas) and saturating them, and/or using a contrast edge enhancement filter algorithm.
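As a hedged sketch of the "scale up and apply a curve" idea (written against OpenCV's C++ API, which ports to iOS; the file name and gamma value are purely illustrative):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat bgr = cv::imread("dark_photo.jpg");   // placeholder file name
        if (bgr.empty()) return 1;

        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        std::vector<cv::Mat> channels;
        cv::split(hsv, channels);                     // channels[2] is the value (brightness) channel

        // Stretch the value channel to the full 0..255 range, then apply a gamma < 1
        // so shadows are lifted more aggressively than highlights.
        cv::normalize(channels[2], channels[2], 0, 255, cv::NORM_MINMAX);
        cv::Mat valueFloat;
        channels[2].convertTo(valueFloat, CV_32F, 1.0 / 255.0);
        cv::pow(valueFloat, 0.5, valueFloat);         // gamma = 0.5, purely illustrative
        valueFloat.convertTo(channels[2], CV_8U, 255.0);

        cv::merge(channels, hsv);
        cv::Mat brightened;
        cv::cvtColor(hsv, brightened, cv::COLOR_HSV2BGR);
        cv::imwrite("brightened.jpg", brightened);
    }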
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
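And a minimal sketch of summing frames to beat down the noise (it assumes the captures are already aligned, e.g. taken on a tripod; file names are placeholders):

    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> files = {"frame0.jpg", "frame1.jpg", "frame2.jpg", "frame3.jpg"};
        cv::Mat sum;

        for (const std::string& f : files) {
            cv::Mat frame = cv::imread(f);
            if (frame.empty()) return 1;
            if (sum.empty()) sum = cv::Mat::zeros(frame.size(), CV_32FC3);
            cv::accumulate(frame, sum);               // running per-pixel sum in floating point
        }

        cv::Mat averaged;
        sum.convertTo(averaged, CV_8UC3, 1.0 / files.size());  // noise variance drops roughly as 1/N
        cv::imwrite("averaged.jpg", averaged);
    }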
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.

Why do images for textures on the iPhone need to have power-of-two dimensions?

I'm trying to solve a flickering problem on the iPhone (OpenGL ES game). I have a few images that don't have power-of-two dimensions. I'm going to replace them with images of appropriate dimensions... but why do the dimensions need to be powers of two?
The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created so that the texture still looks correct at very small sizes. The image is divided by 2 over and over to make new images.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If this image was 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5 which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
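Here's a minimal sketch of that halving process; the truncation at 4x3 -> 2x1 (where the exact half would be 2x1.5) is the awkward step that older hardware refuses to handle:

    #include <algorithm>
    #include <cstdio>

    // Print every mip level of a texture, halving each dimension (with integer
    // truncation) until the chain bottoms out at 1x1.
    void printMipChain(int width, int height) {
        std::printf("%dx%d", width, height);
        while (width > 1 || height > 1) {
            width  = std::max(width  / 2, 1);
            height = std::max(height / 2, 1);
            std::printf(" -> %dx%d", width, height);
        }
        std::printf("\n");
    }

    int main() {
        printMipChain(256, 128);  // clean power-of-two chain down to 1x1
        printMipChain(256, 192);  // hits 4x3 -> 2x1, where the halving no longer divides evenly
    }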
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware will have access to 2x2 version of the texture, so each pixel on it will be the average color of that quadrant of the image. This eliminates the odd color flashing.
http://en.wikipedia.org/wiki/Mipmap
Edit to people who say this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash upon game load on a laptop that was only about a year old. It worked fine on most machines, though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.
Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.
You can find OpenGL ES support info for Apple iPod/iPhone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is based on OpenGL 2.0.
The constraint on texture sizes was only lifted in version 2.0.
So if you are using an OpenGL ES version earlier than 2.0, this is expected behaviour.
I imagine it's a pretty decent optimization in the graphics hardware to assume power-of-two textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-two in Maya, the rendering is all messed up.
Are you using PVRTC compression? That requires powers of 2 and square images.
Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-two sizes are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
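A minimal sketch of that point, assuming power-of-two dimensions (the numbers are illustrative): wrapping and row addressing become single bit operations instead of modulo and multiply.

    #include <cstdint>
    #include <cstdio>

    int main() {
        const int width = 256, height = 256;          // both powers of two
        const int widthMask  = width - 1;             // 0xFF
        const int heightMask = height - 1;
        const int widthShift = 8;                     // log2(width)

        int u = 1037, v = -5;                         // texture coordinates outside [0, size)
        int x = u & widthMask;                        // same result as ((u % width) + width) % width
        int y = v & heightMask;
        uint32_t texelIndex = (y << widthShift) | x;  // same as y * width + x

        std::printf("wrapped to (%d, %d), index %u\n", x, y, texelIndex);
    }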
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.