Unity aspect ratio 80:9

I have 5 monitors that are connected as a single display. Each monitor is full HD with a resolution of 1920x1080, so combined they are 9600x1080.
So my aspect ratio is 80:9.
When I create a level and use a standard camera, it stretches my textures and meshes across the monitors, so it looks ugly. When I use three or five cameras the picture is great, but I get some kind of distortion at the edges: horizontal lines look slightly broken. You can see it in the picture.
I have tried a lot of different solutions, but none of them work for me.
Is there any advice or a solution that can help me with this?
Thanks
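A likely cause of the broken horizontal line: side-by-side cameras that are rotated tile the view angularly, which kinks straight lines at the seams when the monitors form a flat wall. A minimal sketch of the alternative, assuming a flat wall and Unity 2017+ (for Matrix4x4.Frustum); the rig and field-of-view values here are my own illustration, not a confirmed fix. All five cameras share one position and orientation, and each gets an off-center slice of a single wide frustum, which keeps straight lines straight across seams (at the cost of reintroducing the edge stretch of a single 80:9 camera; if your monitors are physically angled in an arc, rotated cameras are the right model instead).

```csharp
using UnityEngine;

// Sketch: five cameras, each rendering one 1920x1080 slice of the
// 9600x1080 display via an off-center sub-frustum of one wide view.
public class WideFrustumRig : MonoBehaviour
{
    public int monitorCount = 5;
    public float verticalFov = 60f;          // vertical FOV of the whole wall (assumption)
    public float near = 0.3f, far = 1000f;

    void Start()
    {
        // Half-height of the near plane from the vertical FOV, and the
        // matching half-width for an 80:9 (9600x1080) wall.
        float top = near * Mathf.Tan(verticalFov * 0.5f * Mathf.Deg2Rad);
        float fullHalfWidth = top * (9600f / 1080f);

        for (int i = 0; i < monitorCount; i++)
        {
            var cam = new GameObject("SliceCamera" + i).AddComponent<Camera>();
            cam.transform.SetParent(transform, false);

            // This camera draws into its own fifth of the combined display.
            cam.rect = new Rect(i / (float)monitorCount, 0f, 1f / (float)monitorCount, 1f);

            // Off-center frustum: slice the wide near plane horizontally.
            float left = -fullHalfWidth + i * (2f * fullHalfWidth / monitorCount);
            float right = left + 2f * fullHalfWidth / monitorCount;
            cam.projectionMatrix = Matrix4x4.Frustum(left, right, -top, top, near, far);
        }
    }
}
```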

Related

Why is my pixel art scaling wrong on other resolutions?

My pixel art assets all have a PPU of 32, and I have applied that to all of them.
At 1920x1080 the resolution is fine, but at most other resolutions the sprites' pixels are squashed and stretched. I have also set the correct camera size using (vertical resolution / PPU / 2). I have used other formulas too, and they all give me the same camera size, so I'm sure that's not the issue.
I have two monitors, one at 1920x1080 and one at 1360x768, and the latter is where it scales wrong. Is there a way I can keep the pixel scaling the same across all resolutions? I have tried the Pixel Perfect Camera and it doesn't fix my issue either.
Here is a sample of both resolutions: https://imgur.com/gallery/Uf8YqNi. If you open them in a new tab and zoom in on the sprite, you can see how they get distorted on the 1360x768 screen.
The filter mode should be Point and the compression None. If this doesn't fix it, you should use the Pixel Perfect Camera. It will help keep your animations pixel perfect as well.
Window -> Package Manager -> Advanced -> Show preview packages -> 2D Pixel Perfect -> Install. Then you just add the Pixel Perfect Camera component to your Camera.
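If you want to see what the package is doing for you, here is a minimal sketch (the class and field names are my own, not the 2D Pixel Perfect API) of snapping the camera zoom to a whole-number pixel scale. A fractional scale is exactly what squashes pixels at 1360x768 when the bare formula is used:

```csharp
using UnityEngine;

// Sketch: snap the orthographic size so each art pixel maps to a whole
// number of screen pixels. Assumes a 1080-pixel reference height and a
// PPU of 32, as in the question.
[RequireComponent(typeof(Camera))]
public class SnappedPixelCamera : MonoBehaviour
{
    public int pixelsPerUnit = 32;
    public int referenceHeight = 1080;

    void LateUpdate()
    {
        // Integer screen pixels per art pixel; at 768 vertical pixels this
        // clamps to 1 instead of the fractional 768/1080 zoom.
        int scale = Mathf.Max(1, Screen.height / referenceHeight);

        // Orthographic size is half the visible height in world units.
        GetComponent<Camera>().orthographicSize =
            Screen.height / (2f * pixelsPerUnit * scale);
    }
}
```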

Textures look blurry up close but good from a distance Unity

This issue seems to be the reverse of what others describe when I research it. My material looks great from a distance but blurry up close, at all angles. My image is 1024x1024 px. How do I fix this? My import settings are shown in the screenshots. Even though the max size is 2048, I also tried 1024, with the same result.
Using "Filter Mode : Point" gives sharp pixels,
or adjusting Aniso Level "Increases texture quality when viewing the texture at a steep angle"
More info about each of those settings:
http://docs.unity3d.com/Manual/class-TextureImporter.html
Or, if you add tiling to the material (the default is 1; try 10), it looks better when viewed up close, but this can cause issues if it's not a seamless texture.
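For reference, the same tweaks can be applied from a script instead of the import settings; a small sketch, assuming the object has a Renderer with the material in question:

```csharp
using UnityEngine;

// Sketch: apply the filter mode, aniso level, and tiling tweaks in code.
public class SharpenExample : MonoBehaviour
{
    void Start()
    {
        var rend = GetComponent<Renderer>();
        var texture = rend.material.mainTexture;

        texture.filterMode = FilterMode.Point; // sharp, unfiltered pixels
        texture.anisoLevel = 9;                // better quality at steep viewing angles

        // Tile the texture 10x10 instead of the default 1x1; this only
        // looks right if the texture is seamless.
        rend.material.mainTextureScale = new Vector2(10f, 10f);
    }
}
```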

Unity Viewport size to Full HD?

At the moment I am starting out with Unity for 2D development on Android. Before Unity I developed with LibGDX, where I could set the viewport to a static screen resolution of 1080x1920; if the game starts on a smaller device, for example 480x800, it still looks pretty good. In Unity, when I use an orthographic camera and set the width to 1080 and the height to 1920, it looks like a square rather than a portrait view.
How can I use a static camera whose viewport is 1080x1920, with Unity adjusting the resolution for other devices by itself?
Sorry for my bad English :(
Greetings, coco07!
You can't. You're thinking in terms of 2D engines; Unity is a 3D engine with a 2D extension (sort of). You set your camera up (it doesn't matter what orthographic size you use), and it gets scaled to the viewport of the device it's running on. The size you set is the size of the viewport, in world units, along the y axis of the device's display. The width is set automatically based on the device's aspect ratio. You can see this by resizing the Game window in Unity: doing so causes the visualization of the camera's bounds to change.
To create a static camera, you'd have to manually add black borders around the edges (or at least I think you do), or, better yet, create your game in a manner that runs on all aspect ratios. You should consider aspect ratios between 4:3 and 16:9 for landscape games and 3:4 to 9:16 for portrait games. This is especially important since many recent devices use on-screen system buttons (such as the Google Nexus and Xperia Z series), which means you get an aspect ratio a little less than 16:9 (or a bit more, on tablets). That doesn't sound like much, but if you down-scale a 16:9 image to fit on those devices' screens, it looks ugly as hell.
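A minimal sketch of the black-border idea mentioned above, assuming you force a fixed 9:16 view by shrinking the camera's viewport rect (the script name and fields are my own):

```csharp
using UnityEngine;

// Sketch: pillarbox/letterbox the camera so the content always keeps a
// 1080x1920 (9:16) aspect, leaving bars on the wider or taller sides.
[RequireComponent(typeof(Camera))]
public class ForceAspect : MonoBehaviour
{
    public float targetAspect = 1080f / 1920f; // portrait 9:16

    void Update()
    {
        float windowAspect = (float)Screen.width / Screen.height;
        var cam = GetComponent<Camera>();

        if (windowAspect > targetAspect)
        {
            // Window too wide: bars on the left and right.
            float width = targetAspect / windowAspect;
            cam.rect = new Rect((1f - width) / 2f, 0f, width, 1f);
        }
        else
        {
            // Window too tall: bars on the top and bottom.
            float height = windowAspect / targetAspect;
            cam.rect = new Rect(0f, (1f - height) / 2f, 1f, height);
        }
    }
}
```

In practice you'd also put a second camera behind it (lower depth, solid-color clear, culling mask set to Nothing) so the bars render as clean black rather than leftover frame data.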

OpenGL ES. Scrolling 3 layer starfield textures gets me from 60 -> 40 FPS

I need to draw the background for a 2D scrolling space shooter. I need to implement 3 layers of stars: one distant nebula (moving really slowly) in the background, one layer of far-away stars (moving slowly), and one layer of close stars (moving normally) on top of the other two.
The way I first tried this was using 3 textures of 320x480 that were transparent PNGs of the stars. I used GL_BLEND with SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
The results were not great, even on the 3GS. On first-generation devices the FPS dropped to 40-50, so I think I'm doing this the wrong way.
When I disable GL_BLEND, everything works great even on first-gen devices and the FPS is back to 60 again... so it must be the fact that I'm trying to blend large transparent textures.
The problem is that I don't know how to do it any other way...
Should I draw only the nebula as an opaque texture and then try to emulate the middle and top star layers with small points moving around the screen?
Is there any other approach to the blending issue? How can I speed up the rendering process? Is one big texture (tileset) the answer?
Please help; I'm stuck here and can't get out.
I don't know how you want your stars to look, but you might want to try moving them from a texture to geometry by using GL_POINTS with DrawElements or DrawArrays; maybe just replace the top two layers with layers of geometry. You can manipulate the points using PointSize, PointSizePointerOES, and PointParameter to modify how the points are rendered.
You might want to use multi-texturing to see if that speeds it up. Each multi-texture stage can be assigned a unique transformation matrix, so you should be able to translate each layer at different speeds.
I believe all iPhone models support two texture stages, so you should be able to combine two of your layers into a single draw call. You might still need to resort to blending for the third layer.
Also note that alpha testing could be faster than alpha blending.
Good luck!
The back nebula should definitely be opaque; everything else is getting drawn on top of it, and I assume the only thing behind it is black. Also, prideout has a point: assuming your star layers can have effectively 1-bit alpha, that's definitely something you can try. Failing that, the GL_POINTS technique Harald mentions would work as well.

Why do images for textures on the iPhone need to have power-of-two dimensions?

I'm trying to solve a flickering problem on the iPhone (an OpenGL ES game). I have a few images that don't have power-of-two dimensions. I'm going to replace them with images of appropriate dimensions... but why do the dimensions need to be powers of two?
The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created so that the texture still looks correct at very small sizes. The image is divided by 2 over and over to make new images.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If the image were 256x192 instead, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5, which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
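A tiny sketch that just walks the halving chain makes the difference visible (plain arithmetic, nothing GPU-specific assumed):

```csharp
using System;

// Walks the mip chain by repeated halving, as described above. For
// 256x128 every level is a whole number; for 256x192 the chain reaches
// 4x3, whose next level would be the invalid 2x1.5.
class MipChain
{
    static void Print(int w, int h)
    {
        Console.Write($"{w}x{h}");
        while (w > 1 || h > 1)
        {
            // A half-step is only clean if every dimension above 1 is even.
            bool clean = (w <= 1 || w % 2 == 0) && (h <= 1 || h % 2 == 0);
            w = Math.Max(1, w / 2);
            h = Math.Max(1, h / 2);
            Console.Write(clean ? $" -> {w}x{h}" : $" -> {w}x{h} (rounded!)");
        }
        Console.WriteLine();
    }

    static void Main()
    {
        Print(256, 128); // power-of-two: clean all the way down to 1x1
        Print(256, 192); // hits 4x3, where "2x1.5" has to be rounded
    }
}
```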
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware has access to a 2x2 version of the texture, so each pixel on it will be the average color of that quadrant of the image. This eliminates the odd color flashing.
http://en.wikipedia.org/wiki/Mipmap
Edit, for people who say this isn't true anymore:
It's true that many modern GPUs support non-power-of-two textures, but it's also true that many do not.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash on game load on a laptop that was only about a year old. It worked fine on most machines, though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.
Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.
You can find OpenGL ES support info for Apple iPod/iPhone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is defined to match OpenGL 2.0, and the constraint on texture sizes only disappeared in version 2.0.
So if you are using an OpenGL ES version below 2.0, this is the normal situation.
I imagine it's a pretty decent optimization in the graphics hardware to assume power-of-two textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-two in Maya, the rendering is all messed up.
Are you using PVRTC compression? That requires powers of 2 and square images.
Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-two sizes are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
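To make the bit-shift point concrete, a minimal sketch (the helper names are my own) of the texel-address computation for wrapped lookups:

```csharp
static class TexelAddress
{
    // Power-of-two case: wrapping is a bitwise AND against an all-ones
    // mask, and the row offset is a shift. Assumes width == 1 << widthLog2.
    public static int Pow2(int u, int v, int widthLog2, int width, int height)
        => ((v & (height - 1)) << widthLog2) | (u & (width - 1));

    // General case: a modulo (kept positive for negative coordinates)
    // and an integer multiply per texel.
    public static int General(int u, int v, int width, int height)
        => (((v % height) + height) % height) * width
         + (((u % width) + width) % width);
}
```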
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.