Image Processing Question - Converting Standard-Def to Hi-Def, do I have to lose image data? - image-manipulation

I'm writing a small program to convert a standard definition 4:3 video to a hi-def 16:9 video, and I'm experiencing a serious stretching effect, as expected I suppose (though I didn't think about it until my code started working). Anyhow, the only way I can think of getting around this stretching effect and still fill the whole 16:9 screen is to cut off the top and bottom of the image.
1) So my question is, when converting from SD to HD, do I have to lose parts of the image in order to fill the whole screen without any stretching effects?
2) Same question for converting from HD to SD.
I'm new to image processing; are there any popular approaches to reducing the stretching in these kinds of operations? Is there a smarter approach to this problem than just cutting off parts of the image or introducing black bars?
Thanks in advance for all your help!

Other than the obvious methods of cropping, letterboxing, and pillarboxing, which either lose image data or necessitate potentially-undesirable black bars, there is also adaptive image resizing. Basically, the intent of these techniques is to be able to create a version of an image with an arbitrary aspect ratio, without losing the essential characteristics of the image or distorting it. One technique is called seam carving, and can be seen here.
If you'd like to test the technique out on some images of your own, the functionality is included in recent versions of ImageMagick, as explained here.
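For example, ImageMagick exposes seam carving through its liquid-rescale option (it needs to be built with the liblqr delegate). A rough sketch of driving it from Python, with placeholder file names and an illustrative 854x480 (16:9) target:

```python
# Hedged sketch: retarget an SD frame to 16:9 using ImageMagick's seam carving.
# Assumes the "convert" binary is on PATH and ImageMagick was built with liblqr;
# on ImageMagick 7 the command is "magick" instead. File names are placeholders.
import subprocess

subprocess.run(
    ["convert", "frame_sd.png",
     "-liquid-rescale", "854x480!",   # "!" forces the exact 16:9 geometry
     "frame_hd.png"],
    check=True,
)
```

For video you would have to run this per frame, which is slow, and seam carving can produce odd results on frames with a lot of structured detail, so test it on representative stills first.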

Reduction of quality or loss of content is always a problem in resizing images or video. Generally you scale the image in one direction, and either trim or pad the other direction.
On TV it is common to cut off the left and right sides of a 16:9 frame to put it on a 4:3 screen, and to add black side bars to go from 4:3 to 16:9. TV editors don't cut off the top and bottom of a 4:3 frame to fit it on 16:9 because there are almost always important parts of the scene there. The far left and far right of a 16:9 frame don't usually contain important elements, although in some cinematic scenes losing the sides makes a huge difference.
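To make the trim-or-pad arithmetic concrete, here is a minimal sketch (Python, with example resolutions) that computes either the centre-cut region to keep inside the source frame or the padded canvas size around it:

```python
def fit_crop(src_w, src_h, dst_aspect):
    """Size of the centre-cut region to keep when cropping to dst_aspect."""
    if src_w / src_h > dst_aspect:              # source too wide -> trim left/right
        return round(src_h * dst_aspect), src_h
    return src_w, round(src_w / dst_aspect)     # source too tall -> trim top/bottom

def fit_pad(src_w, src_h, dst_aspect):
    """Canvas size when padding (letterbox/pillarbox) to dst_aspect."""
    if src_w / src_h < dst_aspect:              # pillarbox: bars left/right
        return round(src_h * dst_aspect), src_h
    return src_w, round(src_w / dst_aspect)     # letterbox: bars top/bottom

print(fit_crop(1920, 1080, 4 / 3))   # 16:9 -> 4:3 by trimming the sides: (1440, 1080)
print(fit_pad(640, 480, 16 / 9))     # 4:3 -> 16:9 with side bars: (853, 480)
```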

Related

2D sprite problem when setting up an instant messaging UI

I'm new to Unity and to game development in general.
I would like to make a text-based game.
I'm looking to reproduce the behavior of an instant messenger like Messenger or WhatsApp.
I chose to use the Unity UI system for the pre-made components like the Scroll Rect.
But this choice led me to the following problem:
I have "bubbles" of dialogs, which must be able to grow in width as well as in height with the size of the text. Fig.1
I immediately tried to use VectorGraphics to import .svg files, with the idea of moving the points of my Bézier curves at runtime.
But I did not find a way to access these points and edit them at runtime.
I then found the "Sprite Shapes", but they are not part of the UI, so if I went with such a solution I would have to reimplement scrolling, buttons, etc.
I thought of cutting my speech bubble in 7 parts Fig.2 and scaling it according to the text size. But I have the feeling that this is very heavy for not much.
Finally, I wonder if a hybrid solution would not be the best: use the UI for scrolling, get the transforms, and inject them into Sprite Shapes (outside the Canvas).
If 1. is possible, I would be very grateful for an example.
If not, 2., 3. and 4. seem feasible, and I would like your opinion on which of the three is most relevant.
Thanks in advance.
There is a simpler and quite elegant solution to your problem that uses nothing but the sprite itself (or rather the design of the sprite).
Take a look at 9-slicing Sprites in the official Unity documentation.
With the Sprite Editor you can create borders around the "core" of your speech bubble. Since these speech bubbles are usually colored in a single color and contain nothing else, the ImageType: Sliced would be the perfect solution for what you have in mind. I've created a small Example Sprite to explain in more detail how to approach this:
The sprite itself is 512 pixels wide and 512 pixels high. Each of the cubes missing from the edges is 8x8 pixels, so the top, bottom, and left borders are 3x8=24 pixels deep. The right side has an extra 16 pixels of space to represent a small "tail" on the bubble (bottom right corner). So, we have 4 borders: top=24, bottom=24, left=24 and right=40 pixels. After importing such a sprite, we just have to set its MeshType to FullRect, click Apply and set the 4 borders using the Sprite Editor (don't forget to Apply them too). The last thing to do is to use the sprite in an Image Component on the Canvas and set the ImageType of this Component to Sliced. Now you can scale/warp the Image as much as you like - the border will always keep its original size without deforming. And since your bubble has a solid "core", the Sliced option will stretch this core unnoticed.
Edit: When scaling the Image you must use its Width and Height instead of the (1,1,1)-based Scale, because the Scale might still distort your Image. Also, here is another screenshot showing the results in different sizes.
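For what it's worth, here is a language-agnostic illustration of what the Sliced image type effectively does, sketched with Python/Pillow rather than Unity code. The border sizes come from the example sprite above; the file names and output size are made up:

```python
# Sketch of 9-slice ("Sliced") scaling: corners keep their size, edge strips
# stretch along one axis, and the solid core stretches in both directions.
from PIL import Image

def nine_slice(src, out_w, out_h, left, top, right, bottom):
    w, h = src.size
    out = Image.new("RGBA", (out_w, out_h))

    xs = [0, left, w - right, w]           # cut positions in the source
    ys = [0, top, h - bottom, h]
    xd = [0, left, out_w - right, out_w]   # cut positions in the output
    yd = [0, top, out_h - bottom, out_h]

    for row in range(3):
        for col in range(3):
            piece = src.crop((xs[col], ys[row], xs[col + 1], ys[row + 1]))
            dw, dh = xd[col + 1] - xd[col], yd[row + 1] - yd[row]
            if dw <= 0 or dh <= 0:
                continue
            out.paste(piece.resize((dw, dh)), (xd[col], yd[row]))
    return out

bubble = Image.open("bubble_512.png").convert("RGBA")   # hypothetical file name
nine_slice(bubble, 900, 200, left=24, top=24, right=40, bottom=24).save("bubble_900x200.png")
```

Because only the single-coloured core and the straight edge strips are stretched, the bubble outline and its "tail" never deform, which is exactly the behaviour you get from the Sliced Image Component.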

Web design: PNG image width limitations

I want to make a kind of slideshow based on scrolling the webpage.
My problem is that I have an image of 78720x1015px in PNG format.
The width of the image comes from a single 1920px-wide image arranged next to itself 41 times. - It should be like a cartoon where the image moves by 100% (margin: -100%) and creates the feeling of a movie.
However, this results in an image width of 1920px x 41pics = 78720px.
This is an enormous width, but what I am wondering about is that the file size is only 975kB, which in my opinion is not that big!? - However, it somehow takes a very long time to load the picture in the web browser, and the image is not of the same quality as in my image viewer on the desktop.
Question 1: What do I have to consider when dealing with such a big image width? What are the limits?
Question 2: Is there a better way to make this kind of slideshow? - Consider that the sliding itself shouldn't be visible. It should feel like a movie based on about 40 pictures.
Thanks in advance!
PNG compresses very well, especially if it's a cartoon like you say that may use only a limited number of colours.
However, when loaded into memory, the device must decompress all that pixel data into RAM to display it. That's almost 80 million pixels in your case, which at 4 bytes per pixel is around 320 MB of uncompressed data. This is probably why the browser is struggling, and especially so if you're using margin to move it around, as that requires a full re-draw of the image.
You may have better results with transform, as this should use hardware acceleration and also avoids the reflow part of a redraw since transforms don't affect page layout.
But the better solution would be to break it down into individual images. Have your code load in the next image, scroll it across, then load the next while scrolling and unload the one that's now off-screen to provide a relatively seamless view.
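If you go that route, the strip can be cut into individual frames offline. A minimal sketch with Python/Pillow, using the sizes from your question and made-up file names:

```python
# Split the 78720x1015 strip into 41 separate 1920px-wide frames.
from PIL import Image

strip = Image.open("slideshow_strip.png")   # hypothetical input file
frame_w = 1920

for i in range(strip.width // frame_w):     # 41 frames
    frame = strip.crop((i * frame_w, 0, (i + 1) * frame_w, strip.height))
    frame.save(f"frame_{i:02d}.png")
```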

UITableViewCell Background stretching

I have a problem in Swift with a UITableViewCell background image. It shows up fine on an iPhone 5, but on an iPhone 6 Plus it is stretched and looks bad. This is probably due to Aspect Fill or something similar, which I couldn't manage to change to achieve what I want. It would be best if someone could look at the sample code I am providing, as well as the image of how it should look, and maybe give me a hint, a tip, some sample code, or even a fixed demo version.
So here it is:
Note that on the left side there is a curve (like half a circle) around the right side of the icon. On bigger phones or tablets that curve gets super stretched, completely destroying the look.
Demo code link
Thanks all in advance for any help. I am pretty stuck with this one.
You can stretch an image while preserving the aspect ratio of a portion of that image using slicing. Xcode offers a graphical interface for doing this. The idea is that you decide what parts of the image are allowed to stretch and which aren't.
https://developer.apple.com/library/ios/recipes/xcode_help-image_catalog-1.0/chapters/SlicinganImage.html

Loading levels from XML and supporting screen resolutions in iOS

I am developing a game that uses levels. The levels are made at a default scene width and height resolution.
The thing I am worried about is that when the game is played on iPads, iPhone 5s, etc., the positions of sprites loaded from the level XML files will be out of place due to the different screen sizes.
In my case, could someone tell me the best thing to do in this situation, or give me some advice on the approach I should take?
Also if any has experienced this, please let me know.
Thanks. :)
Generally this has nothing to do with how or where you store the level data.
These are the standard approaches, which one works for you depends on your requirements and desired results:
design each screen resolution individually (error-prone, tedious)
design for one screen resolution, then scale up or down to the actual screen (can lead to skewing, since the width and height scale factors are likely to differ)
design for the smallest screen resolution, then center the contents on the screen (this leaves unused areas either at the top/bottom or left/right sides)
same as above, but zoom content to fill the screen (this will remove the letterboxing, but also partially remove one side's content from the view)
In essence this is the same problem as movies have in trying to fit to screens of varying aspect ratios.
In general this is all a matter of scaling the input (positions) to a desirable output. The easiest approach is to maintain aspect ratio and allow for letterboxing. However Apple may reject letterboxing apps if nothing is done to hide the letterboxes (black areas) since Apple requires apps to support widescreen resolution, and letterboxing does not normally fall into their definition of "supporting widescreen".
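As a concrete illustration of the aspect-preserving (letterboxed) option, here is a small Python sketch of mapping a position from your level XML onto an actual device screen; the design resolution and sample position are made-up values:

```python
# Map a sprite position authored for a design resolution onto a device
# resolution, using a uniform scale plus centring offsets (letterboxing).
DESIGN_W, DESIGN_H = 480, 320          # resolution the level XML was authored for

def to_device(x, y, device_w, device_h):
    scale = min(device_w / DESIGN_W, device_h / DESIGN_H)   # uniform scale, no skew
    off_x = (device_w - DESIGN_W * scale) / 2               # pillarbox offset
    off_y = (device_h - DESIGN_H * scale) / 2               # letterbox offset
    return x * scale + off_x, y * scale + off_y

print(to_device(240, 160, 1024, 768))   # centre of the design screen on an iPad -> (512.0, 384.0)
```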

PVR Texture Compression Tiling (exposing edge context)

I've got PVR texture compression working all happy and good in my iPhone game, but I've got issues when tiling multiple textures together. Basically, I've got a very large background which is split into multiple 512x512 tiles, all PVR compressed. Then they're drawn together to look like one big background image. The trouble is the way PVR works: the compressor doesn't know that it's supposed to be compressing each tile as part of a really big texture - i.e. use a neighbor tile's information to determine how to perform the PVR compression - so the tile edges don't match up cleanly.
I can think of maybe a couple ways to do this.
1) Somehow tell the texturetool command-line program to account for other images that will be adjacent.
2) Use the command line program to generate a huge PVR texture that represents the whole image, then somehow split up the bytes into multiple images - probably impossible.
3) Do some kind of OpenGL ES trickiness that blends the edges nicely.
4) Do some trickiness where I have redundant information in each tile and then clip those areas when the texture is drawn (please no).
Hopefully I can do 1, 2 or 3, or there is some other well known solution.
I ended up going with option 4. I don't think this was a situation where PVRTC isn't appropriate - in fact it's almost a necessity. When I've got a total of 24 512x512 textures in memory at once (representing a very large background and foreground), keeping those uncompressed is suicide. So I simply used PVR compression as normal, then edited a few lines of code in my tiling algorithm so that the tiles overlap and are trimmed by 15 pixels on each end. Voila, looks great. Took a couple of days and was pretty annoying, but I think this is a good option for people who need very large tiled backgrounds on the iPhone.
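This isn't the poster's actual code, but the layout arithmetic that the overlap-and-trim approach implies looks roughly like this (Python, using the 512px tiles and 15px trim from the answer; which edges carry the redundant border is illustrative):

```python
# Each authored tile carries a 15px band of its neighbours' pixels on every edge,
# so the visible interior was compressed with correct neighbour context.
# At draw time the texture coordinates simply skip that band.
TILE = 512
TRIM = 15
VISIBLE = TILE - 2 * TRIM            # 482 usable pixels per tile

def tile_draw_info(col, row):
    # screen-space rectangle covered by the tile's visible interior
    rect = (col * VISIBLE, row * VISIBLE, VISIBLE, VISIBLE)
    # texture coordinates that skip the redundant 15px border
    uv = (TRIM / TILE, TRIM / TILE, (TILE - TRIM) / TILE, (TILE - TRIM) / TILE)
    return rect, uv
```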
My best advice, but not what you asked, is to know when PVRTC isn't appropriate. By far the simplest solution is to just not use PVRTC for those tiles. I've spent a lot of time trying to bend PVRTC to work in situations it just isn't suited for.
That said,
When using PVRTC, the texture is always assumed to tile (with itself), thus pixels on the right edge affect the pixels on the left edge (same with top & bottom). So choices 1 or 2 likely won't work.
One possibility is to add an alpha channel to the tiles and allow them to fade out around the edges so that when you overlap them, they fade into each other. Keep in mind PVRTC tends to work better with gradual alpha fades. Hard alpha edges often have artifacts in PVRTC.
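If you wanted to experiment with that alpha-fade idea offline, a rough Pillow sketch that ramps each tile's alpha gradually toward the edges might look like this (fade width and file names are made up):

```python
# Add a gradual alpha ramp to a tile's edges so overlapping tiles blend together.
from PIL import Image

FADE = 32  # pixels over which alpha ramps up from the edge

tile = Image.open("tile.png").convert("RGBA")
w, h = tile.size
alpha = tile.getchannel("A")
px = alpha.load()

for y in range(h):
    for x in range(w):
        edge = min(x, y, w - 1 - x, h - 1 - y)      # distance to nearest edge
        if edge < FADE:
            px[x, y] = min(px[x, y], int(255 * edge / FADE))

tile.putalpha(alpha)
tile.save("tile_faded.png")
```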