How to put image as letterbox in video - azure-media-services

When I converted my video to a 9:16 format using Azure Media Services, letterboxes appeared at the top and the bottom. I used stretchMode: "AutoFit", so the letterboxing is actually expected behavior here.
Pad the output (with either letterbox or pillar box) to honor the output resolution, while ensuring that the active video region in the output has the same aspect ratio as the input. For example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the output will be at 1280x1280, which contains an inner rectangle of 1280x720 at aspect ratio of 16:9, and letterbox regions 280 pixels tall at the top and bottom.
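For reference, that padding works out like this (just a sketch of the arithmetic the preset describes, not Azure SDK code):

    using System;

    static class AspectMath
    {
        static void Main()
        {
            var (w, h, padX, padY) = Fit(1920, 1080, 1280, 1280);
            // Prints: active 1280x720, pad 0x280 (280-pixel bars at top and bottom)
            Console.WriteLine($"active {w}x{h}, pad {padX}x{padY}");
        }

        // Keep the input aspect ratio inside the requested output resolution and pad the rest.
        static (int activeW, int activeH, int padX, int padY) Fit(int inW, int inH, int outW, int outH)
        {
            double inputAspect = (double)inW / inH;
            double outputAspect = (double)outW / outH;
            int activeW, activeH;
            if (inputAspect >= outputAspect)
            {
                activeW = outW;                                // input is wider: fill the width,
                activeH = (int)Math.Round(outW / inputAspect); // letterbox top and bottom
            }
            else
            {
                activeH = outH;                                // input is taller: fill the height,
                activeW = (int)Math.Round(outH * inputAspect); // pillarbox left and right
            }
            return (activeW, activeH, (outW - activeW) / 2, (outH - activeH) / 2);
        }
    }
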
However, I wonder if it is possible to put an image there instead of leaving those regions just black.
My video looks like this:

No, we do not currently support placing background images during a stretch or letterboxing operation. If possible, and if you do not have a ton of these videos to process, I recommend running them through a free compositing application like Blackmagic's DaVinci Resolve to get the intended effect and then uploading the final output for streaming.

Related

How do I fix the layout problem in Unity?

I'm trying to build a replica of the game 'Breakout'.
When I play the game in Unity it looks good and everything is the correct scale, but when I build and run the project on my laptop some of the elements change size and some don't, and I am not sure whether I have done something wrong or forgotten to do something to stop it from changing.
This is what the game looks like in the Unity editor when playing the game:
And this is what it looks like once I build and run the project on my computer:
My resolution is set to Standalone.
Is there a way to get it to look like it does in the Unity editor when I build and run it on my computer? That is what it's supposed to look like.
These days, there are many different screen sizes and resolutions.
My favorite solution is to use a reference resolution that can expand but never shrink. This allows you to have a safe zone that stays consistent across different screen sizes. For a generic case, you would use a 16:9 aspect ratio with a resolution of either 1280x720 or 1920x1080.
In Unity, on your canvas, modify the Canvas Scaler such that UI Scale Mode is set to Scale with Screen Size. Add your reference resolution, and set the Screen Match Mode to Expand.
In the Game tab you can preview what 16:9 looks like. You can try out other aspect ratios or resolutions.
For example, the iPhone 11 has a much wider resolution, so it expands horizontally. It's up to you to design your UI in a way that makes sense. You can either keep everything in the safe zone or align elements to the corners of the screen.
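If you prefer to do this from a script instead of the Inspector, the same settings look roughly like this (a small sketch; the class name is arbitrary, attach it to the Canvas object):

    using UnityEngine;
    using UnityEngine.UI;

    // Sketch: applies the Canvas Scaler settings described above from code.
    [RequireComponent(typeof(CanvasScaler))]
    public class SafeZoneCanvasSetup : MonoBehaviour
    {
        void Awake()
        {
            var scaler = GetComponent<CanvasScaler>();
            scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
            scaler.referenceResolution = new Vector2(1920f, 1080f); // 16:9 reference resolution
            scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.Expand;
        }
    }
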

Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them (PNG files) at 90 DPI (I also tried 360, but nothing changed). My problem is that when I run the game in the editor, the graphics seem "pixelated" and blurry. In my sprite settings I've set Pixels per Unit to 256, checked Generate Mip Maps, I am using the Bilinear filter mode, and the aniso level is 0. I have also set the max size to 256 and compression to High Quality (my Main Camera's size is 10, but I tried changing that and nothing changed as far as sprite quality goes). What can I do to display my sprites "perfectly"? Do I have to export them in some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because of scaling.
Your display isn't giving those images a 256x256 region to be drawn into, which means they must be scaled in some manner in order to fit the region where they are displayed. Camera rendering is notoriously bad at scaling. As your images aren't vector graphics (and Unity doesn't support vector formats anyway), scaling will always result in a loss of detail, such as hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.
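To put rough numbers on it for the setup in the question (256 px texture, 256 Pixels per Unit, orthographic camera size 10), here is a sketch estimating how few screen pixels the gem actually gets, assuming a 1080-pixel-tall Game view:

    using UnityEngine;

    // Sketch: estimates how many screen pixels a sprite occupies vertically,
    // given the question's settings (256 px texture, 256 PPU, orthographic size 10).
    public class SpritePixelEstimate : MonoBehaviour
    {
        void Start()
        {
            const float texturePixels = 256f;
            const float pixelsPerUnit = 256f;
            float worldHeight = texturePixels / pixelsPerUnit;            // 1 world unit tall

            float cameraWorldHeight = 2f * Camera.main.orthographicSize;  // 20 units at size 10
            float screenPixels = Screen.height * worldHeight / cameraWorldHeight;

            // On a 1080 px tall screen: 1080 * 1 / 20 = 54 px, so the 256 px texture is
            // scaled down to roughly a fifth of its size and fine detail is lost.
            Debug.Log($"Gem occupies about {screenPixels:F0} screen pixels vertically.");
        }
    }
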

Pixel-perfect shader in Unity ShaderLab

In Unity, when writing shaders,
is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera).
(Of course, normally to show a physical-pixel-perfect PNG on screen, you merely have, say, a 400-pixel PNG and arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line; it would have to "know" exactly where the physical pixels are.)
There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
I don't think this is going to happen. Your rendering is tied to the currently selected video mode, and that doesn't even have to match your physical screen size (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended resolution for your display device and use a pixel shader to shade the entire screen. That way, one 'physical pixel' is roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's pixels) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.
is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. Pixel shaders know what pixel they are drawing and can also sample other pixels.
First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, any other hardware you might use) then you are out of luck. Shaders don't operate on those, they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered into a picture with some configurable resolution (say 800x600 pixels). You can still display this picture full screen on a 1920x1080 display with no problem, though it will look crappy. This is what happens with the actual display and video-card rendering: what determines the number of rendered pixels is your video mode (the picture's resolution in the example above), while physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads us to the conclusion that when you render the graphics at exactly your display's native resolution, you can safely say that you are indeed rendering to 'physical pixels'.
In Unity, you can pass the renderer some external data, which might include your current screen resolution (for example, as a Vector2).
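For example, something like this sketch, where _Resolution is a made-up property name that your shader would have to declare:

    using UnityEngine;

    // Sketch: push the current rendering resolution to a material each frame.
    // "_Resolution" is only an example property name.
    public class PassResolution : MonoBehaviour
    {
        public Material material;

        void Update()
        {
            material.SetVector("_Resolution", new Vector4(Screen.width, Screen.height, 0f, 0f));
        }
    }
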
However, you most likely don't need any of this, since pixel shaders already operate on rendered pixels (determined by your current video mode). That means that if you render at a resolution lower than your native one, you most likely won't be able to address individual physical pixels.
Hope it helped.

Xcode does not recognize change in size of image

In my program I have an image which has been scaled down to a size which fits perfectly on the screen. Upon further investigation, I realized those dimensions must be doubled to provide maximum quality in iPhone Retina. I doubled those dimensions by using the original image (which was much larger) so there was no loss in quality. However, when I run my program in iPhone simulator (retina display) the image's quality has not changed whatsoever. Is there any apparent reason why Xcode does not seem to recognize that the image has been updated? Any help is appreciated.
When you have two versions of the image (normal and @2x), use the "normal" name (without @2x) in the XIB or with [UIImage imageNamed:]; the system will automatically choose the best version for the current screen. Also check that your UIImageView size corresponds to the normal (not 2x) resolution of your image. There are several content modes (like Aspect Fit, Aspect Fill, Center, etc.) that will resize or position your image in the UIImageView.

Image Processing Question - Converting Standard-Def to Hi-Def, do I have to lose image data?

I'm writing a small program to convert a standard definition 4:3 video to a hi-def video 16:9 and I'm experiencing a serious stretching effect, as expected I suppose (though I didn't think about it until my code started working). Anyhow, the only way I can think of getting around this stretching effect and still fill the whole 16:9 screen is to cut off the top and bottom of the image.
1) So my question is: when converting from SD to HD, do I have to lose parts of the image in order to fill the whole screen without any stretching effects?
2) Same question for converting from HD to SD.
I'm new to image processing; are there any popular approaches to reducing the stretching in these kinds of operations? Is there a smarter approach to this problem than just cutting off parts of the image or introducing black bars?
Thanks in advance for all your help!
Other than the obvious methods of cropping, letterboxing, and pillarboxing, which either lose image data or necessitate potentially-undesirable black bars, there is also adaptive image resizing. Basically, the intent of these techniques is to be able to create a version of an image with an arbitrary aspect ratio, without losing the essential characteristics of the image or distorting it. One technique is called seam carving, and can be seen here.
If you'd like to test the technique out on some images of your own, the functionality is included in recent versions of ImageMagick, as explained here.
Reduction of quality or loss of content is always a problem in resizing images or video. Generally you scale the image in one direction, and either trim or pad the other direction.
On TV it is common to cut off the left and right sides of a 16:9 frame to put it on a 4:3 screen, and to add black side bars to go from 4:3 to 16:9. TV editors don't cut off the top and bottom of a 4:3 frame to fit it on 16:9 because there are almost always important parts of the scene there. The far left and far right of a 16:9 frame don't usually have important elements, although in some cinematic scenes losing the sides makes a huge difference.
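To illustrate the scale-one-way, trim-or-pad-the-other point with concrete numbers (a sketch using example resolutions of 640x480 and 1280x720):

    using System;

    static class AspectConvert
    {
        // Sketch: two ways to map a 4:3 frame (640x480) onto a 16:9 frame (1280x720).
        static void Main()
        {
            int srcW = 640, srcH = 480, dstW = 1280, dstH = 720;

            // Option 1: scale to fill, then crop the overflow (loses image data).
            double fill = Math.Max((double)dstW / srcW, (double)dstH / srcH); // 2.0
            int fillW = (int)(srcW * fill), fillH = (int)(srcH * fill);       // 1280x960
            Console.WriteLine($"Fill: {fillW}x{fillH}, crop {fillH - dstH} rows (split top and bottom).");

            // Option 2: scale to fit, then pad with bars (keeps all data, adds pillarbox).
            double fit = Math.Min((double)dstW / srcW, (double)dstH / srcH);  // 1.5
            int fitW = (int)(srcW * fit), fitH = (int)(srcH * fit);           // 960x720
            Console.WriteLine($"Fit: {fitW}x{fitH}, pad {dstW - fitW} columns (split left and right).");
        }
    }
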