MonoGame Different Size RenderTargets and Scaling issues

I've got several controls that I'm drawing to different-sized render targets.
If I draw a 64x64 textureA to an 800x600 render target like this:
Graphics.SetRenderTarget(Canvas);
_spriteBatch.Draw(textureA, new Rectangle(0, 0, 64, 64), srcRectangle, Color.White);
And then draw the 800x600 render target to a 1600x900 screen with a call like this:
Graphics.SetRenderTarget(null);
_spriteBatch.Draw(Canvas, new Rectangle(100, 100, 800, 600), Color.White);
Why is textureA drawn warped and small on the screen?
If I make my Canvas the same size as the back buffer then it shows up fine, but I'm wondering why this thinks it should shrink my texture, and is there a way to turn this off?
I want to be able to create render targets of arbitrary sizes and then draw them to the screen at their original sizes. Is this possible?

When you draw to a RenderTarget, the back buffer (or preferred back buffer) dimensions still come into play. If you are scaling down from the back buffer dimensions, then any texture drawn to the RenderTarget will also be scaled and distorted by any change in aspect ratio.
If you want to keep your textures at the same aspect, you will need to track the scale and aspect ratio and apply them to the destination rectangle when you draw. I'd be happy to elaborate or provide code snippets if you could explain a bit more about what you are trying to accomplish.
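As a rough illustration of tracking that scale, here is a minimal sketch. It assumes fields named Canvas (the RenderTarget2D), _spriteBatch and GraphicsDevice, and draws the canvas either uniformly scaled (aspect preserved) or 1:1:

int screenW = GraphicsDevice.PresentationParameters.BackBufferWidth;
int screenH = GraphicsDevice.PresentationParameters.BackBufferHeight;

// Uniform scale that fits the canvas inside the back buffer without changing its aspect ratio.
float scale = Math.Min((float)screenW / Canvas.Width, (float)screenH / Canvas.Height);

Rectangle destination = new Rectangle(0, 0, (int)(Canvas.Width * scale), (int)(Canvas.Height * scale));

GraphicsDevice.SetRenderTarget(null);
_spriteBatch.Begin();
// For an unscaled 1:1 blit, use new Rectangle(0, 0, Canvas.Width, Canvas.Height) instead.
_spriteBatch.Draw(Canvas, destination, Color.White);
_spriteBatch.End();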

Related

How to keep the same canvas resolution on different screen resolutions?

First of all, sorry for my English mistakes; I'm not a native English speaker.
I'm trying to make a UI composed of a canvas containing different gameObjects, and I would like my canvas to scale to the dimensions of the screen while keeping its original resolution (16x9 portrait). If it is displayed at a tablet resolution (4x3), then an image should be displayed in the space that is not covered by the canvas.
But currently all I've got is a canvas which scales to every resolution, and it changes the aspect of its children (for example, a square becomes a rectangle).
Thank you for showing interest in my query!
UIs are heavy beasts. Canvases in Unity have a component attached to them called Canvas Scaler, which is set by default to Constant Pixel Size. You may try setting this property to Scale With Screen Size and then specify the base resolution you want to work with (1920x1080 is a common standard). This is your first step.
Then, to avoid strange Image scaling, you may check the Preserve Aspect property; this way the ratio of the Sprite inside your Image will remain the same independently of the ratio of the Image.
Last, you may play a bit with anchors, but that is another story; leave them at plain center at the beginning and come back to them when you feel ready.
Hope that helped ;)
Your canvas should have a component called Canvas Scaler. It will say Constant Pixel Size; change this to Scale With Screen Size and it should lock the Canvas to be the same width/height as the screen. If you want to lock an image to a specific width/height ratio, go to the Image component on the image and check the Preserve Aspect checkbox. This way, if you have a 100x100 image, the image's width will always be the same as its height; if you have a 200x100 image, the image's width will always be twice its height, and so on.
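Both answers describe Inspector settings; if you ever need to configure the same thing from a script, a minimal sketch might look like this (the component references are assumptions you would assign in your own project):

using UnityEngine;
using UnityEngine.UI;

public class CanvasSetup : MonoBehaviour
{
    public CanvasScaler canvasScaler;   // the Canvas Scaler on your canvas (assumed reference)
    public Image image;                 // an image whose sprite ratio should be preserved (assumed reference)

    void Awake()
    {
        canvasScaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        canvasScaler.referenceResolution = new Vector2(1920, 1080);  // your base resolution
        image.preserveAspect = true;    // keep the sprite's ratio regardless of the Image's ratio
    }
}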

Physical Camera Viewport should adjust with the UI for different resolutions

I have a physical camera in the scene which renders a 3D Scene. The 3D Scene is rendered between the Top UI Bar and the Bottom UI Bar and has been set properly for reference resolution 720 x 1280.
So, the problem is that when the resolution changes, the UI adjusts properly for that particular resolution, but the 3D scene that is rendered between the two UI parts doesn't sit properly between them. I am attaching two reference images for easier understanding.
The below-given image is based on the reference resolution and the 3D Scene is properly fit between the 2 UI parts.
The below-given image is for another resolution where the UI adjusts itself accordingly, but the 3D scene doesn't, i.e. the camera should move to fit the 3D scene between the UI.
So, is there any way I can move the camera according to the resolution so that it fits properly between the UI for different aspect ratios?
Thank you.
First of all, try looking at the Viewport Rect property of the camera - it lets you crop the viewport to an area, which should enable the effect you seek.
This, however, has the limitation of keeping the 'center screen' axis of perspective, which in some cases is uncalled for.
A 'should always work' solution is rendering to a RenderTexture - the camera adapts its aspect to the aspect of the RenderTexture, and if you display the texture on a RawImage you should get decent results. This, however, has some performance cost (memory readout is not the fastest on mobile).
In case that cost is unacceptable, it's possible to construct a custom camera projection matrix that respects your desired viewport rect and also allows arbitrary perspective, but it's not trivial, unfortunately. Here is some more information:
https://docs.unity3d.com/ScriptReference/Camera-projectionMatrix.html
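To illustrate the first suggestion (adjusting the viewport rect), here is a rough sketch. The camera reference and the bar heights in screen pixels are assumptions you would supply from your own UI (for example, a RectTransform height multiplied by the canvas scale factor):

using UnityEngine;

public class FitCameraBetweenBars : MonoBehaviour
{
    public Camera sceneCamera;          // the camera rendering the 3D scene (assumed reference)
    public float topBarHeightPx;        // height of the top UI bar in screen pixels (assumed)
    public float bottomBarHeightPx;     // height of the bottom UI bar in screen pixels (assumed)

    void LateUpdate()
    {
        // Viewport rect is expressed in normalized (0..1) screen coordinates.
        float y = bottomBarHeightPx / Screen.height;
        float height = (Screen.height - topBarHeightPx - bottomBarHeightPx) / Screen.height;
        sceneCamera.rect = new Rect(0f, y, 1f, height);
    }
}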

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera, and a render texture. That object is on a separate layer that the main camera is not rendering, but a separate camera is. The secondary camera has a target texture set to be a render texture that I have created. Everything is working as intended except for the fact that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: width: Screen.width, height: Screen.height, format: RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
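For reference, the setup described above corresponds roughly to this sketch; the camera and quad references are assumed names, not code from the question:

using UnityEngine;

public class CrowdCapture : MonoBehaviour
{
    public Camera crowdCamera;      // the secondary camera rendering the crowd layer (assumed reference)
    public Renderer crowdQuad;      // the quad that displays the result (assumed reference)

    void Start()
    {
        // Matches the settings described in the question.
        var crowdTexture = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGBFloat);
        crowdCamera.targetTexture = crowdTexture;
        crowdQuad.material.mainTexture = crowdTexture;
    }
}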
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
The reason for this is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) still have a colour value too. PNGs (or at least the way Photoshop exports PNGs) seem to default to using white for the completely transparent pixels. With other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3D engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled depending on whether the pixels in the texture's image appear larger or smaller than actual size. For downsizing, a series of downscaled versions is created during import. These downscaled versions are used when the texture is displayed at smaller sizes or steeper angles relative to the view, which is intended to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With the mipmaps, at each smaller level the problem of the invisible pixels mixing with the visible pixel colours increases (with the result that the nasty white edges become more apparent at greater distances).
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas, however your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects under them semi-transparent as well, exposing the camera's clear-colour background.

Zoom in the GLPaint sample code

I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom with a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a little bit of a memory management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are describing; however, it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes typically do not rely on the accumulation buffer the way the GLPaint sample code does.
If you try to just zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing - which is almost certainly not what you want.
A workaround: instead of drawing directly to your presenting screen buffer, first render to a texture buffer, then render that texture buffer on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while your paint buffer retains its accumulation buffer.
This has been tested and works.
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible, so that when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame, or any frame for that matter, because even if you manage to do that successfully you will lose resolution. You should use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you get that part, there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, since the location in the view will not know about your zooming and translating (see the sketch after this answer); second, you probably need to scale the brush as well (make it smaller when the scene is bigger so you can draw details).
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present the texture to your main render scene. Note that the FBO will have a texture attached and will be a power-of-two size (create 2048x2048, or 4096x4096 for newer devices), but you will probably just be using some part of it to keep the same ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
So to sum this up: imagine you have a canvas (FBO) to which you apply a brush of a certain size and position on touch events; then you use that canvas as a texture and draw it on your main GL view.
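As a minimal sketch of the touch-position and brush-size math mentioned above (written in C# purely for illustration, since the answer itself is about OpenGL ES on iOS; zoom, offsetX and offsetY describe how the canvas is currently panned and scaled on screen, and all names are assumptions):

public static class CanvasMath
{
    // Undo the pan and zoom applied when the canvas is rendered,
    // so the brush lands where the finger appears to be.
    public static (float x, float y) TouchToCanvas(float touchX, float touchY,
                                                   float zoom, float offsetX, float offsetY)
    {
        return ((touchX - offsetX) / zoom, (touchY - offsetY) / zoom);
    }

    // The brush covers a smaller canvas area as you zoom in, so details stay paintable.
    public static float BrushSizeOnCanvas(float baseBrushSize, float zoom)
    {
        return baseBrushSize / zoom;
    }
}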

How to draw a light effect over a texture on iPhone using UIKit/Quartz

I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach was to generate a mask depending on the position of the touch and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), which causes the performance to go down and increases memory usage a lot, eventually crashing the game (I think I don't have any memory leaks, but that's not guaranteed... in any case, the performance itself isn't acceptable).
Can anyone think of a better approach (which doesn't involve using OpenGL ES -- that's not an option in this project) to do this?
To go with my comments above.
Maybe, to get around the different shadow levels, you could also have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity, and when the spot is over a grid square, the grid square's alpha opacity changes to 0. When the spot moves off the grid square, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case but what you could do is generate a single mask image, say, a radial alpha gradient and then apply an affine transform to it to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
I would try this: use one view with a custom drawRect implementation: first draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in a 'multiply' blend mode.
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and then mask it? This way the memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, let the mask be initialized once and manipulate it (reset its frame) as needed upon touch events.