I am having a hard time figuring out how to find the correct Rect that represents a UI element's area. I get the UI element's RectTransform. I tried using it directly, using a helper function, and using RectTransformUtility, but nothing seems to work.
Here is the code.
RenderTexture.active = scr;
//adjust rect position
//Rect rect = RectUtils.RectTransformToScreenSpace (rectTransform);
int width = System.Convert.ToInt32 (rect.width);
int height = System.Convert.ToInt32 (rect.height);
// Create a texture the size of the screen, RGB24 format
Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
RenderTexture.active = scr;
// Read screen contents into the texture
tex.ReadPixels(rect, 0, 0);
tex.Apply();
RenderTexture.active = null;
// Encode texture into PNG
byte[] bytes = tex.EncodeToPNG();
// Destroy the texture now that it has been encoded
Object.Destroy (tex);
// For testing purposes, also write to a file in the project folder
File.WriteAllBytes(path, bytes);
I tried to create the rect in different ways, for example:
Method 1:
Vector2 size = Vector2.Scale(transform.rect.size, transform.lossyScale);
Rect rect = new Rect(transform.position.x, Screen.height - transform.position.y, size.x, size.y);
rect.x -= (transform.pivot.x * size.x);
rect.y -= ((1.0f - transform.pivot.y) * size.y);
return rect;
Method 2:
Vector2 temp = rectT.transform.position;
var startX = temp.x - width/2;
var startY = temp.y - height/2;
var tex = new Texture2D (width, height, TextureFormat.RGB24, false);
tex.ReadPixels (new Rect(startX, startY, width, height), 0, 0);
I am using Unity 5.5 on Mac.
When I pass new Rect (0, 0, Screen.width, Screen.height) to ReadPixels, I see the whole RenderTexture at its defined dimensions, 1920x1080.
I want to use this to bake UI elements, because I am having performance issues and there is definitely no other way to implement the required behavior.
I also struggled with this problem. Here's how I solved it.
First we need the position of the RectTransform in screen space, and for this I used RectTransformUtility. Then, since the rect's x, y is its starting point at the bottom left (note: in GUI, x, y is at the top left), we subtract half of the width and height respectively.
Vector2 rectPos = RectTransformUtility.WorldToScreenPoint (Camera.main, rectTransform.position);
float startX = rectPos.x - width / 2;
float startY = rectPos.y - height / 2;
Rect screenRect = new Rect (startX, startY, width, height);
Note: width and height should be integers.
Edit: The above solution does not work when the canvas uses Screen Space - Overlay and the resolution is changed from the standard one to any other. Even when I used lossyScale I could not get consistent behavior.
One method of RectTransformUtility is really helpful in this situation: WorldToScreenPoint.
Vector3[] bounds = new Vector3[4];
rectTransform.GetWorldCorners (bounds);
Vector2 rectPos = RectTransformUtility.WorldToScreenPoint (camera, rectTransform.position);
Vector2 minPosition = RectTransformUtility.WorldToScreenPoint (camera, bounds[0]);
Vector2 maxPosition = RectTransformUtility.WorldToScreenPoint (camera, bounds[2]);
Vector2 size = maxPosition - minPosition;
float startX = rectPos.x - size.x / 2;
float startY = rectPos.y - size.y / 2;
return new Rect (startX, startY, size.x, size.y);
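The arithmetic at the end is simple enough to check in isolation. Here is a minimal plain-C sketch of it (the struct names and sample values are mine, not part of the Unity API):

```c
#include <assert.h>

/* Build the capture rect from the screen-space projections of the
 * rect's bottom-left (bounds[0]) and top-right (bounds[2]) corners. */
typedef struct { float x, y; } Vec2;
typedef struct { float x, y, w, h; } ScreenRect;

static ScreenRect rect_from_corners(Vec2 minP, Vec2 maxP) {
    ScreenRect r = { minP.x, minP.y, maxP.x - minP.x, maxP.y - minP.y };
    return r;
}
```

Note that `startX = rectPos.x - size.x / 2` in the C# above equals `minPosition.x` only when the pivot sits at the element's center; taking `minPosition` directly as the rect origin sidesteps that assumption.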
I have a square sprite (width == height) and I want to scale it so that both width and height are exactly one fifth of the width of screen.
To find the desired pixel width I do:
float desiredWidthPixels = Screen.width * 0.2f;
float desiredHeightPixels = Screen.width * 0.2f;
How do I apply these values to the sprite?
It depends on whether your camera is orthographic or not.
You will have to scale the object up or down, depending on its original size.
float cameraHeight = Camera.main.orthographicSize * 2;
float cameraWidth = cameraHeight * Screen.width / Screen.height; // cameraHeight * aspect ratio
// One fifth of the screen's width, in world units (this assumes the sprite is 1 world unit wide at scale 1):
gameObject.transform.localScale = Vector3.one * cameraWidth / 5.0f;
The code goes into a script attached to the GameObject (in your case, the sprite), since gameObject refers to the Game Object the script is on. If you are doing this from another script, you will need to find the object in the hierarchy first, before scaling it.
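The underlying math is worth spelling out. A plain-C sketch (the function name and sample numbers are mine, not Unity's): an orthographic camera shows `orthographicSize * 2` world units vertically, so one pixel covers `worldHeight / Screen.height` world units, and a sprite meant to span a fifth of the screen's width needs the matching world width.

```c
#include <assert.h>

/* World width a sprite must span to cover one fifth of the screen's
 * width, under an orthographic camera. */
static float target_world_width(float orthoSize, int screenW, int screenH) {
    float worldHeight = orthoSize * 2.0f;        /* visible height in world units */
    float unitsPerPixel = worldHeight / screenH; /* world units per screen pixel  */
    return (screenW / 5.0f) * unitsPerPixel;     /* desired width in world units  */
}
```

Dividing this by the sprite's unscaled world width (`spriteRenderer.sprite.bounds.size.x` in Unity) gives the localScale factor to apply.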
In Unity I am trying to scale the scene to fit the screen size without losing its aspect ratio. I have tried one solution, AspectUtility, but it is not working properly: it shows black strips, and so the UI does not look good.
I want to target both Android devices and the iPad (e.g. 16:9 and 4:3 ratios).
Can anybody guide me on how to achieve this scaling on any kind of device?
You can use the NGUI plugin and attach its UIStretch script to the image.
Try this: name the script "camera.cs", add it to your camera, and paste the following code:
using UnityEngine;
using System.Collections;
public class camera : MonoBehaviour {
// Use this for initialization
void Start ()
{
// set the desired aspect ratio (the values in this example are
// hard-coded for 16:9, but you could make them into public
// variables instead so you can set them at design time)
float targetaspect = 16.0f / 9.0f;
// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;
// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
Rect rect = camera.rect;
rect.width = 1.0f;
rect.height = scaleheight;
rect.x = 0;
rect.y = (1.0f - scaleheight) / 2.0f;
camera.rect = rect;
}
else // add pillarbox
{
float scalewidth = 1.0f / scaleheight;
Rect rect = camera.rect;
rect.width = scalewidth;
rect.height = 1.0f;
rect.x = (1.0f - scalewidth) / 2.0f;
rect.y = 0;
camera.rect = rect;
}
}
}
There are only two ways to maintain the game's aspect ratio across differing viewport aspect ratios. The first is stretching along one axis, which is never a good option. The second is letterboxing (black bars), which can hurt usability, especially on handheld screens. My recommendation would be to let the game view scale with the screen's aspect ratio (which I believe is the default behavior in Unity), and to design the GUI to be responsive to the screen size (i.e. don't draw GUI elements at pixel-specific coordinates, but at coordinates relative to the screen size).
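To make the "relative to the screen size" idea concrete, here is a tiny plain-C sketch (the struct and values are hypothetical): positions and sizes are stored as fractions of the screen and converted to pixels at draw time, so the layout survives any resolution.

```c
#include <assert.h>

/* Convert a screen-relative rect (fractions in [0,1]) to pixel coordinates. */
typedef struct { float x, y, w, h; } PixelRect;

static PixelRect to_pixels(float rx, float ry, float rw, float rh,
                           int screenW, int screenH) {
    PixelRect r = { rx * screenW, ry * screenH, rw * screenW, rh * screenH };
    return r;
}
```

The same fractional layout then yields proportionally identical GUIs at 1920x1080 and 1024x768, with no letterboxing needed.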
I am using ffmpeg to play video on iOS 5.0. In my app I decode video frames with ffmpeg and use OpenGL ES to display them.
But I have a problem I cannot resolve: the channel logos and subtitles of the video image are displayed mirrored. I think the problem is either in the OpenGL ES 2.0 rendering or in the ffmpeg decoding.
Can you tell me what is wrong, and how can I fix it?
Many thanks.
Edit: I changed my prepareTexture method to this:
- (void) prepareTextureW: (GLuint) texW textureHeight: (GLuint) texH frameWidth: (GLuint) frameW frameHeight: (GLuint) frameH {
float aspect = (float)frameW/(float)frameH;
float minX=-1.f, minY=-1.f, maxX=1.f, maxY=1.f;
float scale ;
if(aspect>=(float)backingHeight/(float)backingWidth){
// Aspect ratio will retain width.
scale = (float)backingHeight / (float) frameW;
maxY = ((float)frameH * scale) / (float) backingWidth;
minY = -maxY;
} else {
// Retain height.
scale = (float) backingWidth / (float) frameW;
maxX = ((float) frameW * scale) / (float) backingHeight;
minX = -maxX;
}
if(frameTexture) glDeleteTextures(1, &frameTexture);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &frameTexture);
glBindTexture(GL_TEXTURE_2D, frameTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
verts[0] = maxX;
verts[1] = maxY;
verts[2] = minX;
verts[3] = maxY;
verts[4] = maxX;
verts[5] = minY;
verts[6] = minX;
verts[7] = minY;
float s = (float) frameW / (float) texW;
float t = (float) frameH / (float) texH;
texCoords[0] = 0.f; texCoords[1] = 1.f;
texCoords[2] = 1; texCoords[3] = 1.f;
texCoords[4] = 0.f; texCoords[5] =0;
texCoords[6] = 1; texCoords[7] =0;
mFrameH = frameH;
mFrameW = frameW;
mTexH = texH;
mTexW = texW;
maxS = s;
maxT = t;
// Just supporting one rotation direction, landscape left. Rotate Z by 90 degrees.
matSetRotZ(&rot,M_PI_2);
matMul(&mvp, &rot, &rot);
[self setupShader];
}
And now this is my result: link image
"But I have a problem I cannot resolve: the channel logos and subtitles of the video image are displayed mirrored."
The whole image is mirrored, not just the channel logos and subtitles. That looks like wrong texture coordinates to me. Could you please post your drawing code?
EDIT, due to the question update:
Phew, I first had to understand what you are doing there; it's overly complicated. Just use texture coordinates 0 and 1, and don't try to outsmart yourself by calculating some s and t. Next step: don't use a perspective projection unless you intend to render something in perspective.
On to your original problem: OpenGL assumes the origin of an image to be in the lower left, while most video formats put the origin in the upper left. What you did was rotate the picture, but while that turns it upright, it leaves it mirrored. Instead, you want to mirror it along the T texture coordinate, which is easily accomplished by using a negative value for T.
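A minimal sketch of that fix in plain C, operating on a texCoords array laid out s,t per vertex like the one in the question:

```c
#include <assert.h>

/* Mirror a quad vertically by flipping each vertex's T coordinate:
 * t -> 1 - t. This maps the video's upper-left origin onto OpenGL's
 * lower-left texture origin without rotating the image. */
static void flip_t(float *texCoords, int vertexCount) {
    for (int i = 0; i < vertexCount; ++i)
        texCoords[2 * i + 1] = 1.0f - texCoords[2 * i + 1];
}
```

Applying this once to the quad's texture coordinates (and dropping the rotation matrix) should display the frame upright and unmirrored.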
I'm trying to set up a collision-type hit test for a defined area of pixels within a UIImageView. I only wish to cycle through the pixels in a defined area.
Here's what I have so far:
- (BOOL)cgHitTestForArea:(CGRect)area {
BOOL hit = FALSE;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
float areaFloat = ((area.size.width * 4) * area.size.height);
unsigned char *bitmapData = malloc(areaFloat);
CGContextRef context = CGBitmapContextCreate(bitmapData,
area.size.width,
area.size.height,
8,
4*area.size.width,
colorspace,
kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
[self.layer renderInContext:context];
//Seek through all pixels.
float transparentPixels = 0;
for (int i = 0; i < (int)areaFloat ; i += 4) {
//Count each transparent pixel.
if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
transparentPixels += 1;
}
}
free(bitmapData);
//Calculate the percentage of transparent pixels.
float hitTolerance = [[self.layer valueForKey:@"hitTolerance"] floatValue];
NSLog(@"Apixels: %f hitPercent: %f", transparentPixels, transparentPixels/(areaFloat/4));
if ((transparentPixels/(areaFloat/4)) < hitTolerance) {
hit = TRUE;
}
CGColorSpaceRelease(colorspace);
CGContextRelease(context);
return hit;
}
Is someone able to offer any reason why it isn't working?
I would suggest using ANImageBitmapRep. It allows for easy pixel-level manipulation of images without the hassle of contexts, linking against other libraries, or raw memory allocation. To create an ANImageBitmapRep with the contents of a view, you could do something like this:
BMPoint sizePt = BMPointMake((int)self.frame.size.width,
(int)self.frame.size.height);
ANImageBitmapRep * irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
CGContextRef ctx = [irep context];
[self.layer renderInContext:ctx];
[irep setNeedsUpdate:YES];
Then, you can crop out your desired rectangle. Note that coordinates are relative to the bottom left corner of the view:
// assuming aFrame is our frame
CGRect cFrame = CGRectMake(aFrame.origin.x,
self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
aFrame.size.width, aFrame.size.height);
[irep cropFrame:cFrame];
Finally, you can find the percentage of alpha in the image using the following:
double totalAlpha = 0;
double totalPixels = 0;
for (int x = 0; x < [irep bitmapSize].x; x++) {
for (int y = 0; y < [irep bitmapSize].y; y++) {
totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
totalPixels += 1;
}
}
double alphaPct = totalAlpha / totalPixels;
You can then use the alphaPct variable as a percentage from 0 to 1. Note that, to prevent leaks, you must release the ANImageBitmapRep object when you are done: [irep release].
Hope that helps. Image data is a fun and interesting field when it comes to iOS development.
I have a cube on the screen, and I want to give the effect of zooming out, so I was tweaking the frustum to be larger and larger. Here is the code:
- (void)Draw {
EAGLView* videoController = [EAGLView Instance];
[videoController BeginDraw];
glClearColor(0.1f, 0.7f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, videoController.mBackingWidth, videoController.mBackingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float mAspectRatio = 1.6666;
static float mHalfViewAngleTan = 0.1;
mHalfViewAngleTan += 1.1;
float mNearZClip = 1.0;
float mFarZClip = 1000.0;
glFrustumf( mAspectRatio*-mHalfViewAngleTan, mAspectRatio*mHalfViewAngleTan, -mHalfViewAngleTan, mHalfViewAngleTan, mNearZClip, mFarZClip );
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
static float rotZ = 0.0f;
++rotZ;
if(rotZ > 360)
rotZ = 0;
glRotatef(rotZ, 0, 0.5, 0.5);
RenderModel(modelObj);
[videoController EndDraw];
}
The glRotatef is working correctly. However, as mHalfViewAngleTan gets larger, nothing seems to happen; the scene changes in no noticeable way. I have tried smaller and larger values for the amount mHalfViewAngleTan increases per frame. Changing the near and far planes also works correctly.
There are no glMatrixMode/glPushMatrix calls inside RenderModel. It enables and disables client state, sets up the glVertexPointer calls, and calls glDrawArrays.
All this code is in an .mm file calling into .cpp files.
I had this problem in the past. I solved it by directly including the OpenGL ES 1.1 or 2.0 headers. Not quite sure why that fixed it.
Set mNearZClip to something greater than zero. See the Notes.
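One possible explanation for the "nothing happens", sketched under the assumption that the frustum is otherwise set up correctly: with glFrustumf, an object's on-screen size at a fixed depth is roughly proportional to nearZ / halfViewAngleTan, so growing halfViewAngleTan by a constant step each frame makes the relative shrink per frame collapse toward zero almost immediately.

```c
#include <assert.h>

/* Apparent on-screen scale of an object at a fixed depth, for a symmetric
 * frustum whose half-extent at the near plane is halfTan. */
static float apparent_scale(float nearZ, float halfTan) {
    return nearZ / halfTan;
}
```

With the question's values (start 0.1, step 1.1), frame 1 to frame 2 roughly halves the object's size, but after a few frames each step changes it by only a few percent, and soon the cube is a handful of pixels that no longer visibly shrinks. Stepping halfTan multiplicatively (e.g. `*= 1.05f` per frame) would give a perceptually steady zoom-out instead.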