I have a main game layer that is larger than the screen. When the scene starts, you see the character (a ship in this case) on screen, but I want to first show the entire layer to the user and then animate back to a zoom level of 1.
How can I achieve this? I know I can use the scale property on CCLayer, but how can I tell how much of the view I am seeing, so that I can show all of it?
Animate your zoom using a CCAction such as CCScaleTo, and set the end scale of the CCScaleTo action to whatever you want, derived by comparing the screen size to the layer size. For example, to zoom in to a 2X magnification, your CCScaleTo would scale to 2.0. You could get even fancier and compare the size of a particular object in the layer to the size of the layer and the size of the screen, to calculate a scale that brings the desired object to exactly the size you want after zooming.
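For illustration, here is a minimal sketch of that idea in cocos2d-iphone, assuming the oversized layer is called gameLayer, its contentSize is set to the full layer size, and a 2-second animation is acceptable (all of those names and values are placeholder assumptions):

// gameLayer is assumed to be the oversized CCLayer with a valid contentSize.
CGSize screenSize = [[CCDirector sharedDirector] winSize];
CGSize layerSize = gameLayer.contentSize;

// Use the smaller ratio so both dimensions fit on screen.
float fitScale = MIN(screenSize.width / layerSize.width,
                     screenSize.height / layerSize.height);

// Show the whole layer first...
gameLayer.scale = fitScale;

// ...then animate back to the normal zoom level of 1.
[gameLayer runAction:[CCScaleTo actionWithDuration:2.0f scale:1.0f]];

Depending on where you want the zoom centered, you may also need to adjust the layer's anchorPoint and position.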
I have a physical camera in the scene which renders a 3D scene. The 3D scene is rendered between the top UI bar and the bottom UI bar, and it has been set up properly for the reference resolution of 720 x 1280.
The problem is that when the resolution changes, the UI adjusts properly for that particular resolution, but the 3D scene rendered between the two UI parts no longer sits properly between them. I am attaching two reference images for easier understanding.
The first image is based on the reference resolution, and the 3D scene fits properly between the two UI parts.
The second image is for another resolution where the UI adjusts itself accordingly, but the 3D scene doesn't, i.e. the camera should move so that the 3D scene fits between the UI.
So, is there any way I can move the camera according to the resolution so that the scene fits properly between the UI for different aspect ratios?
Thank you.
First of all, try looking at the Viewport Rect property of the camera (Camera.rect) - it lets you crop the viewport to an area, which should give you the effect you seek.
This, however, has the limitation of keeping the 'center screen' axis of the perspective, which in some cases is not what you want.
A 'should always work' solution is rendering to a RenderTexture - the camera adapts its aspect to the aspect of the RenderTexture, and if you display that texture on a RawImage you should get decent results. This, however, has some performance cost (memory readout is not the fastest on mobile).
In case that cost is unacceptable, it is possible to construct a custom camera projection matrix that respects your desired viewport rect and also allows an arbitrary perspective, but unfortunately that is not trivial. Here is some more information:
https://docs.unity3d.com/ScriptReference/Camera-projectionMatrix.html
I'm trying to add some buttons to a GameObject which is a scroll view child. The child is a very wide image, about 4 times as wide as normal screens, which is why I let it control the height, so that it always spans from top to bottom and only exceeds the screen horizontally, where scrolling is allowed.
Here you can see my setup.
1. is the wide image
2. is a banner that used to be part of the image, but now I want it to be an individual GameObject.
I don't know how to make the banner (2) inherit the behavior of the "full map" (1). Right now, when I choose Free Aspect and resize the Game view, the "full map" scales nicely, so that it always fits from top to bottom. The narrower the screen is, the more of the map exceeds to the right, and vice versa.
However, the problem is that the banner (2) has a RectTransform whose x,y coordinates are set relative to the parent, which scales. So as I resize the view, the banner drifts out of its intended position and also does not scale with the map. I have tried many different components without luck, among them scale constraint, canvas scaler, and position constraint.
As you can see in this gif, the banner does not scale down as the island gets smaller, and it also stays in the same position horizontally.
Any suggestions on how I can make the children of the map behave as if they were part of the map?
Deleted previous, misunderstood the question.
Proposed solution:
You remove the map part from the UI and create it as an object: you attach the map image as a texture to (e.g.) a plane (see a video here). You add the "infinity island" as another texture and place it over the island image.
After that, you control the camera to zoom in/out of the island and don't have to struggle with any scaling or moving UI.
I think you need to anchor the images to the center of the island mini image; you can drag the anchor gizmo from the scene hierarchy.
If you view it with Free Aspect it gets buggy and difficult to handle, but if you choose a fixed resolution everything is smooth.
Is this what you want?
Change the resolution
I am drawing custom annotations over an image (circle, arrow, etc.) in my iPhone app. I allow pinch-zoom and drag gestures for these annotations. The annotations are custom-made AnnotationViews whose drawRect draws the circle / line, etc.
I am finding issues with zooming:
Try 1: The zoom is applied as a scale transformation to the AnnotationView. Result: the annotations become blurred.
Try 2: To avoid blurring, I redraw with coordinates multiplied by the scale factor instead of directly manipulating the transform. This works without blurring, but the circle etc. soon goes outside the view's bounds and is clipped.
I could use a bigger frame (the size of the full image), but I am keeping the smaller frame so that I can easily move the annotation back into position when it is dragged out of the window by the user's zoom / pan gestures.
Question:
Is there a better way to manage this? I would like to zoom without blurring, while also keeping the ability to move the annotation back into position if it is dragged out of the original image bounds.
I have an OpenGL ES (1.1) scene with many 3D objects and a "player" model. I'd like the player to have the same pixel size regardless of the screen orientation on an Android phone or iPhone.
I'm not using glOrtho or billboards. It's a perspective 3D scene, but I just want the objects to have the same size in both screen orientations. Currently, if I rotate the phone, I keep the same aspect ratio but the scene "zooms out" in landscape mode.
I suspect that I have to play with the parameters to glFrustum to get this, but I can't figure out yet how to do it.
So any ideas are welcome!
Thanks
You will need to change the aspect ratio when the device is turned; otherwise the size of the objects is going to change. Think of yourself looking out through a window: the objects on the other side of the window are only going to be the same size if you don't change your distance from the window (i.e. zoom in and out), and when you "turn" the window sideways, the aspect ratio of the window changes (the metaphor is starting to break down here).
If you draw a square in the view whose side length is the short side of the screen, then you should still have a square when you turn the phone sideways, still covering the same area on the screen.
Things will probably be easier to calculate if you use the code from gluPerspective. Set the aspect ratio to the actual aspect ratio and fix the fovy for the first aspect ratio. You can then use what would be the fovx for that aspect ratio as the fovy for your rotated view.
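Here is a rough sketch of that idea for OpenGL ES 1.1, reproducing the gluPerspective math with glFrustumf. The names screenWidth, screenHeight, isLandscape, zNear and zFar, and the 60-degree portrait FOV, are placeholder assumptions:

#import <OpenGLES/ES1/gl.h>
#include <math.h>

// Equivalent of gluPerspective, built on glFrustumf (OpenGL ES 1.1).
static void setPerspective(float fovyDegrees, float aspect, float zNear, float zFar)
{
    float halfHeight = tanf(fovyDegrees * (float)M_PI / 360.0f) * zNear;  // half height of the near plane
    float halfWidth  = halfHeight * aspect;                               // half width of the near plane
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-halfWidth, halfWidth, -halfHeight, halfHeight, zNear, zFar);
}

// Call whenever the orientation changes; screenWidth / screenHeight are the
// portrait-layout viewport dimensions in pixels (placeholder values).
static void updateProjection(BOOL isLandscape, float screenWidth, float screenHeight,
                             float zNear, float zFar)
{
    float aspectPortrait = screenWidth / screenHeight;   // e.g. 320 / 480
    float fovyPortrait   = 60.0f;                        // assumed portrait vertical FOV
    // Horizontal FOV implied by the portrait setup.
    float fovxPortrait   = 2.0f * atanf(tanf(fovyPortrait * (float)M_PI / 360.0f) * aspectPortrait)
                           * 180.0f / (float)M_PI;

    if (isLandscape)
        // Reuse the portrait horizontal FOV as the vertical FOV so objects
        // keep roughly the same pixel size after rotation.
        setPerspective(fovxPortrait, screenHeight / screenWidth, zNear, zFar);
    else
        setPerspective(fovyPortrait, aspectPortrait, zNear, zFar);
}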
I've recently had some issues implementing a zooming feature into a painting application. Please let me start off by giving you some background information.
First, I started off by modifying Apple's glPaint demo app. I think it's a great source, since it shows you how to set up the EAGLView, etc...
Now, what I wanted to do next, was to implement zooming functionality. After doing some research, I tried two different approaches.
1) use glOrthof
2) change the frame size of my EAGLView.
While both ways allow me to perfectly zoom in / out, I experience different problems, when it actually comes to painting while zoomed in.
When I use (1), I have to render the view like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(left, right, bottom, top, -1.0f, 1.0f); //those values have been previously calculated
glDisable(GL_BLEND);
//I'm using Apple's Texture2D class here to render an image
[_textures[kTexture_MyImage] drawInRect:[self bounds]];
glEnable(GL_BLEND);
[self swapBuffers];
Now, let's assume I zoom in a little, THEN I paint, and after that I want to zoom out again. In order to get this to work, I need to make sure that "kTexture_MyImage" always contains the latest changes. To do that, I need to capture the screen contents after changes have been made and merge them with the original image. The problem is that when I zoom in, my screen only shows part of the image (enlarged), and I haven't found a proper way to deal with this yet.
I tried to calculate which part of the screen was enlarged, then do the capturing. After that I'd resize this part to its original size and use yet another method to paste it into the original image at the correct position.
Now, I could go more into detail on how I achieved this, but it's really complicated and I figured there has to be an easier way. There are already several apps out there that do exactly what I'm trying to achieve, so it must be possible.
As far as approach (2) goes, I can avoid most of the above, since I only change the size of my EAGLView window. However, when painting, the strokes are way off their expected position. I probably need to take the zoom level into account when painting and recalculate the CGPoints in a different way.
However, if you have done similar things in the past or can give me a hint, how I could implement zooming into my painting app, I'd really appreciate it.
Thanks in advance.
Yes, it is definitely possible.
When it comes to paint programs, you should keep a linked list or tree of objects to draw, for easy insertion / removal. When the user stops painting (i.e. in touchesEnded), you add objects to the data structure containing your scene.
When your user zooms, you need to modulate the coordinates of the objects you are drawing with respect to the current viewport, projection, and modelview transforms. In your case, you're not changing the viewport or the modelview transform, so you only need to account for the projection transform. You could also implement your zoom using a translation and scale on the modelview matrix, but I'll ignore that case for simplicity because it involves inverting the transforms.
The good news is that you are using an orthographic projection so world coordinates correspond to window coordinates when no zooming is in effect. The "world" in your case is a simple canvas that probably corresponds to the size of the device in window coordinates.
Before you add an object to your scene data structure, convert all of its coordinates, using the current projection transform (i.e. the parameters to the glOrthof() call), to world coordinates (i.e. full-canvas coordinates). You'll only remain sane if you keep everything in your model in the same coordinate space.
To convert the coordinates, assuming you can never zoom out past the full device dimensions in your glOrthof() call, you scale them down in proportion to the ratio of your zoomed ortho dimensions to your unzoomed ortho dimensions, and then bias them by the difference between your zoomed ortho bottom / left values and the original unzoomed ortho values.
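As a rough illustration of that scale-and-bias step, assuming the unzoomed ortho box is (0, canvasWidth, 0, canvasHeight), that window coordinates span the full canvas at zoom 1.0, and that the current zoomed glOrthof() values are kept around as zoomLeft / zoomRight / zoomBottom / zoomTop (all names here are made up for the example):

// Convert a point from window coordinates to full-canvas (world) coordinates.
static CGPoint windowToCanvas(CGPoint p,
                              float canvasWidth, float canvasHeight,
                              float zoomLeft, float zoomRight,
                              float zoomBottom, float zoomTop)
{
    float zoomedWidth  = zoomRight - zoomLeft;   // current glOrthof width
    float zoomedHeight = zoomTop - zoomBottom;   // current glOrthof height

    // Scale by the ratio of zoomed to unzoomed dimensions, then bias by the
    // zoomed ortho's bottom-left corner (the unzoomed corner is at 0, 0).
    CGPoint canvasPoint;
    canvasPoint.x = p.x * (zoomedWidth  / canvasWidth)  + zoomLeft;
    canvasPoint.y = p.y * (zoomedHeight / canvasHeight) + zoomBottom;
    return canvasPoint;
}

When no zoom is in effect (zoomLeft = 0, zoomRight = canvasWidth, and so on), the function returns the point unchanged, which matches the observation above that world and window coordinates coincide when no zooming is applied.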