I'm a relative Unity noob with a fairly simple scene. In the following you will see a plane (object WorldTilemapGfx) and two sprites (Tile C: 0 R: 0 and Tile C: 1 R: 0).
In the following picture you can see I've selected one of the sprites. Its scale is 1 x 1, and it's at position (1, 0).
Now I select the other sprite.
So far the positions and sizes seem ok.
Now if I select the game object with a "plane" mesh, it shows in the inspector as scale 2 x 1. This is the scale I expect, since it is supposed to be as wide as two of the tiles above and as high as only one of them.
However, it's visually 10 times too big.
If I increase the X scale of one of my tiles by 10, then the relative sizes between tile and plane look OK.
Also the image used for my tile is 256 x 256.
Can someone suggest what I am missing? Thanks.
See Unity Mesh Primitives
Plane
This is a flat square with edges ten units long oriented in the XZ plane of the local coordinate space. It is textured so that the whole image appears exactly once within the square. A plane is useful for most kinds of flat surface, such as floors and walls. A surface is also needed sometimes for showing images or movies in GUI and special effects. Although a plane can be used for things like this, the simpler quad primitive is often a more natural fit to the task.
whereas
Quad
The quad primitive resembles the plane but its edges are only one unit long and the surface is oriented in the XY plane of the local coordinate space. Also, a quad is divided into just two triangles whereas the plane contains two hundred. A quad is useful in cases where a scene object must be used simply as a display screen for an image or movie. Simple GUI and information displays can be implemented with quads, as can particles, sprites
and “impostor” images that substitute for solid objects viewed at a distance.
OK, confirmed: using a Quad gave me the scale I expected. I now understand that the underlying Plane mesh is actually 10 x 10 units in size.
Source ("The plane is a 10x10 unit mesh."): https://forum.unity.com/threads/really-dumb-question-scale-of-plane-compared-to-cube.33835/
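If you want to confirm a mesh's footprint from code, here is a quick sketch (attach it to the plane; the component name is mine):

using UnityEngine;

public class PrintMeshSize : MonoBehaviour
{
    void Start()
    {
        // mesh.bounds is in the mesh's local space; multiply by the world
        // scale to get world units. A default Plane at scale (1,1,1) logs 10 x 10.
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Vector3 size = Vector3.Scale(mesh.bounds.size, transform.lossyScale);
        Debug.Log(name + ": " + size.x + " x " + size.z + " world units");
    }
}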
Related
I have these two meshes:
In my game, I put the hat on the hair at runtime:
As you can see, as expected, the hair is visible outside the hat.
How can I achieve this in Unity (what kind of mask shader should I use?):
I've tried making a depth mask, but it hides every mesh in my scene. I just want to hide the hair, not the other meshes.
And what if two players have the same setup? Would player 1's mask hide player 2's hair? How can I avoid that?
What I would do:
Write a C# script that gets the pivot position (the bottom of the hat) and its up vector every frame.
Build a plane from these values: the up vector serves as the plane's normal, and a plane is fully defined by a point and a normal vector.
Pass the plane's equation to the shader (via Material.SetFloat or Material.SetVector) and test whether the world positions of the hair vertices are on the correct or the wrong side of the plane, as sketched below.
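A minimal sketch of the C# side, assuming a hatAnchor transform placed at the hat's brim and a hair material whose shader reads a _ClipPlane vector (both names are hypothetical):

using UnityEngine;

// Rebuilds the clip plane from the hat's brim every frame and passes
// its equation (normal.xyz, d) to the hair shader as a single vector.
public class HairClipper : MonoBehaviour
{
    public Transform hatAnchor;    // pivot at the bottom edge of the hat
    public Material hairMaterial;  // this player's own instance, so players don't clash

    void Update()
    {
        Vector3 n = hatAnchor.up;                       // plane normal
        float d = -Vector3.Dot(n, hatAnchor.position);  // plane: dot(n, p) + d = 0
        hairMaterial.SetVector("_ClipPlane", new Vector4(n.x, n.y, n.z, d));
    }
}

In the hair shader's fragment stage you would then clip() any fragment whose world position gives dot(worldPos, _ClipPlane.xyz) + _ClipPlane.w > 0, i.e. anything above the brim. Because the plane is set on each player's own material instance, player 1's plane never touches player 2's hair.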
I saw some documents saying that there is no concept of length in Unity; all you can do to determine the dimensions of GameObjects is use Scale.
Then how can I set the overall relative dimensions between GameObjects?
For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere! So how can I know the relative ratio between the plane and the sphere? One unit of the plane's length equals how many units of the sphere's diameter? Otherwise, how can I know whether I have set everything in the right proportions?
Well, what you say is right, but consider that objects can have a collider, and in the case of a sphere you can obtain the radius with SphereCollider.radius.
Also consider Bounds.extents, which is relative to the object's bounding box.
Again, considering the Sphere, you can obtain the diameter with:
// mesh.bounds is in the mesh's local space; extents are half the size.
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
// Local-space diameter; multiply by transform.lossyScale.x for world units.
float diameter = bounds.extents.x * 2;
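And for the sphere, the collider route mentioned above might look like this (a sketch; it assumes the built-in Sphere, whose SphereCollider has a local radius of 0.5):

using UnityEngine;

public class SphereSize : MonoBehaviour
{
    void Start()
    {
        // SphereCollider.radius is in local space, so scale it into
        // world units before doubling it into a diameter.
        SphereCollider col = GetComponent<SphereCollider>();
        float worldDiameter = col.radius * 2f * transform.lossyScale.x;
        Debug.Log("Sphere diameter: " + worldDiameter + " world units");
    }
}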
All GameObjects in Unity have a Transform component, which determines their position, rotation and scale. Most 3D objects also have a MeshFilter component, which holds a reference to the Mesh object.
The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a 'starter kit'. Most of those built-in objects are 1 unit in size, but this is purely because the vertices have been placed at those positions (so you need to scale by 2 to get a 2-unit size).
But there is no limit on positions within a mesh: you can have a tiny, tiny object or a whole terrain object, and have them massively different in size despite both keeping their scale at 1.
You should try to learn a 3D modelling application to create arbitrary objects.
Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since being acquired by Unity); it enables in-editor modelling.
Scales are best kept at 1, but it's good to have the option to scale: this way you can re-use the sphere mesh or the cube mesh at different sizes (less memory wasted).
In most Unity applications you pick some arbitrary convention for scale.
So typically 1 m = 1 unit.
All things that are 1 unit tall are 1 m tall.
If you import a mesh from a modelling program that is the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as reference). Then stick it inside an empty game object to “convert” it into your game's proper scale. So now, if you scale the empty object's Y axis to 2, the object is 2 meters tall.
A better solution is to keep every object's highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks proper. For example, if I had a model of a person, I'd want it scaled to roughly twice the height of the cube. Then drag it into an empty object at 1,1,1 scale. This way, everything at your scene's “normal” size is 1,1,1, and if you want to double the size of something you make it 2,2,2. In practice this is much more useful than the first option.
Now, if you change its position by 1 unit, it also effectively moves by what looks like a proper 1 m.
This process also lets you change where the “bottom” of an object is: you can change the position of the object inside the empty, creating an “offset”. This is useful for making models stand right on the ground at position y = 0.
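A sketch of that wrap-in-an-empty step in code, in case it helps (the method name and the offset value are mine):

using UnityEngine;

public static class ModelWrapper
{
    // Wraps a model in an empty parent kept at (1,1,1) scale. The child's
    // local offset decides where the "bottom" sits, so the parent can
    // stand on the ground at y = 0.
    public static GameObject Wrap(GameObject model, Vector3 pivotOffset)
    {
        GameObject root = new GameObject(model.name + "_Root");
        root.transform.position = model.transform.position;
        model.transform.SetParent(root.transform, worldPositionStays: true);
        model.transform.localPosition = pivotOffset;  // e.g. (0, 1, 0) lifts the feet to the parent's origin
        return root;
    }
}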
I have been beating my head against this for a while, but I still could not find a way to set up a matrix that projects my Unity game in a Tibia-esque manner:
Reading tutorials on the internet, I could figure out how a normal orthographic projection works, but Tibia's is kind of odd.
Digging around the web, I found a guy (Clint Bellanger) who describes really well how to get the same perspective in Blender's renderer:
Start with a scene in 45 degree isometric. Video game style, where
the camera angle in Blender is (60, 0, 45).
In Blender if you look at Buttons Window -> Scene -> Render Buttons ->
Format, you can set the render aspect ratio. Set AspY to half of
AspX. This is the same as taking regular rendered output and scaling
X by 50%. If you rendered a cube, the top of the cube will be a
perfect square (though at a 45 degree angle).
We can then use Blender nodes to rotate the result 45 degrees. The
output:
Note this started as a cube, so there's a lot of "vertical"
distortion. So you might have to scale meshes to 50% Z before using
this method. Also notice the Edge seems to be applied after the
Aspect, so the edge isn't distorted.
Blend file: http://clintbellanger.net/images/temp/UltimaVII.blend (I'm
a Nodes noob so there might be a smarter setup).
For kicks, here is that tower again. I pulled it into the above
workflow scene and scaled Z by 50%. Click "Re-render this layer" on
the first node to create the composite.
In his method he does things like rescaling the render and changing the scale of the models; I'm convinced I could get by with just the 4x4 matrix in Unity (or in any other 3D environment, really).
I hope someone more experienced with the quirks of 3D maths can help me figure it out. Thank you! =D
What you ask for is a simple parallel projection. The typical orthographic projection is just the special case where the projection rays are perpendicular to the image plane. However, every parallel projection can be represented as an affine shear transformation followed by a standard orthographic projection.
I'm convinced I could get along just with the 4x4 matrix in Unity (or in any other 3D environment really).
Yes. Using default GL conventions here, all you have to do is take the standard ortho matrix, post-multiply it by an appropriate shear matrix, and use that as the projection matrix.
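In Unity terms, that could look something like the sketch below (the component name and shear values are mine; you would tune the shear to match the Tibia look, and possibly flip signs, since view-space z is negative in front of the camera under GL conventions):

using UnityEngine;

// Oblique (parallel) projection: a shear along view-space z,
// post-multiplied into a standard orthographic matrix.
[RequireComponent(typeof(Camera))]
public class ObliqueProjection : MonoBehaviour
{
    public float shearX = 0f;     // how far x drifts per unit of depth
    public float shearY = -0.5f;  // how far y drifts per unit of depth

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        float h = cam.orthographicSize;
        float w = h * cam.aspect;

        Matrix4x4 ortho = Matrix4x4.Ortho(-w, w, -h, h, cam.nearClipPlane, cam.farClipPlane);

        Matrix4x4 shear = Matrix4x4.identity;
        shear[0, 2] = shearX;  // x' = x + shearX * z
        shear[1, 2] = shearY;  // y' = y + shearY * z

        cam.projectionMatrix = ortho * shear;
    }
}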
So, I am using SceneKit to render a collection of parametric surfaces (the sum of which makes an object). To put these on screen, I am creating custom geometries by sampling points and creating triangles. Here is a quick overview of how I do it:
Loop through the collection of surfaces
Generate a random color C
For each surface calculate a grid of N x N points (both positions and normals)
Assign all vertexes for that surface the color C
Add groups of 3 vertexes from this surface to the face index list
And that seems to work. After I get all this data, I put it into the proper structures (SCNGeometrySource and SCNGeometryElement) and make an SCNGeometry like so:
SCNGeometry(sources: [vertexSource, normalSource, colorSource], elements: [element])
This works and displays my surfaces on screen fine as a single geometry element. My problem is that I have some really complicated objects, and moving the camera around while looking at them is just really slow. Rendering takes around 500 ms, which makes my frame rate and the experience awful.
So the question is: what steps can I take to speed up SceneKit's performance? I did this same project in WebGL using Three.js with the same amount of data and was able to use an orbiting camera fine, so I can't believe SceneKit couldn't at least compete with that. What features can I tweak or turn off to speed things up? I am using the triangle primitive type, allowsCameraControl = true for the orbiting camera, and Metal for the SCNView.
For those curious, the model I am struggling with generates 231,900 vertices and 347,850 indices for faces (11.1312 MB of vertex data (position and normal) and 1.3914 MB of face data (essentially just the index positions of vertices, in order, for the triangles)).
1) If you are "standing" at the center of your generated surface, your problem may be that you are drawing a lot offscreen (no frustum culling), and you need to split your surface (a single node) into subsurfaces (child nodes) so that only the nodes visible in the camera's view are drawn.
That being said, 231,900 vertices is really not much; I draw several million at 60 fps with the SceneKit Metal renderer (about 20% faster than the OpenGL renderer) on OS X.
2) If you are looking at your surfaces from a distance and still have bad performance, check what bytesPerComponent: you are feeding when creating the SCNGeometrySource, as sketched below. I experienced a big performance drop when using CGFloat (double) instead of plain float on a GeForce GTX (while it was okay on integrated Intel graphics).
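For point 2, here is a sketch of building the vertex source from plain 32-bit floats (the positions array is a made-up triangle; yours would be the N x N grid):

import SceneKit

// Flat [x, y, z, ...] array of 32-bit floats: bytesPerComponent stays
// at 4, instead of the 8 you get by feeding CGFloat/double data.
let positions: [Float] = [0, 0, 0,  1, 0, 0,  0, 1, 0]
let data = positions.withUnsafeBufferPointer { Data(buffer: $0) }

let vertexSource = SCNGeometrySource(
    data: data,
    semantic: .vertex,
    vectorCount: positions.count / 3,
    usesFloatComponents: true,
    componentsPerVector: 3,
    bytesPerComponent: MemoryLayout<Float>.size,  // 4 bytes per component
    dataOffset: 0,
    dataStride: MemoryLayout<Float>.stride * 3)   // 12 bytes per vertex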
I am having a problem transferring the positions of some objects in a still image (an RGB image) onto a 2D plan view of the room where the image was taken. I have the coordinates of about 3 objects in the image (that is, their x, y pixel coordinates), as well as the distances between them, and I want to transfer the positions of these 3 objects onto the plan view.
Any help is much appreciated
You will probably need to clarify your question, but if I'm reading it the right way, it could be as simple as taking the ratio from one object to another.
For example, if your sensor is 640px wide, and that covers a horizontal length of 10 meters, then you know that every 64 pixels represents one meter in the real world.
Bear in mind that this assumes the objects in the real world are in the same plane, orthogonal to the lens vector. If the objects are at different depths, then you have a bigger problem on your hands.
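A tiny sketch of that mapping, using the example figures above (the class and method names are mine):

static class PlanMapper
{
    // 640 px of sensor width covers 10 m of scene, so 64 px = 1 m.
    const float MetersPerPixel = 10f / 640f;

    // Maps an image pixel coordinate to plan-view meters. Only valid while
    // all objects sit in one plane orthogonal to the camera's lens vector.
    public static (float X, float Y) PixelToPlan(float px, float py)
        => (px * MetersPerPixel, py * MetersPerPixel);
}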