Is it possible to define a 3D model/prefab size in Unity?

Is there a way to define a 3D model's size in Unity? Something like height = 1, width = 3, depth = 3?
I want the model to take up a defined amount of space in Unity's scene, no matter how big or small I make the FBX in Blender. So I can't just use Scale, because changing the model's size in Blender would break that scaling.
I need it to occupy a footprint 3 wide, 3 long and 1 high, independent of the size of the model exported from Blender. Is that possible?
The same question from another angle: how do you set a model's size in Unity? There is only a Scale setting, but no Size setting, which seems odd.
So far I have found a workaround: getting the object's rendered bounds and adjusting the scale factor in a script, but this doesn't seem right to me.

You can use Mesh.bounds to get the 3D model's size without any scaling applied.
Then you recalculate the scale according to your needs, e.g.:
// The desired scales
// x = width
// y = height
// z = depth
var targetScale = new Vector3(3, 1, 3);
var meshFilter = GetComponent<MeshFilter>();
var mesh = meshFilter.mesh;
// This would be equal to the 3D bounds if scale was 1,1,1
var meshBounds = mesh.bounds.size;
// This would make the model have world scale 1,1,1
var invertMeshBounds = new Vector3(1/meshBounds.x, 1/meshBounds.y, 1/meshBounds.z);
// Use this if you want exactly the scale 3,1,3 but maybe stretching the fbx
var finalScale = Vector3.Scale(invertMeshBounds, targetScale);
As I understand it, you want to keep the model's relative proportions but make it fit into the defined targetScale, so I would use the smallest of the three values as the scaling factor:
var minFactor = Mathf.Min(finalScale.x, finalScale.y);
minFactor = Mathf.Min(minFactor, finalScale.z);
transform.localScale = Vector3.one * minFactor;
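For reference, the same idea wrapped into a self-contained component might look like this (only a sketch: the class name FitToSize and applying it in Start are assumptions, and the MeshFilter is expected to be on the same GameObject):
using UnityEngine;
public class FitToSize : MonoBehaviour
{
[SerializeField] private Vector3 targetSize = new Vector3(3, 1, 3);
void Start()
{
var meshBounds = GetComponent<MeshFilter>().mesh.bounds.size;
// per-axis factors that would stretch the mesh exactly to targetSize
var factors = new Vector3(targetSize.x / meshBounds.x, targetSize.y / meshBounds.y, targetSize.z / meshBounds.z);
// apply the smallest factor uniformly so the model is not distorted
transform.localScale = Vector3.one * Mathf.Min(factors.x, Mathf.Min(factors.y, factors.z));
}
}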

Related

Math doesn't work for the size and position of Game Pieces WRT the size of their container "room"

The positions and sizes of my Game Pieces, as set by CGPoint(..) and CGRect(..), don't make arithmetic sense to me when compared with the width and height of the surrounding container of all Game Pieces.
Let me illustrate with just one specific example –
I call the surrounding container = “room”.
One of many specific Game Pieces = “rock”.
Here’s the math
roomWidth = Double(UIScreen.main.bounds.width)
roomHeight = Double(UIScreen.main.bounds.height)
While in Portrait mode:
roomWidth = 744.0
roomHeight = 1133.0
When rotated to Landscape mode:
roomWidth = 1133.0
roomHeight = 744.0
So far so good .. here’s the problem:
When I look at my .sks file, the widths of the "rock" and its adjacent Game Pieces far exceed the roomWidth; for example,
Widths of rock + paddle + tree = 507 + 768 + 998 = 2273, which obviously exceeds the room's width in either Portrait or Landscape mode – and this math doesn't even account for the separation between Game Pieces.
The final piece of math "craziness" is the Swift xPos values for each Game Piece as specified in my .sks file:
Room: xPos = 40,
Rock: xPos = -390,
Paddle: xPos = -259,
Tree: xPos = 224
I cannot grasp the two large negative numbers .. to me, that means the Rock and the Paddle shouldn't even be visible .. seriously off-screen.
One significant addition = I did set the Autoresizing Mask to center horizontally and vertically
I need a serious infusion of “smarts” here.
The default anchorPoint of an sks file (SpriteKit Scene file) is (0.5, 0.5). So the origin (0, 0) of the scene is drawn at the center of the SKView. You can change the anchor point in the Attributes inspector when editing the sks file. The default means that negative coordinates not too far from the origin will be visible in the SKView.
The scene also has a scaleMode property which determines how the scene is scaled if its size doesn't match the SKView's size. The default is .fill, which means the view scales the scene's axes independently so the scene's size exactly fills the view.
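If you would rather configure this in code than in the .sks editor, a minimal sketch (assuming scene is the SKScene loaded from the file) could look like this:
// Move the scene's origin to the bottom-left corner instead of the center
scene.anchorPoint = CGPoint(x: 0, y: 0)
// Pick a scale mode that preserves the scene's aspect ratio instead of the default .fill
scene.scaleMode = .aspectFill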

Leaflet.js (or other solution) zoom to magnified pixels without blur

I've been using Leaflet to display raster images lately.
What I would like to do for a particular project is be able to zoom in to an image so the pixels become magnified on the screen in a sharply delineated way, such as you would get when zooming in to an image in Photoshop or the like. I would also like to retain, at some zoom level before maximum, a 1:1 correspondence between image pixel and screen pixel.
I tried going beyond maxNativeZoom as described here and here, which works but the interpolation results in pixel blurring.
I thought of an alternative which is to make the source image much larger using 'nearest neighbour' interpolation to expand each pixel into a larger square: when zoomed to maxNativeZoom the squares then look like sharply magnified pixels even though they aren't.
Problems with this are:
image size and tile count get out of hand quickly (original image is 4096 x 4096)
you never get the 'pop' of a 1:1 correspondence between image pixel and screen pixel
I have thought about using two tile sets: the first from the original image up to its maxNativeZoom, and then the larger 'nearest neighbour' interpolated image past that, following something like this.
But, this is more complex, doesn't avoid the problem of large tile count, and just seems inelegant.
So:
Can Leaflet do what I need it to and if so how?
If not can you point me in the right direction to something that can (for example, it would be interesting to know how this is achieved)?
Many thanks
One approach is to leverage the image-rendering CSS property. This can hint the browser to use nearest-neighbour interpolation on <img> elements, such as Leaflet map tiles.
e.g.:
img.leaflet-tile {
image-rendering: pixelated;
}
See a working demo. Beware of incomplete browser support.
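If you would rather not restyle every tile on the page, the same hint can presumably be scoped to a single layer via the className option of L.tileLayer (the class name pixelated-tiles below is just illustrative):
// Leaflet adds this class to the layer's container, so only this layer's tiles are affected
var pixelLayer = L.tileLayer('tiles/{z}/{x}/{y}.png', { className: 'pixelated-tiles' });
and in the stylesheet:
.pixelated-tiles img {
image-rendering: pixelated;
}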
A more complicated approach (but one that works across more browsers) is to leverage WebGL; in particular Leaflet.TileLayer.GL.
This involves some internal changes to Leaflet.TileLayer.GL to support a per-tile uniform, most critically setting the uniform value to the tile coordinate in each tile render...
gl.uniform3f(this._uTileCoordsPosition, coords.x, coords.y, coords.z);
...having a L.TileLayer that "displays" a non-overzoomed tile for overzoomed tile coordinates (instead of just skipping the non-existent tiles)...
var hackishTilelayer = new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
'attribution': 'Map data © OpenStreetMap contributors',
maxNonPixelatedZoom: 3
});
hackishTilelayer.getTileUrl = function(coords) {
if (coords.z > this.options.maxNonPixelatedZoom) {
return this.getTileUrl({
x: Math.floor(coords.x / 2),
y: Math.floor(coords.y / 2),
z: coords.z - 1
});
}
// Skip L.TileLayer.prototype.getTileUrl.call(this, coords), instead
// apply the URL template directly to avoid maxNativeZoom shenanigans
var data = {
r: L.Browser.retina ? '@2x' : '',
s: this._getSubdomain(coords),
x: coords.x,
y: coords.y,
z: coords.z // *not* this._getZoomForUrl() !
};
var url = L.Util.template(this._url, L.Util.extend(data, this.options));
return url;
}
... plus a fragment shader that rounds down texel coordinates prior to texel fetches (plus a tile-coordinate-modulo-dependant offset), to actually perform the nearest-neighbour oversampling...
var fragmentShader = `
highp float factor = max(1., pow(2., uTileCoords.z - uPixelatedZoomLevel));
vec2 subtileOffset = mod(uTileCoords.xy, factor);
void main(void) {
vec2 texelCoord = floor(vTextureCoords.st * uTileSize / factor ) / uTileSize;
texelCoord.xy += subtileOffset / factor;
vec4 texelColour = texture2D(uTexture0, texelCoord);
// This would output the image colours "as is"
gl_FragColor = texelColour;
}
`;
...all tied together in an instance of L.TileLayer.GL (which syncs some numbers for the uniforms around):
var pixelated = L.tileLayer.gl({
fragmentShader: fragmentShader,
tileLayers: [hackishTilelayer],
uniforms: {
// The shader will need the zoom level as a uniform...
uPixelatedZoomLevel: hackishTilelayer.options.maxNonPixelatedZoom,
// ...as well as the tile size in pixels.
uTileSize: [hackishTilelayer.getTileSize().x, hackishTilelayer.getTileSize().y]
}
}).addTo(map);
You can see everything working together in this demo.

Best way to use Farseer/Box2D's DebugDraw in Unity3D?

Box2D/Farseer 2D physics has a useful component which draws a simple representation of the physics world using primitives (lines, polygons, fills, colors).
What's the best way to accomplish this in Unity3D? Is there a simple way to render polygons with fill, lines, points, etc.? If so, I could implement the DebugDraw interface with Unity's API, but I'm having trouble finding out how to do primitive rendering like this in Unity.
I understand it'll be in 3D space, but I'll just zero-out one axis and use it basically as 2D.
In case you actually mean a debug box displayed only in the SceneView, not in the GameView, you can use Gizmos.DrawWireCube:
void OnDrawGizmos()
{
//store original gizmo color
var color = Gizmos.color;
// store original matrix
var matrix = Gizmos.matrix;
// set gizmo to local space
Gizmos.matrix = transform.localToWorldMatrix;
// Draw a yellow wire cube at the object's position
Gizmos.color = Color.yellow;
// since the matrix already contains the transform's position, draw at the local-space origin;
// here set the size, e.g. for an "almost" 2D box simply use a very small z value
Gizmos.DrawWireCube(Vector3.zero, new Vector3(0.5f, 0.2f, 0.001f));
// restore matrix
Gizmos.matrix = matrix;
// restore color
Gizmos.color = color;
}
You can use OnDrawGizmosSelected to show the Gizmo only while the GameObject is selected.
You could also extend this by exposing the box size in the Inspector
[SerializeField] private Vector3 _boxScale;
and using
Gizmos.DrawWireCube(Vector3.zero, _boxScale);
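For the lines/polygons part of a DebugDraw-style overlay, Gizmos.DrawLine can be used the same way inside OnDrawGizmos. A rough sketch (the verts array is a made-up field holding the polygon's corners in world space):
[SerializeField] private Vector3[] verts;
void OnDrawGizmos()
{
if (verts == null || verts.Length < 2) return;
Gizmos.color = Color.green;
// connect each vertex to the next and close the loop back to the first
for (int i = 0; i < verts.Length; i++)
{
Gizmos.DrawLine(verts[i], verts[(i + 1) % verts.Length]);
}
}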

ARKit node disappears after 100m

I'm currently working on an ARKit (SceneKit) app. I've noticed that if I put a node at 100m it shows just fine, but if I place it at 101m or farther, it doesn't show.
Is this the distance limit?
var translation = matrix_identity_float4x4
translation.columns.3.x = 1
translation.columns.3.y = 1
translation.columns.3.z = -100
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(name: "test", transform: transform)
sceneView.session.add(anchor: anchor)
Is there any way to increase this range?
To increase the camera's range, use the Far attribute in the Z Clipping area of the Attributes Inspector.
The default value is 100 meters.
var zFar: Double { get set }
Excerpt from Developer Documentation: The far value determines the maximal distance between the camera and a visible surface. If a surface is farther from the camera than this distance, the surface is clipped and does not appear. The default far value is 100.0.
let camera = SCNCamera()
camera.zFar = 1000
This post provides some important info:
Looks like there is no way to increase the maximum Z range for SpriteKit. Only SceneKit allows you to modify this, by updating the zFar property of the camera. Thanks to Gigantic for your help!
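In an ARKit/ARSCNView setup the camera node is created and driven for you, so rather than instantiating an SCNCamera yourself you would typically change zFar on the view's current point of view. A minimal sketch (assuming sceneView is your ARSCNView):
if let camera = sceneView.pointOfView?.camera {
// raise the far clipping plane from the 100 m default
camera.zFar = 1000
}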

How to avoid resizing a texture in Swift

I am adding a sprite node to the scene with a given size.
But when I change the texture of the sprite node, its size automatically changes back to the original size of the texture's image (PNG).
How can I avoid this?
My code:
var bomba = SKSpriteNode(imageNamed: "bomba2")
var actionbomba = [SKAction]()
bomba.size = CGSize(width: frame2.size.width / 18, height: frame2.size.width / 18)
let bomba3 = SKTexture(imageNamed: "bomba3.png")
actionbomba.append(SKAction.moveBy(x: 0, y: frame.size.height / 2.65, duration: beweegsnelheid))
actionbomba.append(SKAction.setTexture(bomba3, resize: false))
addChild(bomba)
bomba.run(SKAction.repeatForever(SKAction.sequence(actionbomba)))
Do not set the size explicitly. SpriteKit will not automatically work out a scale factor for each texture and resize the node for you.
Instead, you set the scale factor of the node and each texture will have that scale applied to it.
playernode.setScale(x)
Something like this. You only have to set it when you create the node and each texture will be the size you would expect, given that your textures are the same size.
I use this method for all of my nodes that are animated with multiple textures and it works every time.
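Applied to the question's code, that could look roughly like this (a sketch; it assumes bomba2 and bomba3 share the same pixel dimensions, and derives the scale factor from the desired width):
let bomba = SKSpriteNode(imageNamed: "bomba2")
// scale the node instead of forcing .size, so later textures keep the same factor
let desiredWidth = frame2.size.width / 18
bomba.setScale(desiredWidth / bomba.size.width)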