How to fit a texture to a surface? ARKit SceneKit Swift

I am very new to SceneKit and 3D development in general. I'm playing around with ARKit, trying to fit a texture to a plane (really an SCNBox, but only its top surface), and I'm seriously failing, and also failing to find anything helpful on the web.
I have a road texture that is a very long rectangular PNG image; its width:height ratio is about 20:1.
I want to apply this texture to the surface of a table once ARKit has found the plane for me. I do not know the dimensions of the table before the app starts.
I can currently apply a texture to this plane and also rotate the texture as desired.
What I would like to accomplish is to stretch the texture (keeping its original ratio) so that the short sides of the plane and texture line up, and then have the texture continue to the end of the plane, cutting off or repeating depending on the plane's length and ratio.
Here is the function that gets the SCNMaterial object:
class func getRunwayMaterial() -> SCNMaterial {
    let name = "runway"
    if let cached = materials[name] {
        return cached
    }
    let mat = SCNMaterial()
    mat.lightingModel = .physicallyBased
    mat.diffuse.contents = UIImage(named: "./Assets.scnassets/Materials/runway/runway.png")
    mat.diffuse.wrapS = .repeat
    mat.diffuse.wrapT = .repeat
    materials[name] = mat
    return mat
}
This is the function that should do the scaling and rotating of the texture on the plane:
func setRunwayTextureScale(rotation: Float? = nil, material: SCNMaterial? = nil) {
    let texture = material ?? planeGeometry.materials[4]
    if let rotation = rotation {
        textureRotation += rotation
    }
    var m = SCNMatrix4MakeScale(1, 1, 1)
    m = SCNMatrix4Rotate(m, textureRotation, 0, 1, 0)
    texture.diffuse.contentsTransform = m
}
Please help me fill in the blanks here, and if anyone has any links or articles on this kind of manipulation, please link me!
Thanks!
Ethan
Edit: by the way, I'm using Xcode 9.

Try using:
material.diffuse.wrapS = SCNWrapModeRepeat;
material.diffuse.wrapT = SCNWrapModeRepeat;
This keeps the material from stretching; instead it simply keeps tiling more copies of the same PNG.
You can also set the scale of the material to the plane's width and height:
CGFloat width = self.planeGeometry.width;
CGFloat height = self.planeGeometry.length;
material.diffuse.contentsTransform = SCNMatrix4MakeScale(width, height, 1);
Sorry, I'm working in Objective-C here, but it should be pretty straightforward to translate.
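A rough Swift translation of the same idea, extended with a hedged sketch of the aspect-ratio fit the question asks for (planeGeometry and the 20:1 ratio are taken from the question; which axis the road runs along depends on how your texture is oriented):

let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "runway.png")
material.diffuse.wrapS = .repeat
material.diffuse.wrapT = .repeat

let width = Float(planeGeometry.width)
let length = Float(planeGeometry.length)

// Fit one tile to the plane's short side; the image then repeats (or is
// cut off) along the long side, preserving the 20:1 aspect ratio.
material.diffuse.contentsTransform = SCNMatrix4MakeScale(1, length / (width * 20), 1)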
Some good tutorials can also be found at this link:
https://blog.markdaws.net/apple-arkit-by-example-ef1c8578fb59

Related

Where a shadow plane should be defined in Scenekit

This is confusing to me; I'd be grateful if anyone could help me with it.
I have a shadow plane to show the shadow below an AR object. I read some articles where they define this shadow plane in viewDidLoad and add it as a child node to sceneView.scene. The question is: should it be defined only once, for the floor surface?
For instance, I can add the shadow plane in renderer(_:didAdd:for:), which is called once whenever a new surface is detected. That works nicely for me. But should the position of the shadow plane be updated as well? Can someone explain where it should be defined and where/when it should be updated?
Here is how I define the shadow plane:
private func addShadowPlane(node: SCNNode, planeAnchor: ARPlaneAnchor) {
    let anchorX = planeAnchor.center.x
    let anchorY = planeAnchor.center.y
    let anchorZ = planeAnchor.center.z

    let floor = SCNFloor()
    floor.length = CGFloat(planeAnchor.extent.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]

    let floorNode = SCNNode(geometry: floor)
    floorNode.position = SCNVector3(anchorX, anchorY, anchorZ)
    node.addChildNode(floorNode)
}
func shadowMaterialStandard() -> SCNMaterial {
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.writesToDepthBuffer = true
    material.readsFromDepthBuffer = true
    material.colorBufferWriteMask = []
    return material
}
The issue you might run into is this: do you want one single shadow plane in some initially defined position, which then remains there (or can be repositioned)? Or do you want lots of shadow planes, one on every surface ARKit detects? The problem is that all those planes will never line up exactly with the surfaces they are created on (only more or less). You can build more accurate shapes for surfaces, but they are refined in an ongoing process and need more time to complete (imagine scanning a table by walking around it).

I have also built some AR apps with shadow planes. I usually create one single shadow plane (around 20x20 meters) on request, using a focus square: I fetch the worldPosition from the focus square and then add a plane at that location using SceneKit (not the renderer callback for plane anchors). Keep in mind there are many ways to do this; there is no single best way.
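A minimal sketch of that single-plane approach, assuming focusSquarePosition is a world position fetched from your own focus-square implementation and shadowMaterialStandard() is the function from the question:

func addSingleShadowPlane(at focusSquarePosition: SCNVector3, in scene: SCNScene) {
    let floor = SCNFloor()
    floor.width = 20   // meters; large enough to catch all shadows
    floor.length = 20
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]

    let floorNode = SCNNode(geometry: floor)
    floorNode.position = focusSquarePosition
    scene.rootNode.addChildNode(floorNode)
}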
Try studying this Apple sample app for more information on placing objects, casting shadows, etc.:
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
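For the updating half of the question: if you do go the per-anchor route via renderer(_:didAdd:for:), the matching place to reposition and resize the plane is renderer(_:didUpdate:for:), which ARKit calls as its plane estimate is refined. A hedged sketch, reusing the node layout from addShadowPlane above:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let floorNode = node.childNodes.first,
          let floor = floorNode.geometry as? SCNFloor else { return }
    // Keep the shadow plane in sync with the refined plane estimate.
    floorNode.position = SCNVector3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.length = CGFloat(planeAnchor.extent.z)
}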

Crop Image texture on WorldSpace Canvas using RectTransform on Overlay Canvas

I've been trying for several days to crop an Image texture (Board - originalImage in the code example) on a WorldSpace Canvas using a RectTransform (CropArea - cropArea in the code example) on an Overlay Canvas.
The problem is that I can't find the correct coordinates of the cropArea on the original image.
I've tried with this:
Texture2D croppedTexture = new Texture2D((int)cropArea.rectTransform.rect.width, (int)cropArea.rectTransform.rect.height);
Texture2D originalTexture = (Texture2D)originalImage.mainTexture;
croppedTexture.SetPixels(originalTexture.GetPixels(
    (int)cropArea.rectTransform.anchoredPosition.x,
    (int)cropArea.rectTransform.anchoredPosition.y,
    (int)cropArea.rectTransform.rect.width,
    (int)cropArea.rectTransform.rect.height));
croppedTexture.Apply();
resultImage.texture = croppedTexture;
But the result image is not cropped properly; it is shifted a bit to the left and down.
Does anybody have an idea how can I achieve this?
I found I had to consider quite a few variables. Here is a simplified version.
It needs one new field: worldCanvas (the world-space Canvas).
var cropRectTrans = cropArea.rectTransform;
var origRectTrans = originalImage.rectTransform;
var origRectSize = origRectTrans.sizeDelta;
var pivot = origRectTrans.pivot;
Texture2D originalTexture = (Texture2D)originalImage.mainTexture;
// Scale pivot to pixel unit.
pivot.Scale(origRectSize);
// Get corners of the overlay rectangle in world space.
// The canvas is "Screen Space Overlay", so these positions are
// also the screen positions.
var cropCorners = new Vector3[4];
cropRectTrans.GetWorldCorners(cropCorners);
// Transform the left-bottom and right-top corners to the space
// of the original image. The translated position needs to be added
// with the scaled pivot, so that we can obtain the coordinates
// relative to the left-bottom corner of the image.
var cam = worldCanvas.worldCamera;
RectTransformUtility.ScreenPointToLocalPointInRectangle(
    origRectTrans, cropCorners[0], cam, out Vector2 lb);
RectTransformUtility.ScreenPointToLocalPointInRectangle(
    origRectTrans, cropCorners[2], cam, out Vector2 tr);
var point = lb + pivot;
var size = tr - lb;
// Scale the position and size if the image is scaled.
var scale = new Vector2(
    originalTexture.width / origRectSize.x,
    originalTexture.height / origRectSize.y
);
point.Scale(scale);
size.Scale(scale);
// Finally we get the correct position and size in the original image space.
Texture2D croppedTexture = new Texture2D((int)size.x, (int)size.y);
croppedTexture.SetPixels(originalTexture.GetPixels(
    (int)point.x, (int)point.y, (int)size.x, (int)size.y));
croppedTexture.Apply();
resultImage.texture = croppedTexture;

Creating a custom geometry with polygon triangulation and applying texture coordinates in SceneKit

I'm trying to create a custom geometry object in SceneKit: a plane with an arbitrary shape. I'm supplying the outline vertices of the shape and want to fill up its inside.
So far I have been using this code:
extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        // For a .polygon element, the index data starts with the vertex
        // count of the polygon, followed by the vertex indices.
        var indices: [Int32] = [Int32(vertices.count)]
        var index: Int32 = 0
        for _ in vertices {
            indices.append(index)
            index += 1
        }

        let vertexSource = SCNGeometrySource(vertices: vertices)
        let textureCoords: [CGPoint] = [] // Fix to map textures to the polygon plane...
        let textureCoordsSource = SCNGeometrySource(textureCoordinates: textureCoords)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .polygon, primitiveCount: 1, bytesPerIndex: MemoryLayout<Int32>.size)
        let geometry = SCNGeometry(sources: [vertexSource, textureCoordsSource], elements: [element])

        let imageMaterial = SCNMaterial()
        imageMaterial.diffuse.contents = UIImage(named: "texture.jpg")
        let scaleX = (Float(1)).rounded()
        let scaleY = (Float(1)).rounded()
        imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(scaleX, scaleY, 0)
        imageMaterial.isDoubleSided = true
        geometry.firstMaterial = imageMaterial
        return geometry
    }
}
This works reasonably well when making more simple polygon shapes, but does not work as intended when the shape becomes more complex and narrow in different places. I also don't know of any way to create texture coordinates in order to apply a custom texture with this approach.
I think I need to utilize some kind of polygon triangulation algorithm in order to break the shape into triangles, and then use the correct SCNGeometryPrimitiveType such as .triangles or .triangleStrip. This could probably also allow me to do a UV-mapping for the texture coordinates, however I'm not sure how that would work as of right now.
The polygon triangulation algorithm would need to be able to handle 3D coordinates, as the created 2D geometry should exist in a 3D world (you should be able to create tilted polygon planes etc.). I have not been able to find any 3D polygon triangulation algorithms already implemented in Swift yet.
To be clear on the texture coordinates: the texture that would be used is a repeating texture, such as this one:
For complex cases, SCNShape is better suited, as it uses a more elaborate triangulation (Delaunay).
A simple SCNGeometryElement of type SCNGeometryPrimitiveTypePolygon will generate a triangle fan.
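A minimal sketch of the SCNShape route, assuming the outline lies flat in 2D (rotate and position the node afterwards for a tilted plane; the sample points are made up):

// Build the outline as a Bezier path; SCNShape handles the
// triangulation of the interior.
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 1.0, y: 0.1))
path.addLine(to: CGPoint(x: 1.2, y: 0.9))
path.addLine(to: CGPoint(x: 0.4, y: 1.1))
path.close()

let shape = SCNShape(path: path, extrusionDepth: 0) // flat, no thickness

let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "texture.jpg")
material.diffuse.wrapS = .repeat
material.diffuse.wrapT = .repeat
material.isDoubleSided = true
shape.firstMaterial = material

let node = SCNNode(geometry: shape)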

Is it possible to define a 3d model/prefab size in unity?

Is there a way to define a 3D model's size in Unity, like height = 1, width = 3, depth = 3?
I want the model to take up a defined space in Unity's scene, no matter how big or small I make the FBX in Blender. So I can't use scale, as changing the model's size in Blender would break that scaling.
I need it to be a box 3 wide, 3 long, and 1 high, regardless of the model's size as exported from Blender. Is that possible?
The same question from another angle: how do you set a model's size in Unity? There is only a scale setting, no size setting, which seems odd.
So far I have found a workaround: getting the object's rendered bounds and adjusting the scale factor in a script, but this doesn't seem right to me.
You can use Mesh.bounds to get the 3D model's size without any scaling applied.
Then recalculate the scale according to your needs, e.g.:
// The desired scales
// x = width
// y = height
// z = depth
var targetScale = new Vector3(3, 1, 3);
var meshFilter = GetComponent<MeshFilter>();
var mesh = meshFilter.mesh;
// This would be equal to the 3D bounds if scale was 1,1,1
var meshBounds = mesh.bounds.size;
// This would make the model have world scale 1,1,1
var invertMeshBounds = new Vector3(1/meshBounds.x, 1/meshBounds.y, 1/meshBounds.z);
// Use this if you want exactly the scale 3,1,3 but maybe stretching the fbx
var finalScale = Vector3.Scale(invertMeshBounds, targetScale);
As I understand it, you want to keep the model's correct relative proportions but make it fit into the defined targetScale, so I would use the smallest of the three values as the scaling factor:
var minFactor = Mathf.Min(finalScale.x, finalScale.y);
minFactor = Mathf.Min(minFactor, finalScale.z);
transform.localScale = Vector3.one * minFactor;

How to avoid resizing a texture in Swift

I am adding a sprite node to the scene; its size is given.
But when I change the texture of the sprite node, the size automatically changes back to the original size of the texture's image (PNG).
How can I avoid this?
My code:
let bomba = SKSpriteNode(imageNamed: "bomba2")
bomba.size = CGSize(width: frame2.size.width / 18, height: frame2.size.width / 18)

let bomba3 = SKTexture(imageNamed: "bomba3.png")
var actionbomba = [SKAction]()
actionbomba.append(SKAction.moveBy(x: 0, y: frame.size.height / 2.65, duration: beweegsnelheid))
actionbomba.append(SKAction.setTexture(bomba3, resize: false))

addChild(bomba)
bomba.run(SKAction.repeatForever(SKAction.sequence(actionbomba)))
Do not set the size explicitly. Sprite Kit will not work out a scale factor for each new texture and resize the sprite for you, so an explicitly set size is lost when the texture changes.
Instead, set the scale factor of the node, and each texture will have that scale applied to it.
[playernode setScale: x];
Something like this. You only have to set it when you create the node and each texture will be the size you would expect, given that your textures are the same size.
I use this method for all of my nodes that are animated with multiple textures and it works every time.
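In Swift, a minimal sketch of the same idea (the 1/18 factor comes from the question's explicit size; it assumes all bomba textures share the same pixel dimensions):

// Scale the node once instead of setting its size; every texture
// swapped in afterwards keeps this scale.
let nativeWidth = bomba.texture?.size().width ?? 1
bomba.setScale((frame2.size.width / 18) / nativeWidth)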