Render of low-poly model tris showing very hard, marked edges in three.js compared to Sketchfab/Unity3D/Iray

I've edited this post to include a clean edge-flow model and maps you can access, in case that helps with feedback. I can replicate the hard, marked edges issue with this model too:
I'm finding that the rendering result in three.js shows very hard, marked polygons on the low-poly object; I'm comparing this to the Sketchfab, Unity3D and Iray render results.
Here's a snapshot of the edge flow shown in Maya: https://drive.google.com/open?id=1qNA4VoZf-rSyq3_MQdeZqdFC6BxsE3un
Here's what the model looks like in Maya's view panel (not subdivided): https://drive.google.com/open?id=1US-fv5-v2ygReqjRPgcsQSusrAXTxVG5
Here's a snapshot of the three.js render (the issue is most noticeable in the red box):
https://drive.google.com/open?id=1K3CIBLvA7skVUPWL0qInLcFrK74DtriK
Here's Sketchfab's render, without shadows or post-processing filters:
https://drive.google.com/open?id=1rozZyBSU1HwPPk4EnKFyc7SVvFNXQBwz
Here's the Iray render in Substance Painter:
https://drive.google.com/open?id=1cXJzw780-kWH0nANy5ekM0HjRKAdaVQ2
Here's the Unity render: https://drive.google.com/open?id=1lLFLd8UT48OSvxJcp7arwygZZISsaHkS
Here is the FBX if you need to inspect the mesh/edge flow: https://drive.google.com/open?id=1BwljZNKL3dWJSSca6WYlqSK7os1Hp4pT
I'm also adding the normal map, as I thought the problem might relate to my three.js setup for it(?): https://drive.google.com/open?id=149r3n9JGnb9xEJkf9Eh7ELK2bM83bJX_
Albedo map: https://drive.google.com/open?id=1rGgDUOKbbeE6mrAlTG_6C7b8LgqQ1DF0
I'm reusing the env-map HDR example and its HDR settings.
Can someone please share some thoughts on what I can try differently?
Thank you for your help, Sergio.
I tried the following:
I softened the edges in Maya.
I also tried the lines below, separately and combined, but there was no change in the result:
//vaseMesh.geometry.mergeVertices(); and //vaseMesh.geometry.computeVertexNormals();
normalScale appears to work best at material.normalScale.x = -1;
I also tried it without the HDR or tone-mapping settings, as per the three.js displacement example (https://threejs.org/examples/?q=displ#webgl_materials_displacementmap), but had the same result:
renderer = new THREE.WebGLRenderer();
renderer.toneMapping = THREE.LinearToneMapping;

// load vase material textures once loaded
manager.onLoad = function () {
    material = new THREE.MeshStandardMaterial( {
        color: 0xffffff,
        roughness: params.roughness,
        metalness: params.metalness,
        map: albedoM,
        normalMap: normalMap,
        normalScale: new THREE.Vector2( 1, -1 ),
        aoMap: aoMap,
        aoMapIntensity: 1,
        flatShading: true,
        side: THREE.DoubleSide
    } );
    var myObjectLoader = new THREE.FBXLoader();
    myObjectLoader.load( "Piece1.fbx", function ( group ) {
        console.log("On object loading");
        var geometry = group.children[ 0 ].geometry;
        geometry.attributes.uv2 = geometry.attributes.uv; // aoMap reads from the second UV set
        geometry.center();
        vaseMesh = new THREE.Mesh( geometry, material );
        //vaseMesh.geometry.mergeVertices();
        //vaseMesh.geometry.computeVertexNormals();
        material.normalScale.x = -1;
        scene.add( vaseMesh );
        console.log("Finished adding to scene");
        vaseMesh.position.set( 0, 0, 0 );
        animate();
    } );
};

var textureLoader = new THREE.TextureLoader( manager );
var albedoM = textureLoader.load( "vaseTextures/albedo.png" );
var normalMap = textureLoader.load( "vaseTextures/normal.png" );
var aoMap = textureLoader.load( "vaseTextures/ao.png" );

Giving credit to #Mugen87 for the answer: removing the flatShading: true setting did it!
https://discourse.threejs.org/t/render-of-low-poly-model-tris-showing-very-hard-marked-in-three-js-compared-to-sketchfab-unit3d-iray/6829/2?u=mugen87
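For reference, here is a minimal sketch of the working material setup: the same options as above, just without flatShading (it defaults to false, so three.js interpolates the vertex normals across each triangle):

material = new THREE.MeshStandardMaterial( {
    color: 0xffffff,
    roughness: params.roughness,
    metalness: params.metalness,
    map: albedoM,
    normalMap: normalMap,
    normalScale: new THREE.Vector2( 1, -1 ),
    aoMap: aoMap,
    aoMapIntensity: 1,
    // no flatShading: true here, so the faceted look disappears
    side: THREE.DoubleSide
} );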
Cheers, Sergio

Related

How can I get the real Texture/Sprites reference in OnPostprocessSprites in Unity?

My goal was to create an import script (AssetPostprocessor) that handles a few things for me whenever I drag a spritesheet (a picture that contains multiple frames for an animation) into a specific Unity folder.
I want the script to split the picture up into multiple frames (similar to when it's done manually, with Sprite Mode = Multiple), and then create an animation out of it. I parse the instructions (name, sprite frame size, frame holds) from the name of the file being handled.
Example: "Player_Walk_200x100_h3-3-3-3-3.png"
So far, I've managed to accomplish all of those points, but whenever I create an animation and link the given Sprites to it, the resulting animation is "empty". As far as I could figure out, that's because the Texture2D and Sprite[] given to OnPostprocessSprites() seem to be temporary. The objects exist during import, but seemingly get discarded later on. How can I solve this? How can I grab a reference to Sprite[] that doesn't get dropped?
Here is a stripped-down version of my current code:
void OnPostprocessTexture( Texture2D texture ) {
    // the texture is split into frames here; code not included since this part works fine and would complicate things.
}

void OnPostprocessSprites( Texture2D texture, Sprite[] sprites ) {
    if( !this.assetPath.Contains( "spritetest" ) ) return;
    Debug.Log( "Number of Sprites: " + sprites.Length ); // shows the correct number of sprites
    if( sprites.Length == 0 ) {
        AssetDatabase.ImportAsset( this.assetImporter.assetPath );
        return;
    }
    int frameRate = 24;
    AnimationClip clip = new AnimationClip();
    clip.frameRate = frameRate;
    EditorCurveBinding spriteBinding = new EditorCurveBinding();
    spriteBinding.type = typeof( SpriteRenderer );
    spriteBinding.path = "";
    spriteBinding.propertyName = "m_Sprite";
    ObjectReferenceKeyframe[] spriteKeyFrames = new ObjectReferenceKeyframe[ sprites.Length ];
    for( int i = 0; i < sprites.Length; i++ ) {
        spriteKeyFrames[ i ] = new ObjectReferenceKeyframe();
        spriteKeyFrames[ i ].time = i;
        spriteKeyFrames[ i ].value = sprites[ i ]; // these sprites are empty in the editor
    }
    AnimationUtility.SetObjectReferenceCurve( clip, spriteBinding, spriteKeyFrames );
    AssetDatabase.CreateAsset( clip, "Assets/Assets/spritetest/Test.anim" );
    AssetDatabase.SaveAssets();
    AssetDatabase.Refresh();
}
I've looked up forum posts, documentation, and even questions here (like Unity Editor Pipeline Scripting: how to Pass Sprite to SpriteRenderer in OnPostprocessSprites() Event?), but none really managed to resolve this problem: whenever I try to grab the "real" sprite or texture via AssetDatabase, the resulting object is always just null.
For example, if I try doing this:
// attempt 1
Sprite sprite = AssetDatabase.LoadAssetAtPath<Sprite>( this.assetPath );
Debug.Log( "Real Sprite: " + sprite ); // sprite is null

// attempt 2
Object[] objects = AssetDatabase.LoadAllAssetRepresentationsAtPath( this.assetPath );
Debug.Log( "Real Objects: " + objects.Length ); // is always 0
Calling AssetDatabase.SaveAssets() or AssetDatabase.Refresh() beforehand doesn't change the result. I find plenty of people with similar issues, but no code or examples that seem to really resolve this issue.
Here is another person with a similar issue (source: https://answers.unity.com/questions/1080430/create-animation-clip-from-sprites-programmaticall.html).
The given link didn't resolve my problem either.
Essentially: How can I create an animation from an imported asset, by using my generated sprites, in AssetPostprocessor?

MapboxGL Render Function Issue

I'm using Mapbox GL, and I'm also using three.js to be able to import a 3D model into the scene. The 3D model I used has a very high polygon count. Because Mapbox GL's render function triggers every frame, my browser becomes very laggy. Is it possible to trigger the render function only once, or which function should I use instead of the render function? I would like to render my 3D model only once on the map.
Here is my code:
mapBoxGLSetup: function () {
    mapboxgl.accessToken = "";
    oOriginPoint = [29.400261610397465, 40.87692013157027, 1];
    oMap = new mapboxgl.Map({
        logoPosition: "bottom-right",
        container: oSceneContainer.id,
        style: 'mapbox://styles/mapbox/streets-v11',
        center: oOriginPoint,
        zoom: 15,
        pitch: 0,
        antialias: true
    });
    var modelOrigin = oOriginPoint;
    var modelAltitude = 0;
    var modelRotate = [Math.PI / 2, Math.PI / 6.5, 0];
    var modelAsMercatorCoordinate = mapboxgl.MercatorCoordinate.fromLngLat(
        modelOrigin,
        modelAltitude
    );
    o3DModelTransform = {
        translateX: modelAsMercatorCoordinate.x,
        translateY: modelAsMercatorCoordinate.y,
        translateZ: modelAsMercatorCoordinate.z,
        rotateX: modelRotate[0],
        rotateY: modelRotate[1],
        rotateZ: modelRotate[2],
        scale: (modelAsMercatorCoordinate.meterInMercatorCoordinateUnits() / 1000) * 0.85
    };
},

oSceneMapSetup: function () {
    oMap.on('style.load', function () {
        oMap.addLayer({
            id: 'custom_layer',
            type: 'custom',
            renderingMode: '3d',
            onAdd: function (oMapElement, oGlElement) {
                base.oMapElement = oMapElement;
                base.setupRenderer(oMapElement, oGlElement);
                base.setupLayout(); // I'm loading the 3D model in this function
                base.setupRayCaster();
            },
            render: function (gl, matrix) {
                // This render function is triggered each frame
                var rotationX = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), o3DModelTransform.rotateX);
                var rotationY = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 1, 0), o3DModelTransform.rotateY);
                var rotationZ = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), o3DModelTransform.rotateZ);
                var oMatrix = new THREE.Matrix4().fromArray(matrix);
                var oTranslation = new THREE.Matrix4().makeTranslation(o3DModelTransform.translateX, o3DModelTransform.translateY, o3DModelTransform.translateZ)
                    .scale(new THREE.Vector3(o3DModelTransform.scale, -o3DModelTransform.scale, o3DModelTransform.scale))
                    .multiply(rotationX)
                    .multiply(rotationY)
                    .multiply(rotationZ);
                oCamera.projectionMatrix = oMatrix.multiply(oTranslation);
                oRenderer.resetState();
                oRenderer.render(oScene, oCamera);
                base.oMapElement.triggerRepaint();
            }
        });
    });
},
Thanks for your help and support.
As long as you're still calling triggerRepaint on each layer render loop, you will repaint the full map; it's inherent to the way CustomLayerInterface and layer updates work in Mapbox.
When I did my first research on the triggerRepaint topic, I found a quite old issue in Mapbox where someone tested all the different options, including having a fully separated context and even two Mapbox instances, one of them empty. Here is the link
The performance was obviously better in terms of FPS/memory, but there were other side effects that I personally wouldn't accept for threebox, like losing the depth calculation between Mapbox fill-extrusions and the 3D custom layer.
Sharing context
Different contexts & canvas
The second issue is the delay between the movement of the two cameras. While the current shared context ensures the objects are fixed and stuck to a set of coords, creating different contexts produces a soft dragging effect, where the delay between the two contexts' renders can be visually perceived when the map moves first and the 3D objects follow. It's perceivable even with a single cube, so with thousands of objects it will definitely be clearer.
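If continuous animation isn't needed, the simplest change is to stop calling triggerRepaint() at the end of the layer's render callback, so the scene is redrawn only when Mapbox itself repaints (pan, zoom, rotate). Here is a minimal sketch of the render callback above with just that one change, reusing the question's variable names (o3DModelTransform, oCamera, oRenderer, oScene are assumed to be set up as in the question):

render: function (gl, matrix) {
    // Rebuild the model transform and camera matrix for this map repaint.
    var oTranslation = new THREE.Matrix4()
        .makeTranslation(o3DModelTransform.translateX, o3DModelTransform.translateY, o3DModelTransform.translateZ)
        .scale(new THREE.Vector3(o3DModelTransform.scale, -o3DModelTransform.scale, o3DModelTransform.scale))
        .multiply(new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), o3DModelTransform.rotateX))
        .multiply(new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 1, 0), o3DModelTransform.rotateY))
        .multiply(new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), o3DModelTransform.rotateZ));
    oCamera.projectionMatrix = new THREE.Matrix4().fromArray(matrix).multiply(oTranslation);
    oRenderer.resetState();
    oRenderer.render(oScene, oCamera);
    // No triggerRepaint() call here: this callback now runs only when the
    // map needs a new frame, not on every animation frame.
}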

In Away3d's particle system, adding a ParticleFollowNode to the ParticleAnimationSet makes the BillboardNode stop working

I've made a snow effect with the particle system in Away3d, but when I added a ParticleFollowNode to the ParticleAnimationSet, the BillboardNode stopped working. Does anybody know how to fix this bug? Thanks for your help.
ActionScript:
//setup the particle animation set
_particleAnimationSet = new ParticleAnimationSet(true, true);
_particleAnimationSet.addAnimation(new ParticleVelocityNode(ParticlePropertiesMode.LOCAL_STATIC));
_particleAnimationSet.addAnimation(new ParticlePositionNode(ParticlePropertiesMode.LOCAL_STATIC));
particleFollowNode = new ParticleFollowNode(true, true);
//add particleFollowNode for moving particles around
_particleAnimationSet.addAnimation(particleFollowNode);
//then BillboardNode stopped working...
_particleAnimationSet.addAnimation(new ParticleBillboardNode());
_particleAnimationSet.initParticleFunc = initFunc;
I've fixed the bug. The particle mesh's bounds don't update properly, which causes the mesh-culling step to cut the mesh off whenever its stale bounds aren't in the camera's frustum.
How to fix:
1. Set the particle mesh's id to "Particles".
2. In the MeshNode.as file, modify the acceptTraverser function as follows:
override public function acceptTraverser(traverser:PartitionTraverser):void
{
    //trace(_mesh.id);
    if (traverser.enterNode(this) || _mesh.id == "Particles")
    {
        super.acceptTraverser(traverser);
        var subs:Vector.<SubMesh> = _mesh.subMeshes;
        var i:uint;
        var len:uint = subs.length;
        while (i < len)
            traverser.applyRenderable(subs[i++]);
    }
}

How to create an Away3d infinite tiling floor?

I am trying to create a scene in Away3d with an infinite floor that seems to fade away in the distance. I want the floor to have a texture. Problem is, I can't seem to find any clear examples or tutorials that demonstrate this.
OK, you need to set your scene up and import the local libs, etc. Here we go:
//Away3d
import away3d.containers.Scene3D;
import away3d.containers.View3D;
//etc
////////3D ModelScenes, Textures CLASS Exported 3DS/////////////////////
[Embed("assets/Images/grass1.jpg")]
var GrassTexture:Class;
var groundMaterial = new BitmapTexture(new GrassTexture().bitmapData);
////////GROUND MESH/////////////////////////////////////////////////////
var plane = new Mesh(new PlaneGeometry(3000,3000,30,30),new TextureMaterial(Cast.bitmapTexture(groundMaterial)));
plane.geometry.scaleUV(25, 25);
plane.material.repeat = true;
plane.material.alpha = 1;
container.addChild(plane);
Instead of tiling meshes/planes, you're better off having a really big plane and using the vertices/polygons as tile locations...
Hope it helps

three.js merging geometry with ShaderMaterials

I have a project built on a tileset, which I currently map to CubeGeometries via a number of ShaderMaterials.
When the cubes are rendered, there is bleeding and flickering around the edges of the cubes. Also, it seems to be an awfully bad way to do it, performance-wise.
So I looked up THREE.GeometryUtils.merge, which apparently merges my cubes into one geometry, vertices and all.
Is it possible to make the merged mesh keep the materials I used on each of the cubes?
Is there a better way to accomplish what I'm trying to do?
Edit:
This is an example of what is not working.
http://jsfiddle.net/CpQ77/3/
var shaderMat1 = new THREE.ShaderMaterial({
    fragmentShader: document.getElementById("red-fragment").innerText,
    vertexShader: document.getElementById("vertex").innerText
});
var shaderMat2 = new THREE.ShaderMaterial({
    fragmentShader: document.getElementById("blue-fragment").innerText,
    vertexShader: document.getElementById("vertex").innerText
});
var cube1 = new THREE.Mesh(new THREE.CubeGeometry(64, 64, 64), new THREE.MeshFaceMaterial([shaderMat1, shaderMat1, shaderMat1, shaderMat1, shaderMat1, shaderMat1]));
cube1.position.x = 0;
cube1.position.y = 300;
var cube2 = new THREE.Mesh(new THREE.CubeGeometry(64, 64, 64), new THREE.MeshFaceMaterial([shaderMat2, shaderMat2, shaderMat2, shaderMat2, shaderMat2, shaderMat2]));
cube2.position.x = 64;
cube2.position.y = 300;
var geo = new THREE.Geometry();
THREE.GeometryUtils.merge(geo, cube1);
THREE.GeometryUtils.merge(geo, cube2);
var mergedMesh = new THREE.Mesh(geo, new THREE.MeshFaceMaterial());
scene.add(mergedMesh);
It gives an error saying "Uncaught TypeError: Cannot read property 'map' of undefined" when I try to use MeshFaceMaterial the way it's used in a couple of places around the web.
I can't figure out what I'm missing, though.
Edit 2:
One workaround I found was to loop through all the faces of the new geometry and apply a materialIndex to each before calling geometry.mergeVertices(); a sketch of this follows below.
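A minimal sketch of that workaround, using the same legacy three.js API as the snippet above; the assumption that the two cubes' faces occupy the first and second half of geo.faces is hypothetical and depends on merge order:

// Tag each face of the merged geometry with the index of the material
// it should use from the array passed to MeshFaceMaterial.
for (var f = 0; f < geo.faces.length; f++) {
    // cube1's faces were merged first, so they occupy the first half
    geo.faces[f].materialIndex = f < geo.faces.length / 2 ? 0 : 1;
}
geo.mergeVertices();
var mergedMesh = new THREE.Mesh(geo, new THREE.MeshFaceMaterial([shaderMat1, shaderMat2]));
scene.add(mergedMesh);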
Thanks for this post, the comments were helpful in finding a solution. Instead of supplying the materials array to the Geometry, you should supply it as the only argument to MeshFaceMaterial.
Example in CoffeeScript:
materials = []
for i in [0...6]
  texture = window["texture_" + i] # This is a Texture that has already been loaded
  materials.push new THREE.MeshBasicMaterial(
    color: color
    map: texture
  )
size = 1
geometry = new THREE.CubeGeometry size, size, size
cube = new THREE.Mesh geometry, new THREE.MeshFaceMaterial materials
cube.position.x = x
cube.position.y = y
cube.position.z = z
scene.add cube
return cube
return cube