How to rotate a bone to a specific position programmatically? - unity3d

Suppose I have a human skeleton standing upright in the following position:
Say I want to modify the 3d model, so that it is raising one hand (everything else the same):
I couldn't find a picture of a skeleton standing upright, so I posted the above picture, but imagine the skeleton standing exactly as in the first picture, with the only difference being that it has rotated its right shoulder to raise its right hand perfectly upright.
To further illustrate the idea, if you visit this link:
Motion capture visualisation with Blender
The code he uses to update the 3D model in Blender is actually quite small and simple:
import bge
import math
from math import *
import mathutils
import time
import sys
#sys.path.append("/usr/lib/python3/dist-packages")
import serial
import glob
port=''.join(glob.glob("/dev/ttyACM*"))
#port=''.join(glob.glob("/dev/ttyUSB*"))
#port=''.join(glob.glob("/dev/rfcomm"))
ser = serial.Serial(port,115200)
print("connected to: " + ser.portstr)
#Connect the suit first and after a ~second launch the script
# Get the whole bge scene
scene = bge.logic.getCurrentScene()
# Helper vars for convenience
source = scene.objects
# Get the whole Armature
main_arm = source.get('Armature')
ob = bge.logic.getCurrentController().owner
def updateAngles():
ser.write("a".encode('UTF-8'))
s=ser.readline()[:-3].decode('UTF-8') #delete ";\r\n"
angles=[x.split(',') for x in s.split(';')]
for i in range(len(angles)):
angles[i] = [float(x) for x in angles[i]]
trunk = mathutils.Quaternion((angles[4][0],angles[4][1],angles[4][2],angles[4][3]))
correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
trunk_out = correction*trunk
upperLegR = mathutils.Quaternion((angles[5][0],angles[5][1],angles[5][2],angles[5][3]))
correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
upperLegR_out = correction*upperLegR
lowerLegR = mathutils.Quaternion((angles[6][0],angles[6][1],angles[6][2],angles[6][3]))
correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
#correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
lowerLegR_out = correction*lowerLegR
upperLegL = mathutils.Quaternion((angles[7][0],angles[7][1],angles[7][2],angles[7][3]))
correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
upperLegL_out = correction*upperLegL
lowerLegL = mathutils.Quaternion((angles[8][0],angles[8][1],angles[8][2],angles[8][3]))
correction = mathutils.Quaternion((1.0, 0.0, 0.0), math.radians(90.0))
lowerLegL_out = correction*lowerLegL
ob.channels['armR'].rotation_quaternion = mathutils.Vector([angles[0][0],angles[0][1],angles[0][2],angles[0][3]])
ob.channels['forearmR'].rotation_quaternion = mathutils.Vector([angles[1][0],angles[1][1],angles[1][2],angles[1][3]])
ob.channels['armL'].rotation_quaternion = mathutils.Vector([angles[2][0],angles[2][1],angles[2][2],angles[2][3]])
ob.channels['forearmL'].rotation_quaternion = mathutils.Vector([angles[3][0],angles[3][1],angles[3][2],angles[3][3]])
ob.channels['trunk'].rotation_quaternion = trunk_out
ob.channels['upperLegR'].rotation_quaternion = upperLegR_out
ob.channels['lowerLegR'].rotation_quaternion = lowerLegR_out
ob.channels['upperLegL'].rotation_quaternion = upperLegL_out
ob.channels['lowerLegL'].rotation_quaternion = lowerLegL_out
ob.update()
time.sleep(0.001)
Is there an equivalent way in Unity3D to access a particular bone in a 3D model and set its rotation?
See this video:
https://www.youtube.com/watch?time_continue=26&v=JddtxynTgLk
As you can see, he is reading the values from the sensor and updating the 3D model in Blender in real time. I've done the sensor part, but I'm just getting started with Unity and was wondering how to access individual bones and set their rotations, similar to the code above.
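From skimming the Unity scripting reference, my rough (and untested) guess is that the equivalent is to grab the bone's Transform and overwrite its rotation every frame, either via Animator.GetBoneTransform for a Humanoid rig or by finding the bone by name in the hierarchy. Something like the sketch below, written in Unity's old UnityScript syntax; the bone choice, angle and path are just placeholders:
// Untested sketch only: rotate one bone of a rigged model from a script.
// Assumes the model has an Animator component and is imported as a Humanoid rig,
// so Animator.GetBoneTransform() can be used.
#pragma strict

private var animator : Animator;

function Start () {
    animator = GetComponent.<Animator>();
}

function LateUpdate () {
    // LateUpdate runs after the animation pass, so writing the rotation here
    // overrides whatever the Animator did, similar to writing
    // rotation_quaternion on a bone channel in the Blender script above.
    var upperArmR : Transform = animator.GetBoneTransform(HumanBodyBones.RightUpperArm);
    upperArmR.localRotation = Quaternion.Euler(0.0, 0.0, 90.0);

    // For a non-Humanoid rig, the bone Transform could instead be looked up by
    // its path in the hierarchy (placeholder path):
    // var bone : Transform = transform.Find("Armature/upper_arm_R");
}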
EDIT: The Godot engine seems to be able to do this very simply; see the link below:
https://docs.godotengine.org/en/3.0/tutorials/3d/working_with_3d_skeletons.html
extends Spatial
var skel
var id
func _ready():
skel = get_node("skel")
id = skel.find_bone("upperarm")
print("bone id:", id)
var parent = skel.get_bone_parent(id)
print("bone parent id:", id)
var t = skel.get_bone_pose(id)
print("bone transform: ", t)
set_process(true)
func _process(delta):
var t = skel.get_bone_pose(id)
t = t.rotated(Vector3(0.0, 1.0, 0.0), 0.1 * delta)
skel.set_bone_pose(id, t)
As you can see, in the Godot example too, once they have the bone's id, they simply need to set its pose.

Related

Render of low-poly model tris showing very hard/marked edges in three.js compared to Sketchfab/Unity3D/Iray

I've edited this post with a clean edge-flow model and maps you can access, if that helps in getting feedback. I can replicate the hard, marked edges issue for this case too:
I'm finding that the rendering result in three.js shows very hard, marked polygons on the low-poly object; I'm comparing this to the Sketchfab, Unity3D and Iray render results.
Here's a snapshot of the edge flow shown in Maya: https://drive.google.com/open?id=1qNA4VoZf-rSyq3_MQdeZqdFC6BxsE3un
Here's what the model looks like in Maya's view panel (not subdivided): https://drive.google.com/open?id=1US-fv5-v2ygReqjRPgcsQSusrAXTxVG5
Here's a snapshot of the three.js render (the issue is most noticeable in the red box):
https://drive.google.com/open?id=1K3CIBLvA7skVUPWL0qInLcFrK74DtriK
Here's the Sketchfab render, without shadows/post-processing filters:
https://drive.google.com/open?id=1rozZyBSU1HwPPk4EnKFyc7SVvFNXQBwz
Here's the Iray render in Substance Painter:
https://drive.google.com/open?id=1cXJzw780-kWH0nANy5ekM0HjRKAdaVQ2
Here's the Unity render: https://drive.google.com/open?id=1lLFLd8UT48OSvxJcp7arwygZZISsaHkS
Here is the FBX if you need to inspect the mesh/edge flow: https://drive.google.com/open?id=1BwljZNKL3dWJSSca6WYlqSK7os1Hp4pT
I'm also adding the normal map, as I thought the problem might relate to my three.js setup: https://drive.google.com/open?id=149r3n9JGnb9xEJkf9Eh7ELK2bM83bJX_
Albedo map: https://drive.google.com/open?id=1rGgDUOKbbeE6mrAlTG_6C7b8LgqQ1DF0
I'm reusing the envmap HDR example and its HDR settings.
Can someone please share some thoughts on what I can try differently?
Thank you for your help, Sergio.
I tried the following:
I softened the edges in Maya.
I also tried the lines below, separately and combined, but there was no result:
//vaseMesh.geometry.mergeVertices(); and //vaseMesh.geometry.computeVertexNormals();
normalScale appears to be best at material.normalScale.x = -1;
I also tried without the HDR or tone-mapping settings, as in the three.js displacement example ( https://threejs.org/examples/?q=displ#webgl_materials_displacementmap ), but had the same result:
renderer = new THREE.WebGLRenderer();
renderer.toneMapping = THREE.LinearToneMapping;
//load vase material textures once loaded
manager.onLoad = function () {
    material = new THREE.MeshStandardMaterial( {
        color: 0xffffff,
        roughness: params.roughness,
        metalness: params.metalness,
        map: albedoM,
        normalMap: normalMap,
        normalScale: new THREE.Vector2( 1, -1 ),
        aoMap: aoMap,
        aoMapIntensity: 1,
        flatShading: true,
        side: THREE.DoubleSide
    } );
    var myObjectLoader = new THREE.FBXLoader( );
    myObjectLoader.load( "Piece1.fbx", function ( group ) {
        console.log("On object loading");
        var geometry = group.children[ 0 ].geometry;
        geometry.attributes.uv2 = geometry.attributes.uv;
        geometry.center();
        vaseMesh = new THREE.Mesh( geometry, material );
        vaseMesh.material = material;
        //vaseMesh.geometry.mergeVertices();
        //vaseMesh.geometry.computeVertexNormals();
        material.normalScale.x = -1;
        scene.add( vaseMesh );
        console.log("Finished adding to scene");
        vaseMesh.position.set(0, 0, 0);
        animate();
    } );
}
var textureLoader = new THREE.TextureLoader(manager);
var albedoM = textureLoader.load( "vaseTextures/albedo.png");
var normalMap = textureLoader.load( "vaseTextures/normal.png");
var aoMap = textureLoader.load( "vaseTextures/ao.png");
Giving credit to @Mugen87 for the answer: removing the flatShading: true setting did it! The adjusted material is shown below.
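For anyone comparing, the material definition from earlier simply drops the flatShading flag (MeshStandardMaterial defaults to smooth shading, so the vertex normals from the FBX are used); everything else stays the same:
// Same material as above, minus flatShading, so the mesh is shaded smoothly.
material = new THREE.MeshStandardMaterial( {
    color: 0xffffff,
    roughness: params.roughness,
    metalness: params.metalness,
    map: albedoM,
    normalMap: normalMap,
    normalScale: new THREE.Vector2( 1, -1 ),
    aoMap: aoMap,
    aoMapIntensity: 1,
    side: THREE.DoubleSide
} );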
https://discourse.threejs.org/t/render-of-low-poly-model-tris-showing-very-hard-marked-in-three-js-compared-to-sketchfab-unit3d-iray/6829/2?u=mugen87
Cheers, Sergio

How to convert lat/lon to correct pixel location in GridLayer Tile

I'm playing with creating a Konva-based GridLayer for Leaflet (basically an abstraction around canvas elements to try and render tens of thousands of features efficiently). I have some code that seems to work to some degree (the lines in my sample data seem to line up with what I would expect), but I am getting strange behavior. Specifically, features will seem to visibly "teleport" or disappear completely. Additionally, it is not uncommon to see breaks in lines at the edges of the tiles. I suspect this means I'm calculating the pixel location within each tile incorrectly (although it's certainly possible something else is wrong). I am basically identifying the pixel location of the tile (x, y in renderStage()), and am translating the map pixel position by that many pixels (pt.x and pt.y, generated by projecting the lat/lon). This is intended to create an array of [x1, y1, x2, y2, ...], which can be rendered to the individual tile. Everything is expected to be in EPSG:4326.
Does anyone know how to properly project lat/lon to pixel coordinates within individual tiles of a GridLayer? There are plenty of examples for doing it for the entire map, but this doesn't seem to translate cleanly into how to find those same pixel locations in tiles offset from the upper left of the map.
import { GridLayer, withLeaflet } from "react-leaflet";
import { GridLayer as LeafletGridLayer } from "leaflet";
import { Stage, Line, FastLayer } from "konva";
import * as Util from 'leaflet/src/core/Util';
import _ from "lodash";
export const CollectionLayer = LeafletGridLayer.extend({
  options: {
    tileSize: 256
  },
  initialize: function(collection, props) {
    Util.setOptions(this, props)
    this.collection = collection;
    this.stages = new Map();
    this.shapes = {};
    this.cached = {};
    this.on('tileunload', (e) => {
      const stage = this.stages[e.coords]
      if (stage) {
        this.stages.delete(e.coords)
        stage.destroy()
      }
    })
  },
  renderStage: function(stage, coords, tileBounds) {
    const x = coords.x * this._tileSize.x
    const y = coords.y * this._tileSize.y
    const z = coords.z;
    const layer = stage.getLayers()[0]
    if (!layer || !tileBounds) return;
    _.each(this.collection.data, (entity, id) => {
      if (entity.bounds && tileBounds.intersects(entity.bounds)) {
        let shape = this.shapes[id]
        if (!shape) {
          shape = new Line()
          shape.shadowForStrokeEnabled(false)
          this.shapes[id] = shape
        }
        layer.add(shape);
        const points = entity.position.reduce((pts, p) => {
          const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom)
          pts.push(pt.x - x);
          pts.push(pt.y - y);
          return pts
        }, [])
        shape.points(points);
        shape.stroke('red');
        shape.strokeWidth(2);
        this.shapes[id] = shape
      }
    })
    layer.batchDraw()
  },
  createTile: function(coords) {
    const tile = document.createElement("div");
    const tileSize = this.getTileSize();
    const stage = new Stage({
      container: tile,
      width: tileSize.x,
      height: tileSize.y
    });
    const bounds = this._tileCoordsToBounds(coords);
    const layer = new FastLayer();
    stage.add(layer);
    this.stages[coords] = stage
    this.renderStage(stage, coords, bounds);
    return tile;
  }
});
class ReactCollectionLayer extends GridLayer {
  createLeafletElement(props) {
    console.log("PROPS", props);
    return new CollectionLayer(props.collection.data, this.getOptions(props));
  }
  updateLeafletElement(fromProps, toProps) {
    super.updateLeafletElement(fromProps, toProps);
    if (this.leafletElement.collection !== toProps.collection) {
      this.leafletElement.collection = toProps.collection
      this.leafletElement.redraw();
    }
  }
}
export default withLeaflet(ReactCollectionLayer);
Everything is expected to be in EPSG:4326.
No.
Once you are dealing with raster data (image tiles), everything is expected to be either in the map's display CRS, which is (by default) EPSG:3857, or in pixels relative to the CRS origin. These concepts are explained a bit more in-depth in one of Leaflet's tutorials.
In fact, you seem to be working in pixels here, at least for your points:
const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom)
However, your calculation of the pixel offset for each tile is too naïve:
const x = coords.x * this._tileSize.x
const y = coords.y * this._tileSize.y
That should instead rely on the private method _getTiledPixelBounds of L.GridLayer, e.g.:
const tilePixelBounds = this._getTiledPixelBounds();
const x = tilePixelBounds.min.x;
const y = tilePixelBounds.min.y;
And use these bounds to add some sanity checks while looping through the points:
const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom);
if (!tilePixelBounds.contains(pt)) { console.error(....); }
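Putting those two pieces together inside renderStage, the point loop might end up looking roughly like this (just a sketch reusing the names from the snippets above, not tested against the rest of the layer):
// Sketch only: project to map pixels at the tile zoom, sanity-check against the
// tile's pixel bounds, then translate into tile-local pixel coordinates.
const points = entity.position.reduce((pts, p) => {
  const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom);
  if (!tilePixelBounds.contains(pt)) {
    console.error("point outside tile pixel bounds", coords, pt);
  }
  pts.push(pt.x - tilePixelBounds.min.x);
  pts.push(pt.y - tilePixelBounds.min.y);
  return pts;
}, []);
shape.points(points);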
On the other hand:
[...] an abstraction around canvas elements to try and render tens of thousands of features efficiently
I don't think using Konva to actually draw items on a <canvas> is going to improve performance - the underlying methods are the same ones Leaflet uses (and, if we're talking about tiling vector data, the same ones Leaflet.VectorGrid uses). Ten thousand calls to canvas draw functions are going to take the same time no matter which library sits on top. If you have time to consider other alternatives, Leaflet.GLMarkers and its WebGL rendering might offer better performance, at the price of less compatibility and higher integration costs.

cocos2dx: Sprite3D rotating, culling error

Hi, I'm trying to have two sprites with different z in a 3D world, and a camera that rotates around the center of the screen while pointing at it.
Even though the sprites have different z (and z-order, though I don't know if that is necessary), both sprites are always visible, while I'm expecting the second sprite to be hidden behind the other...
This is the HelloWorld layer init:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
this->setCameraMask((unsigned short)CameraFlag::USER2, true);
camera = Camera::createPerspective(60, (float)visibleSize.width/visibleSize.height, 1.0, 1000);
camera->setCameraFlag(CameraFlag::USER2);
camera->setPosition3D(spritePos + Vec3(-200,0,800));
camera->lookAt(spritePos, Vec3(0.0,1.0,0.0));
this->addChild(camera);
this->scheduleUpdate();
angle=0;
and this is update:
void TestScene::update(float dt)
{
    angle += 0.1;
    Size visibleSize = Director::getInstance()->getVisibleSize();
    Vec2 origin = Director::getInstance()->getVisibleOrigin();
    Vec3 spritePos = Vec3(visibleSize.width/2, visibleSize.height/2, 0);
    camera->setPosition3D(Vec3(visibleSize.width/2, visibleSize.height/2, 0) + Vec3(800*cos(angle), 0, 800*sin(angle)));
    camera->lookAt(spritePos, Vec3(0.0, 1.0, 0.0));
}
I have tried something simpler:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
Even with sp3d->runAction(RotateTo::create(20, Vec3(0, 3000, 0))) I get the same error.
Is it a cocos2d-x bug?
The sprite with z=10 disappears before it is covered by the other sprite, stays hidden for a while, and then reappears just when it should be completely hidden!
Have I forgotten something?
thanks
Maybe you should check this.
_camControlNode = Node::create();
_camControlNode->setNormalizedPosition(Vec2(.5,.5));
addChild(_camControlNode);
_camNode = Node::create();
_camNode->setPositionZ(Camera::getDefaultCamera()->getPosition3D().z);
_camControlNode->addChild(_camNode);
auto sp3d = Sprite3D::create();
sp3d->setPosition(s.width/2, s.height/2);
addChild(sp3d);
auto lship = Label::create();
lship->setString("Ship");
lship->setPosition(0, 20);
sp3d->addChild(lship);
and
_lis->onTouchMoved = [this](Touch* t, Event* e) {
    float dx = t->getDelta().x;
    Vec3 rot = _camControlNode->getRotation3D();
    rot.y += dx;
    _camControlNode->setRotation3D(rot);
    Vec3 worldPos;
    _camNode->getNodeToWorldTransform().getTranslation(&worldPos);
    Camera::getDefaultCamera()->setPosition3D(worldPos);
    Camera::getDefaultCamera()->lookAt(_camControlNode->getPosition3D());
};

Using Leap Motion to control the Unity3D interface

I understand that I can use Leap Motion in-game with Unity3D.
What I can't find any information on is whether I can use it to actually interact with assets, models, etc. as I build the game; for example, revolving a game object around the x axis, or zooming the view in or out.
Is this possible?
Yes, it is possible, but it requires some scripts that nobody has written yet (AFAIK). Here is a VERY rough example that I worked up today since I've been curious about this question, too.
All it does is move, scale, and rotate a selected game object -- it doesn't try to do this in a good way -- it is a proof of concept only. To make it work you would have to do a sensible conversion of Leap coordinates and rotations to Unity values. To try it, put this script in a folder called "Editor", select a game object in the scene view and hold a key down while moving your hand above your Leap. As I said, none of these movements really work to edit an object, but you can see that it is possible with some sensible logic.
@CustomEditor (Transform)
class RotationHandleJS extends Editor {
    var controller = new Leap.Controller();
    var position;
    var localScale;
    var localRotation;
    var active = false;

    function OnSceneGUI () {
        e = Event.current;
        switch (e.type) {
            case EventType.KeyDown:
                position = target.transform.position;
                localScale = target.transform.localScale;
                localRotation = target.transform.localRotation;
                active = true;
                Debug.Log("editing");
                break;
            case EventType.KeyUp:
                active = false;
                target.transform.position = position;
                target.transform.localScale = localScale;
                EditorUtility.SetDirty (target);
                break;
        }
        if (active) {
            frame = controller.Frame();
            ten = controller.Frame(10);
            scale = frame.ScaleFactor(ten);
            translate = frame.Translation(ten);
            target.transform.localScale = localScale + new Vector3(scale, scale, scale);
            target.transform.position = position + new Vector3(translate.x, translate.y, translate.z);
            leapRot = frame.RotationMatrix(ten);
            quats = convertRotation(leapRot);
            target.transform.localRotation = quats;
        }
    }

    var LEAP_UP = new Leap.Vector(0, 1, 0);
    var LEAP_FORWARD = new Leap.Vector(0, 0, -1);
    var LEAP_ORIGIN = new Leap.Vector(0, 0, 0);

    function convertRotation(matrix : Leap.Matrix) {
        var up = matrix.TransformDirection(LEAP_UP);
        var forward = matrix.TransformDirection(LEAP_FORWARD);
        return Quaternion.LookRotation(new Vector3(forward.x, forward.y, forward.z), new Vector3(up.x, up.y, up.z));
    }
}

three.js merging geometry with ShaderMaterials

I have a project built on a tileset, which I currently map to CubeGeometries via a number of ShaderMaterials.
When the cubes are rendered, there is bleeding and flickering around the edges of the cubes. Also, it seems to be an awfully bad way to do it, performance-wise.
So I looked up THREE.GeometryUtils.merge, which apparently merges my cubes into one geometry, vertices and all.
Is it possible to make the merged mesh keep the materials I used on each of the cubes?
Is there a better way to accomplish what I'm trying to do?
Edit:
This is an example of what is not working.
http://jsfiddle.net/CpQ77/3/
var shaderMat1 = new THREE.ShaderMaterial({
    fragmentShader: document.getElementById("red-fragment").innerText,
    vertexShader: document.getElementById("vertex").innerText
});
var shaderMat2 = new THREE.ShaderMaterial({
    fragmentShader: document.getElementById("blue-fragment").innerText,
    vertexShader: document.getElementById("vertex").innerText
});
var cube1 = new THREE.Mesh(new THREE.CubeGeometry(64, 64, 64), new THREE.MeshFaceMaterial([shaderMat1, shaderMat1, shaderMat1, shaderMat1, shaderMat1, shaderMat1]));
cube1.position.x = 0;
cube1.position.y = 300;
var cube2 = new THREE.Mesh(new THREE.CubeGeometry(64, 64, 64), new THREE.MeshFaceMaterial([shaderMat2, shaderMat2, shaderMat2, shaderMat2, shaderMat2, shaderMat2]));
cube2.position.x = 64;
cube2.position.y = 300;
var geo = new THREE.Geometry();
THREE.GeometryUtils.merge(geo, cube1);
THREE.GeometryUtils.merge(geo, cube2);
var mergedMesh = new THREE.Mesh(geo, new THREE.MeshFaceMaterial());
scene.add(mergedMesh);
It gives an error saying "Uncaught TypeError: Cannot read property 'map' of undefined" when trying to use MeshFaceMaterial the way it is used in a couple of places around the web.
I can't figure out what I'm missing, though.
Edit2:
One workaround I found was to loop through all the faces of the new geometry and apply a materialIndex to each face before calling geometry.mergeVertices().
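Roughly, that workaround looks like this (an untested sketch against the old three.js API used in the fiddle, reusing its cube1/cube2 and shader materials):
// Sketch only: merge the cubes, tag each face with the index of the material it
// should use, then hand MeshFaceMaterial the materials array in that order.
var geo = new THREE.Geometry();
THREE.GeometryUtils.merge(geo, cube1);
THREE.GeometryUtils.merge(geo, cube2);

var facesPerCube = geo.faces.length / 2; // both cubes use the same CubeGeometry
for (var i = 0; i < geo.faces.length; i++) {
    geo.faces[i].materialIndex = (i < facesPerCube) ? 0 : 1; // 0 = red shader, 1 = blue shader
}
geo.mergeVertices();

var mergedMesh = new THREE.Mesh(geo, new THREE.MeshFaceMaterial([shaderMat1, shaderMat2]));
scene.add(mergedMesh);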
Thanks for this post; the comments were helpful in finding a solution. Instead of supplying the materials array to the Geometry, you should supply it as the only argument to MeshFaceMaterial.
Example in CoffeeScript:
materials = []
for i in [0...6]
texture = window["texture_" + i] # This is a Texture that has already been loaded
materials.push new THREE.MeshBasicMaterial(
color : color
map : texture
)
size = 1
geometry = new THREE.CubeGeometry size, size, size
cube = new THREE.Mesh geometry, new THREE.MeshFaceMaterial materials
cube.position.x = x
cube.position.y = y
cube.position.z = z
scene.add cube
return cube