I need to find a way to have the Kinect only recognize objects in a certain range. The problem is that in our setup there will be viewers around the scene who may disturb the tracking. Therefore I need to limit the Kinect to a range of a few meters so it won't be disturbed by objects beyond that range. We are using the SimpleOpenNI library for Processing.
Is there any way to achieve something like that?
Thank you very much in advance.
Matteo
You can get the user's centre of mass (CoM), which retrieves an x, y, z position for a user without skeleton detection.
Based on the z position you should be able to use a basic if statement for your range/threshold:
import SimpleOpenNI.*;
SimpleOpenNI context;//OpenNI context
PVector pos = new PVector();//this will store the position of the user
ArrayList<Integer> users = new ArrayList<Integer>();//this will keep track of the most recent user added
float minZ = 1000;//minimum z distance (in millimetres)
float maxZ = 1700;//maximum z distance (in millimetres)
void setup(){
  size(640,480);
  context = new SimpleOpenNI(this);//initialize
  context.enableScene();//enable features we want to use
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE);//enable user events, but no skeleton tracking, needed for the CoM functionality
}
void draw(){
  context.update();//update openni
  image(context.sceneImage(),0,0);
  if(users.size() > 0){//if we have at least a user
    for(int user : users){//loop through each one and process
      context.getCoM(user,pos);//store that user's position
      println("user " + user + " is at: " + pos);//print it in the console
      if(pos.z > minZ && pos.z < maxZ){//if the user is within a certain range
        //do something cool
      }
    }
  }
}
//OpenNI basic user events
void onNewUser(int userId){
  println("detected: " + userId);
  users.add(userId);
}
void onLostUser(int userId){
  println("lost: " + userId);
  users.remove(Integer.valueOf(userId));//remove by value, not by index
}
You can find more explanation and hopefully useful tips in these workshop notes I posted.
Hey guys, so as you can see, I made a robot arm grab a slingshot's object holder with a ball in it. My arm can pull it in any direction I want, but I want the user to know which box is going to be shot at.
If you're applying an impulse force (or velocity) to your ball and there is gravity in your world, your item will follow projectile motion.
Here you can find details about it:
https://en.wikipedia.org/wiki/Projectile_motion
There are basically two main options:
1. Calculating it yourself
This is probably way better for performance, especially if you want a really simple trajectory preview without accounting for any collisions etc.
Refer to the linked article; it basically comes down to the standard projectile motion equations, and a 2D implementation would only need a slight rework for 3D physics. That should be trivial though, since the important part, the Y axis, basically stays the same.
You would:
- Call this simulation with the supposed shoot direction and velocity
- Visualize the tracked positions, e.g. in a LineRenderer (see the sketch below)
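Purely as an illustration (this is not the code originally referenced here), such a self-calculated preview could look roughly like the sketch below; the class, field and method names are placeholders, and it assumes you already know the ball's start position and initial launch velocity:
using UnityEngine;

// Rough sketch: samples the analytic projectile motion equation
// p(t) = p0 + v0 * t + 0.5 * g * t^2 and writes the points into a LineRenderer.
public class TrajectoryPreview : MonoBehaviour
{
    public LineRenderer line;     // assigned in the Inspector
    public int steps = 60;        // number of sampled points
    public float timeAhead = 2f;  // how many seconds of flight to preview

    // Call this with the supposed shoot origin and initial velocity.
    public void SimulateArc(Vector3 startPosition, Vector3 initialVelocity)
    {
        var points = new Vector3[steps];
        for (var i = 0; i < steps; i++)
        {
            // time of the i-th sample
            var t = (i / (float)(steps - 1)) * timeAhead;

            // kinematic equation; gravity is taken from the physics settings
            points[i] = startPosition
                        + initialVelocity * t
                        + 0.5f * Physics.gravity * t * t;
        }

        line.positionCount = points.Length;
        line.SetPositions(points);
    }
}
Note that this ignores drag and any collisions, which is exactly the trade-off of option 1.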
2. Physics.Simulate
This allows you to run physics updates manually, all within a single frame, and actually let the physics engine handle it all for you.
Of course this costs a lot of performance, but you get all collisions etc. accounted for automatically without getting a headache.
You would:
- Make a snapshot of all Rigidbodies in your scene, in order to reset them after the simulation
- Simulate the desired amount of physics steps (XY seconds ahead) while keeping track of the simulated data
- Reset everything to the state tracked in step 1
- Use the simulated data from step 2 to visualize, e.g. with a LineRenderer
This might look somewhat like this:
using System.Linq;
using UnityEngine;

public class Prediction : MonoBehaviour
{
    public LineRenderer line;
    public Rigidbody tracked;

    private Rigidbody[] allRigidbodies;

    private void Awake()
    {
        allRigidbodies = FindObjectsOfType<Rigidbody>();
    }

    private void LateUpdate()
    {
        // Wherever you would get this from
        Vector3 wouldApplyForce = Vector3.zero;

        // Step 1 - snapshot
        // For simplicity reasons for now just the positions
        // using some Linq magic
        var originalPositions = allRigidbodies.ToDictionary(item => item, item => item.position);

        // Step 2 - Simulate e.g. 2 seconds ahead
        var trackedPositions = new Vector3[(int)(2 / Time.fixedDeltaTime)];
        Physics.autoSimulation = false;
        tracked.AddForce(wouldApplyForce);
        for (var i = 0; i < trackedPositions.Length; i++)
        {
            Physics.Simulate(Time.fixedDeltaTime);
            trackedPositions[i] = tracked.position;
        }

        // Step 3 - reset
        foreach (var kvp in originalPositions)
        {
            kvp.Key.position = kvp.Value;
        }
        Physics.autoSimulation = true;

        // Step 4 - Visualize
        line.positionCount = trackedPositions.Length;
        line.SetPositions(trackedPositions);
    }
}
Of course we won't talk about performance here ^^
I am able to avoid collisions between my player and my entire platform with the use of contactFilter2D.SetLayerMask() + rigidBody2D.Cast(Vector2, contactFilter, ...);
But I can't find a way to ignore the collision only when my player tries to access the platform from below (with a vertical jump).
I'm pretty sure I should use contactFilter2D.SetNormalAngle() (after specifying the minAngle and maxAngle), but no matter the size of my angles, I can't pass through it.
This is how I initialize my contactFilter2D:
protected ContactFilter2D cf;

void Start () {
    cf.useTriggers = false;
    cf.minNormalAngle = 0;
    cf.maxNormalAngle = 180;
    cf.SetNormalAngle(cf.minNormalAngle, cf.maxNormalAngle);
    cf.useNormalAngle = true;
}

void Update () {
}
I use it with
count = rb.Cast(move, contactFilter, hitBuffer, distance + shellRadius);
Any ideas? If you want more code, tell me, but I don't think it will be useful to understand the problem.
Unity actually has a ready-made component for this: a physics component called "Platform Effector 2D". If you drag and drop it onto your platform, it will immediately work the way you want, and it has adjustable settings for tweaking the parameters. Hope this helps!
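If you ever need to set it up from code rather than in the Inspector, a rough sketch could look like this (the class name is just illustrative; PlatformEffector2D, useOneWay and usedByEffector are the actual Unity 2D physics properties, and the platform is assumed to already have a Collider2D):
using UnityEngine;

// Rough sketch: turns the GameObject this is attached to into a one-way platform.
public class OneWayPlatformSetup : MonoBehaviour
{
    void Awake()
    {
        // The platform's collider must be marked as "used by effector"
        // so the effector actually applies to it.
        var platformCollider = GetComponent<Collider2D>();
        platformCollider.usedByEffector = true;

        // The effector lets the player jump up through the platform from below
        // but still land on top of it.
        var effector = gameObject.AddComponent<PlatformEffector2D>();
        effector.useOneWay = true;
        effector.surfaceArc = 180f; // arc around the local "up" that still collides
    }
}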
I am trying to build an application which uses the Leap Motion Interaction Engine to move an object in Unity 3D. However, I also need to find out which fingers are touching the object when interacting with it in Unity. Is there any way I can do that? Thanks in advance!!
Strictly speaking, the Grasping logic of the Interaction Engine has to check this very thing in order to initiate or release grasps, but it doesn't have a friendly API for accessing this information.
A more convenient way, even though it's not the most efficient, would be to detect when a hand is intersecting with the interaction object and check the distance between each fingertip and the object.
All InteractionControllers that are intersecting with a given InteractionBehaviour can be accessed via its contactingControllers property; using the Query library included in Leap tools for Unity, you can convert a bunch of Interaction Controller references to a bunch of Leap Hands without too much effort, and then perform the check:
using Leap.Unity;
using Leap.Unity.Interaction;
using Leap.Unity.Query;
using UnityEngine;
public class QueryFingertips : MonoBehaviour {

  public InteractionBehaviour intObj;

  private Collider[] _collidersBuffer = new Collider[16];
  private float _fingertipRadius = 0.01f; // 1 cm

  void FixedUpdate() {
    foreach (var contactingHand in intObj.contactingControllers
                                         .Query()
                                         .Select(controller => controller.intHand)
                                         .Where(intHand => intHand != null)
                                         .Select(intHand => intHand.leapHand)) {
      foreach (var finger in contactingHand.Fingers) {
        var fingertipPosition = finger.TipPosition.ToVector3();
        // If the distance from the fingertip and the object is less
        // than the 'fingertip radius', the fingertip is touching the object.
        if (intObj.GetHoverDistance(fingertipPosition) < _fingertipRadius) {
          Debug.Log("Found collision for fingertip: " + finger.Type);
        }
      }
    }
  }
}
I've just started to learn Unity and game dev, and I have a question.
I want to create a map. This map will get an array of "stars" from the server, and probably an image for the current map.
The user will travel from star to star.
Questions are:
1) Let's say I return 11 stars and an image for the current map: how can I show where the user is now? On the server I know that he is travelling from star 2 to star 3, and I know that the user will be there (star #3) in 4 minutes. How can I put a "user point" between stars 2 and 3 and, each second, move the user point closer to star 3?
2) What are the best practices for creating a dynamic map? I mean, when the user finishes his play on star 11, I want to return a new map with new dynamically placed stars, and he will start travelling the new map.
Thank you, and sorry for my English grammar.
How can I put a "user point" between stars 2 and 3 and, each second, move the user point closer to star 3?
Use Vector2.Lerp to do that. Pass in the first location and the second location, then pass in 0.5 (half) for the time parameter. It should return the midpoint between the first and the second location.
A helper function to do this:
Vector2 getMidPoint(Vector2 userPointA, Vector2 userPointB)
{
    return Vector2.Lerp(userPointA, userPointB, 0.5f);
}
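For example, with hypothetical star positions: Vector2 mid = getMidPoint(star2Position, star3Position);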
On the server I know that he is travelling from star 2 to star 3, and I know that the user will be there (star #3) in 4 minutes.
To move the point over that time, use a coroutine that lerps between the two positions each frame:
IEnumerator moveWithinTime(GameObject playerToMove, Vector2 fromPointA, Vector2 toPointB, float byTime)
{
    float counter = 0;
    while (counter < byTime)
    {
        counter += Time.deltaTime;
        playerToMove.transform.position = Vector2.Lerp(fromPointA, toPointB, counter / byTime);
        yield return null;
    }
}
And to call it, for example to move over 4 seconds (use 240 for 4 minutes): StartCoroutine(moveWithinTime(gameObject, gameObject.transform.position, new Vector2(10f, 10f), 4));
For my project I have to capture the legs with body tracking while a person is sitting, especially the knees and feet. I have managed so far to draw my chosen joints in C#.
BUT if I put the Kinect under a table and the person is sitting in front of the table, the chosen joints start flickering. I found out that the Kinect always tries to capture the whole body of the person, but the table prevents the Kinect from capturing the torso and the rest of the body, so my chosen joints for the legs cannot be tracked correctly.
My question is: how can I modify the BodyFrameReader to read or capture only the legs instead of the whole body? I am not talking about drawing my chosen joints; the BodyFrameReader itself shall only capture my joints. I use C#.
Can someone help and give me a code example? Thanks a lot.
Here is my code snippet:
kinectSensor = KinectSensor.GetDefault();
bodyFrameReader = kinectSensor.BodyFrameSource.OpenReader();
kinectSensor.Open();

if (bodyFrameReader != null) {
    bodyFrameReader.FrameArrived += Reader_FrameArrived;
}

private void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e) {
    bool dataReceived = false;

    using (BodyFrame bodyFrame = e.FrameReference.AcquireFrame()) {
        if (bodyFrame != null) {
            if (bodies == null) {
                bodies = new Body[bodyFrame.BodyCount];
            }
            // standard pattern: copy the frame's body data into the bodies array
            bodyFrame.GetAndRefreshBodyData(bodies);
            dataReceived = true;
        }
    }
}
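To illustrate what I mean, this is roughly how I pick my chosen leg joints out of each tracked body (a simplified sketch; the method name is just for illustration, the joint and tracking types are from the standard Kinect v2 API):
private static readonly JointType[] legJoints =
{
    JointType.KneeLeft, JointType.KneeRight,
    JointType.AnkleLeft, JointType.AnkleRight,
    JointType.FootLeft, JointType.FootRight
};

private void ProcessLegJoints(Body[] bodies)
{
    foreach (Body body in bodies)
    {
        if (body == null || !body.IsTracked)
        {
            continue;
        }

        foreach (JointType jointType in legJoints)
        {
            Joint joint = body.Joints[jointType];

            // Joints occluded by the table are only "Inferred" and tend to flicker.
            if (joint.TrackingState == TrackingState.Tracked)
            {
                CameraSpacePoint position = joint.Position;
                // ... draw / use position.X, position.Y, position.Z here
            }
        }
    }
}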